Ernie Chang
Research Scientist, Meta
Verified email at fb.com
Title · Cited by · Year
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 1019 · 2022
LLM-QAT: Data-free quantization aware training for large language models
Z Liu, B Oguz, C Zhao, E Chang, P Stock, Y Mehdad, Y Shi, ...
arXiv preprint arXiv:2305.17888, 2023
Cited by 146 · 2023
Neural Data-to-Text Generation via Jointly Learning the Segmentation and Correspondence
X Shen, E Chang, H Su, J Zhou, D Klakow
Proceedings of ACL 2020 (also arXiv preprint arXiv:2005.01096), 2020
Cited by 60 · 2020
Neural Data-to-text Generation with LM-based Text Augmentation
E Chang, X Shen, D Zhu, V Demberg, H Su
Proceedings of EACL 2021, 2021
Cited by 49 · 2021
On Training Instance Selection for Few-Shot Neural Text Generation
E Chang, X Shen, HS Yeh, V Demberg
Proceedings of ACL 2021, 2021
Cited by 36 · 2021
A few thousand translations go a long way! Leveraging pre-trained models for African news translation
DI Adelani, JO Alabi, A Fan, J Kreutzer, X Shen, M Reid, D Ruiter, ...
arXiv preprint arXiv:2205.02022, 2022
Cited by 35 · 2022
MovieChats: Chat like Humans in a Closed Domain
H Su, X Shen, Z Xiao, Z Zhang, E Chang, C Zhang, C Niu, J Zhou
Proceedings of EMNLP 2020, 6605-6619, 2020
Cited by 35 · 2020
Does the Order of Training Samples Matter? Improving Neural Data-to-Text Generation with Curriculum Learning
E Chang, HS Yeh, V Demberg
Proceedings of EACL 2021, 2021
Cited by 32 · 2021
MobileLLM: Optimizing sub-billion parameter language models for on-device use cases
Z Liu, C Zhao, F Iandola, C Lai, Y Tian, I Fedorov, Y Xiong, E Chang, ...
arXiv preprint arXiv:2402.14905, 2024
Cited by 31 · 2024
Generating e-commerce product titles and predicting their quality
JGC de Souza, M Kozielski, P Mathur, E Chang, M Guerini, M Negri, ...
Proceedings of INLG, 233-243, 2018
Cited by 28 · 2018
Jointly Improving Language Understanding and Generation with Quality-Weighted Weak Supervision of Automatic Labeling
E Chang, V Demberg, A Marin
Proceedings of EACL 2021, 2021
Cited by 24 · 2021
Unsupervised Pidgin Text Generation By Pivoting English Data and Self-Training
E Chang, D Adelani, X Shen, V Demberg
Proceedings of Workshop at ICLR, 2020
Cited by 22 · 2020
Neobility at SemEval-2017 Task 1: An attention-based sentence similarity model
WL Zhuang, E Chang
Proceedings of SemEval-2017 at ACL 2017, 2017
Cited by 19 · 2017
DART: A Lightweight Quality-Suggestive Data-to-Text Annotation Tool
E Chang, J Caplinger, A Marin, X Shen, V Demberg
Proceedings of COLING 2020 (Best Demo Paper Award), 12-17, 2020 (also arXiv preprint arXiv:2010.04141)
Cited by 18 · 2020
Time-Aware Ancient Chinese Text Translation and Inference
E Chang, YT Shiue, HS Yeh, V Demberg
LChange @ ACL 2021, 2021
Cited by 13 · 2021
Improving language generation from feature-rich tree-structured data with relational graph convolutional encoders
X Hong, E Chang, V Demberg
Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR …, 2019
Cited by 13 · 2019
MDIA: A benchmark for multilingual dialogue generation in 46 languages
Q Zhang, X Shen, E Chang, J Ge, P Chen
arXiv preprint arXiv:2208.13078, 2022
Cited by 12 · 2022