| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| A Survey on In-context Learning | Q Dong, L Li, D Dai, C Zheng, Z Wu, B Chang, X Sun, J Xu, Z Sui | arXiv preprint arXiv:2301.00234 | 614 | 2022 |
| Multilingual Machine Translation with Large Language Models: Empirical Results and Analysis | W Zhu, H Liu, Q Dong, J Xu, S Huang, L Kong, J Chen, L Li | arXiv preprint arXiv:2304.04675 | 116* | 2023 |
| Calibrating Factual Knowledge in Pretrained Language Models | Q Dong, D Dai, Y Song, J Xu, Z Sui, L Li | Findings of EMNLP 2022 | 63 | 2022 |
| Can We Edit Factual Knowledge by In-Context Learning? | C Zheng, L Li, Q Dong, Y Fan, Z Wu, J Xu, B Chang | EMNLP 2023 | 50 | 2023 |
| ParaSCI: A Large Scientific Paraphrase Dataset for Longer Paraphrase Generation | Q Dong, X Wan, Y Cao | The 16th Conference of the European Chapter of the Association for … | 29 | 2021 |
| Large Language Model for Science: A Study on P vs. NP | Q Dong, L Dong, K Xu, G Zhou, Y Hao, Z Sui, F Wei | arXiv preprint arXiv:2309.05689 | 11 | 2023 |
| CUGE: A Chinese Language Understanding and Generation Evaluation Benchmark | Y Yao, Q Dong, J Guan, B Cao, Z Zhang, C Xiao, X Wang, F Qi, J Bao, ... | arXiv preprint arXiv:2112.13610 | 11 | 2021 |
| Neural Knowledge Bank for Pretrained Transformers | D Dai, W Jiang, Q Dong, Y Lyu, Z Sui | CCF International Conference on Natural Language Processing and Chinese … | 10 | 2023 |
| Extrapolating Large Language Models to Non-English by Aligning Languages | W Zhu, Y Lv, Q Dong, F Yuan, J Xu, S Huang, L Kong, J Chen, L Li | arXiv preprint arXiv:2308.04948 | 9 | 2023 |
| Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding | H Xia, Z Yang, Q Dong, P Wang, Y Li, T Ge, T Liu, W Li, Z Sui | arXiv preprint arXiv:2401.07851 | 8 | 2024 |
| Premise-based Multimodal Reasoning: Conditional Inference on Joint Textual and Visual Clues | Q Dong, Z Qin, H Xia, T Feng, S Tong, H Meng, L Xu, W Zhan, S Li, Z Wei, ... | Proceedings of the 60th Annual Meeting of the Association for Computational … | 8 | 2022 |
| ImageNetVC: Zero-Shot Visual Commonsense Evaluation on 1000 ImageNet Categories | H Xia, Q Dong, L Li, J Xu, Z Qin, Z Sui | Findings of EMNLP 2023 | 5 | 2023 |
| Statistical Dataset Evaluation: Reliability, Difficulty, and Validity | C Wang, Q Dong, X Wang, H Wang, Z Sui | arXiv preprint arXiv:2212.09272 | 4* | 2022 |
| PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization | X Meng, D Dai, W Luo, Z Yang, S Wu, X Wang, P Wang, Q Dong, L Chen, ... | arXiv preprint arXiv:2402.16141 | 2 | 2024 |
| Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for Language Models | H Li, Q Dong, Z Tang, C Wang, X Zhang, H Huang, S Huang, X Huang, ... | arXiv preprint arXiv:2402.13064 | 2 | 2024 |
| Statistical Knowledge Assessment for Large Language Models | Q Dong, J Xu, L Kong, Z Sui, L Li | Advances in Neural Information Processing Systems 36 | 2* | 2024 |
| Robust Fine-tuning via Perturbation and Interpolation from In-batch Instances | S Tong, Q Dong, D Dai, T Liu, B Chang, Z Sui | Proceedings of the Thirty-First International Joint Conference on Artificial … | 2 | 2022 |
| Can Language Models Understand Physical Concepts? | L Li, J Xu, Q Dong, C Zheng, Q Liu, L Kong, X Sun | arXiv preprint arXiv:2305.14057 | 1 | 2023 |
| A Challenging Benchmark for Low-Resource Learning | Y Wang, C Ma, Q Dong, L Kong, J Xu | arXiv preprint arXiv:2303.03840 | 1 | 2023 |
| Go-tuning: Improving Zero-Shot Learning Abilities of Smaller Language Models | J Xu, Q Dong, H Liu, L Li | arXiv preprint arXiv:2212.10461 | 1 | 2022 |