InCoder: A generative model for code infilling and synthesis. D Fried, A Aghajanyan, J Lin, S Wang, E Wallace, F Shi, R Zhong, W Yih, ... arXiv preprint arXiv:2204.05999, 2022. Cited by 372.
UnifiedSKG: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. T Xie, CH Wu, P Shi, R Zhong, T Scholak, M Yasunaga, CS Wu, M Zhong, ... arXiv preprint arXiv:2201.05966, 2022. Cited by 234*.
Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections. R Zhong, K Lee, Z Zhang, D Klein. Findings of EMNLP, 2021. Cited by 148.
DS-1000: A natural and reliable benchmark for data science code generation. Y Lai, C Li, Y Wang, T Zhang, R Zhong, L Zettlemoyer, W Yih, D Fried, ... International Conference on Machine Learning, 18319-18345, 2023. Cited by 91.
Meta-learning via language model in-context tuning. Y Chen, R Zhong, S Zha, G Karypis, H He. arXiv preprint arXiv:2110.07814, 2021. Cited by 91.
Semantic evaluation for text-to-SQL with distilled test suites. R Zhong, T Yu, D Klein. EMNLP, 2020. Cited by 79.
Fine-grained sentiment analysis with faithful attention. R Zhong, S Shao, K McKeown. arXiv preprint arXiv:1908.06870, 2019. Cited by 48.
Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level. R Zhong, D Ghosh, D Klein, J Steinhardt. Findings of ACL, 2021. Cited by 41.
Subspace embedding and linear regression with Orlicz norm. A Andoni, C Lin, Y Sheng, P Zhong, R Zhong. International Conference on Machine Learning, 224-233, 2018. Cited by 36.
Approximating how single head attention learns. C Snell, R Zhong, D Klein, J Steinhardt. arXiv preprint arXiv:2103.07601, 2021. Cited by 25.
Learning by distilling context. C Snell, D Klein, R Zhong. arXiv preprint arXiv:2209.15189, 2022. Cited by 24.
Describing differences between text distributions with natural language. R Zhong, C Snell, D Klein, J Steinhardt. International Conference on Machine Learning, 27099-27116, 2022. Cited by 23*.
Detecting gang-involved escalation on social media using context. S Chang, R Zhong, E Adams, FT Lee, S Varia, D Patton, W Frey, C Kedzie, ... EMNLP, 2018. Cited by 20.
Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations. Y Chen, R Zhong, N Ri, C Zhao, H He, J Steinhardt, Z Yu, K McKeown. arXiv preprint arXiv:2307.08678, 2023. Cited by 17.
Goal driven discovery of distributional differences via language descriptions. R Zhong, P Zhang, S Li, J Ahn, D Klein, J Steinhardt. Advances in Neural Information Processing Systems 36, 2024. Cited by 15.
Semantic scaffolds for pseudocode-to-code generation. R Zhong, M Stern, D Klein. ACL, 2020. Cited by 15.
GAIA: A Multi-media Multi-lingual Knowledge Extraction and Hypothesis Generation System. T Zhang, A Subburathinam, G Shi, L Huang, D Lu, X Pan, M Li, B Zhang, ... TAC, 2018. Cited by 14.
Goal-driven explainable clustering via language descriptions. Z Wang, J Shang, R Zhong. arXiv preprint arXiv:2305.13749, 2023. Cited by 9.
Non-Programmers Can Label Programs Indirectly via Active Examples: A Case Study with Text-to-SQL. R Zhong, C Snell, D Klein, J Eisner. EMNLP, 2023. Cited by 7*.
The effect of model size on worst-group generalization. A Pham, E Chan, V Srivatsa, D Ghosh, Y Yang, Y Yu, R Zhong, ... arXiv preprint arXiv:2112.04094, 2021. Cited by 4.