| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| GLUE: A multi-task benchmark and analysis platform for natural language understanding | A Wang | arXiv preprint arXiv:1804.07461 | 7255 | 2018 |
| SuperGLUE: A stickier benchmark for general-purpose language understanding systems | A Wang, Y Pruksachatkun, N Nangia, A Singh, J Michael, F Hill, O Levy, ... | Advances in Neural Information Processing Systems 32 | 2201 | 2019 |
| What do you learn from context? Probing for sentence structure in contextualized word representations | I Tenney, P Xia, B Chen, A Wang, A Poliak, RT McCoy, N Kim, ... | arXiv preprint arXiv:1905.06316 | 890 | 2019 |
| On measuring social biases in sentence encoders | C May, A Wang, S Bordia, SR Bowman, R Rudinger | arXiv preprint arXiv:1903.10561 | 619 | 2019 |
| Asking and answering questions to evaluate the factual consistency of summaries | A Wang, K Cho, M Lewis | arXiv preprint arXiv:2004.04228 | 420 | 2020 |
| BERT has a mouth, and it must speak: BERT as a Markov random field language model | A Wang, K Cho | arXiv preprint arXiv:1902.04094 | 380 | 2019 |
| QuestEval: Summarization asks for fact-based evaluation | T Scialom, PA Dray, P Gallinari, S Lamprier, B Piwowarski, J Staiano, ... | arXiv preprint arXiv:2103.12693 | 241 | 2021 |
| Can you tell me how to get past Sesame Street? Sentence-level pretraining beyond language modeling | A Wang, J Hula, P Xia, R Pappagari, RT McCoy, R Patel, N Kim, I Tenney, ... | arXiv preprint arXiv:1812.10860 | 145* | 2018 |
| Probing what different NLP tasks teach machines about function word comprehension | N Kim, R Patel, A Poliak, A Wang, P Xia, RT McCoy, I Tenney, A Ross, ... | arXiv preprint arXiv:1904.11544 | 104 | 2019 |
| UniVTG: Towards unified video-language temporal grounding | KQ Lin, P Zhang, J Chen, S Pramanick, D Gao, AJ Wang, R Yan, MZ Shou | Proceedings of the IEEE/CVF International Conference on Computer Vision … | 65 | 2023 |
| jiant 1.2: A software toolkit for research on general-purpose text understanding models | A Wang, IF Tenney, Y Pruksachatkun, K Yu, J Hula, P Xia, R Pappagari, ... | Note: http://jiant.info/ | 53 | 2019 |
| When less is more: Investigating data pruning for pretraining LLMs at scale | M Marion, A Üstün, L Pozzobon, A Wang, M Fadaee, S Hooker | arXiv preprint arXiv:2309.04564 | 50 | 2023 |
| A generalized framework of sequence generation with application to undirected sequence models | E Mansimov, A Wang, S Welleck, K Cho | arXiv preprint arXiv:1905.12790 | 45 | 2019 |
| SQuALITY: Building a long-document summarization dataset the hard way | A Wang, RY Pang, A Chen, J Phang, SR Bowman | arXiv preprint arXiv:2205.11465 | 40 | 2022 |
| jiant: A software toolkit for research on general-purpose text understanding models | Y Pruksachatkun, P Yeres, H Liu, J Phang, PM Htut, A Wang, I Tenney, ... | arXiv preprint arXiv:2003.02249 | 39 | 2020 |
| Position-guided text prompt for vision-language pre-training | J Wang, P Zhou, MZ Shou, S Yan | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern … | 30 | 2023 |
| What do NLP researchers believe? Results of the NLP community metasurvey | J Michael, A Holtzman, A Parrish, A Mueller, A Wang, A Chen, D Madaan, ... | arXiv preprint arXiv:2208.12852 | 29 | 2022 |
| Jen-1: Text-guided universal music generation with omnidirectional diffusion models | PP Li, B Chen, Y Yao, Y Wang, A Wang, A Wang | 2024 IEEE Conference on Artificial Intelligence (CAI), 762-769 | 23 | 2024 |
| Learning linguistic descriptors of user roles in online communities | A Wang, WL Hamilton, J Leskovec | Proceedings of the First Workshop on NLP and Computational Social Science, 76-85 | 23 | 2016 |
| Too large; data reduction for vision-language pre-training | AJ Wang, KQ Lin, DJ Zhang, SW Lei, MZ Shou | Proceedings of the IEEE/CVF International Conference on Computer Vision … | 17 | 2023 |