Are NLP Models really able to Solve Simple Math Word Problems? A. Patel, S. Bhattamishra, N. Goyal. NAACL, 2021. Cited by 365.
On the Computational Power of Transformers and Its Implications in Sequence Modeling. S. Bhattamishra, A. Patel, N. Goyal. CoNLL, 2020. Cited by 49.
VehicleChain: Blockchain-based Vehicular Data Transmission Scheme for Smart City. A. Patel, N. Shah, T. Limbasiya, D. Das. IEEE SMC, 2019. Cited by 23.
Revisiting the Compositional Generalization Abilities of Neural Sequence Models. A. Patel, S. Bhattamishra, P. Blunsom, N. Goyal. ACL, 2022. Cited by 20.
Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions. S. Bhattamishra, A. Patel, V. Kanade, P. Blunsom. ACL, 2023. Cited by 16.
Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions. S. Bhattamishra, A. Patel, P. Blunsom, V. Kanade. ICLR, 2023. Cited by 9.
When Can Transformers Ground and Compose: Insights from Compositional Generalization Benchmarks. A. Sikarwar, A. Patel, N. Goyal. EMNLP, 2022. Cited by 7.
MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations. A. Patel, S. Bhattamishra, S. Reddy, D. Bahdanau. EMNLP, 2023. Cited by 3.
Evaluating In-Context Learning of Libraries for Code Generation. A. Patel, S. Reddy, D. Bahdanau, P. Dasigi. NAACL, 2024. Cited by 1.
Universal Adversarial Triggers Are Not Universal. N. Meade, A. Patel, S. Reddy. arXiv preprint arXiv:2404.16020, 2024.