Arkil Patel
Grad Student, Mila and McGill University
Verified email at mila.quebec
Title | Cited by | Year
Are NLP Models really able to Solve Simple Math Word Problems?
A Patel, S Bhattamishra, N Goyal
NAACL, 2021
Cited by: 365 | Year: 2021
On the computational power of transformers and its implications in sequence modeling
S Bhattamishra, A Patel, N Goyal
CoNLL, 2020
Cited by: 49 | Year: 2020
Vehiclechain: blockchain-based vehicular data transmission scheme for smart city
A Patel, N Shah, T Limbasiya, D Das
IEEE SMC, 2019
Cited by: 23 | Year: 2019
Revisiting the Compositional Generalization Abilities of Neural Sequence Models
A Patel, S Bhattamishra, P Blunsom, N Goyal
ACL, 2022
Cited by: 20 | Year: 2022
Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions
S Bhattamishra, A Patel, V Kanade, P Blunsom
ACL, 2023
Cited by: 16 | Year: 2023
Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions
S Bhattamishra, A Patel, P Blunsom, V Kanade
ICLR, 2023
Cited by: 9 | Year: 2023
When Can Transformers Ground and Compose: Insights from Compositional Generalization Benchmarks
A Sikarwar, A Patel, N Goyal
EMNLP, 2022
Cited by: 7 | Year: 2022
MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations
A Patel, S Bhattamishra, S Reddy, D Bahdanau
EMNLP, 2023
Cited by: 3 | Year: 2023
Evaluating In-Context Learning of Libraries for Code Generation
A Patel, S Reddy, D Bahdanau, P Dasigi
NAACL, 2024
Cited by: 1 | Year: 2024
Universal Adversarial Triggers Are Not Universal
N Meade, A Patel, S Reddy
arXiv preprint arXiv:2404.16020, 2024
Year: 2024
Articles 1–10