Andrew Kyle Lampinen
Research Scientist, DeepMind
Cited by
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Can language models learn from explanations in context?
AK Lampinen, I Dasgupta, SCY Chan, K Matthewson, MH Tessler, ...
arXiv preprint arXiv:2204.02329, 2022
Data distributional properties drive emergent in-context learning in transformers
S Chan, A Santoro, A Lampinen, J Wang, A Singh, P Richemond, ...
Advances in Neural Information Processing Systems 35, 18878-18891, 2022
Language models show humanlike content effects on reasoning tasks
AK Lampinen, I Dasgupta, SC Chan, HR Sheahan, A Creswell, ...
arXiv preprint arXiv:2207.07051, 2022
What shapes feature representations? Exploring datasets, architectures, and training
KL Hermann, AK Lampinen
Advances in Neural Information Processing Systems, 2020
Environmental drivers of systematicity and generalization in a situated agent
F Hill, A Lampinen, R Schneider, S Clark, M Botvinick, JL McClelland, ...
arXiv preprint arXiv:1910.00571, 2019
An analytic theory of generalization dynamics and transfer learning in deep linear networks
AK Lampinen, S Ganguli
7th International Conference on Learning Representations (ICLR 2019), 2019
Automated curricula through setter-solver interactions
S Racaniere, AK Lampinen, A Santoro, DP Reichert, V Firoiu, TP Lillicrap
8th International Conference on Learning Representations (ICLR 2020), 2020
Integration of new information in memory: new insights from a complementary learning systems perspective
JL McClelland, BL McNaughton, AK Lampinen
Philosophical Transactions of the Royal Society B 375 (1799), 20190637, 2020
Semantic exploration from language abstractions and pretrained representations
A Tam, N Rabinowitz, A Lampinen, NA Roy, S Chan, DJ Strouse, J Wang, ...
Advances in Neural Information Processing Systems 35, 25377-25389, 2022
Improving the replicability of psychological science through pedagogy
RXD Hawkins, EN Smith, C Au, JM Arias, R Catapano, E Hermann, M Keil, ...
Advances in Methods and Practices in Psychological Science 1 (1), 7-18, 2018
Symbolic behaviour in artificial intelligence
A Santoro, A Lampinen, K Mathewson, T Lillicrap, D Raposo
arXiv preprint arXiv:2102.03406, 2021
Towards mental time travel: a hierarchical memory for reinforcement learning agents
A Lampinen, S Chan, A Banino, F Hill
Advances in Neural Information Processing Systems 34, 28182-28195, 2021
Symbol tuning improves in-context learning in language models
J Wei, L Hou, A Lampinen, X Chen, D Huang, Y Tay, X Chen, Y Lu, ...
arXiv preprint arXiv:2305.08298, 2023
Tell me why! Explanations support learning relational and causal structure
AK Lampinen, N Roy, I Dasgupta, SCY Chan, A Tam, J McClelland, C Yan, ...
International Conference on Machine Learning, 11868-11890, 2022
Transformers generalize differently from information stored in context vs in weights
SCY Chan, I Dasgupta, J Kim, D Kumaran, AK Lampinen, F Hill
arXiv preprint arXiv:2210.05675, 2022
Can language models handle recursively nested grammatical structures? A case study on comparing models and humans
AK Lampinen
arXiv preprint arXiv:2210.15303, 2022
One-shot and few-shot learning of word embeddings
AK Lampinen, JL McClelland
arXiv preprint arXiv:1710.10280, 2017
Getting aligned on representational alignment
I Sucholutsky, L Muttenthaler, A Weller, A Peng, A Bobu, B Kim, BC Love, ...
arXiv preprint arXiv:2310.13018, 2023
Transforming task representations to perform novel tasks
AK Lampinen, JL McClelland
Proceedings of the National Academy of Sciences 117 (52), 32970-32981, 2020