E. Akata, L. Schulz, J. Coda-Forno, S. J. Oh, M. Bethge, and E. Schulz. Playing repeated games with large language models. arXiv preprint arXiv:2305.16867, 2023. (Cited by 61)
J. Coda-Forno, K. Witte, A. K. Jagadish, M. Binz, Z. Akata, and E. Schulz. Inducing anxiety in large language models increases exploration and bias. arXiv preprint arXiv:2304.11111, 2023. (Cited by 36)
J. Coda-Forno, M. Binz, Z. Akata, M. Botvinick, J. Wang, and E. Schulz. Meta-in-context learning in large language models. Advances in Neural Information Processing Systems 36, 65189–65201, 2023. (Cited by 14)
J. Coda-Forno, M. Binz, J. X. Wang, and E. Schulz. CogBench: a large language model walks into a psychology lab. arXiv preprint arXiv:2402.18225, 2024.
A. K. Jagadish, J. Coda-Forno, M. Thalmann, E. Schulz, and M. Binz. Ecologically rational meta-learned inference explains human category learning. arXiv preprint arXiv:2402.01821, 2024.
J. Coda-Forno, C. Yu, Q. Guo, Z. Fountas, and N. Burgess. Leveraging Episodic Memory to Improve World Models for Reinforcement Learning. Memory in Artificial and Real Intelligence (MemARI) workshop at NeurIPS, 2022.