Julian Coda-Forno
ELLIS, Helmholtz AI Munich/LMU
Verified email at helmholtz-munich.de - Homepage
Title
Cited by
Year
Playing repeated games with large language models
E Akata, L Schulz, J Coda-Forno, SJ Oh, M Bethge, E Schulz
arXiv preprint arXiv:2305.16867, 2023
Cited by 59 · 2023
Inducing anxiety in large language models increases exploration and bias
J Coda-Forno, K Witte, AK Jagadish, M Binz, Z Akata, E Schulz
arXiv preprint arXiv:2304.11111, 2023
Cited by 33 · 2023
Meta-in-context learning in large language models
J Coda-Forno, M Binz, Z Akata, M Botvinick, J Wang, E Schulz
Advances in Neural Information Processing Systems 36, 2024
Cited by 13 · 2024
CogBench: a large language model walks into a psychology lab
J Coda-Forno, M Binz, JX Wang, E Schulz
arXiv preprint arXiv:2402.18225, 2024
2024
Ecologically rational meta-learned inference explains human category learning
AK Jagadish, J Coda-Forno, M Thalmann, E Schulz, M Binz
arXiv preprint arXiv:2402.01821, 2024
2024
Leveraging Episodic Memory to Improve World Models for Reinforcement Learning
J Coda-Forno, C Yu, Q Guo, Z Fountas, N Burgess
Memory in Artificial and Real Intelligence (MemARI), NeurIPS workshop, 2022
2022