Minqi Jiang
Prioritized level replay
M Jiang, E Grefenstette, T Rocktäschel
International Conference on Machine Learning, 4940-4950, 2021
Minihack the planet: A sandbox for open-ended reinforcement learning research
M Samvelyan, R Kirk, V Kurin, J Parker-Holder, M Jiang, E Hambro, ...
NeurIPS 2021 Datasets and Benchmarks, 2021
Evolving Curricula with Regret-Based Environment Design
J Parker-Holder*, M Jiang*, M Dennis, M Samvelyan, J Foerster, ...
International Conference on Machine Learning, 2022
Replay-Guided Adversarial Environment Design
M Jiang*, M Dennis*, J Parker-Holder, J Foerster, E Grefenstette, ...
NeurIPS 2021, 2021
Motion responsive user interface for realtime language translation
AJ Cuthbert, JJ Estelle, MR Hughes, S Goyal, MS Jiang
US Patent 9,355,094, 2016
WordCraft: An Environment for Benchmarking Commonsense Agents
M Jiang, J Luketina, N Nardelli, P Minervini, PHS Torr, S Whiteson, ...
Language in Reinforcement Learning Workshop at ICML 2020, 2020
Improving intrinsic exploration with language abstractions
J Mu, V Zhong, R Raileanu, M Jiang, N Goodman, T Rocktäschel, ...
NeurIPS 2022, 2022
Insights from the NeurIPS 2021 NetHack Challenge
E Hambro, S Mohanty, D Babaev, M Byeon, D Chakraborty, ...
NeurIPS 2021 Competitions and Demonstrations Track, 41-52, 2022
Resolving causal confusion in reinforcement learning via robust exploration
C Lyle, A Zhang, M Jiang, J Pineau, Y Gal
Self-Supervision for Reinforcement Learning Workshop-ICLR 2021, 2021
Exploration via Elliptical Episodic Bonuses
M Henaff, R Raileanu, M Jiang, T Rocktäschel
NeurIPS 2022, 2022
Grid-to-Graph: Flexible Spatial Relational Inductive Biases for Reinforcement Learning
Z Jiang, P Minervini, M Jiang, T Rocktäschel
AAMAS 2021 (Oral), 2021
MAESTRO: Open-ended environment design for multi-agent reinforcement learning
M Samvelyan, A Khan, M Dennis, M Jiang, J Parker-Holder, J Foerster, ...
International Conference on Learning Representations 2023, 2023
GriddlyJS: A Web IDE for Reinforcement Learning
C Bamford, M Jiang, M Samvelyan, T Rocktäschel
NeurIPS 2022 Datasets and Benchmarks, 2022
A Study of Off-Policy Learning in Environments with Procedural Content Generation
A Ehrenberg, R Kirk, M Jiang, E Grefenstette, T Rocktäschel
ICLR Workshop on Agent Learning in Open-Endedness, 2022
General Intelligence Requires Rethinking Exploration
M Jiang, T Rocktäschel, E Grefenstette
arXiv preprint arXiv:2211.07819, 2022
Grounding Aleatoric Uncertainty for Unsupervised Environment Design
M Jiang, M Dennis, J Parker-Holder, A Lupu, H Küttler, E Grefenstette, ...
NeurIPS 2022, 2022
Integrating Episodic and Global Bonuses for Efficient Exploration
M Henaff, M Jiang, R Raileanu
International Conference on Machine Learning, 2023
Return Dispersion as an Estimator of Learning Potential for Prioritized Level Replay
I Korshunova, M Jiang, J Parker-Holder, T Rocktäschel, E Grefenstette
I (Still) Can't Believe It's Not Better! NeurIPS 2021 Workshop, 2021