Nicolas Heess
DeepMind
Verified email address at google.com
Title
Cited by
Year
Continuous control with deep reinforcement learning
TP Lillicrap, JJ Hunt, A Pritzel, N Heess, T Erez, Y Tassa, D Silver, ...
arXiv preprint arXiv:1509.02971, 2015
Cited by 17440 · 2015
Deterministic policy gradient algorithms
D Silver, G Lever, N Heess, T Degris, D Wierstra, M Riedmiller
ICML, 2014
Cited by 5398 · 2014
Recurrent models of visual attention
V Mnih, N Heess, A Graves
Advances in neural information processing systems, 2204-2212, 2014
Cited by 4880 · 2014
Relational inductive biases, deep learning, and graph networks
PW Battaglia, JB Hamrick, V Bapst, A Sanchez-Gonzalez, V Zambaldi, ...
arXiv preprint arXiv:1806.01261, 2018
Cited by 3857 · 2018
Emergence of locomotion behaviours in rich environments
N Heess, S Sriram, J Lemmon, J Merel, G Wayne, Y Tassa, T Erez, ...
arXiv preprint arXiv:1707.02286, 2017
Cited by 1146 · 2017
Feudal networks for hierarchical reinforcement learning
AS Vezhnevets, S Osindero, T Schaul, N Heess, M Jaderberg, D Silver, ...
Proceedings of the 34th International Conference on Machine Learning-Volume …, 2017
Cited by 1096 · 2017
Sample efficient actor-critic with experience replay
Z Wang, V Bapst, N Heess, V Mnih, R Munos, K Kavukcuoglu, ...
arXiv preprint arXiv:1611.01224, 2016
Cited by 1026 · 2016
A Generalist Agent
S Reed, K Zolna, E Parisotto, SG Colmenarejo, A Novikov, G Barth-Maron, ...
arXiv preprint arXiv:2205.06175, 2022
Cited by 878 · 2022
Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards
M Večerík, T Hester, J Scholz, F Wang, O Pietquin, B Piot, N Heess, ...
arXiv preprint arXiv:1707.08817, 2017
Cited by 822 · 2017
Imagination-augmented agents for deep reinforcement learning
T Weber, S Racanière, DP Reichert, L Buesing, A Guez, DJ Rezende, ...
arXiv preprint arXiv:1707.06203, 2017
Cited by 728* · 2017
Graph networks as learnable physics engines for inference and control
A Sanchez-Gonzalez, N Heess, JT Springenberg, J Merel, M Riedmiller, ...
arXiv preprint arXiv:1806.01242, 2018
Cited by 726 · 2018
Learning continuous control policies by stochastic value gradients
N Heess, G Wayne, D Silver, T Lillicrap, T Erez, Y Tassa
Advances in Neural Information Processing Systems, 2944-2952, 2015
Cited by 683 · 2015
Distributed distributional deterministic policy gradients
G Barth-Maron, MW Hoffman, D Budden, W Dabney, D Horgan, A Muldal, ...
arXiv preprint arXiv:1804.08617, 2018
Cited by 662 · 2018
Sim-to-real robot learning from pixels with progressive nets
AA Rusu, M Vecerik, T Rothörl, N Heess, R Pascanu, R Hadsell
arXiv preprint arXiv:1610.04286, 2016
Cited by 644 · 2016
Continuous control with deep reinforcement learning. arXiv 2015
TP Lillicrap, JJ Hunt, A Pritzel, N Heess, T Erez, Y Tassa, D Silver, ...
arXiv preprint arXiv:1509.02971, 1935
Cited by 633 · 1935
Distral: Robust multitask reinforcement learning
Y Teh, V Bapst, WM Czarnecki, J Quan, J Kirkpatrick, R Hadsell, N Heess, ...
Advances in Neural Information Processing Systems, 4496-4506, 2017
Cited by 622 · 2017
Attend, infer, repeat: Fast scene understanding with generative models
SMA Eslami, N Heess, T Weber, Y Tassa, D Szepesvari, GE Hinton
Advances in Neural Information Processing Systems, 3225-3233, 2016
Cited by 603 · 2016
Maximum a posteriori policy optimisation
A Abdolmaleki, JT Springenberg, Y Tassa, R Munos, N Heess, ...
arXiv preprint arXiv:1806.06920, 2018
Cited by 524 · 2018
Learning by playing-solving sparse reward tasks from scratch
M Riedmiller, R Hafner, T Lampe, M Neunert, J Degrave, T Van de Wiele, ...
arXiv preprint arXiv:1802.10567, 2018
Cited by 499 · 2018
Imagination-augmented agents for deep reinforcement learning
S Racanière, T Weber, D Reichert, L Buesing, A Guez, DJ Rezende, ...
Advances in neural information processing systems, 5690-5701, 2017
Cited by 481 · 2017
Articles 1–20