Thomas Lampe
DeepMind
Verified email at google.com
Title
Cited by
Year
Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards
M Vecerik, T Hester, J Scholz, F Wang, O Pietquin, B Piot, N Heess, ...
arXiv preprint arXiv:1707.08817, 2017
472 · 2017
Learning by playing - solving sparse reward tasks from scratch
M Riedmiller, R Hafner, T Lampe, M Neunert, J Degrave, T Wiele, V Mnih, ...
International conference on machine learning, 4344-4353, 2018
311 · 2018
Data-efficient deep reinforcement learning for dexterous manipulation
I Popov, N Heess, T Lillicrap, R Hafner, G Barth-Maron, M Vecerik, ...
arXiv preprint arXiv:1704.03073, 2017
234 · 2017
Keep doing what worked: Behavioral modelling priors for offline reinforcement learning
NY Siegel, JT Springenberg, F Berkenkamp, A Abdolmaleki, M Neunert, ...
arXiv preprint arXiv:2002.08396, 2020
138 · 2020
Acquiring visual servoing reaching and grasping skills using neural reinforcement learning
T Lampe, M Riedmiller
The 2013 international joint conference on neural networks (IJCNN), 1-8, 2013
76 · 2013
Continuous-discrete reinforcement learning for hybrid control in robotics
M Neunert, A Abdolmaleki, M Wulfmeier, T Lampe, T Springenberg, ...
Conference on Robot Learning, 735-751, 2020
37 · 2020
Self-supervised sim-to-real adaptation for visual robotic manipulation
R Jeong, Y Aytar, D Khosid, Y Zhou, J Kay, T Lampe, K Bousmalis, F Nori
2020 IEEE international conference on robotics and automation (ICRA), 2718-2724, 2020
34 · 2020
Imagined value gradients: Model-based policy optimization with transferable latent dynamics models
A Byravan, JT Springenberg, A Abdolmaleki, R Hafner, M Neunert, ...
Conference on Robot Learning, 566-589, 2020
29 · 2020
Approximate model-assisted neural fitted Q-iteration
T Lampe, M Riedmiller
2014 International Joint Conference on Neural Networks (IJCNN), 2698-2704, 2014
29 · 2014
A brain-computer interface for high-level remote control of an autonomous, reinforcement-learning-based robotic system for reaching and grasping
T Lampe, LDJ Fiederer, M Voelker, A Knorr, M Riedmiller, T Ball
Proceedings of the 19th international conference on Intelligent User …, 2014
29 · 2014
Simultaneously learning vision and feature-based control policies for real-world ball-in-a-cup
D Schwab, T Springenberg, MF Martins, T Lampe, M Neunert, ...
arXiv preprint arXiv:1902.04706, 2019
24 · 2019
Regularized hierarchical policies for compositional transfer in robotics
M Wulfmeier, A Abdolmaleki, R Hafner, JT Springenberg, M Neunert, ...
20 · 2019
Compositional transfer in hierarchical reinforcement learning
M Wulfmeier, A Abdolmaleki, R Hafner, JT Springenberg, M Neunert, ...
arXiv preprint arXiv:1906.11228, 2019
19 · 2019
Data-efficient hindsight off-policy option learning
M Wulfmeier, D Rao, R Hafner, T Lampe, A Abdolmaleki, T Hertweck, ...
International Conference on Machine Learning, 11340-11350, 2021
14 · 2021
Beyond pick-and-place: Tackling robotic stacking of diverse shapes
AX Lee, CM Devin, Y Zhou, T Lampe, K Bousmalis, JT Springenberg, ...
5th Annual Conference on Robot Learning, 2021
14 · 2021
Modeling effects of intrinsic and extrinsic rewards on the competition between striatal learning systems
J Boedecker, T Lampe, M Riedmiller
Frontiers in psychology 4, 739, 2013
14 · 2013
Modelling generalized forces with reinforcement learning for sim-to-real transfer
R Jeong, J Kay, F Romano, T Lampe, T Rothorl, A Abdolmaleki, T Erez, ...
arXiv preprint arXiv:1910.09471, 2019
13 · 2019
Representation matters: improving perception and exploration for robotics
M Wulfmeier, A Byravan, T Hertweck, I Higgins, A Gupta, T Kulkarni, ...
2021 IEEE International Conference on Robotics and Automation (ICRA), 6512-6519, 2021
6 · 2021
CLS2: Closed loop simulation system, 2012
M Riedmiller, M Blum, T Lampe
URL http://ml.informatik.uni-freiburg.de/research/clsquare
5
Data-efficient reinforcement learning for continuous control tasks
M Riedmiller, R Hafner, M Vecerik, TP Lillicrap, T Lampe, I Popov, ...
US Patent 10,664,725, 2020
4 · 2020