Benjamin Eysenbach
CMU, Google
Verified email at google.com - Homepage
Title · Cited by · Year
Diversity is all you need: Learning skills without a reward function
B Eysenbach, A Gupta, J Ibarz, S Levine
International Conference on Learning Representations, 2019
Cited by 392 · 2019
Clustervision: Visual supervision of unsupervised clustering
BC Kwon, B Eysenbach, J Verma, K Ng, C De Filippi, WF Stewart, A Perer
IEEE Transactions on Visualization and Computer Graphics 24 (1), 142-151, 2017
Cited by 98 · 2017
Search on the replay buffer: Bridging planning and reinforcement learning
B Eysenbach, R Salakhutdinov, S Levine
Advances in Neural Information Processing Systems, 15246-15257, 2019
Cited by 97 · 2019
Self-consistent trajectory autoencoder: Hierarchical reinforcement learning with trajectory embeddings
JD Co-Reyes, YX Liu, A Gupta, B Eysenbach, P Abbeel, S Levine
International Conference on Machine Learning, 2018
Cited by 87 · 2018
Efficient exploration via state marginal matching
L Lee, B Eysenbach, E Parisotto, E Xing, S Levine, R Salakhutdinov
arXiv preprint arXiv:1906.05274, 2019
Cited by 82 · 2019
Unsupervised meta-learning for reinforcement learning
A Gupta, B Eysenbach, C Finn, S Levine
arXiv preprint arXiv:1806.04640, 2018
Cited by 76 · 2018
Leave No Trace: Learning to reset for safe and autonomous reinforcement learning
B Eysenbach, S Gu, J Ibarz, S Levine
International Conference on Learning Representations, 2018
Cited by 74 · 2018
Unsupervised curricula for visual meta-reinforcement learning
A Jabri, K Hsu, A Gupta, B Eysenbach, S Levine, C Finn
Advances in Neural Information Processing Systems, 2019
Cited by 34 · 2019
Learning to reach goals without reinforcement learning
D Ghosh, A Gupta, J Fu, A Reddy, C Devin, B Eysenbach, S Levine
Cited by 23* · 2019
If MaxEnt RL is the Answer, What is the Question?
B Eysenbach, S Levine
arXiv preprint arXiv:1910.01913, 2019
Cited by 21 · 2019
Learning to be safe: Deep RL with a safety critic
K Srinivasan, B Eysenbach, S Ha, J Tan, C Finn
arXiv preprint arXiv:2010.14603, 2020
Cited by 18 · 2020
Rewriting history with inverse RL: Hindsight inference for policy improvement
B Eysenbach, X Geng, S Levine, R Salakhutdinov
arXiv preprint arXiv:2002.11089, 2020
Cited by 17 · 2020
Maximum entropy RL (provably) solves some robust RL problems
B Eysenbach, S Levine
arXiv preprint arXiv:2103.06257, 2021
Cited by 13 · 2021
Model-Based Visual Planning with Self-Supervised Functional Distances
S Tian, S Nair, F Ebert, S Dasari, B Eysenbach, C Finn, S Levine
arXiv preprint arXiv:2012.15373, 2020
Cited by 9 · 2020
C-learning: Learning to achieve goals via recursive classification
B Eysenbach, R Salakhutdinov, S Levine
arXiv preprint arXiv:2011.08909, 2020
Cited by 9 · 2020
f-IRL: Inverse reinforcement learning via state marginal matching
T Ni, H Sikchi, Y Wang, T Gupta, L Lee, B Eysenbach
arXiv preprint arXiv:2011.04709, 2020
Cited by 7 · 2020
Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers
B Eysenbach, S Asawa, S Chaudhari, S Levine, R Salakhutdinov
arXiv preprint arXiv:2006.13916, 2020
Cited by 7 · 2020
Who is mistaken?
B Eysenbach, C Vondrick, A Torralba
arXiv preprint arXiv:1612.01175, 2016
Cited by 7 · 2016
Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills
Y Chebotar, K Hausman, Y Lu, T Xiao, D Kalashnikov, J Varley, A Irpan, ...
arXiv preprint arXiv:2104.07749, 2021
Cited by 6 · 2021
ViNG: Learning open-world navigation with visual goals
D Shah, B Eysenbach, G Kahn, N Rhinehart, S Levine
2021 IEEE International Conference on Robotics and Automation (ICRA), 13215 …, 2021
Cited by 5 · 2021