Leslie Kaelbling
MIT Computer Science and Artificial Intelligence Laboratory (CSAIL)
Verified email at csail.mit.edu
Title · Cited by · Year
Reinforcement learning: A survey
LP Kaelbling, ML Littman, AW Moore
Journal of Artificial Intelligence Research 4, 237-285, 1996
Cited by 8042 · 1996
Planning and acting in partially observable stochastic domains
LP Kaelbling, ML Littman, AR Cassandra
Artificial Intelligence 101 (1-2), 99-134, 1998
Cited by 4076 · 1998
Learning in embedded systems
LP Kaelbling
MIT press, 1993
Cited by 859 · 1993
Acting optimally in partially observable stochastic domains
AR Cassandra, LP Kaelbling, ML Littman
AAAI 94, 1023-1028, 1994
Cited by 818 · 1994
Learning policies for partially observable environments: Scaling up
ML Littman, AR Cassandra, LP Kaelbling
Machine Learning Proceedings 1995, 362-370, 1995
Cited by 799 · 1995
Acting under uncertainty: Discrete Bayesian models for mobile-robot navigation
AR Cassandra, LP Kaelbling, JA Kurien
Proceedings of IEEE/RSJ International Conference on Intelligent Robots and …, 1996
Cited by 691 · 1996
On the complexity of solving Markov decision problems
ML Littman, TL Dean, LP Kaelbling
arXiv preprint arXiv:1302.4971, 2013
Cited by 607 · 2013
An architecture for intelligent reactive systems
LP Kaelbling
Reasoning about actions and plans, 395-410, 1987
Cited by 486 · 1987
Hierarchical task and motion planning in the now
LP Kaelbling, T Lozano-Pérez
2011 IEEE International Conference on Robotics and Automation, 1470-1477, 2011
Cited by 484 · 2011
Effective reinforcement learning for mobile robots
WD Smart, LP Kaelbling
Proceedings 2002 IEEE International Conference on Robotics and Automation …, 2002
Cited by 478 · 2002
The synthesis of digital machines with provable epistemic properties
SJ Rosenschein, LP Kaelbling
Theoretical aspects of reasoning about knowledge, 83-98, 1986
Cited by 466 · 1986
To transfer or not to transfer
MT Rosenstein, Z Marx, LP Kaelbling, TG Dietterich
NIPS 2005 workshop on transfer learning 898, 1-4, 2005
Cited by 376 · 2005
Input generalization in delayed reinforcement learning: An algorithm and performance comparisons
D Chapman, LP Kaelbling
IJCAI 91, 726-731, 1991
Cited by 366 · 1991
Hierarchical solution of Markov decision processes using macro-actions
M Hauskrecht, N Meuleau, LP Kaelbling, TL Dean, C Boutilier
arXiv preprint arXiv:1301.7381, 2013
Cited by 353 · 2013
Learning to cooperate via policy search
L Peshkin, KE Kim, N Meuleau, LP Kaelbling
arXiv preprint cs/0105032, 2001
Cited by 341 · 2001
Action and planning in embedded agents
LP Kaelbling, SJ Rosenschein
Robotics and autonomous systems 6 (1-2), 35-48, 1990
Cited by 341 · 1990
Planning under time constraints in stochastic domains
T Dean, LP Kaelbling, J Kirman, A Nicholson
Artificial Intelligence 76 (1-2), 35-74, 1995
Cited by 328 · 1995
Practical reinforcement learning in continuous spaces
WD Smart, LP Kaelbling
ICML, 903-910, 2000
Cited by 318 · 2000
Belief space planning assuming maximum likelihood observations
R Platt Jr, R Tedrake, L Kaelbling, T Lozano-Pérez
Robotics: Science and Systems, 2010
Cited by 302 · 2010
Learning topological maps with weak local odometric information
H Shatkay, LP Kaelbling
IJCAI (2), 920-929, 1997
Cited by 290 · 1997