Harm van Seijen
Microsoft Research
Verified email at microsoft.com - Homepage
Title
Cited by
Year
Hybrid reward architecture for reinforcement learning
H Van Seijen, M Fatemi, J Romoff, R Laroche, T Barnes, J Tsang
Advances in Neural Information Processing Systems 30, 2017
Cited by 241 · 2017
Reducing network agnostophobia
AR Dhamija, M Günther, T Boult
Advances in Neural Information Processing Systems 31, 2018
Cited by 240 · 2018
A theoretical and empirical analysis of Expected Sarsa
H Van Seijen, H Van Hasselt, S Whiteson, M Wiering
2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement …, 2009
Cited by 233 · 2009
True online TD(λ)
H Seijen, R Sutton
International Conference on Machine Learning, 692-700, 2014
Cited by 118 · 2014
True online temporal-difference learning
H Van Seijen, AR Mahmood, PM Pilarski, MC Machado, RS Sutton
The Journal of Machine Learning Research 17 (1), 5057-5096, 2016
Cited by 106 · 2016
Systematic generalisation with group invariant predictions
F Ahmed, Y Bengio, H Van Seijen, A Courville
International Conference on Learning Representations, 2021
Cited by 66 · 2021
A Deeper Look at Planning as Learning from Replay
H van Seijen, RS Sutton
International Conference on Machine Learning, 2015
Cited by 64 · 2015
Planning by prioritized sweeping with small backups
H Van Seijen, R Sutton
International Conference on Machine Learning, 361-369, 2013
Cited by 53* · 2013
Using a logarithmic mapping to enable lower discount factors in reinforcement learning
H Van Seijen, M Fatemi, A Tavakoli
Advances in Neural Information Processing Systems 32, 2019
Cited by 28 · 2019
Exploiting Best-Match Equations for Efficient Reinforcement Learning
H van Seijen, S Whiteson, H van Hasselt, M Wiering
Journal of Machine Learning Research 12 (6), 2011
Cited by 26 · 2011
Multi-advisor reinforcement learning
R Laroche, M Fatemi, J Romoff, H van Seijen
arXiv preprint arXiv:1704.00756, 2017
Cited by 22 · 2017
On value function representation of long horizon problems
L Lehnert, R Laroche, H van Seijen
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
Cited by 20 · 2018
Effective multi-step temporal-difference learning for non-linear function approximation
H van Seijen
arXiv preprint arXiv:1608.05151, 2016
Cited by 18 · 2016
Efficient abstraction selection in reinforcement learning
H van Seijen, S Whiteson, L Kester
Computational Intelligence 30 (4), 657-699, 2014
Cited by 16 · 2014
Modular lifelong reinforcement learning via neural composition
JA Mendez, H van Seijen, E Eaton
arXiv preprint arXiv:2207.00429, 2022
Cited by 15 · 2022
Learning invariances for policy generalization
R Tachet, P Bachman, H van Seijen
arXiv preprint arXiv:1809.02591, 2018
Cited by 14 · 2018
Dead-ends and secure exploration in reinforcement learning
M Fatemi, S Sharma, H Van Seijen, SE Kahou
International Conference on Machine Learning, 1873-1881, 2019
Cited by 13 · 2019
Separation of concerns in reinforcement learning
H van Seijen, M Fatemi, J Romoff, R Laroche
arXiv preprint arXiv:1612.05159, 2016
Cited by 12 · 2016
Forward actor-critic for nonlinear function approximation in reinforcement learning
V Veeriah, H van Seijen, RS Sutton
Proceedings of the 16th Conference on Autonomous Agents and MultiAgent …, 2017
Cited by 10 · 2017
Switching between representations in reinforcement learning
H van Seijen, S Whiteson, L Kester
Interactive Collaborative Information Systems, 65-84, 2010
Cited by 10 · 2010
Articles 1–20