Wendelin Böhmer

Title · Cited by · Year
Autonomous learning of state representations for control: An emerging field aims to autonomously learn state representations for reinforcement learning agents from their real …
W Böhmer, JT Springenberg, J Boedecker, M Riedmiller, K Obermayer
KI-Künstliche Intelligenz 29 (4), 353-362, 2015
52 · 2015
Deep coordination graphs
W Böhmer, V Kurin, S Whiteson
International Conference on Machine Learning, 980-991, 2020
35 · 2020
Multi-agent common knowledge reinforcement learning
C Schroeder de Witt, J Foerster, G Farquhar, P Torr, W Boehmer, ...
Advances in Neural Information Processing Systems 32, 9927-9939, 2019
31 · 2019
Generalized off-policy actor-critic
S Zhang, W Boehmer, S Whiteson
arXiv preprint arXiv:1903.11329, 2019
29 · 2019
The effect of novelty on reinforcement learning
A Houillon, RC Lorenz, W Böhmer, MA Rapp, A Heinz, J Gallinat, ...
Progress in brain research 202, 415-439, 2013
29 · 2013
Construction of Approximation Spaces for Reinforcement Learning.
W Böhmer, S Grünewälder, Y Shen, M Musial, K Obermayer
Journal of Machine Learning Research 14 (7), 2013
28 · 2013
Neural systems for choice and valuation with counterfactual learning signals
MJ Tobia, R Guo, U Schwarze, W Böhmer, J Gläscher, B Finckh, ...
NeuroImage 89, 57-69, 2014
27 · 2014
Deep Multi-Agent Reinforcement Learning for Decentralized Continuous Cooperative Control
C Schroeder de Witt, B Peng, PA Kamienny, P Torr, W Böhmer, ...
arXiv preprint arXiv:2003.06709, 2020
22* · 2020
Generating feature spaces for linear algorithms with regularized sparse kernel slow feature analysis
W Böhmer, S Grünewälder, H Nickisch, K Obermayer
Machine Learning 89 (1-2), 67-86, 2012
20 · 2012
Multi-agent common knowledge reinforcement learning
JN Foerster, CAS de Witt, G Farquhar, PHS Torr, W Boehmer, S Whiteson
arXiv preprint arXiv:1810.11702, 51, 2018
18 · 2018
Regularized sparse kernel slow feature analysis
W Böhmer, S Grünewälder, H Nickisch, K Obermayer
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2011
17 · 2011
Optimistic exploration even with a pessimistic initialisation
T Rashid, B Peng, W Boehmer, S Whiteson
arXiv preprint arXiv:2002.12174, 2020
13 · 2020
Autonomous learning of state representations for control
W Böhmer, JT Springenberg, J Boedecker, M Riedmiller, K Obermayer
KI-Künstliche Intelligenz, 1-10, 2015
13 · 2015
Exploration with unreliable intrinsic reward in multi-agent reinforcement learning
W Böhmer, T Rashid, S Whiteson
arXiv preprint arXiv:1906.02138, 2019
12 · 2019
Interaction of instrumental and goal-directed learning modulates prediction error representations in the ventral striatum
R Guo, W Böhmer, M Hebart, S Chien, T Sommer, K Obermayer, ...
Journal of Neuroscience 36 (50), 12650-12660, 2016
12 · 2016
Deep residual reinforcement learning
S Zhang, W Boehmer, S Whiteson
arXiv preprint arXiv:1905.01072, 2019
9 · 2019
Multitask soft option learning
M Igl, A Gambardella, J He, N Nardelli, N Siddharth, W Böhmer, ...
Conference on Uncertainty in Artificial Intelligence, 969-978, 2020
8 · 2020
The impact of non-stationarity on generalisation in deep reinforcement learning
M Igl, G Farquhar, J Luketina, W Boehmer, S Whiteson
arXiv preprint arXiv:2006.05826, 2020
8 · 2020
Multi-agent hierarchical reinforcement learning with dynamic termination
D Han, W Boehmer, M Wooldridge, A Rogers
Pacific Rim International Conference on Artificial Intelligence, 80-92, 2019
7 · 2019
AI-QMIX: attention and imagination for dynamic multi-agent reinforcement learning
S Iqbal, CA Schroeder de Witt, B Peng, W Böhmer, S Whiteson, F Sha
arXiv preprint arXiv:2006.04222, 2020
5 · 2020