Richard Klima
Verified email at liverpool.ac.uk
Title | Cited by | Year
Online learning methods for border patrol resource allocation
R Klíma, C Kiekintveld, V Lisý
International Conference on Decision and Game Theory for Security, 340-349, 2014
18 | 2014
Combining online learning and equilibrium computation in security games
R Klíma, V Lisý, C Kiekintveld
International Conference on Decision and Game Theory for Security, 130-149, 2015
17 | 2015
Space debris removal: A game theoretic analysis
R Klima, D Bloembergen, R Savani, K Tuyls, D Hennes, D Izzo
Games 7 (3), 20, 2016
14 | 2016
Markov Security Games: Learning in Spatial Security Problems
R Klima, K Tuyls, F Oliehoek
NIPS Workshop on Learning, Inference and Control of Multi-Agent Systems, 2016
12 | 2016
Robust temporal difference learning for critical domains
R Klima, D Bloembergen, M Kaisers, K Tuyls
arXiv preprint arXiv:1901.08021, 2019
4 | 2019
Space debris removal: Learning to cooperate and the price of anarchy
R Klima, D Bloembergen, R Savani, K Tuyls, A Wittig, A Sapera, D Izzo
Frontiers in Robotics and AI 5, 54, 2018
4 | 2018
Model-Based Reinforcement Learning under Periodical Observability
R Klima, K Tuyls, F Oliehoek
AAAI Spring Symposium on Learning, Inference, and Control of Multi-Agent Systems, 2018
3 | 2018
Learning robust policies when losing control
R Klima, D Bloembergen, M Kaisers, K Tuyls
Adaptive Learning Agents (ALA) workshop at AAMAS, 2018
2 | 2018
Game theoretic analysis of the space debris dilemma
R Klima, D Bloembergen, R Savani, K Tuyls, D Hennes, D Izzo, ...
Tech. rep., Final Report, ESA Ariadna Study 15/8401, 2016
2 | 2016
Towards Learning to Best Respond when Losing Control
R Klima, D Bloembergen, M Kaisers, K Tuyls
European Workshop on Reinforcement Learning (EWRL), 2018
1 | 2018
Multi-Agent Learning for Security and Sustainability
R Klima
University of Liverpool, 2019
2019
Combining online learning and equilibrium computation in security games - Thesis
R Klíma
Master's Thesis, 2015
2015