Delft University of Technology
Bayesian Reinforcement Learning in Factored POMDPs
Bayesian approaches provide a principled solution to the exploration-exploitation trade-off in Reinforcement Learning. Typical approaches, however, either assume a fully observable environment or scale poorly. This work introduces the Factored Bayes-Adaptive POMDP model, a framework that is able to exploit the underlying structure while learning the dynamics of partially observable systems. We also present a belief tracking method to approximate the joint posterior over state and model variables, and an adaptation of the Monte-Carlo Tree Search solution method, which together are capable of solving the underlying problem near-optimally. Our method can learn efficiently either given a known factorization or while learning the factorization and the model parameters simultaneously. We demonstrate that this approach outperforms current methods and tackles problems that were previously infeasible.
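To make the "joint posterior over state and model variables" concrete, the sketch below shows one standard way such a belief can be tracked: a particle filter in which each particle carries both a state hypothesis and Dirichlet counts for the unknown transition model. This is a minimal illustration of the general Bayes-adaptive idea, not the method from the talk; the problem setup (two states, a fixed noisy observation model) and all names (`Particle`, `update_belief`) are illustrative assumptions.

```python
import random
from collections import Counter

random.seed(0)

N_STATES = 2
OBS_NOISE = 0.15  # fixed, known observation model: P(obs != state) = 0.15
                  # (illustrative assumption; only the dynamics are learned here)

class Particle:
    """One joint hypothesis: a state plus Dirichlet counts for the transitions."""
    def __init__(self, state, counts):
        self.state = state
        self.counts = counts  # counts[s][s'] -> Dirichlet pseudo-counts

    def sample_next_state(self):
        """Sample s' from the transition distribution implied by this
        particle's own counts (the expected model under its posterior)."""
        row = self.counts[self.state]
        r = random.random() * sum(row)
        for s2, c in enumerate(row):
            r -= c
            if r <= 0:
                return s2
        return N_STATES - 1

def obs_likelihood(obs, state):
    return 1.0 - OBS_NOISE if obs == state else OBS_NOISE

def update_belief(particles, obs, n_particles):
    """One step of sequential importance resampling over (state, model):
    resample a hypothesis, simulate it forward under its own model,
    accept in proportion to the observation likelihood, and update counts."""
    new_particles = []
    while len(new_particles) < n_particles:
        p = random.choice(particles)
        s2 = p.sample_next_state()
        if random.random() < obs_likelihood(obs, s2):  # rejection step
            counts = [row[:] for row in p.counts]
            counts[p.state][s2] += 1  # Bayesian update of the model belief
            new_particles.append(Particle(s2, counts))
    return new_particles

# Usage: start from a uniform prior and filter an observation sequence.
belief = [Particle(state=random.randrange(N_STATES),
                   counts=[[1.0, 1.0], [1.0, 1.0]])
          for _ in range(200)]
for obs in [0, 0, 1]:
    belief = update_belief(belief, obs, n_particles=200)

state_marginal = Counter(p.state for p in belief)
print(state_marginal)
```

Because each particle updates its own counts, the filter maintains correlations between state and model uncertainty, which is exactly what a factored representation would then exploit at scale.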
Frans A. Oliehoek (1981) is Associate Professor at Delft University of Technology. He received his Ph.D. in Computer Science (2010) and his M.Sc. in Artificial Intelligence (2005), both from the University of Amsterdam (UvA). He has worked at MIT (2010-2012), Maastricht University (2012-2013), UvA (2014-2017), and the University of Liverpool (2014-2018). Frans' research interests lie at the intersection of machine learning and AI, with an emphasis on multiagent systems.