Title: Towards Safe Reinforcement Learning
Speaker: Andreas Krause
Abstract: Reinforcement learning has seen stunning empirical breakthroughs. At its heart is the challenge of trading off exploration — collecting data to learn better models — against exploitation — using the learned models to make good decisions. In many applications, however, exploration is a potentially dangerous proposition, as it requires experimenting with actions that have unknown consequences. Hence, most prior work has confined exploration to simulated environments. In this talk, I will formalize the problem of safe exploration as one of optimizing an unknown function subject to unknown constraints. Both the objective and the constraints are revealed through noisy experiments, and safety requires that no infeasible action is chosen at any point. I will present an approach that uses Bayesian inference over the objective and constraints, which — under some regularity assumptions — is guaranteed to be both safe and complete, i.e., to converge to a natural notion of the reachable optimum. I will also show experiments on safe automatic parameter tuning of robotic platforms.
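The selection rule sketched in the abstract can be illustrated in code. The following is a minimal, hypothetical sketch (not the speaker's implementation) of a SafeOpt-style step: a Gaussian process posterior yields confidence bounds, candidates whose pessimistic lower bound violates a safety threshold `h` are excluded, and among the certifiably safe points the one with the largest optimistic upper bound is evaluated next. The kernel, grid, threshold, and confidence parameter `beta` here are all illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel matrix between two sets of 1-D points
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-3):
    # Standard GP regression: posterior mean and std dev on a 1-D grid
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_grid, x_obs)
    alpha = np.linalg.solve(K, y_obs)
    mu = Ks @ alpha
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 1e-12, None)
    return mu, np.sqrt(var)

def safe_opt_step(x_obs, y_obs, x_grid, h=0.0, beta=2.0):
    """One SafeOpt-style step (illustrative): restrict candidates to
    points whose lower confidence bound satisfies the safety threshold
    f(x) >= h, then pick the safe point with the largest upper bound."""
    mu, sd = gp_posterior(x_obs, y_obs, x_grid)
    lower, upper = mu - beta * sd, mu + beta * sd
    safe = lower >= h  # pessimistic safety certificate
    if not safe.any():
        raise RuntimeError("no certifiably safe candidates")
    idx = np.where(safe)[0]
    return x_grid[idx[np.argmax(upper[idx])]], safe
```

Starting from a single known-safe observation, the rule keeps all queries inside the region certified safe by the lower bound, while the upper-bound criterion pushes sampling toward the boundary of that region — this is how the safe set expands toward the reachable optimum.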
Andreas Krause is an Associate Professor of Computer Science at ETH Zurich, where he leads the Learning & Adaptive Systems Group. He also serves as Academic Co-Director of the Swiss Data Science Center. Before that, he was an Assistant Professor of Computer Science at Caltech. He received his Ph.D. in Computer Science from Carnegie Mellon University (2008) and his Diplom in Computer Science and Mathematics from the Technical University of Munich, Germany (2004). He is a Microsoft Research Faculty Fellow and has received an ERC Starting Investigator grant, the Deutscher Mustererkennungspreis, the Okawa Foundation Research Grant recognizing top young researchers in telecommunications, and the ETH Golden Owl teaching award. His research on machine learning and adaptive systems has received awards at several premier conferences and journals.