Formal Methods for Safe Reinforcement Learning

An ETAPS Workshop

April 2021 | Luxembourg, LU & Online


As deep end-to-end reinforcement learning moves from the lab into real-world applications, ensuring the correctness and robustness of learned controllers is a paramount concern. Safe reinforcement learning requires overcoming several fundamental hurdles that are not considered in standard reinforcement learning settings. This ETAPS workshop will bring together academics, industry researchers, and practitioners to delve into the current state of safe reinforcement learning and establish future research objectives for the community.

Call for Presentations

We invite extended abstracts (max two pages) describing proposed presentations. Topics of interest include, but are not limited to:

  • convergence properties of formally constrained RL algorithms;
  • reward construction/reward shaping for formally constrained RL algorithms;
  • benchmarks and environments for evaluating safe reinforcement learning algorithms;
  • formal models of failure modes for computer vision systems;
  • efficient sampling methods for constrained state/action spaces;
  • experiment design for constrained learning;
  • verification, synthesis, and runtime monitoring tools for large and/or probabilistic transition systems;
  • applications of safe reinforcement learning in control, IoT, finance, cloud computing, and other domains;
  • educational materials for modules or courses on safe RL.
Deadline: January 30, 2021
Submission Page

Workshop Organizers

Alessandro Abate
(University of Oxford)
Roderick Bloem
(Graz University of Technology)
Nathan Fulton
(MIT-IBM AI Lab)