Abstract

Ensuring safety is important for the practical deployment of reinforcement learning (RL). Various challenges must be addressed, such as handling stochasticity in the environments, providing rigorous guarantees of persistent state-wise safety satisfaction, and avoiding overly conservative behaviors that sacrifice performance. We propose a new framework, Reachability Estimation for Safe Policy Optimization (RESPO), for safety-constrained RL in general stochastic settings. In the feasible set where there exist violation-free policies, we optimize for rewards while maintaining persistent safety. Outside this feasible set, our optimization produces the safest behavior by guaranteeing entrance into the feasible set whenever possible with the least cumulative discounted violations. We introduce a class of algorithms using our novel reachability estimation function to optimize in our proposed framework and in similar frameworks such as those concurrently handling multiple hard and soft constraints. We theoretically establish that our algorithms almost surely converge to locally optimal policies of our safe optimization framework. We evaluate the proposed methods on a diverse suite of safe RL environments from Safety Gym, PyBullet, and MuJoCo, and show that they improve both reward performance and safety compared with state-of-the-art baselines.
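As a rough schematic of this objective (the notation below is introduced only for illustration and may differ from the paper's exact formulation): write $r$ for the reward, $c \ge 0$ for the state-wise violation cost, and call a state feasible if some policy starting from it incurs zero violations. Per the abstract, the framework then solves

$$
\max_{\pi}\ \mathbb{E}_{\pi}\Big[\sum_{t} \gamma^{t} r(s_t)\Big]\ \ \text{s.t.}\ \ c(s_t) = 0\ \ \forall t \qquad \text{(feasible start states),}
$$
$$
\min_{\pi}\ \mathbb{E}_{\pi}\Big[\sum_{t} \gamma^{t} c(s_t)\Big] \qquad \text{(infeasible start states),}
$$

so that infeasible starts re-enter the feasible set whenever possible, with the least cumulative discounted violations.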

Overall algorithm:

[Figure: overview of the RESPO algorithm]
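Since the algorithm figure is not reproduced here, the following is a minimal Python sketch of the switching structure the abstract describes. Everything in it, including the reachability estimate phi, the feasibility threshold eps, and the function names, is an illustrative assumption rather than the paper's implementation.

import numpy as np

def respo_objective(phi, adv_reward, adv_cost, lam, eps=1e-3):
    # phi[i]: learned reachability estimate for state i under the current
    # policy; a value near 0 means the state is estimated to be feasible.
    # adv_reward / adv_cost: reward and violation-cost advantage estimates.
    # lam: Lagrange multiplier enforcing persistent safety on feasible states.
    feasible = phi < eps
    # Feasible states: maximize reward while penalizing any violations.
    # Infeasible states: purely minimize cumulative discounted violations,
    # which steers the policy back into the feasible set when possible.
    per_state = np.where(feasible, adv_reward - lam * adv_cost, -adv_cost)
    return per_state.mean()

# Toy usage with random data, for illustration only.
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2e-3, size=128)
adv_r = rng.normal(size=128)
adv_c = np.abs(rng.normal(size=128))
print(respo_objective(phi, adv_r, adv_c, lam=1.0))

In a full implementation, phi would be trained with the paper's iterative reachability estimation and lam updated by a Lagrangian method; this sketch only shows how the per-state objective switches between the two regimes.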

Results:

Safety Gym: Point Button

[Media: RESPO (proposed), RCRL, PPOLag, and FAC on Safety Gym Point Button]

Safety Gym: Car Goal

[Media: RESPO (proposed), RCRL, PPOLag, and FAC on Safety Gym Car Goal]

Multi-Drone Tunnel Navigation with Multiple Hard and Soft Constraints

[Media: RESPO (proposed), RCRL, PPOLag, and FAC on the multi-drone tunnel navigation task with multiple hard and soft constraints]

BibTeX

@inproceedings{ganai2023respo,
  author = {Ganai, Milan and Gong, Zheng and Yu, Chenning and Herbert, Sylvia and Gao, Sicun},
  booktitle = {Advances in Neural Information Processing Systems},
  editor = {A. Oh and T. Neumann and A. Globerson and K. Saenko and M. Hardt and S. Levine},
  pages = {69764--69797},
  publisher = {Curran Associates, Inc.},
  title = {Iterative Reachability Estimation for Safe Reinforcement Learning},
  url = {https://proceedings.neurips.cc/paper_files/paper/2023/file/dca63f2650fe9e88956c1b68440b8ee9-Paper-Conference.pdf},
  volume = {36},
  year = {2023}
}