Abstract

The deployment of Reinforcement Learning in robotics applications faces the difficulty of reward engineering. Approaches have therefore focused on creating reward functions by Learning from Observations (LfO), the task of learning policies from expert trajectories that contain only state sequences. We propose new LfO methods for the important class of continuous control problems of learning to stabilize, introducing intermediate proxy models, based on Lyapunov stability theory, that act as reward functions between the expert and the agent policy. Our LfO training process consists of two steps. The first step learns a Lyapunov-like landscape proxy model from expert state sequences without access to any kinematics model; the second step uses the learned landscape model to guide the training of the learner's policy. We formulate novel learning objectives for the two steps that are important for overall training success. We evaluate our methods on real automobile robots and on other simulated stabilization control problems in model-free settings, such as Quadrotor control and maintaining upright positions of Hopper in MuJoCo. We compare with state-of-the-art approaches and show that the proposed methods learn efficiently with fewer expert observations.
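The two-step process above can be illustrated with a minimal sketch. This is not the paper's actual objective or architecture; it is a hypothetical toy in which the landscape is restricted to a quadratic form V(s) = sᵀPs with P = LLᵀ, fitted so that V decreases along consecutive expert states, and then used as a proxy reward. All names (`proxy_reward`, the hinge margin, the toy linear expert) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expert" demonstrations: state sequences of a stable linear system
# spiraling into the origin. The learner never sees the dynamics matrix A,
# only the observed states, matching the LfO setting.
A = np.array([[0.9, 0.1], [-0.1, 0.9]])
trajectories = []
for _ in range(20):
    s = rng.normal(size=2)
    traj = [s.copy()]
    for _ in range(30):
        s = A @ s
        traj.append(s.copy())
    trajectories.append(np.array(traj))

# Step 1: fit a Lyapunov-like landscape V(s) = s^T P s with P = L L^T
# (symmetric PSD by construction, so V >= 0 and V(0) = 0), penalizing
# violations of the decrease condition V(s') - V(s) + margin <= 0
# along consecutive expert states (s, s').
L_chol = 0.1 * rng.normal(size=(2, 2))
lr = 0.1
pairs = [(s, s_next)
         for traj in trajectories
         for s, s_next in zip(traj[:-1], traj[1:])]
for _ in range(300):
    P = L_chol @ L_chol.T
    grad = np.zeros_like(L_chol)
    for s, s_next in pairs:
        margin = 0.01 * (s @ s)  # require strict decrease away from origin
        if s_next @ P @ s_next - s @ P @ s + margin > 0:
            # gradient of the violated hinge term w.r.t. L:
            # d/dL [x^T (L L^T) x] = 2 * outer(x, x) @ L
            grad += 2.0 * (np.outer(s_next, s_next) - np.outer(s, s)) @ L_chol
    L_chol -= lr * grad / len(pairs)

P = L_chol @ L_chol.T

# Step 2: the learned landscape acts as a proxy reward between expert and
# learner: descending V is rewarded, which encourages stabilizing behavior.
def proxy_reward(s, s_next):
    return float(s @ P @ s - s_next @ P @ s_next)
```

By construction P is symmetric positive semidefinite, so the learned V is nonnegative and zero at the equilibrium; the proxy reward is then positive exactly when a transition moves downhill on the learned landscape.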

Overall algorithm:

Results:

Path Tracking: comparison of LSO-LLPM (Proposed) against the GAIfO, Quadratic Lyapunov, and Lyapunov Risk baselines.

BibTeX

@inproceedings{ganai2023lsollpm,
  author={Ganai, Milan and Hirayama, Chiaki and Chang, Ya-Chien and Gao, Sicun},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  title={Learning Stabilization Control from Observations by Learning Lyapunov-like Proxy Models},
  year={2023},
  pages={2913-2920}
}