Imitation Learning for Stylized Physics-Based Character Control
Abstract
In computer graphics, a heavily researched topic is the physical simulation of characters that can exhibit fluid, life-like motion. In recent years, imitation and reinforcement learning techniques have become popular approaches for training such controllers due to their flexibility, generality, and adaptability. One such example is DeepMimic, a data-driven framework that combines motion clips with modern reinforcement learning methods to train control policies for simulated characters, producing a wide variety of natural motions. Adversarial Motion Priors (AMP) extends this framework with adversarial imitation learning, enabling characters to imitate motions from a large unstructured dataset of reference clips without the need for explicit synchronization. In this thesis, we adapt the Adversarial Motion Priors framework to be compatible with OpenAI Gym environments. In doing so, a wide variety of RL algorithms can be tested on the framework. Finally, we demonstrate the use of this environment by evaluating Model-based Imitation Learning, a purely offline imitation learning algorithm that tackles the covariate shift issue common in behavior cloning, a classic offline imitation learning method.
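Making the framework Gym-compatible amounts to exposing the character simulation through the standard Gym interface (`reset` and `step`), which is what lets off-the-shelf RL implementations interact with it. The following is a minimal, hypothetical sketch of such a wrapper; the class name, dimensions, and internals are illustrative assumptions, not code from the thesis.

```python
import numpy as np

class CharacterImitationEnv:
    """Hypothetical wrapper exposing a physics-based character task
    through the classic Gym interface: reset() -> obs,
    step(action) -> (obs, reward, done, info)."""

    def __init__(self, episode_length=100, obs_dim=8, act_dim=3):
        self.episode_length = episode_length
        self.obs_dim = obs_dim  # illustrative; a real character state is much larger
        self.act_dim = act_dim
        self._t = 0

    def reset(self):
        # A real implementation would reset the physics simulation,
        # e.g. initializing the character's pose from a reference clip.
        self._t = 0
        return np.zeros(self.obs_dim, dtype=np.float32)

    def step(self, action):
        # In an AMP-style setup, the reward would come from a learned
        # discriminator scoring state transitions against the motion
        # dataset; here it is a placeholder.
        self._t += 1
        obs = np.zeros(self.obs_dim, dtype=np.float32)
        reward = 0.0
        done = self._t >= self.episode_length
        return obs, reward, done, {}
```

Because any algorithm written against this interface only calls `reset` and `step`, the same environment can be driven by online RL methods or used to collect transitions for offline methods such as behavior cloning.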