Reinforcement Learning for Humanoid Robots - Policy Gradients and Beyond
Date
07/2004
Author
Vijayakumar, Sethu
Peters, Jan
Schaal, Stefan
Abstract
Reinforcement learning offers one of the most general frameworks to take traditional robotics towards true autonomy
and versatility. However, applying reinforcement learning to high dimensional movement systems like humanoid
robots remains an unsolved problem. In this paper, we discuss different approaches of reinforcement learning in terms
of their applicability in humanoid robotics. Methods can be coarsely classified in to three different categories, i.e.,
greedy methods, ’vanilla’ policy gradient methods, and natural gradient methods. We discuss that greedy methods are
not likely to scale into the domain humanoid robotics as they are problematic when used with function approximation.
Vanilla’ policy gradient methods on the other hand have been successfully applied on real-world robots including at
least one humanoid robot [3]. We demonstrate that these methods can be significantly improved using the natural
policy gradient instead of the regular policy gradient. A derivation of the natural policy gradient is provided, proving
that the average policy gradient of Kakade [10] is indeed the true natural gradient. A general algorithm for estimating
the natural gradient, the Natural Actor-Critic algorithm, is introduced. This algorithm converges to the nearest local
minimum of the cost function with respect to the Fisher information metric under suitable conditions. The algorithm
far outperforms non-natural policy gradients in a cart-pole balancing evaluation and in learning non-linear dynamic
motor primitives for humanoid robot control. It offers a promising route for the development of reinforcement
learning for truly high-dimensional, continuous state-action systems.
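As a brief illustration of the central quantity referred to above (a standard formulation, not quoted from the paper itself), the natural policy gradient rescales the regular policy gradient by the inverse Fisher information matrix of the policy:

\widetilde{\nabla}_\theta J(\theta) = F(\theta)^{-1} \nabla_\theta J(\theta),
\qquad
F(\theta) = \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(a \mid s)\, \nabla_\theta \log \pi_\theta(a \mid s)^{\top} \right],

where \pi_\theta is the parameterized policy and J(\theta) the expected return. The Natural Actor-Critic estimates this update direction from sampled trajectories rather than forming and inverting F(\theta) explicitly.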