dc.description.abstract | Synthesising motion for human character animation or humanoid robots is vastly complicated
by the large number of degrees of freedom in their kinematics. Control spaces
become so large that automated methods designed to adaptively generate movements
become computationally infeasible or fail to find acceptable solutions.
In this thesis we investigate how demonstrations of previously successful movements
can be used to inform the production of new movements that are adapted to
new situations. In particular, we evaluate the use of nonlinear dimensionality reduction
techniques to find compact representations of demonstrations, and investigate how
these can simplify the synthesis of new movements.
We focus on the Gaussian Process Latent Variable Model (GPLVM) because it
has been shown to capture the nonlinearities present in the kinematics of robots and humans.
We present an in-depth analysis of the underlying theory, which results in an alternative
approach to initialising the GPLVM based on Multidimensional Scaling. We show that
the new initialisation is better suited than PCA to nonlinear, synthetic data, but
note that its advantage diminishes on motion data.
Subsequently, we show that incorporating additional structural constraints
leads to low-dimensional representations that are sufficiently regular that,
once learned, dynamic movement primitives can be adapted to new situations
without relearning. Finally, in a number of experiments in which movements
are generated for bimanual reaching, we demonstrate that, through the use of nonlinear
dimensionality reduction, reinforcement learning can be scaled up to optimise humanoid movements. | en