Online dynamics model learning and control for robotics
Item status: Restricted Access
Embargo end date: 10/06/2023
Robotic systems of various sizes and types are not only becoming more prominent in research communities but are increasingly being pushed into industrial and business use. With the advent of more capable hardware, able to run for longer and withstand these industrial environments, we can begin to properly explore the uses of these robots in their given workspaces. To have robots perform in these areas, we require the ability to control them at a low level, enabling consistent and successful motion, before more complex tasks can be achieved with them. Whilst substantial research has been done on low-level control, these approaches typically rely heavily on a model of the robot. In particular, where we wish to control the forces on the robot, we require an accurate model of the forces caused by various physical effects on the robot, such as inertia and gravity. Obtaining these models has been a topic of research for many years, with various approaches discussed and tested. These approaches range from obtaining the model parameters from computer-aided design software, to data-driven approaches that use the likes of neural networks to learn the mapping from robot state to robot forces, to adaptive control frameworks that attempt to learn the dynamics model whilst providing stability guarantees. In this thesis, we first look deeper into the various approaches taken by previous researchers in the field of dynamics learning. In particular, we delve into the advantages of Semi-Parametric modelling techniques on robots, and we hypothesize that Semi-Parametric models can be extended to adaptive control frameworks, allowing for simultaneous control and learning with potential stability guarantees. Furthermore, we predict that this idea of adaptive control can be extended to more easily fit complex constrained robots, such as quadrupeds.
To prove and support these hypotheses, we address each one individually, starting with a deeper analysis of the previous research. Through this analysis, we identified a lack of software support for adaptive control, which would greatly slow future progress in the field. We therefore introduce the ARDL library, which has been specifically designed to support our work, and that of others, on adaptive control algorithms. We demonstrate the effectiveness of the library through the implementation of various adaptive controllers on a simulated manipulator; in each implementation we show the minimization of tracking and torque errors as appropriate to the controller. ARDL is then used to support the development of the subsequent chapters. We then shift our focus to exploring Semi-Parametric models. We look specifically at creating an online Semi-Parametric model that allows us to control the robot and learn the model simultaneously. We use a composite adaptive control algorithm to train the Parametric component, and a Gaussian Mixture Model, trained through an iterative algorithm, as the Non-Parametric component. We identify a key issue that arises when training both models simultaneously: an inconsistency between the two models when both are updated at the same time. Using a simulated robot, we demonstrate the instability that arises when this inconsistency is not compensated for. We then identify a general corrective term, which we specialize for the Gaussian Mixture Model. Using this new transform, we demonstrate, for the second joint, that we go from unstable behaviour, with tracking errors of up to +/-0.2 rad in position and +/-0.35 rad/s in velocity that grow over time, to stable behaviour that reduces and maintains the tracking error to within +/-0.05 rad in position and +/-0.1 rad/s in velocity. We also provide nMSE results over the whole trajectory for every joint.
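The inconsistency described above can be illustrated in miniature. The sketch below is not the thesis implementation: the regressor, the gradient step standing in for the composite adaptive law, and all names are hypothetical. It shows only the core effect, that updating the Parametric parameters shifts the residual the Non-Parametric component is asked to fit.

```python
import numpy as np

# Minimal illustrative sketch (not the thesis implementation): a
# Semi-Parametric torque model tau_hat = phi(x) @ theta + f_np(x), where
# theta stands in for the Parametric inertial parameters and f_np would be
# the Non-Parametric component fitting the residual.
rng = np.random.default_rng(0)

def phi(x):
    # Hypothetical 1-DoF regressor: [position, velocity] features.
    return np.array([x[0], x[1]])

def true_torque(x):
    # Linear part plus an unmodelled nonlinearity for f_np to absorb.
    return 2.0 * x[0] + 0.5 * x[1] + 0.3 * np.sin(x[0])

theta = np.zeros(2)       # Parametric parameter estimate
residual_targets = []     # targets the Non-Parametric model would fit

for _ in range(3):
    x = rng.uniform(-1.0, 1.0, size=2)
    tau = true_torque(x)
    # Simple gradient step standing in for the composite adaptive law:
    theta += 0.5 * (tau - phi(x) @ theta) * phi(x)
    # Because theta just changed, the residual left for the Non-Parametric
    # component has shifted too: this drift is the inconsistency that the
    # corrective transform in the thesis compensates for.
    residual_targets.append(tau - phi(x) @ theta)

print(theta, residual_targets)
```

Without a correction, the Non-Parametric model chases a moving target, which is the source of the instability observed in simulation.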
The results show improvements over every joint and every metric, with some joints and metrics improving by up to a factor of 100, such as the second-joint torque predictions, whose nMSE drops from 1 to 0.01. We then extend the consistency-transform framework to a Radial Basis Function Neural Network to further emphasise the generalisability of the consistency correction. In a similar fashion to the Gaussian Mixture Model, we update the Radial Basis Function Neural Network through a gradient-descent-based algorithm. Using this gradient-based update, we define a consistency transform that is itself computed through a gradient update. By updating this transform alongside the main model, at any step we know the correct transform to adjust the model to a change in the Parametric inertial parameters. Using the transform, we reconfirm our results with the Gaussian Mixture Model: the Radial Basis Function Neural Network demonstrates very similar online performance on a simulated Kuka LWR IV. In particular, after learning, the nMSE for the second joint drops to 2.052 x 10^-7 rad in position, 6.959 x 10^-7 rad/s in velocity, and 7.573 x 10^-7 Nm in torque. We also present an initial, rough stability proof for the algorithm, demonstrating where the consistency transform cancels out the inconsistency that arises during learning. Finally, we look into extending the model learning to a constrained floating-base system. Whilst some research exists on holonomically constrained systems, we implement an adaptive controller that needs minimal manual computation. We take the implementation of the direct adaptive control and extend it to the ANYmal quadruped platform and an underactuated two-link planar robot, using constraint projections and constraint-consistent errors to achieve this control. We are thus able to prove the Lyapunov stability of the task-space target.
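The gradient-descent update for a Radial Basis Function network mentioned above can be sketched as follows. This is an assumed structure, not ARDL's API: fixed Gaussian centres, a weight vector, and a squared-error gradient step; the consistency transform itself is omitted, as its exact form is specific to the thesis.

```python
import numpy as np

# Illustrative sketch (assumed structure, not ARDL's API): a Radial Basis
# Function network trained with plain gradient descent on squared error,
# the kind of update from which a gradient-based consistency transform
# can be derived.
np.random.seed(0)
centers = np.linspace(-1.0, 1.0, 7)   # fixed RBF centres on the input range
width = 0.4
weights = np.zeros(7)

def features(x):
    # Gaussian RBF activations for a scalar input x.
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def predict(x):
    return features(x) @ weights

for _ in range(500):
    x = np.random.uniform(-1.0, 1.0)
    target = np.sin(x)                # hypothetical residual to learn
    # Squared-error gradient step: dL/dw = -(target - y) * phi(x).
    weights += 0.2 * (target - predict(x)) * features(x)

print(abs(predict(0.5) - np.sin(0.5)))
```

Because the update is a simple linear-in-the-weights gradient step, the same machinery can be reused to propagate a change in the Parametric parameters into the network, which is what makes the transform cheap to maintain online.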
We then demonstrate the adaptive control on the simulated ANYmal and the planar robot; the results show a clear reduction of error whilst the model adapts in a similar fashion to the direct adaptive control on a fully actuated robot. Specifically, the error drops to 0.01 m and 0.01 rad for the pose, and 0.01 m/s and 0.01 rad/s for the Cartesian velocities, whilst the inertial parameters are learned.
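The constraint projection underlying this controller is a standard construction, sketched below; the thesis adds adaptation and constraint-consistent errors on top of it, which are not shown, and the Jacobian here is hypothetical.

```python
import numpy as np

# Illustrative sketch of a constraint projection (a standard technique;
# the thesis controller layers adaptation on top of it): for a constraint
# Jacobian Jc with Jc @ qdot = 0, the projector P = I - pinv(Jc) @ Jc
# maps any joint velocity onto the constraint-consistent subspace.
Jc = np.array([[1.0, 0.0, 1.0]])   # hypothetical 1x3 constraint Jacobian
P = np.eye(3) - np.linalg.pinv(Jc) @ Jc

qdot = np.array([0.3, -0.2, 0.1])
qdot_c = P @ qdot                  # constraint-consistent velocity

# The projected velocity satisfies the constraint (up to numerics):
print(Jc @ qdot_c)
```

Working with projected quantities is what lets the same direct adaptive law carry over from a fully actuated arm to constrained floating-base systems such as a quadruped in stance.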