Learning dynamic motor skills for terrestrial locomotion
dc.contributor.advisor
Li, Zhibin
dc.contributor.advisor
Komura, Taku
dc.contributor.author
Yang, Chuanyu
dc.contributor.sponsor
Engineering and Physical Sciences Research Council (EPSRC)
en
dc.date.accessioned
2021-09-14T15:52:07Z
dc.date.available
2021-09-14T15:52:07Z
dc.date.issued
2020-11-30
dc.description.abstract
The use of Deep Reinforcement Learning (DRL) has received significantly increased attention
from researchers in the robotics field following the success of AlphaGo, which demonstrated
the superhuman capability of deep reinforcement learning algorithms to solve complex
tasks by beating professional Go players. Since then, a growing number of researchers
have investigated the potential of DRL for solving complex, high-dimensional robotic tasks,
such as legged locomotion, arm manipulation, and grasping, which are difficult to solve
using conventional optimization approaches.
Understanding and recreating the various modes of terrestrial locomotion has been of long-standing interest to roboticists. A wide variety of applications, such as rescue missions,
disaster response, and scientific expeditions, strongly demand mobility and versatility in legged
locomotion to enable task completion. To create useful physical robots, it is necessary
to design controllers that synthesize the complex locomotion behaviours observed in humans
and other animals.
In the past, legged locomotion was mainly achieved via analytical engineering approaches.
However, conventional analytical approaches have their limitations, as they require considerable
human effort and expert knowledge. Machine learning approaches, such as DRL,
require less human effort than analytical approaches. The project conducted for this
thesis explores the feasibility of using DRL to acquire control policies comparable to, or better
than, those obtained through analytical approaches while requiring less human effort.
In this doctoral thesis, we developed a Multi-Expert Learning Architecture (MELA) that
uses DRL to learn multi-skill control policies capable of synthesizing a diverse set of dynamic
locomotion behaviours for legged robots. We first proposed a novel DRL framework for the
locomotion of humanoid robots. The proposed learning framework is capable of acquiring
robust and dynamic motor skills for humanoids, including balancing, walking, and standing-up
fall recovery. We subsequently improved upon this learning framework, designed a novel
multi-expert learning architecture capable of fusing multiple motor skills together in
a seamless fashion, and ultimately deployed this framework on a real quadrupedal robot. The
successful deployment of learned control policies on a real quadrupedal robot demonstrates
the feasibility of using an Artificial Intelligence (AI) based approach for real robot motion control.
en
dc.identifier.uri
https://hdl.handle.net/1842/38041
dc.identifier.uri
http://dx.doi.org/10.7488/era/1312
dc.language.iso
en
en
dc.publisher
The University of Edinburgh
en
dc.relation.hasversion
Yang, Chuanyu, Taku Komura, and Zhibin Li. "Emergence of human-comparable balancing behaviours by deep reinforcement learning." 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids). IEEE, 2017.
en
dc.relation.hasversion
Yang, Chuanyu, Kai Yuan, Wolfgang Merkt, Taku Komura, Sethu Vijayakumar, and Zhibin Li. "Learning whole-body motor skills for humanoids." 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids). IEEE, 2018.
en
dc.relation.hasversion
Yang, Chuanyu, Kai Yuan, Shuai Heng, Taku Komura, and Zhibin Li. "Learning natural locomotion behaviors for humanoid robots using human bias." IEEE Robotics and Automation Letters (2020).
en
dc.relation.hasversion
Song, Doo Re, Chuanyu Yang, Christopher McGreavy, and Zhibin Li. "Recurrent deterministic policy gradient method for bipedal locomotion on rough terrain challenge." 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV). IEEE, 2018.
en
dc.relation.hasversion
Yuan, Kai, Christopher McGreavy, Chuanyu Yang, Wouter Wolfslag, and Zhibin Li. "Decoding Motor Skills of Artificial Intelligence and Human Policies: A Study on Humanoid and Human Balance Control." IEEE Robotics & Automation Magazine (2020).
en
dc.relation.hasversion
Sun, Zhaole, Kai Yuan, Wenbin Hu, Chuanyu Yang, and Zhibin Li. "Learning Pregrasp Manipulation of Objects from Ungraspable Poses." 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020.
en
dc.subject
deep reinforcement learning
en
dc.subject
legged locomotion
en
dc.subject
bipedal robot
en
dc.subject
quadrupedal robot
en
dc.subject
robotics
en
dc.title
Learning dynamic motor skills for terrestrial locomotion
en
dc.type
Thesis or Dissertation
en
dc.type.qualificationlevel
Doctoral
en
dc.type.qualificationname
PhD Doctor of Philosophy
en
Files
Original bundle
- Name:
- Yang2020.pdf
- Size:
- 43.65 MB
- Format:
- Adobe Portable Document Format
- Description: