Using Dimensionality Reduction to Exploit Constraints in Reinforcement Learning
dc.contributor.author: Bitzer, Sebastian
dc.contributor.author: Howard, Matthew
dc.contributor.author: Vijayakumar, Sethu
dc.date.accessioned: 2010-08-18T11:16:34Z
dc.date.available: 2010-08-18T11:16:34Z
dc.date.issued: 2010
dc.date.updated: 2010-08-18T11:16:35Z
dc.description.abstract: Reinforcement learning in the high-dimensional, continuous spaces typical in robotics remains a challenging problem. To overcome this challenge, a popular approach has been to use demonstrations to find an appropriate initialisation of the policy, in an attempt to reduce the number of iterations needed to find a solution. Here, we present an alternative way to incorporate prior knowledge from demonstrations of individual postures into learning, by extracting the inherent problem structure to find an efficient state representation. In particular, we use probabilistic, nonlinear dimensionality reduction to capture latent constraints present in the data. By learning policies in the learnt latent space, we are able to solve the planning problem in a reduced space that automatically satisfies task constraints. As shown in our experiments, this reduces the exploration needed and greatly accelerates learning. We demonstrate our approach by learning a bimanual reaching task on the 19-DOF KHR-1HV humanoid.
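The core idea stated in the abstract, learning a low-dimensional latent space from demonstrated postures so that exploration stays on the constraint manifold, can be sketched as follows. This is a minimal illustration on synthetic data, using linear PCA as a simple stand-in for the probabilistic, nonlinear dimensionality reduction used in the paper (e.g. a GPLVM); the data, dimensionalities, and helper names here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical demonstration data: 50 postures of a 19-DOF humanoid that
# actually vary along only 2 latent directions (an implicit task constraint).
rng = np.random.default_rng(0)
latent_true = rng.standard_normal((50, 2))
mixing = rng.standard_normal((2, 19))
postures = latent_true @ mixing + 0.01 * rng.standard_normal((50, 19))

# Linear dimensionality reduction (PCA via SVD), standing in for the
# probabilistic, nonlinear method of the paper.
mean = postures.mean(axis=0)
_, _, Vt = np.linalg.svd(postures - mean, full_matrices=False)
d = 2                # chosen latent dimensionality
basis = Vt[:d]       # (d, 19) latent-to-joint linear map

def to_latent(q):
    """Project a full 19-DOF posture into the d-dimensional latent space."""
    return (q - mean) @ basis.T

def to_joints(z):
    """Decode a latent point back to a full posture. Any point decoded this
    way lies on the subspace spanned by the demonstrations, so it respects
    the constraints captured in the data."""
    return z @ basis + mean

# RL exploration now happens in 2 dimensions instead of 19:
z = to_latent(postures[0])
q_explored = to_joints(z + 0.1 * rng.standard_normal(d))
```

A policy learnt over `z` searches a 2-dimensional space, and every action it proposes decodes to a full 19-DOF posture consistent with the demonstrations, which is why the reduced representation cuts down the exploration needed.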
dc.extent.noOfPages: 7
dc.identifier.uri: http://hdl.handle.net/1842/3644
dc.language.iso: en
dc.subject: Informatics
dc.subject: Computer Science
dc.subject: Robotics
dc.title: Using Dimensionality Reduction to Exploit Constraints in Reinforcement Learning
dc.type: Conference Paper
rps.title: Proc. IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS 2010), Taiwan (2010).
Files:
- Using Dimensionality Reduction to Exploit Constraints in Reinforcement Learning.pdf (1.85 MB, Adobe Portable Document Format)