Proc. IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS 2010), Taiwan (2010).

dc.contributor.author: Bitzer, Sebastian
dc.contributor.author: Howard, Matthew
dc.contributor.author: Vijayakumar, Sethu
dc.date.accessioned: 2010-08-18T11:16:34Z
dc.date.available: 2010-08-18T11:16:34Z
dc.date.issued: 2010
dc.identifier.uri: http://hdl.handle.net/1842/3644
dc.description.abstract: Reinforcement learning in the high-dimensional, continuous spaces typical in robotics remains a challenging problem. To overcome this challenge, a popular approach has been to use demonstrations to find an appropriate initialisation of the policy, in an attempt to reduce the number of iterations needed to find a solution. Here, we present an alternative way to incorporate prior knowledge from demonstrations of individual postures into learning, by extracting the inherent problem structure to find an efficient state representation. In particular, we use probabilistic, nonlinear dimensionality reduction to capture latent constraints present in the data. By learning policies in the learnt latent space, we are able to solve the planning problem in a reduced space that automatically satisfies task constraints. As shown in our experiments, this reduces the exploration needed and greatly accelerates learning. We demonstrate our approach by learning a bimanual reaching task on the 19-DOF KHR-1HV humanoid.
dc.language.iso: en
dc.subject: Informatics
dc.subject: Computer Science
dc.subject: Robotics
dc.title: Using Dimensionality Reduction to Exploit Constraints in Reinforcement Learning
dc.type: Conference Paper
rps.title: Proc. IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS 2010), Taiwan (2010).
dc.extent.noOfPages: 7
dc.date.updated: 2010-08-18T11:16:35Z
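
As a rough illustration of the approach described in the abstract, the following minimal Python sketch learns a low-dimensional latent space from demonstrated postures and then searches for a policy in that space, so that exploration automatically respects the constraints present in the data. This is a sketch under stated assumptions, not the authors' implementation: plain linear PCA stands in for the probabilistic, nonlinear dimensionality reduction used in the paper (e.g. a GPLVM), and the toy data, reward function and hill-climbing search are illustrative; all names and parameters are hypothetical.

    # Sketch: learn a latent space from demonstrated postures, then search
    # for a policy in that space (PCA stands in for the paper's probabilistic,
    # nonlinear dimensionality reduction; data and reward are toy examples).
    import numpy as np

    rng = np.random.default_rng(0)

    # --- Demonstrated postures: N joint configurations of a D-DOF robot ---
    D, d, N = 19, 2, 200                 # 19 DOF as in the KHR-1HV example
    W_true = rng.normal(size=(D, d))     # hidden constraint (toy data only)
    Z_demo = rng.normal(size=(N, d))
    X_demo = Z_demo @ W_true.T           # postures lie on a d-dim manifold

    # --- Dimensionality reduction (PCA stand-in for a GPLVM) --------------
    mean = X_demo.mean(axis=0)
    U, S, Vt = np.linalg.svd(X_demo - mean, full_matrices=False)
    W = Vt[:d].T                         # D x d map: latent -> joint space

    def decode(z):
        """Map a latent point back to a full joint posture."""
        return mean + z @ W.T

    # --- Policy search in the latent space ---------------------------------
    target = decode(np.array([1.5, -0.5]))   # a reachable goal posture

    def reward(z):
        return -np.linalg.norm(decode(z) - target)

    z = np.zeros(d)                      # explore only d dims, not all D
    for _ in range(500):                 # simple hill climbing as a stand-in
        cand = z + 0.1 * rng.normal(size=d)
        if reward(cand) > reward(z):
            z = cand

    print("final posture error:", np.linalg.norm(decode(z) - target))

The point of the construction is that the search explores only the d latent dimensions rather than all D joint angles, while every candidate decoded from the latent space satisfies the constraints captured from the demonstrations; this is the source of the reduced exploration and faster learning the abstract claims.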

