On accuracy, robustness and reliability of laser-based localization
Abstract
Legged robots are expected to demonstrate autonomous skills in situations which
are not suitable for wheeled platforms, such as cluttered disaster areas and outdoor
trails.
State estimation strategies for legged robots are typically based on inertial sensing
and leg kinematics, and are affected by continuous drift. In a process called localization,
perception sensors such as cameras or laser scanners can be used to compensate
for this drift via scene registration.
The objective of this dissertation is to develop laser-based localization techniques
which are suitable for the safe and continuous operation of wheeled and legged
robots in real-world environments. We explore approaches that fuse inertial, kinematic,
stereo vision and laser signal sources in various combinations, in order to enable state
estimation in the presence of challenges such as limited sensor field-of-view, varying
lighting conditions, occlusions and non-uniform spatial overlap during motion.
A number of novel contributions are presented, including: a method for laser-based
localization which is robust to large variations in spatial overlap and initial alignment
error; a state estimation system which fuses multiple sensor sources in a manner
suitable for a quadruped's feedback controller; and a module which can predict
localization failure. The final contribution explores the applicability of deep learning
techniques to laser odometry.
We carry out extensive experimental evaluations running in real time in closed-loop
control on the NASA Valkyrie humanoid robot and the IIT quadruped HyQ, while
exploring a variety of test environments. We run further experiments using a dataset
collected with the Boston Dynamics humanoid Atlas during the DARPA Robotics
Challenge finals, as well as on two mobile robots, using our own and publicly available
datasets.