3D data fusion by depth refinement and pose recovery
Abstract
Fusing depth maps from different sources into a single, more accurate depth map, and rigidly aligning point clouds captured from different views, are two core techniques in 3D data fusion. Existing depth fusion algorithms do not provide a general framework for obtaining highly accurate depth maps. Furthermore, existing rigid point cloud registration algorithms do not always align noisy point clouds robustly and accurately, especially in the presence of many outliers and large occlusions. In this thesis, we present a general depth fusion framework based on supervised, semi-supervised, and unsupervised adversarial network approaches, and we show that the depth maps it produces are more accurate than the source depth maps. We also develop a new rigid point cloud registration algorithm that aligns two uncertainty-based Gaussian mixture models representing the structures of the two point clouds, and we show that it registers rigid point clouds more accurately over a larger range of initial perturbations than prior methods. Finally, the new supervised depth fusion algorithm and the new rigid point cloud registration algorithm are integrated into the ROS system of a real gardening robot (TrimBot) for practical use in real environments. All the proposed algorithms have been evaluated on multiple existing datasets and shown to outperform prior work in the field.