Visual SLAM for forest environments
dc.contributor.advisor
Webb, Barbara
dc.contributor.advisor
Fisher, Bob
dc.contributor.author
Garforth, James
dc.contributor.sponsor
Engineering and Physical Sciences Research Council (EPSRC)
en
dc.date.accessioned
2024-05-24T13:59:19Z
dc.date.available
2024-05-24T13:59:19Z
dc.date.issued
2024-05-24
dc.description.abstract
The objective of this dissertation is to investigate visual navigation for robotics in the forest domain. Visual Simultaneous Localisation and Mapping (SLAM) is a core technology for the deployment of robotic systems in many use cases, allowing a robot to build a map of an unknown environment while also keeping track of its own location. Visual SLAM is particularly valuable in settings where access to external positioning data, such as the Global Positioning System (GPS), is unreliable or entirely unavailable. This is true of indoor environments, such as offices, against which most SLAM systems are therefore evaluated. However, other GPS-denied natural environments, such as forests, where canopy cover can block satellite signals, have been left relatively unexplored.
We seek to determine the factors that set the forest environment apart from those more typically explored in SLAM research, establish the applicability of existing systems to these challenging conditions, and investigate processing techniques that might aid the adaptation of visual SLAM to forest scenes. We take an approach that aims to identify and quantify challenging factors absent from classic SLAM datasets, and to establish how these factors influence performance, rather than simply presenting a system well tuned to this task but no more robust in general.
The first contribution in this thesis is a study comparing the performance of state-of-the-art monocular visual SLAM systems on new and existing forest datasets, which confirms the challenging nature of the task. We also propose a suite of visual scene statistics that aim to measure key traits in video data known to cause difficulties in navigation. We show that real forest data presents significantly more of these challenging traits, while simulated forest data fails to reflect the same traits as the real forest. In the second contribution, we take the performance analysis further by investigating the impact of initialisation motions in the data and of the tuning of system parameters.
Our third contribution presents an improved forest simulation, with a variety of simulated environmental conditions such as wind and snow. Using this simulation, we are then able to further demonstrate the links between our visual scene statistics and the traits of the environment we proposed they would reflect, as well as independently demonstrate how each trait impacts the performance of a SLAM system.
In the course of this thesis, we also developed a modular computer vision system allowing rapid configuration of vision pipeline tests, both for evaluating statistics of datasets and for modifying input to SLAM systems. The fourth contribution takes advantage of this pipeline to test the effects of a variety of preprocessing steps on feature matching in forest scenes and on overall SLAM performance, opening up potential avenues for improving efficiency.
The final contribution is to assess the performance of place recognition algorithms for SLAM loop closure in forest data. We compare the state-of-the-art convolutional neural architecture NetVLAD with popular algorithmic approaches (FABMAP, DBoW2 and VLAD) and find that it generalises better to these new scenes. We integrate this solution with a SLAM system, finding that it proposes correct loop closure locations, but that resolving feature matches between these scenes remains a challenge.
en
dc.identifier.uri
https://hdl.handle.net/1842/41816
dc.identifier.uri
http://dx.doi.org/10.7488/era/4539
dc.language.iso
en
en
dc.publisher
The University of Edinburgh
en
dc.relation.hasversion
Garforth, J. and Webb, B. (2019). Visual appearance analysis of forest scenes for monocular SLAM. In 2019 International Conference on Robotics and Automation (ICRA), pages 1794–1800. IEEE.
en
dc.relation.hasversion
Maciel-Pearson, B. G., Marchegiani, L., Akay, S., Atapour-Abarghouei, A., Garforth, J., and Breckon, T. P. (2019). Online deep reinforcement learning for autonomous uav navigation and exploration of outdoor environments. ArXiv, abs/1912.05684.
en
dc.subject
SLAM
en
dc.subject
Navigation
en
dc.subject
Computer Vision
en
dc.subject
Forestry
en
dc.subject
Robotics
en
dc.title
Visual SLAM for forest environments
en
dc.title.alternative
Analysis of visual SLAM for forest environments
dc.type
Thesis or Dissertation
en
dc.type.qualificationlevel
Doctoral
en
dc.type.qualificationname
PhD Doctor of Philosophy
en
Files
Original bundle
- Name: GarforthJ_2024.pdf
- Size: 5.21 MB
- Format: Adobe Portable Document Format