Neuromorphic mushroom body model learning spatio-temporal memory
Item status: Restricted Access
Embargo end date: 30/06/2024
Bio-robots are well suited to linking the evaluation of biological findings with the advancement of engineering solutions. Designing a mobile navigation robot that can adapt to real-world natural environments with relatively low-power, efficient onboard computing is a challenging task. Insects, on the other hand, are expert navigators despite their miniaturised brains and relatively simple neural systems. The visual homing and route-following behaviour of ants has been the subject of both behavioural experiments and computational models, because ants can robustly find their way home through cluttered natural environments, even without distinctive landmarks and with variability in visual input caused by uneven terrain, changing illumination, and moving vegetation. The mushroom bodies (MBs) are acknowledged as the learning centre of the insect brain and have been modelled for visual pattern learning in navigation. In previous image-matching-based MB models, the MB learns visual snapshots by reducing Kenyon cell (KC) to mushroom body output neuron (MBON) weights, so that reduced output activity signals familiarity when a similar visual pattern is presented to the model again. An agent (in simulation or on a hardware robot) can use this familiarity to follow a route by aligning its head direction at each step so as to recapture the learned visual pattern. However, the image-matching model of visual navigation overlooks the motion-sensing properties of the insect visual system, and such learning in MB models cuts off the temporal memory that might play a significant role in insect visual learning. In this work, we propose that the interconnections between KCs in the MB could encode spatio-temporal memory of the visual motion experienced when moving along a route. Our implementation uses an event-based camera (also called a dynamic vision sensor, DVS) mounted on a robot to sense visual motion.
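The image-matching scheme described above can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: the network sizes, the random VPN-to-KC projection, the top-k sparse KC code, and the full depression of active KC-to-MBON weights are all simplifying assumptions chosen only to show how reduced MBON output can signal familiarity.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VPN, N_KC = 360, 20000   # visual inputs and Kenyon cells; sizes are illustrative
SPARSITY = 0.05            # fraction of KCs active per view (sparse coding)

# Fixed random divergent projection from visual input to KCs
proj = rng.standard_normal((N_KC, N_VPN))
w_kc_mbon = np.ones(N_KC)  # KC->MBON weights, all initially 1

def kc_activity(view):
    """Sparse KC code: only the top-k most strongly driven KCs fire."""
    drive = proj @ view
    k = int(SPARSITY * N_KC)
    active = np.zeros(N_KC, dtype=bool)
    active[np.argsort(drive)[-k:]] = True
    return active

def learn(view):
    """Depress the KC->MBON weights of KCs active for this view."""
    w_kc_mbon[kc_activity(view)] = 0.0

def familiarity(view):
    """Total MBON drive: LOW output means a FAMILIAR view."""
    return w_kc_mbon[kc_activity(view)].sum()

route_view = rng.standard_normal(N_VPN)
novel_view = rng.standard_normal(N_VPN)
f_before = familiarity(route_view)
learn(route_view)
# The trained view now drives the MBON less than a novel view
assert familiarity(route_view) < familiarity(novel_view) < f_before
```

In a route-following loop, an agent would evaluate `familiarity` over a fan of candidate headings at each step and turn toward the most familiar one; note that all stored views contribute to the same weight vector in parallel, which is exactly the property the sequential model below departs from.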
In contrast to previous image-matching models, in which all memories are stored in parallel, continuous visual flow is inherently sequential. This continuous visual input can be encoded as temporal memory by altering the KC-KC axo-axonic inhibition weights through spike-timing-dependent plasticity (STDP). We simulated MB learning in a spiking neural network that incorporates biologically plausible neural circuits and neuron parameters. Running on the SpiNNaker neuromorphic computer, our model can evaluate visual familiarity in real time. We tested the model in both indoor and outdoor environments and found that it could plausibly support route recognition for visual navigation. Our sequence-manipulation test showed that the neural output of the model matched observed ant behaviour when the animals travelled through a familiar visual environment in a distorted order. Additionally, our model demonstrated greater robustness than SeqSLAM when tested on repeated routes or routes with small lateral offsets. Through our development of a biologically inspired, plausible, and constrained model, together with the integration of a bio-inspired sensor and a neuromorphic computing system, our work demonstrates how bio-robotics can successfully combine the assessment of biological hypotheses with the innovation of engineering solutions.
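The core idea of encoding sequence order in KC-KC weights can be illustrated with a generic pair-based STDP rule. This is a sketch under stated assumptions, not the thesis's exact plasticity rule: the time constants, amplitudes, and the three-cell spike sequence are hypothetical, and the rule is applied to plain pairwise weights rather than to axo-axonic inhibitory synapses in a full spiking simulation.

```python
import math

TAU_PLUS, TAU_MINUS = 20.0, 20.0   # STDP time constants in ms (illustrative)
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes (illustrative)

def stdp_dw(t_pre, t_post):
    """Pair-based STDP: pre-before-post potentiates, post-before-pre depresses."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)

# One spike per KC along a route segment, cells firing in the order A -> B -> C (ms)
spikes = {"A": 10.0, "B": 15.0, "C": 25.0}

# All-to-all KC-KC weights, updated once from the spike pairing
w = {(i, j): 0.5 for i in spikes for j in spikes if i != j}
for (pre, post) in w:
    w[(pre, post)] += stdp_dw(spikes[pre], spikes[post])

# Forward-order connections end up stronger than their reverses, so the
# weight matrix stores the temporal order of the experienced visual flow
assert w[("A", "B")] > w[("B", "A")]
assert w[("B", "C")] > w[("C", "B")]
```

Replaying the spikes in a distorted order would drive these asymmetric connections differently from the trained order, which is the mechanism by which a sequence-manipulation test can separate temporal memory from parallel snapshot storage.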