Robust representation learning approaches for neural population activity
Date: 19/04/2023
Author: Jude, Justin
Abstract
Understanding communication patterns between different regions of the human brain is key to learning useful spatial representations. Once learned, these representations present a foundation on which new tasks can be learned rapidly.
Moreover, the activity patterns generated by the brain are ultimately relayed to the muscles to produce behaviour. By measuring these action potentials directly from the relevant source regions of the brain, we can capture intended behaviour even when the neural pathways to downstream muscles are interrupted.
Spinal cord injury is one such interruption, severing the motor cortex's control of arm or leg muscles. Electrode arrays recording action potentials from neurons in the motor cortex, combined with any of a wide range of modelling techniques, can be used to decode the intended movement. Soft or hard robotics can then bypass the damaged spinal cord, relaying the intended movement to specific limbs.
This thesis comprises two main parts. The first part addresses the question of how representation learning in neural networks can benefit the learning of goal-directed behaviour. Using the learning of spatial representations by recurrent neural networks as a model, this work showed that such a representation can serve as a foundation for rapid learning of navigational tasks through reinforcement learning. The learned representation takes the form of spatially modulated units within the neural network, similar to the place cells found in mammalian brains. Furthermore, an analysis of the simulated neurons showed that these place units replicate multiple characteristics of biological place cells, such as precursory firing behaviour.
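To make "spatially modulated units" concrete, the toy sketch below hand-builds a population of Gaussian-tuned place units over a 2-D arena. This is an illustrative stand-in for the tuning that emerged in the trained recurrent network, not the thesis model itself; the number of units and tuning width are arbitrary assumptions.

```python
import numpy as np

# Hypothetical place-unit population: each unit fires most strongly near a
# preferred location, so the population activity vector encodes position.
rng = np.random.default_rng(0)
n_units = 32
centres = rng.uniform(0.0, 1.0, size=(n_units, 2))  # preferred locations in a unit arena
width = 0.15                                         # Gaussian tuning width (assumed)

def place_activity(pos):
    """Population activity of all place units at 2-D position `pos`."""
    d2 = np.sum((centres - pos) ** 2, axis=1)        # squared distance to each centre
    return np.exp(-d2 / (2 * width ** 2))            # Gaussian tuning curve

a = place_activity(np.array([0.5, 0.5]))
# The most active unit is the one whose preferred location is closest to the
# query position; such a spatial code can serve as input features for
# downstream reinforcement learning of navigation.
```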
The second part tackles the issue of variability in neural representations, a phenomenon that causes significant deterioration in the decoding of behaviour from neural population activity over time. Using combined neural and behavioural recordings from monkeys performing motor tasks, this work aims to develop stable decoders that are robust to such fluctuations. Two unsupervised learning approaches were investigated. The first is based on domain adaptation: decoders were trained to "ignore" all aspects of the data subject to fluctuations and to instead extract the salient, stable aspects of the neural representation of movements. This representation allows the decoder to generalise well to a completely unseen recording session, accurately predicting intended behaviour despite the significant neuron non-stationarities present between recording sessions. Such generalisation to an unseen recording session, without retraining or recalibration of the decoder, had not previously been shown.
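The core idea of stabilising a decoder against session-to-session drift can be sketched with a much simpler stand-in than the thesis's learned approach: statistical alignment (per-session z-scoring), which removes session-dependent baseline statistics so a fixed linear decoder transfers to an unseen day. All quantities below are synthetic assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def record_session(n_trials, drift):
    """Simulated firing rates: stable behaviour-related signal + baseline drift."""
    signal = rng.normal(size=(n_trials, 10))
    return signal + drift, signal[:, 0]     # treat the first latent as "behaviour"

def zscore(x):
    """Remove per-neuron, per-session statistics (a basic form of alignment)."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Fit two linear decoders on one session: one on aligned data, one on raw data.
X_train, y_train = record_session(200, drift=0.0)
w, *_ = np.linalg.lstsq(zscore(X_train), y_train, rcond=None)   # aligned decoder
w_naive, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)     # no alignment

# An unseen "day" with a large baseline non-stationarity.
X_new, y_new = record_session(200, drift=3.0)
mse_aligned = np.mean((zscore(X_new) @ w - y_new) ** 2)
mse_naive = np.mean((X_new @ w_naive - y_new) ** 2)
# Alignment absorbs the baseline shift; the naive decoder does not.
```

The thesis's adversarial approach learns which aspects of the data to discard rather than assuming a fixed normalisation, but the goal is the same: a representation in which session identity is uninformative.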
This first approach performed well on data recorded close in time to the training data, but required a significant number of recording sessions for successful training. To address these limitations, a contrastive learning approach was used next. In this model, synthetic variations of trials from a single recording session were generated. These variations, similar in type and magnitude to the neuron non-stationarities that arise between recording sessions, were used together with the original data to train a model that learns to remove these non-stationarities and recover the stable dynamics related to behaviour. This method produced a very stable decoder capable of accurately inferring intended behaviour for up to a week into the future. The training paradigm is an example of self-supervised learning, whereby the model is trained on perturbed versions of its own data.
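The self-supervised recipe can be sketched in miniature: synthesise the kind of variability seen between recording days (here, per-trial baseline shifts of varying size along one fixed nuisance direction in neural state space) and train a model to map perturbed trials back to the clean ones. A simple ridge regression stands in for the thesis's network, and the perturbation type and magnitude are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_neurons = 300, 20
clean = rng.normal(size=(n_trials, n_neurons))   # "stable" single-session trials
u = np.ones(n_neurons) / np.sqrt(n_neurons)      # assumed nuisance direction

def perturb(x):
    """Apply a synthetic non-stationarity: a random baseline shift per trial."""
    amp = rng.normal(0.0, 3.0, size=(x.shape[0], 1))
    return x + amp * u

# Self-supervision: (perturbed, clean) pairs built from one session only.
X = np.vstack([perturb(clean) for _ in range(5)])
Y = np.vstack([clean] * 5)
lam = 1.0                                        # ridge penalty (assumed)
W = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons), X.T @ Y)

# On a fresh, unseen perturbation, the learned map largely removes the
# nuisance variability while preserving the underlying trial structure.
test = perturb(clean)
err_raw = np.mean((test - clean) ** 2)
err_recovered = np.mean((test @ W - clean) ** 2)
```

Because the perturbations are generated from a single session's data, no additional recording sessions are needed for training, which is what distinguishes this paradigm from the domain-adaptation approach.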
Taken together, in this thesis I explore approaches that lead to robust representations being learned within neural networks. These representations are shown to be both neurally realistic and robust, allowing a high degree of generalisation.
Related items
Showing items related by title, author, creator and subject.
- Learning representations for speech recognition using artificial neural networks
  Swietojanski, Paweł (The University of Edinburgh, 2016-11-29). Learning representations is a central challenge in machine learning. For speech recognition, we are interested in learning robust representations that are stable across different acoustic environments, recording equipment ...
- Energy states of neural systems as featural invariants for conceptual representations of more and less
  Pajo, Morgan (The University of Edinburgh, 2015)
- Neural representation of movement tau
  Tan, Heng‐Ru May (2008). A fundamental aspect of goal‐directed behaviour concerns the closure of motion‐gaps in a timely fashion. An influential theory about how this can be achieved is provided by the tau theory (Lee, 1998). Tau is defined as ...