Neural computation through geometry and dynamics
Item Status
Publication issues
Embargo End Date
2026-10-10
Authors
Pellegrino, Arthur
Abstract
The brain is a dynamical system made of billions of interconnected neurons, and its function is to transform sensory input into behavioural output. Yet the mechanisms behind the computations performed by large networks of neurons, whether biological or artificial, remain poorly understood. To tackle this challenge, neuroscience has experienced a recent push towards large-scale neural recordings. These increasingly high-dimensional recordings have created a need for new mathematical and artificial intelligence (AI) tools for data analysis and modelling. This has led to two lines of research which have evolved in parallel: i) the design of new theoretical tools to model the dynamics of networks of neurons, and ii) the development of new data analysis methods to uncover low-dimensional geometry in neural recordings. Yet network models are often difficult to fit to the noisy, high-dimensional data yielded by experiments, while geometric methods can be fitted to data but lack principled ways to test specific hypotheses about the computations performed by the recorded neurons.
My doctoral work developed new methods for the study of neural computation that integrate dynamical systems modelling with data-driven geometric tools, providing a means to generate and systematically test neuroscientific hypotheses against high-dimensional data. I start by reviewing the linear algebra and dynamical systems theory relevant to the neural subspace hypothesis. To extend this view, I then introduce novel methods based on tensors, differential geometry and dynamical systems, and show that they can be used to study how task and behavioural variables are non-linearly represented in the activity of populations of neurons. These methods also have practical applications, for example improving the decoding of arm kinematics from neural activity. Going one step further, I probe how biological and artificial neural networks reshape their connectivity to change these representations when learning new tasks. I derive mathematically, and demonstrate in models and in data, the learning mechanisms underlying the widely observed phenomenon of low-rank neural connectivity. Finally, I provide a new theoretical framework grounded in differential geometry to characterise the manifolds of recurrent neural networks. Together, these results link the geometric and dynamical systems perspectives on neural activity, providing a means to study neural computation in biological and artificial networks.
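The connection drawn in the abstract between low-rank connectivity and low-dimensional neural geometry can be illustrated with a small simulation. The sketch below is not code from the thesis: the network size, the rank-1 construction W = m nᵀ / N, and the overlap n·m/N ≈ 2 are illustrative assumptions, following the standard low-rank recurrent-network setup.

```python
import numpy as np

# Hypothetical minimal sketch (not code from the thesis): a rank-1
# recurrent network dx/dt = -x + W tanh(x), with W = m n^T / N.
# The recurrent input m (n . tanh(x)) / N always points along m, so
# the network settles onto a low-dimensional subspace -- one way to
# see how low-rank connectivity shapes low-dimensional neural geometry.
rng = np.random.default_rng(0)
N = 500                          # network size (illustrative choice)
m = rng.standard_normal(N)
n = 2.0 * m                      # overlap n.m/N ~ 2 yields a fixed point along m
W = np.outer(m, n) / N           # rank-1 connectivity matrix

x = rng.standard_normal(N)       # random initial state
dt = 0.05
for _ in range(1000):            # forward-Euler integration of the dynamics
    x = x + dt * (-x + W @ np.tanh(x))

# The steady state aligns with the connectivity vector m (up to sign):
cos = (x @ m) / (np.linalg.norm(x) * np.linalg.norm(m))
print(f"rank(W) = {np.linalg.matrix_rank(W)}, |cos(x, m)| = {abs(cos):.4f}")
```

Running this prints a rank of 1 and a cosine close to 1: despite having 500 neurons, the activity collapses onto the single direction set by the connectivity, which is the kind of geometry-dynamics link the thesis studies in far more general, non-linear settings.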