Edinburgh Research Archive

Spiking neural network model construction, inference, analysis and applications

dc.contributor.advisor
Onken, Arno
dc.contributor.advisor
Hennig, Matthias
dc.contributor.author
Berg, William Peer
dc.date.accessioned
2025-02-07T10:20:46Z
dc.date.available
2025-02-07T10:20:46Z
dc.date.issued
2022-07-11
dc.description.abstract
Computational models have long served as hypotheses for illuminating and unravelling aspects of neural function, with seminal examples including the Hodgkin-Huxley model, which also illustrates that model hypotheses may be tested against in vivo or in vitro experiments. However, designing high-dimensional, biologically realistic spiking models can be an arduous endeavour that requires substantial hand-engineering, and automating this process could greatly accelerate computational research in neuroscience. Statistical approaches have therefore been used to aid such modelling, with the current state of the art based on approximate Bayesian computation; this approach, however, does not scale well with model size and complexity, i.e. with both the number of neurons and the number of parameters. In the current data-driven era, where deep learning is the prevalent state of the art in machine learning, we investigate whether its key ingredient, gradient-based optimisation (GBO), may be leveraged for spiking neural network (SNN) model inference, particularly because this approach scales well with network size. To this end, we implement a modular GBO framework on top of PyTorch, a modern machine-learning library for Python. GBO is tested with a rate-based loss metric; with the van Rossum distance, which also emphasises the timing of spiking; and with the Bernoulli or Poisson negative log-likelihood for probabilistic, stochastic models, across different classes of SNNs, including generalised versions of leaky integrate-and-fire, non-leaky integrate-and-fire, and probabilistic general integrate-and-fire spiking models. The framework is further extended to use subthreshold synaptic current models as the readout signal during optimisation, instead of a surrogate over the membrane potential, which greatly increases optimisation performance. The results show that, due to the temporal state-dependence of the membrane potentials, and thus of the spikes, in SNNs, the van Rossum distance and its emphasis on the precise timing of spiking may obscure the gradient signal. For the spike trains produced by the models, we test whether higher-order statistics are captured to a greater extent by the SNNs irrespective of the loss variability, comparing the model fits against generalised linear models (GLMs) as a baseline. When the spike trains are factorised using non-negative matrix factorisation (NMF), similar functional ensembles emerge for the synthetic data; however, the SNNs capture the functional NMF ensembles to a greater extent, reflected in a higher NMF module similarity. Testing whether this was due to an inductive bias in the SNN model definition, by also fitting to biological data, we found that the similarity was indeed greater for the SNN models than for the GLMs, suggesting that GBO may be leveraged to some extent for SNN inference with regard to capturing higher-order spike-train statistics. Further, the parameter landscapes formed by the rate-based metric contain no global saddle point, and at best a frontier of global minima. These observations are reflected in the inferred parameters when compared with the ground-truth models for the synthetically generated data: the inferred SNNs likely retrieved different parameter configurations while nonetheless capturing the higher-order spike statistics, or solving the task at hand.
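As an illustration of the kind of differentiable spike-train loss the abstract refers to, the following is a minimal sketch of the discrete-time van Rossum distance, computed via a leaky-integrator recursion in PyTorch. This is not the thesis's implementation (see the linked snn_inference repository for that); the function name, tensor shapes, and default constants are illustrative assumptions.

```python
import torch

def van_rossum_distance(spikes_a, spikes_b, tau=20.0, dt=1.0):
    """Illustrative differentiable van Rossum distance (not the thesis code).

    spikes_a, spikes_b: tensors of shape (T, N) holding (possibly soft /
    surrogate) spike values. tau is the exponential kernel time constant,
    in the same units as dt.
    """
    decay = torch.exp(torch.tensor(-dt / tau))
    f_a = torch.zeros_like(spikes_a[0])
    f_b = torch.zeros_like(spikes_b[0])
    sq_diff = 0.0
    # Convolve both trains with a causal exponential kernel using a
    # leaky-integrator recursion, accumulating the squared difference.
    for t in range(spikes_a.shape[0]):
        f_a = f_a * decay + spikes_a[t]
        f_b = f_b * decay + spikes_b[t]
        sq_diff = sq_diff + torch.sum((f_a - f_b) ** 2)
    # Discrete approximation of (1/tau) * integral of (f - g)^2 dt.
    return torch.sqrt(sq_diff * dt / tau)
```

Because the filtered traces decay over many time steps, small shifts in spike timing change the loss smoothly, which is what makes this metric usable for gradient-based optimisation, though, as the abstract notes, the emphasis on precise timing can also obscure the gradient signal.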
When introducing a lower-dimensional input-output task and using the readout of a continuous subthreshold synaptic current model as the output, we show that optimisation convergence is robust and that the tasks are solved with low error, albeit with completely different final SNN parameter values. This emphasises that the biological brain may elegantly solve tasks with completely different configurations, as is the case in every individual, and suggests that we should seek to recover higher-order spike statistics rather than particular target parameter values.
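To illustrate the surrogate-gradient setup and the continuous subthreshold synaptic-current readout described above, here is a minimal, self-contained PyTorch sketch of a leaky integrate-and-fire population. It is a sketch under assumed names, shapes, and time constants, not the thesis code; the fast-sigmoid surrogate is one common choice, not necessarily the one used in the thesis.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate gradient
    in the backward pass (an assumed, commonly used surrogate)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # d(spike)/dv approximated by 1 / (1 + |v|)^2.
        return grad_output / (1.0 + v.abs()) ** 2

def run_lif(inputs, w, tau_m=20.0, tau_s=10.0, v_th=1.0, dt=1.0):
    """Simulate a LIF population; return spikes and a continuous
    synaptic-current readout usable as a smooth optimisation target.

    inputs: (T, batch, n_in); w: (n_in, n_neurons) weight matrix.
    """
    alpha = torch.exp(torch.tensor(-dt / tau_m))  # membrane decay
    beta = torch.exp(torch.tensor(-dt / tau_s))   # synaptic decay
    v = torch.zeros(inputs.shape[1], w.shape[1])
    i_syn = torch.zeros_like(v)
    spikes, readout = [], []
    for t in range(inputs.shape[0]):
        v = alpha * v + inputs[t] @ w
        s = SurrogateSpike.apply(v - v_th)
        v = v * (1.0 - s)              # reset membrane on spike
        i_syn = beta * i_syn + s       # low-pass filtered spike train
        spikes.append(s)
        readout.append(i_syn)
    return torch.stack(spikes), torch.stack(readout)
```

The returned i_syn trace is continuous in time even though the spikes are binary, which is one plausible reason a synaptic-current readout yields a better-behaved loss surface than differentiating through spike events alone.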
en
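The abstract above also compares models via NMF module similarity. The following hedged sketch, assuming binned non-negative spike-count matrices and scikit-learn's NMF, shows one plausible way such a similarity score could be computed; the exact measure used in the thesis may differ, and the function name, binning, and greedy matching are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_module_similarity(spikes_target, spikes_model,
                          n_modules=5, bin_size=10):
    """Factorise binned spike trains with NMF and compare the spatial
    modules of two datasets via cosine similarity of matched modules.

    spikes_*: arrays of shape (T, N) with non-negative spike counts.
    """
    def modules(spikes):
        # Bin spikes over time: (T, N) -> (T // bin_size, N).
        T = (spikes.shape[0] // bin_size) * bin_size
        binned = spikes[:T].reshape(-1, bin_size, spikes.shape[1]).sum(1)
        # H holds the spatial modules, shape (n_modules, N).
        model = NMF(n_components=n_modules, init='nndsvda', max_iter=500)
        model.fit(binned)
        H = model.components_
        return H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)

    H_a, H_b = modules(spikes_target), modules(spikes_model)
    sim = H_a @ H_b.T                  # pairwise cosine similarities
    # Greedily match each target module to its most similar model module.
    return float(np.mean(sim.max(axis=1)))
```

A higher score indicates that the fitted model reproduces the functional ensembles of the target data more faithfully, which is the sense in which the abstract reports higher NMF module similarity for the SNNs than for the GLM baseline.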
dc.identifier.uri
https://hdl.handle.net/1842/43087
dc.identifier.uri
https://github.com/williampeer/snn_inference_demo
en
dc.identifier.uri
https://github.com/williampeer/snn_inference
en
dc.identifier.uri
https://github.com/williampeer/gated_synapse_model
en
dc.identifier.uri
http://dx.doi.org/10.7488/era/5633
dc.language.iso
en
en
dc.publisher
The University of Edinburgh
en
dc.subject
spiking neural network
en
dc.subject
spikes
en
dc.subject
gradient-based optimisation
en
dc.subject
optimisation
en
dc.subject
gradient-descent
en
dc.subject
non-negative matrix factorisation
en
dc.subject
leaky integrate-and-fire
en
dc.subject
van Rossum distance
en
dc.subject
loss metric
en
dc.subject
non-leaky integrate-and-fire
en
dc.subject
stochastic integrate-and-fire
en
dc.subject
maximum likelihood estimation
en
dc.subject
simulation-based inference
en
dc.subject
Izhikevich
en
dc.subject
PyTorch
en
dc.subject
Error landscape
en
dc.subject
parameter landscape
en
dc.subject
SNN
en
dc.subject
continuous synapse model
en
dc.subject
gating synapse model
en
dc.subject
subthreshold continuous synaptic currents
en
dc.subject
encoding-tasks
en
dc.title
Spiking neural network model construction, inference, analysis and applications
en
dc.type
Thesis or Dissertation
en
dc.type.qualificationlevel
Masters
en
dc.type.qualificationname
MPhil Master of Philosophy
en

Files

Original bundle

Name:
BergWP_2022.pdf
Size:
9.81 MB
Format:
Adobe Portable Document Format