Edinburgh Research Archive

Probabilistic inference in Bayesian neural networks

dc.contributor.advisor
Wade, Sara
dc.contributor.advisor
Anjos, Miguel
dc.contributor.author
Sheinkman, Alisa
dc.date.accessioned
2025-07-29T11:22:07Z
dc.date.available
2025-07-29T11:22:07Z
dc.date.issued
2025-07-29
dc.description.abstract
Despite their widespread applicability and dominant role in machine learning, neural networks remain highly non-transparent and are often regarded as black boxes due to the lack of human-understandable interpretations. Conventional deep models tend to be overconfident in their predictions, provide poor uncertainty estimates and are sensitive to adversarial attacks. The Bayesian paradigm goes a step further and provides a natural framework for addressing these challenges by considering infinite ensembles of differently weighted neural networks. Bayesian neural networks are capable of capturing uncertainty, improving accuracy and controlling model complexity. Unfortunately, for most real-world problems exact probabilistic inference is unavailable, and asymptotically exact Markov chain Monte Carlo becomes daunting when dealing with large, high-dimensional datasets and the multimodal posteriors of neural networks. At the same time, the faster and computationally appealing optimization-centric variational inference lacks the theoretical guarantees of sampling-based methods and is known to underestimate the uncertainty of the true posterior distribution. This thesis systematically studies different aspects of variational inference, namely its theoretical foundations, its challenges and means of addressing them. Further, the practical questions arising when implementing and comparing Bayesian neural networks are addressed, and the dependence of predictive performance on architectural choices and on the alignment between the model and the inference algorithm is analysed. Finally, this thesis contributes to the development of variational inference techniques and presents a novel kind of Bayesian neural network, the variational bow tie neural network, which employs sparsity-promoting priors and an improved version of the classical coordinate ascent variational inference algorithm.
en
dc.identifier.uri
https://hdl.handle.net/1842/43738
dc.identifier.uri
http://dx.doi.org/10.7488/era/6271
dc.language.iso
en
en
dc.publisher
The University of Edinburgh
en
dc.relation.hasversion
Alisa Sheinkman and Sara Wade. Variational Bayesian bow tie neural networks with shrinkage. arXiv preprint arXiv:2411.11132, 2024.
en
dc.relation.hasversion
Alisa Sheinkman and Sara Wade. Understanding the trade-offs in accuracy and uncertainty quantification: Architecture and inference choices in Bayesian neural networks. arXiv preprint arXiv:2503.11808, 2025.
en
dc.subject
Bayesian neural networks
en
dc.subject
variational inference
en
dc.subject
probabilistic inference
en
dc.subject
uncertainty quantification
en
dc.title
Probabilistic inference in Bayesian neural networks
en
dc.type
Thesis or Dissertation
en
dc.type.qualificationlevel
Doctoral
en
dc.type.qualificationname
PhD Doctor of Philosophy
en

Files

Original bundle

Name:
Sheinkman2025.pdf
Size:
24.8 MB
Format:
Adobe Portable Document Format