Edinburgh Research Archive

Parameter estimation in sparse state-space models

dc.contributor.advisor
Elvira Arregui, Victor
dc.contributor.advisor
Augustin, Nicole
dc.contributor.author
Cox, Benjamin
dc.date.accessioned
2025-12-10T16:05:56Z
dc.date.available
2025-12-10T16:05:56Z
dc.date.issued
2025-12-10
dc.description.abstract
State-space models are a flexible framework for modelling sequential data in the presence of noise or incomplete observations, within which we model a system via a hidden state process and a related observation process. These processes are described by a pair of distributions encoding the state dynamics and the observation process, with the distribution of the current state depending only on the previous state, and the distribution of the current observation depending only on the current state. In general, the parameters of these distributions are unknown and challenging to estimate, with conventional estimation schemes failing due to the temporal dependence of the time series and the resultant concentration of the likelihood function. In this work we present several methods to estimate the parameters of state-space models, as well as some methods for estimating the form of the model itself when it is unknown. In particular, we focus on methods that admit interpretable estimates by promoting sparsity in the parameters, thereby shrinking many parameter values to zero.

In the first contributing chapter of this work, Chapter 3, we propose a method to obtain sparse Bayesian estimates of the transition matrix of a linear-Gaussian state-space model by utilising reversible jump Markov chain Monte Carlo. We discuss the construction of the reversible jump kernel, and how to interpret the sampled sparsity in terms of Bayesian causality. We demonstrate our method on several synthetic datasets, where we have the ground truth of causality, and on real-world weather data where we do not, comparing the performance to the existing state of the art.

In Chapter 4, we propose a method to promote graphical clusters in the transition parameters of a linear-Gaussian state-space model by utilising a sparsity-promoting estimation scheme in conjunction with a dynamically adaptive penalty. We design a general framework for constructing state clustering methods within state-space models, and then construct a representative method as a case of this general framework, wherein we apply ideas from network analysis to design an iteratively applied cluster-promoting penalty function. We test our method on a series of synthetic datasets, and compare the performance to the existing state of the art.

In Chapter 5, we propose a method to construct a polynomial representation of a general state-space model, whereby we learn a sparse approximation of the transition function from a basis of polynomial terms. This allows us to infer the connectivity of the hidden states, thereby providing insight into the unknown underlying dynamics.

In the final main chapter, Chapter 6, we propose a method to approximate the intractable optimal proposal of a particle filter utilising a shallow neural network which parametrises a Gaussian mixture distribution. We compare this proposal to several standard proposals, and extend the work to simultaneous estimation of the transition and proposal distributions. Finally, we provide some concluding remarks on the techniques developed, and present a number of potential avenues for future research.
en
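The abstract describes the two defining conditional-independence properties of a state-space model: the current state depends only on the previous state, and the current observation depends only on the current state. The following is a minimal sketch (not from the thesis) of simulating a linear-Gaussian state-space model with a sparse transition matrix; all dimensions, matrix values, and noise covariances are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed model: x_t = A x_{t-1} + q_t,  y_t = H x_t + r_t,
# with q_t ~ N(0, Q) and r_t ~ N(0, R).
dx, dy, T = 3, 2, 100
A = np.array([[0.9, 0.0, 0.0],
              [0.5, 0.7, 0.0],
              [0.0, 0.0, 0.8]])      # sparse transition matrix: zeros encode absent links
H = rng.normal(size=(dy, dx))        # observation matrix
Q = 0.1 * np.eye(dx)                 # state noise covariance
R = 0.2 * np.eye(dy)                 # observation noise covariance

x = np.zeros(dx)
states, obs = [], []
for _ in range(T):
    # State depends only on the previous state (Markov property).
    x = A @ x + rng.multivariate_normal(np.zeros(dx), Q)
    # Observation depends only on the current state.
    y = H @ x + rng.multivariate_normal(np.zeros(dy), R)
    states.append(x)
    obs.append(y)

states, obs = np.array(states), np.array(obs)
print(states.shape, obs.shape)  # (100, 3) (100, 2)
```

In this toy example the zero entries of `A` are known in advance; the thesis concerns the reverse problem of recovering such sparsity patterns from the observations `obs` alone.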
dc.identifier.uri
https://hdl.handle.net/1842/44303
dc.identifier.uri
http://dx.doi.org/10.7488/era/6823
dc.language.iso
en
en
dc.publisher
The University of Edinburgh
en
dc.relation.hasversion
B. Cox and V. Elvira, “Sparse Bayesian Estimation of Parameters in Linear-Gaussian State-Space Models,” IEEE Transactions on Signal Processing, vol. 71, pp. 1922–1937, 2023. [Online]. Available: https://doi.org/10.1109/TSP.2023.3278867
en
dc.relation.hasversion
B. Cox and V. Elvira, “Parameter Estimation in Sparse Linear-Gaussian State-Space Models via Reversible Jump Markov Chain Monte Carlo,” in Proceedings of the 30th European Signal Processing Conference. IEEE, 2022
en
dc.relation.hasversion
B. Cox, E. Chouzenoux, and V. Elvira, “Learning a sparse polynomial approximation to the transition function of general state-space models,” in ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025
en
dc.relation.hasversion
B. Cox, E. Chouzenoux, and V. Elvira, “GraphGrad: Efficient Estimation of Sparse Polynomial Representations for General State-Space Models,” IEEE Transactions on Signal Processing, vol. 73, pp. 1562–1576, 2025
en
dc.relation.hasversion
B. Cox, S. Pérez-Vieites, N. Zilberstein, M. Sevilla, S. Segarra, and V. Elvira, “End-to-End Learning of Gaussian Mixture Proposals Using Differentiable Particle Filters and Neural Networks,” in ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024, pp. 9701–9705
en
dc.relation.hasversion
B. Cox, S. Segarra, and V. Elvira, “Learning state and proposal dynamics in state-space models using differentiable particle filters and neural networks,” Signal Processing, vol. 234, p. 109998, 2025. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0165168425001124
en
dc.subject
State-space models
en
dc.subject
Parameter estimation
en
dc.subject
Sparse
en
dc.subject
Bayesian
en
dc.subject
Transition matrix
en
dc.title
Parameter estimation in sparse state-space models
en
dc.type
Thesis or Dissertation
en
dc.type.qualificationlevel
Doctoral
en
dc.type.qualificationname
PhD Doctor of Philosophy
en

Files

Original bundle

Name:
CoxB_2025.pdf
Size:
1.73 MB
Format:
Adobe Portable Document Format
Description:
Cover sheet
