Bayesian inverse problems
Date: 27/11/2021
Author: Rodrigues, Jenovah
Abstract
We consider linear, mildly ill-posed inverse problems in separable Hilbert spaces under Gaussian noise whose covariance operator is not the identity (i.e. it is not a white noise problem), and use the Bayesian approach to find their regularised solution. Specifically, our goal is to regularise the prior in such a way that the posterior distribution achieves the optimal rate of contraction. The object of interest (an unknown function) is assumed to lie in a Sobolev space. Firstly, we consider the so-called conjugate setting, in which the covariance operator of the noise and the covariance operator of the prior are simultaneously diagonalisable and the noise has heterogeneous variance; this is similar to the setting of [Knapik et al., 2011], which, however, treats the homogeneous variance case. We derive the minimax rate of convergence and the contraction rate of the posterior distribution, and subsequently discuss the conditions under which these rates coincide. The results are numerically illustrated by the problem of recovering a function from noisy observations.
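For orientation, a minimal sketch of this kind of conjugate, mildly ill-posed model, assuming the standard sequence-space formulation of [Knapik et al., 2011] extended to heterogeneous noise (the symbols below are illustrative and not taken from the thesis): in the joint eigenbasis of the operators one observes
\[
Y_i = \kappa_i \mu_i + \frac{\xi_i}{\sqrt{n}}\, Z_i, \qquad Z_i \overset{\mathrm{iid}}{\sim} N(0,1), \quad i = 1, 2, \dots,
\]
where \(\kappa_i \asymp i^{-p}\) encodes the mild ill-posedness, \(\xi_i\) are the heterogeneous noise standard deviations, the unknown coefficient sequence \(\mu = (\mu_i)\) lies in a Sobolev ball of regularity \(\beta\), and the Gaussian prior places independent \(N(0, \lambda_i)\) distributions on the coordinates \(\mu_i\).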
Secondly, motivated by Poisson inverse problems, we consider Gaussian, signal-dependent noise (i.e. a non-conjugate setting). Using [Panov and Spokoiny, 2015], we obtain Bernstein–von Mises results for the posterior distribution and, consequently, derive the contraction rates and the conditions under which they are optimal.
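As a rough illustration of signal-dependent Gaussian noise (an assumption based on the Poisson analogy, in which the variance of a count equals its mean, rather than the thesis's exact model), one may picture observations of the form
\[
Y_i = \kappa_i \mu_i + \frac{\sigma_i(\mu)}{\sqrt{n}}\, Z_i, \qquad \sigma_i^2(\mu) \propto \kappa_i \mu_i,
\]
so that the noise level depends on the unknown signal itself; the likelihood is then no longer conjugate to a Gaussian prior, and posterior asymptotics are instead obtained via Bernstein–von Mises-type arguments.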