Explicit numerical algorithms for stochastic differential equations and their applications in data science and optimization
Authors: Zhang, Ying
Abstract
In this thesis, we focus on the analysis of a new generation of explicit numerical algorithms
for (non-linear) random systems and their applications in computational statistics, non-convex
optimization, data science, and finance.
In the first part, a tamed (explicit) order 1.5 algorithm is proposed to approximate solutions of stochastic differential equations (SDEs) with super-linear coefficients. Under certain
conditions on the coefficients, a convergence result in L^2 is obtained.
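To illustrate the taming idea behind such explicit schemes, the sketch below implements a plain tamed Euler step, in which the drift increment is damped by a step-size-dependent factor so the iteration stays stable under super-linear drift. This is a first-order baseline for illustration only, not the order 1.5 scheme proposed in the thesis; all names and the test drift are hypothetical.

```python
import numpy as np

def tamed_euler(x0, drift, diffusion, T, n_steps, seed=None):
    """Tamed Euler scheme for dX_t = b(X_t) dt + sigma(X_t) dW_t.

    The drift increment is 'tamed' by dividing by (1 + h*|b|), which
    keeps a single step bounded even when b grows super-linearly.
    (Illustrative baseline; the thesis's scheme is an order 1.5 variant.)
    """
    rng = np.random.default_rng(seed)
    h = T / n_steps
    x = float(x0)
    for _ in range(n_steps):
        b = drift(x)
        x += h * b / (1.0 + h * abs(b)) \
             + diffusion(x) * np.sqrt(h) * rng.standard_normal()
    return x

# Toy SDE with super-linear drift: dX = -X^3 dt + dW, started away from 0.
x_T = tamed_euler(x0=3.0, drift=lambda x: -x**3,
                  diffusion=lambda x: 1.0, T=1.0, n_steps=1000, seed=0)
```

An untamed Euler scheme applied to the same SDE with a coarse step can blow up; the taming factor is what restores stability without giving up explicitness.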
Then, in the second part, a new higher order Langevin Monte Carlo (LMC) algorithm is
constructed, which is based on the application of an order 1.5 scheme to the Langevin SDE. The
proposed LMC algorithm is considered in the context of sampling from a target distribution π
that has a density π̂ on R^d known up to a normalizing constant. To obtain the convergence
results, −log π̂ is assumed to have a locally Lipschitz gradient and its third derivative is
assumed to be locally Hölder continuous with exponent α ∈ (0, 1]. Non-asymptotic bounds are established for the
convergence to stationarity of the new sampling method with convergence rate 1 + α/2 in
Wasserstein distance, while it is shown that the rate is 1 in total variation even in the absence
of convexity. In particular, in the case where −log π̂ is strongly convex and its gradient is
Lipschitz continuous, explicit constants are provided.
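For context, the first-order baseline that the higher-order LMC algorithm improves upon is the unadjusted Langevin algorithm (ULA): a Euler discretisation of the Langevin SDE driven by ∇ log π̂. The sketch below is that baseline only, under an assumed standard-Gaussian target; the thesis's method adds correction terms from the order 1.5 discretisation, which are not reproduced here.

```python
import numpy as np

def ula_sample(grad_log_pi, x0, step, n_iters, seed=None):
    """Unadjusted Langevin algorithm (first-order baseline):
        x_{k+1} = x_k + step * grad_log_pi(x_k) + sqrt(2*step) * xi_k,
    with xi_k standard Gaussian. Returns the final iterate as one
    approximate sample from the target."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x + step * grad_log_pi(x) \
              + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Assumed toy target: standard Gaussian, so grad log pi(x) = -x.
samples = np.array([ula_sample(lambda x: -x, np.zeros(1),
                               step=0.1, n_iters=500, seed=k)
                    for k in range(200)])
```

For this target the empirical mean and standard deviation of the samples should be close to 0 and 1, up to the O(step) discretisation bias that the higher-order scheme is designed to reduce.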
Finally, in the last part of the thesis, the stochastic gradient Langevin dynamics (SGLD)
algorithm is considered, which can be viewed as an extension of the unadjusted Langevin
algorithm (ULA). We focus on solving a stochastic optimization problem via the SGLD algorithm. Crucially, the aim is to obtain non-asymptotic error bounds for the SGLD algorithm with a discontinuous stochastic gradient H(θ, x). Theoretical guarantees in Wasserstein distances are provided under
such an assumption for both convex and non-convex objective functions. Moreover, explicit
upper estimates are obtained for the expected excess risk associated with the approximation of
global minimizers of these objective functions. Numerical results for key examples in statistics
and finance are presented which support the main findings.
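A minimal SGLD iteration of the kind described above can be sketched as follows; H(θ, x) plays the role of an unbiased stochastic gradient built from a randomly drawn data point. The quadratic toy risk, step size, and inverse temperature β are illustrative assumptions, not the examples studied in the thesis.

```python
import numpy as np

def sgld(H, theta0, data, step, beta, n_iters, seed=None):
    """Stochastic gradient Langevin dynamics:
        theta <- theta - step * H(theta, x) + sqrt(2*step/beta) * xi,
    where x is a data point sampled uniformly at random and xi is
    standard Gaussian noise (names and signature are illustrative)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iters):
        x = data[rng.integers(len(data))]  # draw one data point
        theta = theta - step * H(theta, x) \
                      + np.sqrt(2.0 * step / beta) * rng.standard_normal(theta.shape)
    return theta

# Toy quadratic risk U(theta) = E[(theta - x)^2 / 2], so H(theta, x) = theta - x;
# the global minimizer is the data mean (here 2.0).
data = np.array([1.0, 2.0, 3.0])
theta_hat = sgld(lambda t, x: t - x, np.zeros(1), data,
                 step=0.01, beta=100.0, n_iters=5000, seed=0)
```

With large β the stationary distribution concentrates near the global minimizer, which is what the excess-risk bounds in the abstract quantify; here the iterate should land near 2.0.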