Randomly assembled cyclic multi-block ADMM: a fast method for large-scale linearly constrained quadratic optimization
Item status: Restricted Access
Embargo end date: 22/02/2023
This work is motivated by a simple question: how can one find a relatively good solution to a very large optimization problem in a limited amount of time? We consider the linearly constrained convex minimization model with an objective function that is the sum of multiple separable functions and a coupled quadratic function. Such problems arise naturally in applications such as machine and statistical learning, image processing, portfolio management, tensor decomposition, matrix completion or decomposition, manifold optimization, data clustering, and many other problems of practical importance. This thesis focuses on the development of new algorithms based on the alternating direction method of multipliers (ADMM).

The first part proposes the randomly assembled multi-block and cyclic alternating direction method of multipliers (RAC-ADMM), a novel ADMM algorithm intended to mitigate both the slow convergence and the divergence issues that arise when multi-block ADMM is applied to convex problems with cross-block coupled variables. We discuss the theoretical properties of RAC-ADMM and show when random assembly helps and when it hurts, which allows us to develop criteria for almost sure convergence.

The second part uses this theoretical guidance on RAC-ADMM to investigate solution methods for continuous and mixed-integer quadratic problems (QPs, MIQPs). First, we develop RACQP, a robust and efficient general-purpose QP solver capable of detecting infeasible and unbounded problems. Next, we devise two extensions that enable RACQP to address MIQPs: one applicable to situations where a solution must be found rapidly but some level of primal infeasibility can be tolerated, and another that implements a custom branch-and-bound algorithm.
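To make the random-assembly idea concrete, the following is a minimal sketch of a randomly assembled cyclic multi-block ADMM applied to an equality-constrained convex QP, min ½xᵀHx + cᵀx subject to Ax = b. This is an illustrative toy under stated assumptions (strongly convex H, a fixed penalty β, exact blockwise minimization of the augmented Lagrangian), not the thesis's RACQP solver; the function name and parameters are hypothetical.

```python
import numpy as np

def rac_admm_qp(H, c, A, b, n_blocks=4, beta=1.0, iters=1000, seed=0):
    """Illustrative RAC-ADMM sketch for: min 1/2 x'Hx + c'x  s.t.  Ax = b.

    Each outer iteration randomly assembles the variables into blocks,
    updates the blocks cyclically by exactly minimizing the augmented
    Lagrangian over each block, then performs a dual (multiplier) update.
    """
    rng = np.random.default_rng(seed)
    n, m = H.shape[0], A.shape[0]
    x = np.zeros(n)
    y = np.zeros(m)  # multipliers for Ax = b
    for _ in range(iters):
        # Randomly assemble variable indices into n_blocks groups.
        blocks = np.array_split(rng.permutation(n), n_blocks)
        for idx in blocks:
            rest = np.setdiff1d(np.arange(n), idx)
            Ai, Ar = A[:, idx], A[:, rest]
            # Blockwise minimizer of the augmented Lagrangian
            #   1/2 x'Hx + c'x - y'(Ax - b) + (beta/2)||Ax - b||^2
            # with the other blocks held fixed.
            lhs = H[np.ix_(idx, idx)] + beta * Ai.T @ Ai
            rhs = (-c[idx] - H[np.ix_(idx, rest)] @ x[rest]
                   + Ai.T @ y - beta * Ai.T @ (Ar @ x[rest] - b))
            x[idx] = np.linalg.solve(lhs, rhs)
        # Dual ascent step on the coupling-constraint residual.
        y -= beta * (A @ x - b)
    return x, y
```

Under strong convexity this toy problem also has a closed-form KKT solution, so the sketch can be sanity-checked against a direct linear solve; the thesis's convergence analysis concerns exactly when such random assembly keeps iterations like these from diverging.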
Multiple numerical tests, conducted on both randomly generated and large-scale benchmark instances, show that for most quadratic optimization problems RACQP matches, and often exceeds, the computational efficiency of commercial solvers and competing ADMM-based solvers.