The final version of “Error bound analysis of the stochastic parareal algorithm” by Kamran Pentland, Massimiliano Tamborrino, and myself has just appeared online in the SIAM Journal on Scientific Computing (SISC).
K. Pentland, M. Tamborrino, and T. J. Sullivan. “Error bound analysis of the stochastic parareal algorithm.” SIAM Journal on Scientific Computing 45(5):A2657–A2678, 2023.
Abstract. Stochastic Parareal (SParareal) is a probabilistic variant of the popular parallel-in-time algorithm known as Parareal. Similarly to Parareal, it combines fine- and coarse-grained solutions to an ODE using a predictor-corrector (PC) scheme. The key difference is that carefully chosen random perturbations are added to the PC to try to accelerate the location of a stochastic solution to the ODE. In this paper, we derive superlinear and linear mean-square error bounds for SParareal applied to nonlinear systems of ODEs using different types of perturbations. We illustrate these bounds numerically on a linear system of ODEs and a scalar nonlinear ODE, showing a good match between theory and numerics.
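To make the predictor-corrector scheme concrete, here is a minimal SParareal-style sketch in Python for the scalar linear ODE \(u' = \lambda u\). This is an illustrative toy, not the paper's implementation: the choice of Euler solvers, the perturbation scale, and the iteration count are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (not the paper's code): a parareal-style
# predictor-corrector for u' = lam * u on [0, 1], with a small Gaussian
# perturbation added to the correction, in the spirit of SParareal.

rng = np.random.default_rng(0)
lam, u0 = -1.0, 1.0
n_sub = 10                        # number of temporal subintervals
dt = 1.0 / n_sub

def coarse_step(u, dt):           # cheap coarse solver G: one Euler step
    return u + dt * lam * u

def fine_step(u, dt, m=100):      # expensive fine solver F: m Euler substeps
    h = dt / m
    for _ in range(m):
        u = u + h * lam * u
    return u

# Initial serial coarse sweep.
U = np.empty(n_sub + 1)
U[0] = u0
for n in range(n_sub):
    U[n + 1] = coarse_step(U[n], dt)

# Predictor-corrector iterations; the fine solves are independent
# across subintervals and would run in parallel in practice.
for k in range(5):
    F = np.array([fine_step(U[n], dt) for n in range(n_sub)])
    U_new = np.empty_like(U)
    U_new[0] = u0
    for n in range(n_sub):
        G_new = coarse_step(U_new[n], dt)
        G_old = coarse_step(U[n], dt)
        xi = rng.normal(scale=1e-8)   # stochastic perturbation (scale illustrative)
        U_new[n + 1] = G_new + F[n] - G_old + xi
    U = U_new

# The iterates converge towards the serial fine solution, close to e^{-1}.
print(abs(U[-1] - np.exp(lam)))
```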
Tadashi Matsumoto and I have just uploaded a preprint of our note “Images of Gaussian and other stochastic processes under closed, densely-defined, unbounded linear operators” to the arXiv.
The purpose of this note is to provide a self-contained rigorous proof of the well-known formula for the mean and covariance function of a stochastic process — in particular, a Gaussian process — when it is acted upon by an unbounded linear operator such as an ordinary or partial differential operator, as used in probabilistic approaches to the solution of ODEs and PDEs. This result is easy to establish in the case of a bounded operator, but the unbounded case requires a careful application of Hille's theorem for the Bochner integral of a Banach-valued random variable.
Abstract. Gaussian processes (GPs) are widely used tools in spatial statistics and machine learning, and the formulae for the mean function and covariance kernel of a GP \(v\) that is the image of another GP \(u\) under a linear transformation \(T\) acting on the sample paths of \(u\) are well known, almost to the point of being folklore. However, these formulae are often used without rigorous attention to technical details, particularly when \(T\) is an unbounded operator such as a differential operator, which is common in several modern applications. This note provides a self-contained proof of the claimed formulae for the case of a closed, densely-defined operator \(T\) acting on the sample paths of a square-integrable stochastic process. Our proof technique relies upon Hille's theorem for the Bochner integral of a Banach-valued random variable.
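For reference, the well-known identities that the note makes rigorous are, in their standard informal form:

```latex
% If u has mean function m_u and covariance kernel k_u, and v = T u
% for a linear operator T acting on sample paths, then
\begin{align*}
  m_v(s)    &= (T m_u)(s), \\
  k_v(s, t) &= \bigl( T_s T_t \, k_u \bigr)(s, t),
\end{align*}
% where T_s (resp. T_t) denotes T acting on the first (resp. second)
% argument of the kernel k_u.
```

The point of the note is precisely that, for unbounded \(T\), exchanging \(T\) with the expectations implicit in \(m_u\) and \(k_u\) needs justification, which Hille's theorem provides.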
The article “GParareal: A time-parallel ODE solver using Gaussian process emulation” by Kamran Pentland, Massimiliano Tamborrino, James Buchanan, Lynton Appel, and myself has just been published in its final form in Statistics and Computing. In this paper, we show how a Gaussian process emulator for the difference between coarse/cheap and fine/expensive solvers for a dynamical system can be used to enable rapid and accurate solution of that dynamical system in a way that is parallel in time. This approach extends the now-classical Parareal algorithm in a probabilistic way that allows for efficient use of both runtime and legacy data gathered about the coarse and fine solvers, which may be a critical performance advantage for complex dynamical systems for which the fine solver is too expensive to run in series over the full time domain.
K. Pentland, M. Tamborrino, T. J. Sullivan, J. Buchanan, and L. C. Appel. “GParareal: A time-parallel ODE solver using Gaussian process emulation.” Statistics and Computing 33(1):no. 20, 23pp., 2023.
Abstract. Sequential numerical methods for integrating initial value problems (IVPs) can be prohibitively expensive when high numerical accuracy is required over the entire interval of integration. One remedy is to integrate in a parallel fashion, “predicting” the solution serially using a cheap (coarse) solver and “correcting” these values using an expensive (fine) solver that runs in parallel on a number of temporal subintervals. In this work, we propose a time-parallel algorithm (GParareal) that solves IVPs by modelling the correction term, i.e. the difference between fine and coarse solutions, using a Gaussian process emulator. This approach compares favourably with the classic parareal algorithm and we demonstrate, on a number of IVPs, that GParareal can converge in fewer iterations than parareal, leading to an increase in parallel speed-up. GParareal also manages to locate solutions to certain IVPs where parareal fails and has the additional advantage of being able to use archives of legacy solutions, e.g. solutions from prior runs of the IVP for different initial conditions, to further accelerate convergence of the method, something that existing time-parallel methods do not do.
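To make the emulation idea concrete, here is a minimal sketch (not the authors' code) of fitting a GP to the correction term \(F - G\) from previously observed solver evaluations, so that the cheap solver plus the emulated correction approximates the expensive one. The toy solvers, the squared-exponential kernel, and the hyperparameters are all assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch: emulate the parareal correction term F(u) - G(u)
# with a simple GP regression, in the spirit of GParareal.

lam, dt = -1.0, 0.1

def G(u):                         # cheap coarse solver: one Euler step
    return u + dt * lam * u

def F(u, m=100):                  # expensive fine solver: m Euler substeps
    h = dt / m
    for _ in range(m):
        u = u + h * lam * u
    return u

# "Legacy" training data: states at which both solvers have been run.
x_train = np.linspace(-2.0, 2.0, 9)
y_train = np.array([F(x) - G(x) for x in x_train])

def rbf(a, b, ell=1.0):           # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# GP posterior mean (zero prior mean, small nugget for stability).
K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

def predict_correction(x_new):
    return rbf(np.atleast_1d(x_new), x_train) @ alpha

# The emulated fine solve G(u) + GP(u) is close to the true F(u),
# without paying for a new fine solve.
u = 0.3
print(G(u) + predict_correction(u)[0], F(u))
```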
Kamran Pentland, Massimiliano Tamborrino, and I have just uploaded a preprint of our latest article, “Error bound analysis of the stochastic parareal algorithm”, to the arXiv.
Abstract. Stochastic parareal (SParareal) is a probabilistic variant of the popular parallel-in-time algorithm known as parareal. Similarly to parareal, it combines fine- and coarse-grained solutions to an ordinary differential equation (ODE) using a predictor-corrector (PC) scheme. The key difference is that carefully chosen random perturbations are added to the PC to try to accelerate the location of a stochastic solution to the ODE. In this paper, we derive superlinear and linear mean-square error bounds for SParareal applied to nonlinear systems of ODEs using different types of perturbations. We illustrate these bounds numerically on a linear system of ODEs and a scalar nonlinear ODE, showing a good match between theory and numerics.
The article “Testing whether a learning procedure is calibrated” by Jon Cockayne, Matthew Graham, Chris Oates, Onur Teymur, and myself has just appeared in its final form in the Journal of Machine Learning Research. This article is part of our research on the theoretical foundations of probabilistic numerics and uncertainty quantification, as we seek to explore what it means for the uncertainty associated with a computational result to be “well calibrated”.
J. Cockayne, M. M. Graham, C. J. Oates, T. J. Sullivan, and O. Teymur. “Testing whether a learning procedure is calibrated.” Journal of Machine Learning Research 23(203):1–36, 2022.
Abstract. A learning procedure takes as input a dataset and performs inference for the parameters \(\theta\) of a model that is assumed to have given rise to the dataset. Here we consider learning procedures whose output is a probability distribution, representing uncertainty about \(\theta\) after seeing the dataset. Bayesian inference is a prime example of such a procedure, but one can also construct other learning procedures that return distributional output. This paper studies conditions for a learning procedure to be considered calibrated, in the sense that the true data-generating parameters are plausible as samples from its distributional output. A learning procedure whose inferences and predictions are systematically over- or under-confident will fail to be calibrated. On the other hand, a learning procedure that is calibrated need not be statistically efficient. A hypothesis-testing framework is developed in order to assess, using simulation, whether a learning procedure is calibrated. Several vignettes are presented to illustrate different aspects of the framework.
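As a rough illustration of the general idea, and emphatically not the paper's specific hypothesis test, here is a generic simulation-based calibration check for a conjugate normal model, in which the exact Bayesian posterior is known to be calibrated: the probability integral transforms of the true parameter under the posterior should be uniform on \([0, 1]\), which one can assess with a Kolmogorov-Smirnov statistic. The model, sample sizes, and thresholds are illustrative choices.

```python
import numpy as np
from math import erf, sqrt

# Illustrative sketch: if a learning procedure is calibrated, the true
# data-generating parameter looks like a draw from its distributional
# output. For a conjugate normal model the exact posterior is calibrated,
# so the PIT values below should be approximately Uniform(0, 1).

rng = np.random.default_rng(1)
sigma, tau = 1.0, 2.0             # known noise std, prior std of theta
n_rep, n_obs = 2000, 5

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

pits = np.empty(n_rep)
for r in range(n_rep):
    theta = rng.normal(0.0, tau)                  # draw truth from prior
    y = rng.normal(theta, sigma, size=n_obs)      # simulate a dataset
    # Learning procedure: exact conjugate posterior N(post_mean, post_var).
    post_var = 1.0 / (1.0 / tau**2 + n_obs / sigma**2)
    post_mean = post_var * (y.sum() / sigma**2)
    pits[r] = normal_cdf((theta - post_mean) / sqrt(post_var))

# Kolmogorov-Smirnov distance to Uniform(0, 1); for a calibrated
# procedure this should fall below roughly 1.36 / sqrt(n_rep) at the
# 5% level. An over- or under-confident procedure inflates it.
u = np.sort(pits)
n = len(u)
ks = max(np.max(np.arange(1, n + 1) / n - u),
         np.max(u - np.arange(n) / n))
print(ks)
```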