Tim Sullivan


An order-theoretic perspective on modes and maximum a posteriori estimation in Bayesian inverse problems

Order-theoretic perspectives on MAP estimation in SIAM/ASA JUQ

The final version of “An order-theoretic perspective on modes and maximum a posteriori estimation in Bayesian inverse problems” by Hefin Lambley and myself has just appeared online in the SIAM/ASA Journal on Uncertainty Quantification.

On a heuristic level, modes and MAP estimators are intended to be the “most probable points” of a space \(X\) with respect to a probability measure \(\mu\). Thus, in some sense, they would seem to be the greatest elements of some order on \(X\), and a rigorous order-theoretic treatment is called for, especially for cases in which \(X\) is, say, an infinite-dimensional function space. Such an order-theoretic perspective opens up some attractive proof strategies for the existence of modes and MAP estimators but also leads to some interesting counterexamples. In particular, because the orders involved are not total, some pairs of points of \(X\) can be incomparable (i.e. neither is more nor less likely than the other). In fact, we show that there are examples for which the collection of such mutually incomparable elements is dense in \(X\).
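
To give a flavour of the orders in question, here is a sketch in terms of small-ball probabilities; this is one natural choice, and the notation is illustrative rather than exactly that of the paper. For \(r > 0\), write \(B_{r}(x)\) for the open ball of radius \(r\) about \(x\), compare points at resolution \(r\), and then pass to the small-radius limit:

\[ x \succeq_{r} x' \iff \mu(B_{r}(x)) \geq \mu(B_{r}(x')), \qquad x \succeq x' \iff \liminf_{r \to 0} \frac{\mu(B_{r}(x))}{\mu(B_{r}(x'))} \geq 1. \]

Heuristically, a mode is then a greatest element for \(\succeq\), i.e. a point \(\hat{x} \in X\) with \(\hat{x} \succeq x\) for every \(x \in X\). Since the ratio of small-ball probabilities can oscillate rather than settle down as \(r \to 0\), such a limiting relation need not be total, which is exactly how incomparable pairs arise.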

H. Lambley and T. J. Sullivan. “An order-theoretic perspective on modes and maximum a posteriori estimation in Bayesian inverse problems.” SIAM/ASA Journal on Uncertainty Quantification 11(4):1195–1224, 2023. doi:10.1137/22M154243X


Published on Friday 20 October 2023 at 09:00 UTC #publication #modes #order-theory #map-estimators #lambley #juq

Error Bound Analysis of the Stochastic Parareal Algorithm

Error analysis for SParareal in SISC

The final version of “Error bound analysis of the stochastic parareal algorithm” by Kamran Pentland, Massimiliano Tamborrino, and myself has just appeared online in the SIAM Journal on Scientific Computing (SISC).

K. Pentland, M. Tamborrino, and T. J. Sullivan. “Error bound analysis of the stochastic parareal algorithm.” SIAM Journal on Scientific Computing 45(5):A2657–A2678, 2023. doi:10.1137/22M1533062

Abstract. Stochastic Parareal (SParareal) is a probabilistic variant of the popular parallel-in-time algorithm known as Parareal. Similarly to Parareal, it combines fine- and coarse-grained solutions to an ODE using a predictor-corrector (PC) scheme. The key difference is that carefully chosen random perturbations are added to the PC to try to accelerate the location of a stochastic solution to the ODE. In this paper, we derive superlinear and linear mean-square error bounds for SParareal applied to nonlinear systems of ODEs using different types of perturbations. We illustrate these bounds numerically on a linear system of ODEs and a scalar nonlinear ODE, showing a good match between theory and numerics.
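
As a rough illustration of the scheme being analysed, the following Python sketch runs a stochastically perturbed parareal predictor-corrector sweep for a scalar ODE. The Euler-based propagators, the simple Gaussian perturbations, and all parameter values are placeholder choices for illustration only; the paper's sampling rules and the resulting error bounds are considerably more refined.

```python
# Minimal sketch of a stochastically perturbed parareal iteration for u' = f(t, u).
# The Gaussian perturbations xi below are placeholders for the carefully chosen
# sampling rules analysed in the paper; G and F stand for coarse and fine solvers.
import numpy as np

def coarse_G(f, t0, t1, u0, steps=1):
    """Cheap coarse propagator: one or a few explicit Euler steps."""
    u, h = u0, (t1 - t0) / steps
    for k in range(steps):
        u = u + h * f(t0 + k * h, u)
    return u

def fine_F(f, t0, t1, u0, steps=100):
    """Expensive fine propagator: many Euler steps (run in parallel across subintervals in practice)."""
    return coarse_G(f, t0, t1, u0, steps)

def sparareal_sketch(f, u0, ts, n_iter=5, noise_scale=1e-3, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    N = len(ts) - 1
    U = np.empty((n_iter + 1, N + 1) + np.shape(u0))
    # Initial coarse sweep (iteration k = 0).
    U[0, 0] = u0
    for n in range(N):
        U[0, n + 1] = coarse_G(f, ts[n], ts[n + 1], U[0, n])
    # Predictor-corrector sweeps with additive random perturbations.
    for k in range(n_iter):
        U[k + 1, 0] = u0
        for n in range(N):
            xi = noise_scale * rng.standard_normal(np.shape(u0))  # placeholder perturbation
            U[k + 1, n + 1] = (coarse_G(f, ts[n], ts[n + 1], U[k + 1, n])
                               + fine_F(f, ts[n], ts[n + 1], U[k, n])
                               - coarse_G(f, ts[n], ts[n + 1], U[k, n])
                               + xi)
    return U

# Example: logistic ODE u' = u (1 - u) on [0, 10] with 10 temporal subintervals.
ts = np.linspace(0.0, 10.0, 11)
U = sparareal_sketch(lambda t, u: u * (1.0 - u), np.array(0.1), ts)
print(U[-1, -1])
```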

Published on Monday 9 October 2023 at 09:00 UTC #publication #prob-num #sparareal #pentland #tamborrino #sisc

GParareal: A time-parallel ODE solver using Gaussian process emulation

GParareal in Statistics and Computing

The article “GParareal: A time-parallel ODE solver using Gaussian process emulation” by Kamran Pentland, Massimiliano Tamborrino, James Buchanan, Lynton Appel, and myself has just been published in its final form in Statistics and Computing. In this paper, we show how a Gaussian process emulator for the difference between coarse/cheap and fine/expensive solvers for a dynamical system can be used to enable rapid and accurate solution of that dynamical system in a way that is parallel in time. This approach extends the now-classical Parareal algorithm in a probabilistic way that makes efficient use of both runtime and legacy data gathered about the coarse and fine solvers. This can be a critical performance advantage for complex dynamical systems whose fine solver is too expensive to run in series over the full time domain.

K. Pentland, M. Tamborrino, T. J. Sullivan, J. Buchanan, and L. C. Appel. “GParareal: A time-parallel ODE solver using Gaussian process emulation.” Statistics and Computing 33(1):no. 20, 23pp., 2023. doi:10.1007/s11222-022-10195-y

Abstract. Sequential numerical methods for integrating initial value problems (IVPs) can be prohibitively expensive when high numerical accuracy is required over the entire interval of integration. One remedy is to integrate in a parallel fashion, “predicting” the solution serially using a cheap (coarse) solver and “correcting” these values using an expensive (fine) solver that runs in parallel on a number of temporal subintervals. In this work, we propose a time-parallel algorithm (GParareal) that solves IVPs by modelling the correction term, i.e. the difference between fine and coarse solutions, using a Gaussian process emulator. This approach compares favourably with the classic parareal algorithm and we demonstrate, on a number of IVPs, that GParareal can converge in fewer iterations than parareal, leading to an increase in parallel speed-up. GParareal also manages to locate solutions to certain IVPs where parareal fails and has the additional advantage of being able to use archives of legacy solutions, e.g. solutions from prior runs of the IVP for different initial conditions, to further accelerate convergence of the method, something that existing time-parallel methods do not do.
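
The Python sketch below conveys the core idea under simplifying assumptions: the correction \(F - G\) between fine and coarse propagators is learned with an off-the-shelf Gaussian process regressor (scikit-learn's GaussianProcessRegressor, not the authors' implementation) and then used inside a parareal-style update. The propagators, kernel, and parameter values are illustrative only, and the handling of the training data is a simplification of the scheme analysed in the paper.

```python
# Illustrative sketch of the GParareal idea: emulate the coarse-to-fine correction
# (F - G)(u) with a Gaussian process, then use the emulator inside the parareal update.
# GaussianProcessRegressor and RBF are standard scikit-learn tools, not the authors' code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def euler(f, t0, t1, u0, steps):
    u, h = u0, (t1 - t0) / steps
    for k in range(steps):
        u = u + h * f(t0 + k * h, u)
    return u

def gparareal_sketch(f, u0, ts, n_iter=5, coarse_steps=1, fine_steps=200):
    N = len(ts) - 1
    G = lambda n, u: euler(f, ts[n], ts[n + 1], u, coarse_steps)  # cheap solver
    F = lambda n, u: euler(f, ts[n], ts[n + 1], u, fine_steps)    # expensive solver

    U = np.empty(N + 1)
    U[0] = u0
    for n in range(N):                      # initial coarse sweep
        U[n + 1] = G(n, U[n])

    inputs, corrections = [], []            # runtime (or legacy) evaluations of F - G
    for k in range(n_iter):
        # Fine and coarse runs at the current iterates (F runs in parallel in practice).
        for n in range(N):
            inputs.append([U[n]])
            corrections.append(F(n, U[n]) - G(n, U[n]))
        # GP emulator for the correction term as a function of the state.
        gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-6, normalize_y=True)
        gp.fit(np.array(inputs), np.array(corrections))
        # Sequential update: coarse prediction plus emulated correction.
        for n in range(N):
            U[n + 1] = G(n, U[n]) + gp.predict(np.array([[U[n]]]))[0]
    return U

# Example: scalar logistic ODE u' = u (1 - u) on [0, 10].
ts = np.linspace(0.0, 10.0, 11)
print(gparareal_sketch(lambda t, u: u * (1.0 - u), 0.1, ts)[-1])
```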

Published on Thursday 22 December 2022 at 12:00 UTC #publication #prob-num #pentland #tamborrino #buchanan #appel

Testing whether a learning procedure is calibrated

Testing whether a learning procedure is calibrated in JMLR

The article “Testing whether a learning procedure is calibrated” by Jon Cockayne, Matthew Graham, Chris Oates, Onur Teymur, and myself has just appeared in its final form in the Journal of Machine Learning Research. This article is part of our research on the theoretical foundations of probabilistic numerics and uncertainty quantification, as we seek to explore what it means for the uncertainty associated to a computational result to be “well calibrated”.

J. Cockayne, M. M. Graham, C. J. Oates, T. J. Sullivan, and O. Teymur. “Testing whether a learning procedure is calibrated.” Journal of Machine Learning Research 23(203):1–36, 2022. https://jmlr.org/papers/volume23/21-1065/21-1065.pdf

Abstract. A learning procedure takes as input a dataset and performs inference for the parameters \(\theta\) of a model that is assumed to have given rise to the dataset. Here we consider learning procedures whose output is a probability distribution, representing uncertainty about \(\theta\) after seeing the dataset. Bayesian inference is a prime example of such a procedure, but one can also construct other learning procedures that return distributional output. This paper studies conditions for a learning procedure to be considered calibrated, in the sense that the true data-generating parameters are plausible as samples from its distributional output. A learning procedure whose inferences and predictions are systematically over- or under-confident will fail to be calibrated. On the other hand, a learning procedure that is calibrated need not be statistically efficient. A hypothesis-testing framework is developed in order to assess, using simulation, whether a learning procedure is calibrated. Several vignettes are presented to illustrate different aspects of the framework.
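
To illustrate the underlying idea (and not the paper's actual test statistics), the toy Python check below repeatedly draws a true parameter from the prior, simulates data, runs an exactly Bayesian learning procedure, and tests whether the resulting probability integral transform values are uniform, as they should be for a calibrated procedure. The conjugate Gaussian model and the Kolmogorov-Smirnov test are illustrative choices; the paper develops a more general hypothesis-testing framework.

```python
# Toy simulation-based calibration check for a learning procedure: simulate data from
# known parameters, run the procedure, and test whether the true parameter looks like
# a draw from the returned distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
prior_mean, prior_var, noise_var, n_obs = 0.0, 1.0, 0.5, 20

def learning_procedure(data):
    """Exact Bayesian posterior for the mean of a Gaussian with known noise variance."""
    post_var = 1.0 / (1.0 / prior_var + len(data) / noise_var)
    post_mean = post_var * (prior_mean / prior_var + data.sum() / noise_var)
    return post_mean, post_var

# For each replication: draw theta from the prior, simulate data, and record the
# posterior CDF evaluated at the true theta (the probability integral transform).
pits = []
for _ in range(500):
    theta = rng.normal(prior_mean, np.sqrt(prior_var))
    data = rng.normal(theta, np.sqrt(noise_var), size=n_obs)
    m, v = learning_procedure(data)
    pits.append(stats.norm.cdf(theta, loc=m, scale=np.sqrt(v)))

# If the procedure is calibrated, the PIT values should be uniform on [0, 1];
# an over- or under-confident procedure would concentrate them near 0/1 or near 0.5.
print(stats.kstest(pits, "uniform"))
```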

Published on Friday 5 August 2022 at 14:50 UTC #publication #prob-num #cockayne #graham #oates #teymur

Randomised one-step time integration methods for deterministic operator differential equations

Randomised integration for deterministic operator differential equations in Calcolo

The article “Randomised one-step time integration methods for deterministic operator differential equations” by Han Cheng Lie, Martin Stahn, and myself has just appeared in its final form in Calcolo. In this paper, we extend the analysis of Conrad et al. (2016) and Lie et al. (2019) to the case of evolutionary systems in Banach spaces or even Gel′fand triples, this being the right setting for many evolutionary partial differential equations.

H. C. Lie, M. Stahn, and T. J. Sullivan. “Randomised one-step time integration methods for deterministic operator differential equations.” Calcolo 59(1):no. 13, 33pp., 2022. doi:10.1007/s10092-022-00457-6

Abstract. Uncertainty quantification plays an important role in applications that involve simulating ensembles of trajectories of dynamical systems. Conrad et al. (Stat. Comput., 2017) proposed randomisation of deterministic time integration methods as a strategy for quantifying uncertainty due to time discretisation. We consider this strategy for systems that are described by deterministic, possibly non-autonomous operator differential equations defined on a Banach space or a Gel′fand triple. We prove pathwise and expected error bounds on the random trajectories, given an assumption on the local truncation error of the underlying deterministic time integration and an assumption that the absolute moments of the random variables decay with the time step. Our analysis shows that the error analysis for differential equations in finite-dimensional Euclidean space carries over to infinite-dimensional settings.
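
For a concrete finite-dimensional picture of the kind of randomisation analysed (the paper itself works with operator differential equations on Banach spaces and Gel′fand triples), the Python sketch below perturbs each explicit Euler step with a mean-zero Gaussian whose variance shrinks like \(h^{2p+1}\) with the step size \(h\), here with \(p = 1\). The constants and the choice of Gaussian noise are illustrative, not prescribed by the paper.

```python
# Sketch of a randomised one-step integrator in the spirit of Conrad et al.: a
# deterministic explicit Euler step plus a mean-zero Gaussian perturbation whose
# variance scales like h^{2p+1} (p = 1 for Euler), producing an ensemble of random
# trajectories that quantifies the uncertainty due to time discretisation.
import numpy as np

def randomised_euler(f, u0, t0, t1, n_steps, sigma=0.1, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    h = (t1 - t0) / n_steps
    u = np.array(u0, dtype=float)
    traj = [u.copy()]
    for k in range(n_steps):
        xi = sigma * h ** 1.5 * rng.standard_normal(u.shape)  # std ~ h^{p + 1/2}
        u = u + h * f(t0 + k * h, u) + xi
        traj.append(u.copy())
    return np.array(traj)

# Ensemble of random end points for the scalar ODE u' = -u, u(0) = 1, on [0, 5].
ensemble = [randomised_euler(lambda t, u: -u, [1.0], 0.0, 5.0, 100,
                             rng=np.random.default_rng(i))[-1, 0]
            for i in range(20)]
print(np.mean(ensemble), np.std(ensemble))
```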

Published on Friday 25 February 2022 at 17:00 UTC #publication #prob-num #lie #stahn