Tim Sullivan

Junior Professor in Applied Mathematics:
Risk and Uncertainty Quantification

Probabilistic numerical methods for PDE-constrained Bayesian inverse problems

Probabilistic numerics for PDE-constrained inverse problems in MaxEnt

Jon Cockayne, Chris Oates, Mark Girolami and I have just had our paper “Probabilistic numerical methods for PDE-constrained Bayesian inverse problems” published in the Proceedings of the 36th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. This paper complements our more extensive work “Probabilistic meshless methods for partial differential equations and Bayesian inverse problems” and gives a more concise presentation of the main ideas, aimed at a general audience.

J. Cockayne, C. Oates, T. J. Sullivan & M. Girolami. “Probabilistic Numerical Methods for PDE-constrained Bayesian Inverse Problems” in Proceedings of the 36th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, ed. G. Verdoolaege. AIP Conference Proceedings 1853:060001-1–060001-8, 2017. doi:10.1063/1.4985359

Published on Friday 9 June 2017 at 11:40 UTC #publication #prob-num #inverse-problems

Hanne Kekkonen

UQ Talks: Hanne Kekkonen

In two weeks Hanne Kekkonen (University of Warwick) will give a talk on “Large noise in variational regularisation”.

Time and Place. Monday 12 June 2017, 11:00–12:00, ZIB Seminar Room 2006, Zuse Institute Berlin, Takustraße 7, 14195 Berlin

Abstract. We consider variational regularisation methods for inverse problems with large noise, which is in general unbounded in the image space of the forward operator. We introduce a Banach space setting that allows us to define a reasonable notion of solutions for more general noise in a larger space, provided the forward operator has sufficient mapping properties. As an example we study the particularly important cases of one- and p-homogeneous regularisation functionals. As a natural further step we study stochastic noise models and in particular white noise, for which we derive error estimates in terms of the expectation of the Bregman distance. As an example we study the total variation prior. This is joint work with Martin Burger and Tapio Helin.
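For readers new to the variational setting, the following toy sketch in Python/NumPy (purely illustrative, and not material from the talk) shows the simplest 2-homogeneous case: Tikhonov regularisation \( u_{\alpha} = \operatorname{arg\,min}_{u} \tfrac{1}{2} \| A u - y \|^{2} + \tfrac{\alpha}{2} \| u \|^{2} \) of a discretised linear inverse problem with Gaussian white noise, together with a Monte Carlo estimate of the expected squared reconstruction error, which for the quadratic regulariser equals twice the Bregman distance. The forward operator, noise level, and regularisation parameter below are all chosen for illustration only.

```python
# Illustrative sketch (not from the talk): Tikhonov regularisation of a
# linear inverse problem y = A u + noise with Gaussian white noise, i.e.
# the 2-homogeneous case R(u) = 0.5 * ||u||^2 of
#     u_alpha = argmin_u  0.5 * ||A u - y||^2 + alpha * R(u).
# For R(u) = 0.5*||u||^2 the Bregman distance is D_R(u, v) = 0.5*||u - v||^2.
import numpy as np

def forward_operator(n=100):
    """Discrete Gaussian smoothing operator: a mildly ill-posed A."""
    x = np.linspace(0.0, 1.0, n)
    return np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2)) / n

def tikhonov(A, y, alpha):
    """Minimiser of 0.5*||A u - y||^2 + 0.5*alpha*||u||^2 via normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = forward_operator()
    x = np.linspace(0.0, 1.0, A.shape[1])
    u_true = np.sin(2 * np.pi * x)                    # ground truth
    y_clean = A @ u_true
    sigma, alpha, n_samples = 0.01, 1e-3, 200
    # Monte Carlo estimate of E[ ||u_alpha - u_true||^2 ] under white noise.
    errs = []
    for _ in range(n_samples):
        y = y_clean + sigma * rng.standard_normal(y_clean.shape)
        errs.append(np.sum((tikhonov(A, y, alpha) - u_true) ** 2))
    print("expected squared error (= 2 x Bregman distance):", np.mean(errs))
```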

Published on Wednesday 31 May 2017 at 16:00 UTC #event #uq-talk #inverse-problems

Visualisation and Uncertainty in Cardiac Modelling

Workshop on Visualisation and Uncertainty in Cardiac Modelling

Next week, Steven Niederer (King's College London) and I will host a two-day workshop at the Zuse Institute Berlin on Visualisation and Uncertainty in Patient-Specific Whole-Heart Modelling; further information can be found by following the link. This workshop has been made possible by generous support from the King's College London – Freie Universität Berlin Funding Programme for Joint Research Workshops.

Published on Friday 26 May 2017 at 17:00 UTC #event

Strong convergence rates of probabilistic integrators for ordinary differential equations

Preprint: Strong convergence rates of probabilistic integrators for ODEs

Han Cheng Lie, Andrew Stuart and I have just uploaded a preprint of our latest paper, “Strong convergence rates of probabilistic integrators for ordinary differential equations” to the arXiv. This paper is a successor to the convergence results presented by Conrad et al. (2016). We consider a generic probabilistic solver for an ODE, using a fixed time step of length \(\tau > 0\), where the mean of the solver has global error of order \(\tau^{q}\) and the variance of the truncation error model has order \(\tau^{1 + 2 p}\). Whereas Conrad et al. showed, for a Lipschitz driving vector field, that the mean-square error between the numerical solution \(U_{k}\) and the true solution \(u_{k}\) is bounded uniformly in time as

\( \displaystyle \max_{0 \leq k \leq T / \tau} \mathbb{E} [ \| U_{k} - u_{k} \|^{2} ] \leq C \tau^{2 \min \{ p, q \}} \)

(i.e. has the same order of convergence as the underlying deterministic method), we are able to relax the regularity assumptions on the vector field and its flow, and obtain a stronger mode of convergence (mean-square in the uniform norm) with the same convergence rate:

\( \displaystyle \mathbb{E} \left[ \max_{0 \leq k \leq T / \tau} \| U_{k} - u_{k} \|^{2} \right] \leq C \tau^{2 \min \{ p, q \}} \)
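To see the scaling concretely, here is a minimal sketch in Python/NumPy of a randomised Euler integrator in the spirit of Conrad et al., with per-step Gaussian perturbations of variance \( \sigma^{2} \tau^{2 p + 1} \), applied to the test problem \( u' = -u \). The code is purely illustrative and not taken from the paper; the function names, the test ODE, and the noise scale \( \sigma \) are choices made for this example. Since Euler's method has \( q = 1 \), taking \( p = 1 \) should give an observed mean-square rate in the uniform norm close to \( \tau^{2} \).

```python
# Illustrative sketch (not code from the paper): a randomised Euler integrator
# in the style of Conrad et al. (2016), with Gaussian perturbations whose
# variance scales as tau^(2p+1), applied to u' = -u with exact solution exp(-t).
import numpy as np

def randomised_euler(f, u0, tau, T, p, sigma=0.1, rng=None):
    """One sample path of a randomised Euler method with fixed step tau.

    Each step adds N(0, sigma^2 * tau^(2p+1)) noise, so the variance of the
    truncation-error model is of order tau^(1+2p)."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(round(T / tau))
    u = np.empty(n_steps + 1)
    u[0] = u0
    for k in range(n_steps):
        xi = sigma * tau ** (p + 0.5) * rng.standard_normal()
        u[k + 1] = u[k] + tau * f(u[k]) + xi
    return u

def mean_square_uniform_error(tau, T=2.0, p=1.0, n_samples=200, seed=0):
    """Monte Carlo estimate of E[ max_k |U_k - u(t_k)|^2 ]."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, int(round(T / tau)) + 1)
    exact = np.exp(-t)                      # exact solution of u' = -u, u(0) = 1
    errs = [np.max((randomised_euler(lambda u: -u, 1.0, tau, T, p, rng=rng)
                    - exact) ** 2) for _ in range(n_samples)]
    return np.mean(errs)

if __name__ == "__main__":
    taus = [0.1, 0.05, 0.025, 0.0125]
    errors = [mean_square_uniform_error(tau) for tau in taus]
    rates = np.diff(np.log(errors)) / np.diff(np.log(taus))
    print("observed rates:", rates)  # should be close to 2 = 2*min{p, q} for Euler (q = 1)
```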

Abstract. Probabilistic integration of a continuous dynamical system is a way of systematically introducing model error, at scales no larger than errors introduced by standard numerical discretisation, in order to enable thorough exploration of possible responses of the system to inputs. It is thus a potentially useful approach in a number of applications such as forward uncertainty quantification, inverse problems, and data assimilation. We extend the convergence analysis of probabilistic integrators for deterministic ordinary differential equations, as proposed by Conrad et al. (Stat. Comput., 2016), to establish mean-square convergence in the uniform norm on discrete- or continuous-time solutions under relaxed regularity assumptions on the driving vector fields and their induced flows. Specifically, we show that randomised high-order integrators for globally Lipschitz flows and randomised Euler integrators for dissipative vector fields with polynomially-bounded local Lipschitz constants all have the same mean-square convergence rate as their deterministic counterparts, provided that the variance of the integration noise is not of higher order than the corresponding deterministic integrator.

Published on Monday 13 March 2017 at 13:00 UTC #publication #preprint #prob-num

Bayesian probabilistic numerical methods

Preprint: Bayesian probabilistic numerical methods

Jon Cockayne, Chris Oates, Mark Girolami and I have just uploaded a preprint of our latest paper, “Bayesian probabilistic numerical methods” to the arXiv. Following on from our earlier work “Probabilistic meshless methods for partial differential equations and Bayesian inverse problems”, our aim is to provide some rigorous theoretical underpinnings for the emerging field of probabilistic numerics, and in particular to define what it means for such a method to be “Bayesian”, by connecting with the established theories of Bayesian inversion and disintegration of measures.

Abstract. The emergent field of probabilistic numerics has thus far lacked rigorous statistical principles. This paper establishes Bayesian probabilistic numerical methods as those which can be cast as solutions to certain Bayesian inverse problems, albeit problems that are non-standard. This allows us to establish general conditions under which Bayesian probabilistic numerical methods are well-defined, encompassing both non-linear and non-Gaussian models. For general computation, a numerical approximation scheme is developed and its asymptotic convergence is established. The theoretical development is then extended to pipelines of computation, wherein probabilistic numerical methods are composed to solve more challenging numerical tasks. The contribution highlights an important research frontier at the interface of numerical analysis and uncertainty quantification, with some illustrative applications presented.
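As a deliberately simple illustration of the general idea (not an example taken from the paper): Bayesian quadrature is perhaps the most familiar Bayesian probabilistic numerical method, in which a Gaussian process prior on the integrand is conditioned on finitely many evaluations, and the induced posterior on the integral quantifies the remaining numerical uncertainty. The sketch below, in Python/NumPy, makes this concrete; the kernel, the evaluation nodes, and the grid-based approximation of the kernel integrals are all choices made for this example.

```python
# Illustrative sketch (not from the paper): Bayesian quadrature as a toy
# Bayesian probabilistic numerical method.  A Gaussian-process prior on the
# integrand f is conditioned on finitely many evaluations f(x_i); the induced
# posterior on the integral quantifies the numerical error probabilistically.
import numpy as np

def sq_exp_kernel(x, y, ell=0.3):
    """Squared-exponential covariance k(x, y) = exp(-(x - y)^2 / (2 ell^2))."""
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell ** 2)

def bayesian_quadrature(f, nodes, ell=0.3, n_grid=2000):
    """Posterior mean and variance of the integral of f over [0, 1] under a
    GP(0, k) prior, after observing noise-free evaluations of f at the nodes.

    Kernel integrals are approximated numerically on a fine grid for simplicity."""
    grid = np.linspace(0.0, 1.0, n_grid)
    w = 1.0 / n_grid                                        # quadrature weight for kernel means
    K = sq_exp_kernel(nodes, nodes, ell) + 1e-10 * np.eye(len(nodes))
    z = sq_exp_kernel(nodes, grid, ell).sum(axis=1) * w     # kernel mean embedding at nodes
    zz = sq_exp_kernel(grid, grid, ell).sum() * w * w       # prior variance of the integral
    y = f(nodes)
    mean = z @ np.linalg.solve(K, y)
    # Guard against tiny negative values caused by floating-point error.
    var = max(float(zz - z @ np.linalg.solve(K, z)), 0.0)
    return mean, var

if __name__ == "__main__":
    f = lambda x: np.sin(3 * x) + x ** 2
    nodes = np.linspace(0.0, 1.0, 8)
    mean, var = bayesian_quadrature(f, nodes)
    truth = (1 - np.cos(3.0)) / 3.0 + 1.0 / 3.0             # exact integral of f on [0, 1]
    print(f"posterior mean {mean:.5f} +/- {np.sqrt(var):.2e}, truth {truth:.5f}")
```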

Published on Tuesday 14 February 2017 at 12:00 UTC #publication #preprint #prob-num
