Tim Sullivan



Bayesian probabilistic numerical methods

Bayesian probabilistic numerical methods in SIAM Review

The 2019 Q4 issue of SIAM Review will carry an article by Jon Cockayne, Chris Oates, Mark Girolami, and myself on the Bayesian formulation of probabilistic numerical methods, i.e. the interpretation of deterministic numerical tasks such as quadrature and the solution of ordinary and partial differential equations as (Bayesian) statistical inference tasks.

J. Cockayne, C. J. Oates, T. J. Sullivan, and M. Girolami. “Bayesian probabilistic numerical methods.” SIAM Review 61(4):756–789, 2019. doi:10.1137/17M1139357

Abstract. Over forty years ago average-case error was proposed in the applied mathematics literature as an alternative criterion with which to assess numerical methods. In contrast to worst-case error, this criterion relies on the construction of a probability measure over candidate numerical tasks, and numerical methods are assessed based on their average performance over those tasks with respect to the measure. This paper goes further and establishes Bayesian probabilistic numerical methods as solutions to certain inverse problems based upon the numerical task within the Bayesian framework. This allows us to establish general conditions under which Bayesian probabilistic numerical methods are well defined, encompassing both the nonlinear and non-Gaussian contexts. For general computation, a numerical approximation scheme is proposed and its asymptotic convergence established. The theoretical development is extended to pipelines of computation, wherein probabilistic numerical methods are composed to solve more challenging numerical tasks. The contribution highlights an important research frontier at the interface of numerical analysis and uncertainty quantification, and a challenging industrial application is presented.
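To give a flavour of the idea, here is a minimal sketch of Bayesian quadrature in one dimension, the simplest probabilistic numerical method: a Gaussian process prior is placed on the integrand, conditioned on finitely many function evaluations, and the implied posterior on the value of the integral is Gaussian with closed-form mean and variance. The specific choices below (squared-exponential kernel, uniform measure on [0, 1], the length scale `ell`) are illustrative assumptions of mine; the paper treats a far more general setting.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import dblquad

def bayesian_quadrature(f, nodes, ell=0.3, jitter=1e-10):
    """Posterior mean and variance for the integral of f over [0, 1]
    under a zero-mean GP prior with squared-exponential kernel.
    Toy illustration only; choices of kernel and measure are mine."""
    x = np.asarray(nodes)
    y = f(x)
    # Gram matrix of k(x, x') = exp(-(x - x')^2 / (2 ell^2)), with jitter
    K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * ell**2)) \
        + jitter * np.eye(len(x))
    # kernel mean embedding z_i = \int_0^1 k(t, x_i) dt, closed form via erf
    z = ell * np.sqrt(np.pi / 2) * (erf((1 - x) / (ell * np.sqrt(2)))
                                    + erf(x / (ell * np.sqrt(2))))
    w = np.linalg.solve(K, z)   # the Bayesian quadrature weights
    mean = w @ y                # posterior mean of the integral
    # initial variance \int\int k, computed numerically for brevity
    kk, _ = dblquad(lambda s, t: np.exp(-(s - t)**2 / (2 * ell**2)),
                    0.0, 1.0, 0.0, 1.0)
    var = kk - z @ w            # posterior variance of the integral
    return mean, var

nodes = np.linspace(0.0, 1.0, 10)
m, v = bayesian_quadrature(lambda t: t**2, nodes)  # true integral is 1/3
```

The posterior mean is a quadrature rule with weights determined by the prior, and the posterior variance quantifies the residual uncertainty due to having evaluated the integrand only at the ten nodes.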

Published on Thursday 7 November 2019 at 07:00 UTC #publication #bayesian #siam-review #prob-num #cockayne #girolami #oates

On the Brittleness of Bayesian Inference

Bayesian Brittleness in SIAM Review

The 2015 Q4 issue of SIAM Review will carry an article by Houman Owhadi, Clint Scovel, and myself on the brittle dependence of Bayesian posterior conclusions upon the choice of prior. This is an abbreviated presentation of results given in full earlier this year in Elec. J. Stat. The PDF is available for free under the terms of the Creative Commons 4.0 licence.

H. Owhadi, C. Scovel, and T. J. Sullivan. “On the brittleness of Bayesian inference.” SIAM Review 57(4):566–582, 2015. doi:10.1137/130938633

Abstract. With the advent of high-performance computing, Bayesian methods are becoming increasingly popular tools for the quantification of uncertainty throughout science and industry. Since these methods can impact the making of sometimes critical decisions in increasingly complicated contexts, the sensitivity of their posterior conclusions with respect to the underlying models and prior beliefs is a pressing question to which there currently exist positive and negative answers. We report new results suggesting that, although Bayesian methods are robust when the number of possible outcomes is finite or when only a finite number of marginals of the data-generating distribution are unknown, they could be generically brittle when applied to continuous systems (and their discretizations) with finite information on the data-generating distribution. If closeness is defined in terms of the total variation (TV) metric or the matching of a finite system of generalized moments, then (1) two practitioners who use arbitrarily close models and observe the same (possibly arbitrarily large amount of) data may reach opposite conclusions; and (2) any given prior and model can be slightly perturbed to achieve any desired posterior conclusion. The mechanism causing brittleness/robustness suggests that learning and robustness are antagonistic requirements, which raises the possibility of a missing stability condition when using Bayesian inference in a continuous world under finite information.
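The mechanism behind conclusion (2) can be seen in a toy discrete example (my own construction, not taken from the paper): when data are recorded to finite precision, perturbing each sampling distribution by a TV distance of at most `eps`, with the perturbation mass placed on the observed bin for "extreme" parameter values only, can swamp the unperturbed likelihood and drive the posterior to the opposite conclusion.

```python
import numpy as np
from scipy.stats import norm

h = 1e-6                       # data recorded to finite precision (bin width)
theta = np.linspace(0.0, 10.0, 1001)
prior = np.ones_like(theta) / len(theta)
y0 = 0.0                       # the observed datum

# Model A: y | theta ~ N(theta, 1); likelihood of the observed bin
lik_A = norm.pdf(y0, loc=theta) * h

# Model B: for each theta > 8, move eps of probability mass onto the observed
# bin, so that d_TV(A_theta, B_theta) <= eps for every theta
eps = 1e-3
lik_B = lik_A + eps * (theta > 8)

post_A = prior * lik_A; post_A /= post_A.sum()
post_B = prior * lik_B; post_B /= post_B.sum()
mean_A = theta @ post_A        # ~ 0.8: posterior hugs the data
mean_B = theta @ post_B        # ~ 9: posterior dominated by the perturbation
```

Because the observed bin has tiny probability under model A for any theta far from y0, an eps-sized perturbation concentrated there dominates the posterior, even though every sampling distribution has been moved by at most eps in total variation.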

Published on Friday 6 November 2015 at 12:00 UTC #publication #siam-review #ouq #bayesian #owhadi #scovel

Optimal Uncertainty Quantification

Optimal Uncertainty Quantification in SIAM Review

The 2013 Q2 issue of SIAM Review will carry an article by Houman Owhadi, Clint Scovel, Mike McKerns, Michael Ortiz, and myself on optimization approaches to uncertainty quantification in the presence of infinite-dimensional epistemic uncertainties about the probability measures and response functions of interest.

We present a mathematical framework for the reduction of such infinite-dimensional problems to finite-dimensional effective feasible sets, and we apply the methods to practical examples arising in hypervelocity impact and seismic safety certification.

H. Owhadi, C. Scovel, T. J. Sullivan, M. McKerns, and M. Ortiz. “Optimal Uncertainty Quantification.” SIAM Review 55(2):271–345, 2013. doi:10.1137/10080782X

Abstract. We propose a rigorous framework for uncertainty quantification (UQ) in which the UQ objectives and its assumptions/information set are brought to the forefront. This framework, which we call optimal uncertainty quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop optimal concentration inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, these results show that uncertainties in input parameters, which propagate to output uncertainties in the classical sensitivity analysis paradigm, may fail to do so if the transfer functions (or probability distributions) are imperfectly known. We show how, for hierarchical structures, this phenomenon may lead to the nonpropagation of uncertainties or information across scales. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems. The introduction of this paper provides both an overview of the paper and a self-contained minitutorial on the basic concepts and issues of UQ.
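The finite-dimensional reduction can be illustrated on the simplest possible OUQ problem (a toy example of my own, not code from the paper): bound P[X ≥ t] over all random variables X on [0, 1] with prescribed mean. Although the feasible set is infinite-dimensional (all such probability measures), a reduction theorem of the kind established in the paper guarantees that the extremum is attained by a measure supported on at most two points (one support point per constraint, plus one), so a brute-force search over two-point measures suffices.

```python
import numpy as np

def ouq_bound(mean, t, n_grid=201):
    """Least upper bound on P[X >= t] over all X supported on [0, 1] with
    E[X] = mean, searching only over two-point measures; an OUQ-style
    reduction theorem guarantees the extremum lies in this family.
    Toy illustration only."""
    xs = np.linspace(0.0, 1.0, n_grid)
    best = 0.0
    for x1 in xs:
        for x2 in xs:
            # skip pairs that cannot match the mean constraint
            if x1 == x2 or not (min(x1, x2) <= mean <= max(x1, x2)):
                continue
            w = (mean - x2) / (x1 - x2)          # weight on x1 matching E[X]
            p = w * (x1 >= t) + (1.0 - w) * (x2 >= t)
            best = max(best, p)
    return best

bound = ouq_bound(mean=0.2, t=0.5)  # recovers Markov's bound mean/t = 0.4
```

The search recovers the classical Markov inequality P[X ≥ t] ≤ E[X]/t, and shows it is sharp: the optimal bound 0.4 is attained by the two-point measure placing mass 0.4 at 0.5 and mass 0.6 at 0. Richer information sets (more moments, known distances between measures, partially known response functions) enlarge the reduced family in the same way.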

Published on Monday 10 June 2013 at 20:00 UTC #publication #siam-review #ouq #owhadi #scovel #mckerns #ortiz