
An order-theoretic perspective on modes and maximum a posteriori estimation in Bayesian inverse problems

Order-theoretic perspectives on MAP estimation in SIAM/ASA JUQ

The final version of “An order-theoretic perspective on modes and maximum a posteriori estimation in Bayesian inverse problems” by Hefin Lambley and myself has just appeared online in the SIAM/ASA Journal on Uncertainty Quantification.

On a heuristic level, modes and MAP estimators are intended to be the “most probable points” of a space \(X\) with respect to a probability measure \(\mu\). Thus, in some sense, they would seem to be the greatest elements of some order on \(X\), and a rigorous order-theoretic treatment is called for, especially for cases in which \(X\) is, say, an infinite-dimensional function space. Such an order-theoretic perspective opens up some attractive proof strategies for the existence of modes and MAP estimators but also leads to some interesting counterexamples. In particular, because the orders involved are not total, some pairs of points of \(X\) can be incomparable (i.e. neither is more nor less likely than the other). In fact we show that there are examples for which the collection of such mutually incomparable elements is dense in \(X\).
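
To give a flavour of how this heuristic can be made precise, the orders in question compare the probabilities of small balls. The following is only a rough sketch, and the limiting constructions used in the paper are more careful: writing \(B_r(x)\) for the open ball of radius \(r > 0\) centred at \(x \in X\), one can compare two points by declaring

\[ x \preceq y \quad\iff\quad \limsup_{r \to 0} \frac{\mu(B_r(x))}{\mu(B_r(y))} \leq 1 , \]

so that a strong mode in the usual small-ball sense is a point \(x^\star\) whose small-ball probabilities are asymptotically maximal:

\[ \lim_{r \to 0} \frac{\mu(B_r(x^\star))}{\sup_{x \in X} \mu(B_r(x))} = 1 . \]

Under this sketch definition, two points \(x\) and \(y\) are incomparable exactly when the ratio \(\mu(B_r(x)) / \mu(B_r(y))\) oscillates as \(r \to 0\), dipping below and rising above \(1\) infinitely often, which already hints at how incomparable pairs can arise.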

H. Lambley and T. J. Sullivan. “An order-theoretic perspective on modes and maximum a posteriori estimation in Bayesian inverse problems.” SIAM/ASA Journal on Uncertainty Quantification 11(4):1195–1224, 2023. doi:10.1137/22M154243X

Published on Friday 20 October 2023 at 09:00 UTC #publication #modes #order-theory #map-estimators #lambley #juq

SIAM/ASA JUQ

It is a pleasure and an honour to announce that, with effect from today, I will be serving as an Associate Editor for the SIAM/ASA Journal on Uncertainty Quantification.

SIAM/ASA Journal on Uncertainty Quantification (JUQ) publishes research articles presenting significant mathematical, statistical, algorithmic, and application advances in uncertainty quantification, defined as the interface of complex modeling of processes and data, especially characterizations of the uncertainties inherent in the use of such models. The journal also focuses on related fields such as sensitivity analysis, model validation, model calibration, data assimilation, and code verification. The journal also solicits papers describing new ideas that could lead to significant progress in methodology for uncertainty quantification as well as review articles on particular aspects. The journal is dedicated to nurturing synergistic interactions between the mathematical, statistical, computational, and applications communities involved in uncertainty quantification and related areas. JUQ is jointly offered by SIAM and the American Statistical Association.

Published on Tuesday 1 January 2019 at 18:00 UTC #editorial #siam #juq

Random forward models and log-likelihoods in Bayesian inverse problems

Random Bayesian inverse problems in JUQ

The article “Random forward models and log-likelihoods in Bayesian inverse problems” by Han Cheng Lie, Aretha Teckentrup, and myself has now appeared in its final form in the SIAM/ASA Journal on Uncertainty Quantification, volume 6, issue 4. The paper considers the effect of approximating the likelihood in a Bayesian inverse problem by a random surrogate, as frequently happens in applications, and shows that the resulting perturbed posterior distribution is close to the exact one in a suitable sense. It treats general randomisation models, thereby extending the earlier investigation of Stuart and Teckentrup (2018), which focused on the Gaussian setting.

H. C. Lie, T. J. Sullivan, and A. L. Teckentrup. “Random forward models and log-likelihoods in Bayesian inverse problems.” SIAM/ASA Journal on Uncertainty Quantification 6(4):1600–1629, 2018. doi:10.1137/18M1166523

Abstract. We consider the use of randomised forward models and log-likelihoods within the Bayesian approach to inverse problems. Such random approximations to the exact forward model or log-likelihood arise naturally when a computationally expensive model is approximated using a cheaper stochastic surrogate, as in Gaussian process emulation (kriging), or in the field of probabilistic numerical methods. We show that the Hellinger distance between the exact and approximate Bayesian posteriors is bounded by moments of the difference between the true and approximate log-likelihoods. Example applications of these stability results are given for randomised misfit models in large data applications and the probabilistic solution of ordinary differential equations.
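
For orientation, the underlying setting is the familiar function-space formulation of Bayesian inversion; the following display is only schematic, and the precise assumptions, moment conditions, and constants are spelled out in the paper. The exact and approximate posteriors \(\mu\) and \(\mu_N\) are defined by their densities with respect to a common prior \(\mu_0\),

\[ \frac{\mathrm{d}\mu}{\mathrm{d}\mu_0}(x) = \frac{\exp(-\Phi(x))}{Z} , \qquad \frac{\mathrm{d}\mu_N}{\mathrm{d}\mu_0}(x) = \frac{\exp(-\Phi_N(x))}{Z_N} , \]

where \(\Phi\) is the exact negative log-likelihood, \(\Phi_N\) is its random approximation, and \(Z\) and \(Z_N\) are the corresponding normalisation constants. The stability results then take the schematic form

\[ d_{\mathrm{H}}(\mu, \mu_N) \lesssim \| \Phi - \Phi_N \| , \]

where \(d_{\mathrm{H}}\) denotes the Hellinger distance and the right-hand side is a suitable moment of the log-likelihood error, taken over the prior and, where appropriate, over the randomness in \(\Phi_N\).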

Published on Monday 10 December 2018 at 12:00 UTC #publication #bayesian #inverse-problems #juq #prob-num #lie #teckentrup