The 2020 SIAM Conference on Uncertainty Quantification (UQ20) will take place from 24 to 27 March 2020 on the Garching campus (near Munich) of the Technical University of Munich (TUM), Germany. UQ20 is being organised in cooperation with the GAMM Activity Group on UQ.
More information about the scientific programme will be added in due course, but the following scientists are already confirmed as plenary speakers:
- David M. Higdon, Virginia Polytechnic Institute and State University, USA
- George Em Karniadakis, Brown University, USA
- Frances Y. Kuo, University of New South Wales, Australia
- Youssef M. Marzouk, Massachusetts Institute of Technology, USA
- Anthony Nouy, École Centrale de Nantes, France
- Elaine Spiller, Marquette University, USA
- Claudia Tebaldi, The Joint Global Change Research Institute, USA
- Karen Veroy-Grepl, RWTH Aachen University, Germany
The SIAM/ASA Journal on Uncertainty Quantification (JUQ) publishes research articles presenting significant mathematical, statistical, algorithmic, and application advances in uncertainty quantification, defined as the interface of complex modelling of processes and data, especially characterisations of the uncertainties inherent in the use of such models. The journal also covers related fields such as sensitivity analysis, model validation, model calibration, data assimilation, and code verification, and solicits both papers describing new ideas that could lead to significant progress in methodology for uncertainty quantification and review articles on particular aspects. JUQ is dedicated to nurturing synergistic interactions between the mathematical, statistical, computational, and applications communities involved in uncertainty quantification and related areas, and is jointly offered by SIAM and the American Statistical Association.
The fourth SIAM Conference on Uncertainty Quantification (SIAM UQ18) will take place at the Hyatt Regency Orange County, Garden Grove, California, this week, 16–19 April 2018.
As part of this conference, Mark Girolami, Philipp Hennig, Chris Oates, and I will organise a mini-symposium on “Probabilistic Numerical Methods for Quantification of Discretisation Error” (MS4, MS17 and MS32).
The third SIAM Conference on Uncertainty Quantification (SIAM UQ16) will take place at the SwissTech Convention Center in Lausanne, Switzerland, this week, 5–8 April 2016.
As part of this conference, Mark Girolami and I will organise a mini-symposium on “Over-confidence in numerical predictions: challenges and solutions” (MS138 and MS153), which will feature a wide range of perspectives, including Bayesian and frequentist (in)consistency, probabilistic numerics, and application fields.
The 2015 Q4 issue of SIAM Review will carry an article by Houman Owhadi, Clint Scovel, and myself on the brittle dependence of Bayesian posteriors upon the choice of prior. This is an abbreviated presentation of results given in full earlier this year in Elec. J. Stat. The PDF is available for free under the terms of the Creative Commons 4.0 licence.
H. Owhadi, C. Scovel, and T. J. Sullivan. “On the brittleness of Bayesian inference.” SIAM Review 57(4):566–582, 2015. doi:10.1137/130938633
Abstract. With the advent of high-performance computing, Bayesian methods are becoming increasingly popular tools for the quantification of uncertainty throughout science and industry. Since these methods can impact the making of sometimes critical decisions in increasingly complicated contexts, the sensitivity of their posterior conclusions with respect to the underlying models and prior beliefs is a pressing question to which there currently exist positive and negative answers. We report new results suggesting that, although Bayesian methods are robust when the number of possible outcomes is finite or when only a finite number of marginals of the data-generating distribution are unknown, they could be generically brittle when applied to continuous systems (and their discretizations) with finite information on the data-generating distribution. If closeness is defined in terms of the total variation (TV) metric or the matching of a finite system of generalized moments, then (1) two practitioners who use arbitrarily close models and observe the same (possibly arbitrarily large amount of) data may reach opposite conclusions; and (2) any given prior and model can be slightly perturbed to achieve any desired posterior conclusion. The mechanism causing brittleness/robustness suggests that learning and robustness are antagonistic requirements, which raises the possibility of a missing stability condition when using Bayesian inference in a continuous world under finite information.
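The likelihood-ratio mechanism behind point (2) of the abstract can be illustrated with a toy computation (a sketch of my own, not taken from the paper; all numbers are hypothetical). Two models assign the same tiny probability q to a rare outcome y₀, except that the perturbed model shifts an additional ε of probability mass onto y₀ for one parameter value θ*. Each conditional distribution moves by at most ε in total variation, yet if y₀ happens to be observed, the perturbed posterior concentrates almost entirely on θ*:

```python
import numpy as np

# Finite parameter grid theta in {0, ..., 9}, uniform prior.
n_theta = 10
prior = np.full(n_theta, 1.0 / n_theta)

# Model A: the observed rare outcome y0 has the same tiny probability q
# under every theta, so observing y0 is uninformative.
q = 1e-6
lik_A = np.full(n_theta, q)          # P_A(y0 | theta)

# Model B: for theta* only, shift eps of probability mass onto y0
# (taken from other outcomes, which we do not need to enumerate here).
# Then TV(P_A(.|theta), P_B(.|theta)) <= eps for every theta.
eps = 0.01
theta_star = 7
lik_B = lik_A.copy()
lik_B[theta_star] += eps             # P_B(y0 | theta*)

# Posteriors after observing y0 under each model.
post_A = prior * lik_A
post_A /= post_A.sum()               # uniform: y0 carries no information
post_B = prior * lik_B
post_B /= post_B.sum()               # concentrates on theta_star
```

Under model A the posterior stays uniform (each mass 0.1), while under the ε-perturbed model B the posterior puts over 99% of its mass on θ*, despite the two models being within ε in TV for every parameter value. In the continuous setting of the paper, refining the model makes such likelihood ratios unbounded, which is what drives brittleness.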