Next month, 22–24 August 2018, along with Matt Dunlop (Helsinki), Tapio Helin (Helsinki), and Simo Särkkä (Aalto), I will be giving guest lectures at a Summer School / Workshop on Computational Mathematics and Data Science at the University of Oulu, Finland.
While the other lecturers will cover topics such as machine learning using deep Gaussian processes, filtering, and MAP estimation, my lectures will tackle the fundamentals of the Bayesian approach to inverse problems in the function-space context, as increasingly demanded by modern applications.
“Well-posedness of Bayesian inverse problems in function spaces: analysis and algorithms”
The basic formalism of the Bayesian method is easily stated, and appears in every introductory probability and statistics course: the posterior probability is proportional to the prior probability times the likelihood. However, for inference problems in high or even infinite dimension, the Bayesian formula must be carefully formulated and its stability properties mathematically analysed. The paradigm advocated by Andrew Stuart and collaborators since 2010 is that one should study the infinite-dimensional Bayesian inverse problem directly and delay discretisation until the last moment. These lectures will study the role of various choices of prior distribution and likelihood and how they lead to well-posed or ill-posed Bayesian inverse problems. If time permits, we will also consider the implications for algorithms, and how Bayesian posteriors are summarised (e.g. by maximum a posteriori estimators).
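In the function-space setting, "posterior ∝ prior × likelihood" is usually made precise as a statement about measures rather than densities. As a sketch of the standard formulation (the notation μ⁰, μʸ, Φ, and Z below is the conventional one from the function-space literature, not taken from the lectures themselves): the posterior μʸ is defined by its Radon–Nikodym derivative with respect to the prior μ₀,

```latex
% Posterior measure \mu^y versus prior measure \mu_0 on a function space,
% with negative log-likelihood (potential) \Phi(u; y) for data y:
\frac{\mathrm{d}\mu^{y}}{\mathrm{d}\mu_{0}}(u)
  = \frac{1}{Z(y)} \exp\bigl( -\Phi(u; y) \bigr),
\qquad
Z(y) = \int \exp\bigl( -\Phi(u; y) \bigr) \, \mu_{0}(\mathrm{d}u).
```

Stating Bayes' formula this way avoids any reference to Lebesgue densities, which do not exist in infinite dimension, and the well-posedness questions in the lectures then concern how μʸ depends on the data y and on the choices of μ₀ and Φ.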