Time and Place. Friday 24 August 2018, 10:15–11:45, University of Potsdam, Campus Golm, Building 27, Lecture Hall 0.01
Abstract. Many problems in machine learning require the classification of high-dimensional data. One methodology for approaching such problems is to construct a graph whose vertices are identified with the data points, with edges weighted according to some measure of affinity between the data points. Algorithms such as spectral clustering, probit classification and the Bayesian level set method can all be applied in this setting. The goal of the talk is to describe these algorithms for classification, and to analyse them in the limit of large data sets. Doing so leads to interesting problems in the calculus of variations, in stochastic partial differential equations and in Markov chain Monte Carlo, all of which will be highlighted in the talk. These limiting problems give insight into the structure of the classification problem, and into algorithms for it.
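The graph construction in the abstract can be sketched in a few lines. Below is a minimal two-cluster illustration — Gaussian affinity weights, the unnormalised graph Laplacian, and a sign split of the second (Fiedler) eigenvector; the kernel width `sigma` and the toy data are my own choices for illustration, not taken from the talk.

```python
import numpy as np

def spectral_bipartition(points, sigma=1.0):
    """Two-way spectral clustering: Gaussian affinity graph,
    unnormalised graph Laplacian L = D - W, split by the sign
    of the second eigenvector (the Fiedler vector)."""
    X = np.asarray(points, dtype=float)
    # Pairwise squared distances and Gaussian edge weights.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)        # eigenvalues ascending
    return (vecs[:, 1] > 0).astype(int)

# Two well-separated 2D clusters of 10 points each.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
                 rng.normal(3.0, 0.1, (10, 2))])
labels = spectral_bipartition(pts, sigma=1.0)
```

Probit classification and the Bayesian level set method would act on the same weighted graph, but treat the labels as noisy observations of a latent function on the vertices rather than thresholding an eigenvector directly.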
Next month, 22–24 August 2018, along with Matt Dunlop (Helsinki), Tapio Helin (Helsinki), and Simo Särkkä (Aalto), I will be giving guest lectures at a Summer School / Workshop on Computational Mathematics and Data Science at the University of Oulu, Finland.
While the other lecturers will treat aspects such as machine learning using deep Gaussian processes, filtering, and MAP estimation, my lectures will tackle the fundamentals of the Bayesian approach to inverse problems in the function-space context, as increasingly demanded by modern applications.
“Well-posedness of Bayesian inverse problems in function spaces: analysis and algorithms”
The basic formalism of the Bayesian method is easily stated, and appears in every introductory probability and statistics course: the posterior probability is proportional to the prior probability times the likelihood. However, for inference problems in high or even infinite dimension, the Bayesian formula must be carefully formulated and its stability properties mathematically analysed. The paradigm advocated by Andrew Stuart and collaborators since 2010 is that one should study the infinite-dimensional Bayesian inverse problem directly and delay discretisation until the last moment. These lectures will study the role of various choices of prior distribution and likelihood and how they lead to well-posed or ill-posed Bayesian inverse problems. If time permits, we will also consider the implications for algorithms, and how Bayesian posteriors are summarised (e.g. by maximum a posteriori estimators).
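As a finite-dimensional caricature of the "posterior ∝ prior × likelihood" formalism, the following sketch infers a scalar unknown from one noisy observation by tabulating the unnormalised posterior on a grid. The forward map `G`, the noise level and the prior width are all invented for illustration; the function-space theory in the lectures is precisely about what survives when the grid is refined and the dimension grows.

```python
import math

# Toy Bayesian inverse problem: infer scalar u from y = G(u) + eta,
# with Gaussian prior N(0, prior_std^2) and Gaussian noise.
def G(u):
    return u ** 3                # a nonlinear forward map (illustrative)

y, noise_std = 2.0, 0.5          # observed datum and noise level
prior_std = 1.0                  # prior standard deviation

def log_prior(u):
    return -0.5 * (u / prior_std) ** 2

def log_likelihood(u):
    return -0.5 * ((y - G(u)) / noise_std) ** 2

# Unnormalised posterior on a grid over [-5, 5], then normalise
# by simple quadrature so the density integrates to one.
h = 0.01
grid = [i * h - 5.0 for i in range(1001)]
unnorm = [math.exp(log_prior(u) + log_likelihood(u)) for u in grid]
Z = sum(unnorm) * h              # normalising constant
posterior = [p / Z for p in unnorm]
```

Note how the posterior mode sits between the prior mean 0 and the naive inversion G⁻¹(y) = 2¹ᐟ³ ≈ 1.26: the prior regularises the data's pull, which is exactly the mechanism that must be controlled in infinite dimensions.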
This week's colloquium at the Einstein Center for Mathematics Berlin will be on the topic of “Stochastics meets PDE.” The speakers will be:
- Antoine Gloria (Sorbonne): Stochastic homogenization: regularity, oscillations, and fluctuations
- Peter Friz (TU Berlin and WIAS Berlin): Rough Paths, Stochastics and PDEs
- Nicholas Dirr (Cardiff): Interacting Particle Systems and Gradient Flows
Time and Place. Friday 6 July 2018, 14:00–17:00, Humboldt-Universität zu Berlin, Main Building Room 2094, Unter den Linden 6, 10099 Berlin.
Published on Monday 2 July 2018 at 12:00 UTC #event
The fourth SIAM Conference on Uncertainty Quantification (SIAM UQ18) will take place at the Hyatt Regency Orange County, Garden Grove, California, this week, 16–19 April 2018.
As part of this conference, Mark Girolami, Philipp Hennig, Chris Oates and I will organise a mini-symposium on “Probabilistic Numerical Methods for Quantification of Discretisation Error” (MS4, MS17 and MS32).
Published on Saturday 14 April 2018 at 08:00 UTC #event
Next week Chris Oates and I will host the SAMSI–Lloyds–Turing Workshop on Probabilistic Numerical Methods at the Alan Turing Institute, London, housed in the British Library. The workshop is being held as part of the SAMSI Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applied Mathematics.
The accuracy and robustness of numerical predictions that are based on mathematical models depend critically upon the construction of accurate discrete approximations to key quantities of interest. The exact error due to approximation will be unknown to the analyst, but worst-case upper bounds can often be obtained. This workshop aims, instead, to further the development of Probabilistic Numerical Methods, which provide the analyst with a richer, probabilistic quantification of the numerical error in their output, thus providing better tools for reliable statistical inference.
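The contrast between a single point estimate and a probabilistic quantification of numerical error can be illustrated with plain Monte Carlo integration, where the central limit theorem supplies a natural error scale. This is a toy illustration of the idea of reporting a distribution over the error, not one of the probabilistic numerical methods developed at the workshop.

```python
import math
import random

# Estimate the integral of exp over [0, 1] (exact value e - 1)
# and report not just the estimate but a CLT-based error scale,
# which downstream statistical inference can propagate.
random.seed(42)
n = 100_000
samples = [math.exp(random.random()) for _ in range(n)]
mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / (n - 1)
std_err = math.sqrt(var / n)     # probabilistic error scale

truth = math.e - 1               # exact value, for comparison only
```

A worst-case analysis would instead bound the error by a deterministic quantity that is typically far more pessimistic than the realised error; the probabilistic summary `(mean, std_err)` is the kind of richer output the workshop abstract refers to.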
This workshop has been made possible by the generous support of SAMSI, the Alan Turing Institute, and the Lloyd's Register Foundation Data-Centric Engineering Programme.