# Tim Sullivan

## FU Berlin Research Seminar on Uncertainty Quantification

1. Summer Semester 2018
2. Winter Semester 2017–2018 (Forschungsmodul: Numerische Mathematik)
3. Summer Semester 2017 (High-Dimensional Probability with Applications to Data Science)
4. Winter Semester 2016–2017
5. Summer Semester 2016

### Summer Semester 2018

For the 2018 Summer Semester, the UQ research seminar will focus on particle methods in Bayesian inverse problems and data assimilation, and will meet weekly on Wednesdays, 10:15–11:45, in room 009 of Arnimallee 6. Further details will be announced soon!

Please note that, due to the SIAM Conference on Uncertainty Quantification 2018 taking place in Garden Grove, California, the UQ seminar will not meet on Wednesday 18 April 2018; the first meeting will be on Wednesday 25 April 2018.

### Winter Semester 2017–2018 (Forschungsmodul: Numerische Mathematik)

For the 2017–2018 Winter Semester, the UQ research seminar counts as the “Forschungsmodul: Numerische Mathematik”, and will meet weekly on Wednesdays, 10:15–11:45, in room 046 of Takustraße 9.

The rough plan for this semester is that about 5 of the 15 meetings will be “lecture style” presentations of general topics of interest in UQ, while the other 10 will be presentations of research papers, with discussion. To allow for sufficient depth, each paper will be discussed over two consecutive weeks, and seminar participants are encouraged to present in pairs.

The mathematical prerequisites for the seminar are knowledge of measure or probability theory, linear algebra, and ideally some linear functional analysis.

Note that the seminar does not meet in the first week of the Winter Semester on 18 October 2017; the first meeting will be on 25 October 2017.

### Summer Semester 2017 (High-Dimensional Probability with Applications to Data Science)

For the Summer Semester of 2017, the UQ research seminar will run as a module on High-Dimensional Probability with Applications to Data Science, following the draft lecture notes by Roman Vershynin.

Data sciences play an increasingly prominent role in modern society and are developing quickly. Probabilistic methods often provide foundation and inspiration for such developments. Particularly in the much-discussed regime of “big data”, the methods draw upon the elegant mathematics of high- and infinite-dimensional probability. Building upon the probability and linear algebra from basic undergraduate courses, this course will cover the key probabilistic methods and results that form an essential toolbox for a mathematical data scientist.

We will follow the draft lecture notes of Roman Vershynin, “High-Dimensional Probability: An Introduction with Applications in Data Science”, 2017, which are freely available online. The seminar meetings will summarise sections of the lecture notes. Students taking the course for credit will present at least one section in class, with additional credit for multiple presentations.

Topics:

1. Preliminaries on random variables
2. Concentration of sums of independent random variables
3. Random vectors in high dimensions
4. Sub-Gaussian random matrices
5. Concentration without independence
6. Quadratic forms, symmetrisation, and contraction
7. Random processes
8. Chaining
9. Deviations of random matrices and geometric consequences
10. Sparse recovery and compressed sensing

### Winter Semester 2016–2017

The research seminar / reading group meets on the indicated days, 10:15–11:45, in Room 046, Takustraße 9. See also the entry in the Course Catalogue.

**Week 1 (Monday 17 Oct. 2016): Administrative discussions**
N.B. unusual time and room: this meeting takes place on Monday 17 Oct. 2016 in SR 025/026, Arnimallee 6.

**Week 2 (26 Oct. 2016): MAP estimators for nonparametric BIPs**
Presentation: Tim Sullivan. The full posterior for a nonparametric Bayesian inverse problem is an intricate and computationally unwieldy object, and it often makes sense to consider point estimators analogous to the mode (“most likely point”) of a univariate distribution. The definition and analysis of such point estimators in function spaces is a subtle topic. We will discuss the paper of Dashti, Law, Stuart, and Voss (2013) on MAP estimators for Gaussian priors.
**Week 3 (2 Nov. 2016): Cameron–Martin spaces of Gaussian measures (I)**
Presentation: Han Cheng Lie. The Cameron–Martin space of a Gaussian measure on a locally convex topological vector space is a central object of study. Han Lie will give an introduction to the Cameron–Martin space and its various characterisations and properties, following Chapter 3 of V. Bogachev's book “Gaussian Measures”.

**Week 4 (9 Nov. 2016): Cameron–Martin spaces of Gaussian measures (II)**
Presentation: Han Cheng Lie. Continuation of the previous week's discussion.

**Week 5 (16 Nov. 2016): Examples**
Han Cheng Lie will present further examples to illustrate the technical discussions from the previous two weeks.

**Week 6 (23 Nov. 2016): RKHSs in machine learning**
Presentation: Ingmar Schuster. To complement the discussion from previous weeks, Ingmar Schuster will give an introduction to how reproducing kernel Hilbert spaces arise in machine learning applications. A Jupyter notebook accompanying the presentation can be found here.

**Week 7 (30 Nov. 2016): Weak MAP estimators for BIPs (I)**
Presentation: Tim Sullivan. Following on from the seminar of 26 Oct. 2016, we will discuss the paper of Helin and Burger (2015) on weak MAP estimators for Bayesian inverse problems.

**Week 8 (7 Dec. 2016): Weak MAP estimators for BIPs (II)**
Presentation: Tim Sullivan. Continuation of the previous week's discussion.
**Week 9 (14 Dec. 2016): Sequential Monte Carlo**
Jon Cockayne (Warwick) will give an overview of the theory underlying sequential Monte Carlo methods and some algorithmic implementations.
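To make the basic SMC mechanism (propagate, weight, resample) concrete, here is a minimal bootstrap particle filter for a toy scalar linear-Gaussian model; the model and all parameters are illustrative choices of ours, not material from the talk itself.

```python
import numpy as np

# A minimal bootstrap particle filter for the toy scalar model
#   x_t = 0.9 x_{t-1} + N(0, 0.5^2),   y_t = x_t + N(0, 1).
# Model and parameters are illustrative choices, not from the talk.
rng = np.random.default_rng(1)
T, N = 50, 2000
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + 0.5 * rng.standard_normal()
y = x_true + rng.standard_normal(T)        # noisy observations

particles = rng.standard_normal(N)         # initial ensemble
means = np.zeros(T)
for t in range(T):
    # propagate each particle through the dynamics...
    particles = 0.9 * particles + 0.5 * rng.standard_normal(N)
    # ...then weight by the likelihood of the observation y[t]
    w = np.exp(-0.5 * (y[t] - particles) ** 2)
    w /= w.sum()
    means[t] = np.dot(w, particles)
    # multinomial resampling combats weight degeneracy
    particles = rng.choice(particles, size=N, p=w)

print(np.mean((means - x_true) ** 2))  # filtered MSE, below the obs. noise variance 1
```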
**Week 10 (4 Jan. 2017)**

**Week 11 (11 Jan. 2017): Non-Asymptotic Concentration Inequalities**
Presentation: Han Cheng Lie. Some basic non-asymptotic concentration inequalities, including Hoeffding's inequality and Bennett's inequality, as well as the Cramér–Chernoff method. See Chapter 2 of the book Concentration Inequalities: A Nonasymptotic Theory of Independence by Stéphane Boucheron, Gábor Lugosi, and Pascal Massart.
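Hoeffding's inequality is easy to probe numerically. The following sketch checks the bound for uniform random variables on [0, 1]; the values of n, t and the trial count are arbitrary illustrative choices.

```python
import numpy as np

# Empirical check of Hoeffding's inequality for i.i.d. X_i uniform on [0, 1]:
#   P(S_n / n - 1/2 >= t) <= exp(-2 n t^2).
# n, t and the trial count are arbitrary illustrative choices.
rng = np.random.default_rng(2)
n, t, trials = 200, 0.05, 20_000
sample_means = rng.uniform(0.0, 1.0, size=(trials, n)).mean(axis=1)
empirical = np.mean(sample_means - 0.5 >= t)
bound = np.exp(-2 * n * t ** 2)
print(empirical, bound)  # the empirical tail probability sits well below the bound
```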
**Week 12 (18 Jan. 2017): Adaptive Multilevel Monte Carlo Methods for Stochastic Variational Inequalities**
Presentation: Evgenia Babushkina. While multilevel Monte Carlo (MLMC) methods for the numerical approximation of partial differential equations with uncertain coefficients enjoy great popularity, combinations with spatial adaptivity seem to be rare. We present an adaptive MLMC finite element approach based on deterministic adaptive mesh refinement for the arising “pathwise” problems and outline a convergence theory in terms of desired accuracy and required computational cost. Our theoretical and heuristic reasoning together with the efficiency of our new approach are confirmed by numerical experiments. (arXiv:1611.06012)

**Week 13 (25 Jan. 2017)**

**Week 14 (1 Feb. 2017): Convergence Analysis of the Ensemble Kalman Filter for Fixed Ensemble Size**
Presentation: Claudia Schillings (University of Warwick and HU Berlin). The Ensemble Kalman filter (EnKF) has had enormous impact on the applied sciences since its introduction in the 1990s by Evensen and coworkers. The low computational costs, the straightforward implementation and the non-intrusive nature make the EnKF appealing in various areas of application, but, on the downside, the method is underpinned by very limited theoretical understanding. We will present an analysis of the EnKF based on continuous-time scaling limits, which allow us to study the properties of the EnKF for fixed ensemble size.
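For readers unfamiliar with the EnKF, here is one analysis step in the standard perturbed-observations textbook formulation, for a linear observation operator; this is an illustrative sketch of ours and does not reproduce the continuous-time analysis of the talk. All sizes and noise levels are arbitrary choices.

```python
import numpy as np

# One EnKF analysis step with perturbed observations for a linear observation
# operator (standard textbook formulation; not the talk's continuous-time
# analysis).  All dimensions and noise levels are illustrative choices.
rng = np.random.default_rng(3)
d, J = 5, 40                                   # state dimension, ensemble size
H = np.eye(d)[:3]                              # observe the first three coordinates
R = 0.1 * np.eye(3)                            # observation noise covariance
x_true = rng.standard_normal(d)
y = H @ x_true + rng.multivariate_normal(np.zeros(3), R)

ensemble = rng.standard_normal((J, d))         # prior ensemble drawn from N(0, I)
A = ensemble - ensemble.mean(axis=0)           # ensemble anomalies
C = A.T @ A / (J - 1)                          # empirical covariance
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)   # Kalman gain from the ensemble
perturbed = y + rng.multivariate_normal(np.zeros(3), R, size=J)
analysis = ensemble + (perturbed - ensemble @ H.T) @ K.T
# the analysis mean is pulled toward the truth in the observed coordinates
print(np.linalg.norm(analysis.mean(axis=0)[:3] - x_true[:3]))
```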
**Week 15 (8 Feb. 2017): Empirical Bayes Methods and Information Theory**
Presentation: Ilja Klebanov. Typically, regularisations for maximum marginal likelihood estimation lack invariance under transformations of the parameter space (reparametrisations), resulting in inconsistent statistical inference. After a short introduction to (a small selection of) information-theoretic concepts, we will discuss how these can help us construct transformation-invariant regularisations.

**Week 16 (15 Feb. 2017)**
### Summer Semester 2016

The research seminar / reading group meets on the indicated days, 12:15–13:45, in Seminar Room 005, Arnimallee 3. See also the entry in the Course Catalogue.

**Week 1 (19 Apr. 2016): Administrative discussions; optimisation and distributionally-robust UQ (1/2)**
Presentation: Tim Sullivan. The treatment will follow Chapter 14 of Introduction to Uncertainty Quantification (doi:10.1007/978-3-319-23395-6) and the papers doi:10.1137/10080782X and doi:10.1051/m2an/2013083.

**Week 2 (26 Apr. 2016): Optimisation and distributionally-robust UQ (2/2)**
Presentation: Tim Sullivan. Continuation of the previous week's treatment.
**Week 3 (3 May 2016): Concentration of measure**
Presentation: Han Cheng Lie. Notes from this seminar are available here. A numerical example, written in Python, showing how a standard Gaussian random vector in $$\mathbb{R}^{n}$$ concentrates around the sphere of radius $$\sqrt{n}$$ is available here.
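A fresh sketch in the spirit of the Python example just mentioned (the original file is not reproduced here): the Euclidean norm of a standard Gaussian vector in $$\mathbb{R}^{n}$$ concentrates around $$\sqrt{n}$$, with fluctuations of size $$O(1)$$ independent of the dimension.

```python
import numpy as np

# The Euclidean norm of a standard Gaussian vector in R^n concentrates
# around sqrt(n); the spread of the norm stays O(1) as n grows.
# (Illustrative sketch; sample sizes are arbitrary choices.)
rng = np.random.default_rng(4)
for n in (10, 100, 10_000):
    norms = np.linalg.norm(rng.standard_normal((1000, n)), axis=1)
    print(n, norms.mean() / np.sqrt(n), norms.std())
# the ratio tends to 1 while the spread stays roughly constant
```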
**Week 4 (10 May 2016): Comparison of intrusive and non-intrusive UQ**
Presentation: Nina Loginova. The discussion will follow the 2010 paper of Constantine, Gleich & Iaccarino (arXiv:1006.3053). Postponed to 21 June 2016.

**Week 5 (17 May 2016): Frequentist consistency of Bayesian inference**
Presentation: Ilja Klebanov. The treatment will follow Section 6.5 of Introduction to Uncertainty Quantification (doi:10.1007/978-3-319-23395-6) and Richard Nickl's Statistical Theory lecture notes. Notes from this seminar are available here.

**Week 6 (24 May 2016): Bayesian inversion for PDEs**
Presentation: Ana Djurdjevac. Well-posedness of Bayesian inverse problems for elliptic PDEs such as Darcy flow and electrical impedance tomography. The discussion will follow Sections 3.5 to 3.7 of Stuart (2010) and Dunlop & Stuart (2015).

**Week 7 (31 May 2016): Gradient-based Monte Carlo sampling methods**
Presentation: Johannes von Lindheim. Evaluations of the gradient of the target density can be used to accelerate the convergence to equilibrium of Markov chain Monte Carlo schemes in various ways, e.g. the Metropolis-adjusted Langevin algorithm, Hamiltonian Monte Carlo, and Riemannian manifold Monte Carlo. The discussion will follow Girolami and Calderhead (2011) and the discussion papers of Girolami et al. and Bornn et al. Notes from this seminar are available here.
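As a concrete instance of the gradient-based schemes listed above, here is a minimal Metropolis-adjusted Langevin algorithm (MALA) targeting a standard Gaussian in two dimensions; this is an illustrative sketch of ours, far simpler than the Riemannian-manifold methods of Girolami and Calderhead (2011).

```python
import numpy as np

# Minimal MALA sampler targeting a standard Gaussian in R^2.
# (Illustrative sketch; target, step size and chain length are our choices.)
rng = np.random.default_rng(5)

def log_pi(x):            # log target density, up to an additive constant
    return -0.5 * x @ x

def grad_log_pi(x):       # its gradient, used to build the Langevin proposal
    return -x

def mala(x0, n_steps, eps):
    x = np.asarray(x0, dtype=float)
    lp = log_pi(x)
    samples = []
    for _ in range(n_steps):
        # Euler-Maruyama step of the Langevin diffusion as the proposal mean
        mu = x + 0.5 * eps * grad_log_pi(x)
        y = mu + np.sqrt(eps) * rng.standard_normal(x.shape)
        mu_rev = y + 0.5 * eps * grad_log_pi(y)
        # Metropolis correction restores exact invariance of the target
        log_alpha = (log_pi(y) - lp
                     - 0.5 * np.sum((x - mu_rev) ** 2) / eps
                     + 0.5 * np.sum((y - mu) ** 2) / eps)
        if np.log(rng.uniform()) < log_alpha:
            x, lp = y, log_pi(y)
        samples.append(x.copy())
    return np.array(samples)

chain = mala([3.0, -3.0], 5000, 0.5)
print(chain.mean(axis=0), chain.var(axis=0))  # near (0, 0) and (1, 1)
```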
**Week 8 (7 Jun. 2016): Ensemble Kalman filtering for data assimilation and Bayesian inverse problems**
Presentation: Han Cheng Lie. The presentation follows the recent preprint of Schillings and Stuart (2016). Notes from this seminar are available here.

**Week 9 (14 Jun. 2016): UQ in probabilistic ODE solvers**
Presentation: Hans Kersting (MPI-Tübingen). See abstract here. Note the unusual room: ZIB Seminar Room, Zuse Institute, Takustraße 7.

**Week 10 (21 Jun. 2016): Comparison of intrusive and non-intrusive UQ**
Presentation: Nina Loginova. The discussion will follow the 2010 paper of Constantine, Gleich & Iaccarino (arXiv:1006.3053).

**Week 11 (28 Jun. 2016): Well-posedness of Bayesian inverse problems**
Presentation: Tim Sullivan. We will show how to define infinite-dimensional analogues of heavy-tailed measures such as the Cauchy and $$\alpha$$-stable distributions, and establish a well-posedness theory for Bayesian inverse problems that have such measures as their priors, in the spirit of Stuart (2010). The discussion will follow the recent preprint of Sullivan (2016).

**Week 12 (5 Jul. 2016): Cancelled**
Cancelled due to the meeting of the Scientific Advisory Board of the Zuse Institute.

**Week 13 (12 Jul. 2016): Empirical Bayes methods and the EM algorithm**
Presentation: Ilja Klebanov. Notes from this seminar are available here.

**Week 14 (19 Jul. 2016): Cancelled**
Cancelled due to the European Congress of Mathematics.