### Welcome!

I am **Assistant Professor in Predictive Modelling** in the Mathematics Institute and School of Engineering at the University of Warwick and **Research Group Leader for Uncertainty Quantification** at the Zuse Institute Berlin.
I have wide interests in uncertainty quantification in the broad sense, understood as the meeting point of numerical analysis, applied probability and statistics, and scientific computation.
On this site you will find information about how to contact me, my research, publications, and teaching activities.

### Randomised integration for deterministic operator differential equations

Han Cheng Lie, Martin Stahn, and I have just uploaded a preprint of our recent work “Randomised one-step time integration methods for deterministic operator differential equations” to the arXiv. In this paper, we extend the analysis of Conrad et al. (2016) and Lie et al. (2019) to the case of evolutionary systems in Banach spaces or even Gel′fand triples, this being the right setting for many evolutionary partial differential equations.

**Abstract.**
Uncertainty quantification plays an important role in applications that involve simulating ensembles of trajectories of dynamical systems.
Conrad et al. (*Stat. Comput.*, 2017) proposed randomisation of deterministic time integration methods as a strategy for quantifying uncertainty due to time discretisation.
We consider this strategy for systems that are described by deterministic, possibly non-autonomous operator differential equations defined on a Banach space or a Gel′fand triple.
We prove pathwise and expected error bounds on the random trajectories, given an assumption on the local truncation error of the underlying deterministic time integration and an assumption that the absolute moments of the random variables decay with the time step. Our analysis shows that the error analysis for differential equations in finite-dimensional Euclidean space carries over to infinite-dimensional settings.
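
To convey the flavour of the randomisation strategy in a finite-dimensional toy setting, here is a minimal sketch in the spirit of Conrad et al.: after each deterministic Euler step, a Gaussian perturbation scaled with the step size is added, so that the noise matches the local error of the underlying method. (The function names and the noise scale here are illustrative, not taken from the paper.)

```python
import numpy as np

def randomised_euler(f, x0, t0, t1, h, p=1.0, scale=0.1, rng=None):
    """One sample path of a randomised Euler method: after each
    deterministic Euler step, add a centred Gaussian perturbation whose
    standard deviation scales like h**(p + 1/2), matching the local
    error of the underlying deterministic integrator."""
    rng = np.random.default_rng() if rng is None else rng
    ts = np.arange(t0, t1 + h / 2, h)
    xs = np.empty_like(ts)
    xs[0] = x0
    for k in range(len(ts) - 1):
        drift = xs[k] + h * f(ts[k], xs[k])           # deterministic Euler step
        xi = rng.normal(0.0, scale * h ** (p + 0.5))  # step-size-scaled noise
        xs[k + 1] = drift + xi
    return ts, xs

# An ensemble of trajectories for x' = -x; the spread of the ensemble
# quantifies the uncertainty due to time discretisation.
paths = [randomised_euler(lambda t, x: -x, 1.0, 0.0, 1.0, 0.05)[1]
         for _ in range(50)]
```

Each sample path is a perturbed numerical solution, and statistics of the ensemble (e.g. its spread at the final time) serve as a proxy for the discretisation error.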

Published on Wednesday 31 March 2021 at 09:00 UTC #preprint #prob-num #lie #stahn

### Convergence rates of Gaussian ODE filters in Statistics and Computing

The paper “Convergence rates of Gaussian ODE filters” by Hans Kersting, Philipp Hennig, and myself has just appeared in the journal *Statistics and Computing*.
In this work, we examine the strong convergence rates of probabilistic solvers for ODEs of the form \(\dot{x}(t) = f(x(t))\) that are based upon Gaussian filtering.
In some sense, this work combines the numerical analysis perspective of Conrad et al. (2016) and Lie et al. (2019) with the filtering perspective on probabilistic numerical methods for ODEs of Schober et al. (2014).

H. Kersting, T. J. Sullivan, and P. Hennig. “Convergence rates of Gaussian ODE filters.” *Statistics and Computing* 30(6):1791–1816, 2020.

**Abstract.**
A recently introduced class of probabilistic (uncertainty-aware) solvers for ordinary differential equations (ODEs) applies Gaussian (Kalman) filtering to initial value problems.
These methods model the true solution \(x\) and its first \(q\) derivatives a priori as a Gauss–Markov process \(X\), which is then iteratively conditioned on information about \(\dot{x}\).
This article establishes worst-case local convergence rates of order \(q + 1\) for a wide range of versions of this Gaussian ODE filter, as well as global convergence rates of order \(q\) in the case of \(q = 1\) and an integrated Brownian motion prior, and analyses how inaccurate information on \(\dot{x}\) coming from approximate evaluations of \(f\) affects these rates.
Moreover, we show that, in the globally convergent case, the posterior credible intervals are well calibrated in the sense that they globally contract at the same rate as the truncation error.
We illustrate these theoretical results by numerical experiments which might indicate their generalizability to \(q \in \{ 2, 3 , \dots \}\).
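
A simplified sketch of such a filter for \(q = 1\) with an integrated Brownian motion prior may help make the construction concrete: the state \((x, \dot{x})\) is Kalman-predicted forward and then conditioned on the "observation" \(z_k = f(x_{\text{pred}})\) of the derivative. (This is an illustrative simplification, with our own names and defaults, not the paper's exact formulation.)

```python
import numpy as np

def gaussian_ode_filter(f, x0, t0, t1, h, sigma2=1.0):
    """q = 1 Gaussian ODE filter with integrated Brownian motion prior.
    State = (x, dx/dt); each step is a Kalman predict followed by a
    noise-free update on the derivative information z = f(x_pred)."""
    A = np.array([[1.0, h], [0.0, 1.0]])                # IBM transition
    Q = sigma2 * np.array([[h**3 / 3, h**2 / 2],
                           [h**2 / 2, h]])              # process noise
    H = np.array([[0.0, 1.0]])                          # observe derivative
    m = np.array([x0, f(x0)])                           # initialise with exact slope
    P = np.zeros((2, 2))
    ts = np.arange(t0, t1 + h / 2, h)
    means = [m[0]]
    for _ in ts[:-1]:
        m_pred = A @ m
        P_pred = A @ P @ A.T + Q
        z = f(m_pred[0])                                # derivative information
        S = H @ P_pred @ H.T                            # innovation covariance
        K = P_pred @ H.T / S                            # Kalman gain
        m = m_pred + (K * (z - m_pred[1])).ravel()
        P = P_pred - K @ H @ P_pred
        means.append(m[0])
    return ts, np.array(means)
```

The filtering mean `means` plays the role of the numerical solution, while the posterior covariance \(P\) supplies the credible intervals whose calibration the paper analyses.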

Published on Tuesday 15 September 2020 at 09:00 UTC #publication #stco #prob-num #kersting #hennig

### Linear conditional expectation in Hilbert space

Ilja Klebanov, Björn Sprungk, and I have just uploaded a preprint of our recent work “The linear conditional expectation in Hilbert space” to the arXiv. In this paper, we study the best approximation \(\mathbb{E}^{\mathrm{A}}[U|V]\) of the conditional expectation \(\mathbb{E}[U|V]\) of a \(\mathcal{G}\)-valued random variable \(U\) conditional upon an \(\mathcal{H}\)-valued random variable \(V\), where “best” means \(L^{2}\)-optimality within the class \(\mathrm{A}(\mathcal{H}; \mathcal{G})\) of affine functions of the conditioning variable \(V\). This approximation is a powerful one and lies at the heart of the Bayes linear approach to statistical inference, but its analytical properties, especially for \(U\) and \(V\) taking values in infinite-dimensional spaces \(\mathcal{G}\) and \(\mathcal{H}\), are only partially understood — which this article aims to rectify.

**Abstract.**
The *linear conditional expectation* (LCE) provides a best linear (or rather, affine) estimate of the conditional expectation and hence plays an important rôle in approximate Bayesian inference, especially the *Bayes linear* approach. This article establishes the analytical properties of the LCE in an infinite-dimensional Hilbert space context. In addition, working in the space of affine Hilbert–Schmidt operators, we establish a regularisation procedure for this LCE. As an important application, we obtain a simple alternative derivation and intuitive justification of the *conditional mean embedding* formula, a concept widely used in machine learning to perform the conditioning of random variables by embedding them into reproducing kernel Hilbert spaces.
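
In finite dimensions the object in question is the familiar affine estimator from Bayes linear statistics, \(\mathbb{E}^{\mathrm{A}}[U|V] = \mathbb{E}[U] + C_{UV} C_{VV}^{\dagger} (V - \mathbb{E}[V])\). A small sample-based sketch (with illustrative names; the article's setting is infinite-dimensional) may help fix ideas:

```python
import numpy as np

def linear_conditional_expectation(U, V):
    """L2-best affine estimate E^A[U|V] = E[U] + C_UV C_VV^+ (V - E[V]),
    estimated from samples. U: (n, dU) array, V: (n, dV) array.
    Returns the affine map v -> E^A[U | V = v]."""
    mU, mV = U.mean(axis=0), V.mean(axis=0)
    Uc, Vc = U - mU, V - mV
    C_VV = Vc.T @ Vc / len(V)                # covariance of V
    C_UV = Uc.T @ Vc / len(V)                # cross-covariance of U and V
    A = C_UV @ np.linalg.pinv(C_VV)          # pseudo-inverse handles degeneracy
    return lambda v: mU + A @ (np.asarray(v) - mV)
```

For jointly Gaussian \((U, V)\) this affine estimate coincides with the true conditional expectation; in general it is only the best affine approximation to it.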

Published on Friday 28 August 2020 at 09:00 UTC #preprint #tru2 #bayesian #rkhs #mean-embedding #klebanov #sprungk

### Adaptive reconstruction of monotone functions in special issue of Algorithms

The paper “Adaptive reconstruction of imperfectly-observed monotone functions, with applications to uncertainty quantification” by Luc Bonnet, Jean-Luc Akian, Éric Savin, and myself has just appeared in a special issue of the journal *Algorithms* devoted to Methods and Applications of Uncertainty Quantification in Engineering and Science.
In this work, motivated by the computational needs of the optimal uncertainty quantification (OUQ) framework, we present and develop an algorithm for reconstructing a monotone function \(F\) given the ability to interrogate \(F\) pointwise, but subject to partially controllable one-sided observational errors of the type that one would typically encounter when the observations arise from a numerical optimisation routine.

L. Bonnet, J.-L. Akian, É. Savin, and T. J. Sullivan. “Adaptive reconstruction of imperfectly-observed monotone functions, with applications to uncertainty quantification.” *Algorithms* 13(8):196, 2020.

**Abstract.**
Motivated by the desire to numerically calculate rigorous upper and lower bounds on deviation probabilities over large classes of probability distributions, we present an adaptive algorithm for the reconstruction of increasing real-valued functions. While this problem is similar to the classical statistical problem of isotonic regression, the optimisation setting alters several characteristics of the problem and opens natural algorithmic possibilities. We present our algorithm, establish sufficient conditions for convergence of the reconstruction to the ground truth, and apply the method to synthetic test cases and a real-world example of uncertainty quantification for aerodynamic design.
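
The basic mechanism can be illustrated with a small sketch: each observation of an increasing \(F\) carries a one-sided error (as when the value comes from an incompletely converged minimisation), and monotonicity propagates the resulting pointwise bounds to any query point. (This illustrative helper is our own and is not the paper's full adaptive algorithm.)

```python
import numpy as np

def monotone_envelope(ts, ys, eps, query):
    """Envelope bounds for an increasing function F observed with
    one-sided error: each observation satisfies
    F(t_i) <= y_i <= F(t_i) + eps_i. Returns (lower, upper) bounds on
    F at the points in `query`, obtained by propagating the pointwise
    bounds via monotonicity."""
    ts, ys, eps = map(np.asarray, (ts, ys, eps))
    lower = np.full_like(query, -np.inf, dtype=float)
    upper = np.full_like(query, np.inf, dtype=float)
    for i, t in enumerate(ts):
        # y_i - eps_i <= F(t_i) <= F(s) for all s >= t_i
        lower = np.where(query >= t, np.maximum(lower, ys[i] - eps[i]), lower)
        # F(s) <= F(t_i) <= y_i for all s <= t_i
        upper = np.where(query <= t, np.minimum(upper, ys[i]), upper)
    return lower, upper
```

The adaptive question treated in the paper is then where to place the next (costly) observation, and with what error tolerance, so as to shrink this envelope most effectively.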

Published on Monday 17 August 2020 at 12:00 UTC #publication #algorithms #daad #ouq #isotonic #bonnet #akian #savin

### A rigorous theory of conditional mean embeddings in SIMODS

The article “A rigorous theory of conditional mean embeddings” by Ilja Klebanov, Ingmar Schuster, and myself has just appeared online in the *SIAM Journal on Mathematics of Data Science*.
In this work we take a close mathematical look at the method of conditional mean embedding.
In this approach to non-parametric inference, a random variable \(Y \sim \mathbb{P}_{Y}\) in a set \(\mathcal{Y}\) is represented by its *kernel mean embedding*, the reproducing kernel Hilbert space element

\( \displaystyle \mu_{Y} = \int_{\mathcal{Y}} \psi(y) \, \mathrm{d} \mathbb{P}_{Y} (y) \in \mathcal{G}, \)

and conditioning with respect to an observation \(x\) of a related random variable \(X \sim \mathbb{P}_{X}\) in a set \(\mathcal{X}\) with RKHS \(\mathcal{H}\) is performed using the Woodbury formula

\( \displaystyle \mu_{Y|X = x} = \mu_Y + (C_{XX}^{\dagger} C_{XY})^\ast \, (\varphi(x) - \mu_X) . \)

Here \(\psi \colon \mathcal{Y} \to \mathcal{G}\) and \(\varphi \colon \mathcal{X} \to \mathcal{H}\) are the canonical feature maps and the \(C\)'s denote the appropriate centred (cross-)covariance operators of the embedded random variables \(\psi(Y)\) in \(\mathcal{G}\) and \(\varphi(X)\) in \(\mathcal{H}\).
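
In practice, given samples \((x_i, y_i)\), this embedding is estimated by regularised kernel-matrix algebra. The following is a sketch of the standard empirical estimator (the regularisation parameter and helper names are our own; the article's contribution concerns the population-level operator formulation rather than this finite-sample recipe):

```python
import numpy as np

def empirical_cme_weights(X, x, k, lam=1e-3):
    """Weights w such that the estimated conditional mean embedding is
    mu_{Y|X=x} ≈ sum_i w_i psi(y_i): the usual regularised empirical
    estimator w = (K + n*lam*I)^{-1} k_X(x), with K_ij = k(x_i, x_j)
    and k_X(x)_i = k(x_i, x)."""
    n = len(X)
    K = np.array([[k(xi, xj) for xj in X] for xi in X])  # Gram matrix
    kx = np.array([k(xi, x) for xi in X])                # kernel column at x
    return np.linalg.solve(K + n * lam * np.eye(n), kx)

# With these weights, any conditional expectation E[g(Y) | X = x] is
# approximated by the weighted sample average sum_i w_i g(y_i).
```

The choice of \(\lambda\) trades off fidelity against stability of the matrix inverse, which is one place where the population-level analysis of \(C_{XX}^{\dagger}\) matters.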

Our article aims to provide rigorous mathematical foundations for this attractive but apparently naïve approach to conditional probability, and hence to Bayesian inference.

I. Klebanov, I. Schuster, and T. J. Sullivan. “A rigorous theory of conditional mean embeddings.” *SIAM Journal on Mathematics of Data Science* 2(3):583–606, 2020.

**Abstract.**
Conditional mean embeddings (CMEs) have proven themselves to be a powerful tool in many machine learning applications. They allow the efficient conditioning of probability distributions within the corresponding reproducing kernel Hilbert spaces by providing a linear-algebraic relation for the kernel mean embeddings of the respective joint and conditional probability distributions. Both centered and uncentered covariance operators have been used to define CMEs in the existing literature. In this paper, we develop a mathematically rigorous theory for both variants, discuss the merits and problems of each, and significantly weaken the conditions for applicability of CMEs. In the course of this, we demonstrate a beautiful connection to Gaussian conditioning in Hilbert spaces.

Published on Wednesday 15 July 2020 at 08:00 UTC #publication #simods #mathplus #tru2 #rkhs #mean-embedding #klebanov #schuster