Tim Sullivan

Welcome!

I am a Reader in Predictive Modelling in the Mathematics Institute and School of Engineering at the University of Warwick. I have wide interests in uncertainty quantification in the broad sense, understood as the meeting point of numerical analysis, applied probability and statistics, and scientific computation. On this site you will find information about how to contact me, my research, publications, and teaching activities.

Transporting higher-order quadrature rules: Quasi-Monte Carlo points and sparse grids for mixture distributions

Transporting QMC points in Statistics and Computing

The article “Transporting higher-order quadrature rules: Quasi-Monte Carlo points and sparse grids for mixture distributions” by Ilja Klebanov and myself has just been published in its final form in Statistics and Computing.

I. Klebanov and T. J. Sullivan. “Transporting higher-order quadrature rules: Quasi-Monte Carlo points and sparse grids for mixture distributions.” Statistics and Computing 36(1):no. 46, 19pp., 2026. doi:10.1007/s11222-025-10764-x

Abstract. Integration against, and hence sampling from, high-dimensional probability distributions is of essential importance in many application areas and has been an active research area for decades. One approach that has drawn increasing attention in recent years has been the generation of samples from a target distribution \(\mathbb{P}_{\mathrm{tar}}\) using transport maps: if \(\mathbb{P}_{\mathrm{tar}} = T_{\sharp} \mathbb{P}_{\mathrm{ref}}\) is the pushforward of an easily-sampled probability distribution \(\mathbb{P}_{\mathrm{ref}}\) under the transport map \(T\), then the application of \(T\) to \(\mathbb{P}_{\mathrm{ref}}\)-distributed samples yields \(\mathbb{P}_{\mathrm{tar}}\)-distributed samples. This paper proposes the application of transport maps not just to random samples, but also to quasi-Monte Carlo points, higher-order nets, and sparse grids so that the transformed samples inherit the original convergence rates that are often better than \(N^{-1/2}\), \(N\) being the number of samples/quadrature nodes. Our main result is the derivation of an explicit transport map for the case that \(\mathbb{P}_{\mathrm{tar}}\) is a mixture of simple distributions, e.g. a Gaussian mixture, in which case application of the transport map \(T\) requires the solution of an explicit ODE with closed-form right-hand side. Mixture distributions are of particular applicability and interest since many methods proceed by first approximating \(\mathbb{P}_{\mathrm{tar}}\) by a mixture and then sampling from that mixture (often using importance reweighting). Hence, this paper allows for the sampling step to provide a better convergence rate than \(N^{-1/2}\) for all such methods.
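As a toy illustration of the transport idea only (not the mixture ODE map constructed in the paper), the following Python sketch pushes scrambled Sobol points from the unit cube to a standard Gaussian via the componentwise inverse CDF and uses them as quadrature nodes; the integrand and all names here are illustrative.

# A toy illustration only: scrambled Sobol points are transported from the
# unit cube to a standard Gaussian by the componentwise inverse CDF and then
# used as quadrature nodes; the integrand is an arbitrary smooth test function.
import numpy as np
from scipy.stats import norm, qmc

d, m = 2, 12                                   # dimension and 2**m nodes
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
u = sobol.random_base2(m)                      # low-discrepancy points in [0, 1)^d
x = norm.ppf(u)                                # transport map: inverse Gaussian CDF, componentwise

f = lambda y: np.exp(0.1 * y.sum(axis=1))      # smooth test integrand
qmc_est = f(x).mean()

rng = np.random.default_rng(0)                 # plain Monte Carlo with the same budget
mc_est = f(rng.standard_normal((2**m, d))).mean()

exact = np.exp(0.5 * 0.1**2 * d)               # E[f(X)] for X ~ N(0, I_d)
print(f"QMC error {abs(qmc_est - exact):.2e}, MC error {abs(mc_est - exact):.2e}")

For smooth integrands such as this one, the transported low-discrepancy nodes typically beat the \(N^{-1/2}\) Monte Carlo error at the same budget, which is precisely the behaviour the paper seeks to preserve for more general (mixture) targets.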

Published on Tuesday 23 December 2025 at 12:00 UTC #publication #stat-comput #qmc #klebanov

Autoencoders in function space

Autoencoders in function space in JMLR

The article “Autoencoders in function space” by Justin Bunker, Mark Girolami, Hefin Lambley, Andrew Stuart and myself has just appeared in its final form in the Journal of Machine Learning Research. This article continues one of the main themes of my work with collaborators, namely that powerful discretisation-invariant learning methods can be obtained by examining the problem in an infinite-dimensional function space instead of on a fixed grid.

Abstract. Autoencoders have found widespread application in both their original deterministic form and in their variational formulation (VAEs). In scientific applications and in image processing it is often of interest to consider data that are viewed as functions; while discretisation (of differential equations arising in the sciences) or pixellation (of images) renders problems finite dimensional in practice, conceiving first of algorithms that operate on functions, and only then discretising or pixellating, leads to better algorithms that smoothly operate between resolutions. In this paper function-space versions of the autoencoder (FAE) and variational autoencoder (FVAE) are introduced, analysed, and deployed. Well-definedness of the objective governing VAEs is a subtle issue, particularly in function space, limiting applicability. For the FVAE objective to be well defined requires compatibility of the data distribution with the chosen generative model; this can be achieved, for example, when the data arise from a stochastic differential equation, but is generally restrictive. The FAE objective, on the other hand, is well defined in many situations where FVAE fails to be. Pairing the FVAE and FAE objectives with neural operator architectures that can be evaluated on any mesh enables new applications of autoencoders to inpainting, superresolution, and generative modelling of scientific data.
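To make the discretisation-invariance point concrete, here is a minimal, hypothetical PyTorch sketch, not the FAE/FVAE architecture of the paper: the encoder aggregates pointwise features by a mean over mesh nodes, so the same network accepts functions sampled on any mesh, and the decoder can be queried at arbitrary coordinates.

# A toy sketch of the discretisation-invariance idea only, not the paper's
# FAE/FVAE architecture: the encoder pools pointwise features over mesh nodes,
# so inputs may be sampled on any mesh, and the decoder is evaluated at
# arbitrary query coordinates.
import torch
import torch.nn as nn

class MeshFreeAutoencoder(nn.Module):
    def __init__(self, latent_dim=16, width=64):
        super().__init__()
        # encoder acts on (coordinate, value) pairs, so it is mesh-size agnostic
        self.point_encoder = nn.Sequential(
            nn.Linear(2, width), nn.GELU(), nn.Linear(width, latent_dim))
        # decoder maps (latent code, query coordinate) to a function value
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 1, width), nn.GELU(), nn.Linear(width, 1))

    def encode(self, x, u):
        # x: (batch, n_mesh, 1) coordinates; u: (batch, n_mesh, 1) function values
        features = self.point_encoder(torch.cat([x, u], dim=-1))
        return features.mean(dim=1)              # pool over mesh nodes -> (batch, latent_dim)

    def decode(self, z, x_query):
        # z: (batch, latent_dim); x_query: (batch, n_query, 1)
        z_rep = z.unsqueeze(1).expand(-1, x_query.shape[1], -1)
        return self.decoder(torch.cat([z_rep, x_query], dim=-1))

    def forward(self, x, u, x_query):
        return self.decode(self.encode(x, u), x_query)

# usage: encode a function observed on a coarse random mesh, then reconstruct
# it on a finer regular mesh (a superresolution-style query)
model = MeshFreeAutoencoder()
x_coarse = torch.rand(8, 32, 1)                  # 32 mesh points per sample
u_coarse = torch.sin(2 * torch.pi * x_coarse)
x_fine = torch.linspace(0, 1, 128).view(1, -1, 1).expand(8, -1, -1)
u_hat = model(x_coarse, u_coarse, x_fine)        # shape (8, 128, 1)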

Published on Sunday 7 September 2025 at 12:00 UTC #publication #jmlr #bunker #girolami #lambley #stuart #autoencoders

Classification of small-ball modes and maximum a posteriori estimators

Classification of small-ball modes and maximum a posteriori estimators

Ilja Klebanov, Hefin Lambley, and I have just uploaded a preprint of our paper “Classification of small-ball modes and maximum a posteriori estimators” to the arXiv. This work thoroughly revises and extends our earlier preprint “A ‘periodic table’ of modes and maximum a posteriori estimators”.

Abstract. A mode, or “most likely point”, for a probability measure \(\mu\) can be defined in various ways via the asymptotic behaviour of the \(\mu\)-mass of balls as their radius tends to zero. Such points are of intrinsic interest in the local theory of measures on metric spaces and also arise naturally in the study of Bayesian inverse problems and diffusion processes. Building upon special cases already proposed in the literature, this paper develops a systematic framework for defining modes through small-ball probabilities. We propose “common-sense” axioms that such definitions should obey, including appropriate treatment of discrete and absolutely continuous measures, as well as symmetry and invariance properties. We show that there are exactly ten such definitions consistent with these axioms, and that they are partially but not totally ordered in strength, forming a complete, distributive lattice. We also show how this classification simplifies for well-behaved \(\mu\).
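For orientation, one representative small-ball definition already proposed in the literature, often called a strong mode, declares a point \(u^{*}\) to be a mode of \(\mu\) when the mass of small balls centred at \(u^{*}\) is asymptotically maximal:

\[
  \lim_{r \searrow 0} \frac{\mu \bigl( B_{r}(u^{*}) \bigr)}{\sup_{v} \mu \bigl( B_{r}(v) \bigr)} = 1 ,
\]

where \(B_{r}(v)\) denotes the ball of radius \(r\) centred at \(v\). The ten definitions classified in the paper arise from different ways of making such comparisons of small-ball masses.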

Published on Wednesday 26 March 2025 at 12:00 UTC #preprint #modes #map-estimators #klebanov #lambley

Hille's theorem for Bochner integrals of functions with values in locally convex spaces

Hille's theorem for locally convex spaces in Real Analysis Exchange

The article “Hille's theorem for Bochner integrals of functions with values in locally convex spaces” has just appeared in its final form in Real Analysis Exchange.

T. J. Sullivan. “Hille's theorem for Bochner integrals of functions with values in locally convex spaces.” Real Analysis Exchange 49(2):377–388, 2024. doi:10.14321/realanalexch.49.2.1719547551

Abstract. Hille's theorem is a powerful classical result in vector measure theory. It asserts that the application of a closed, unbounded linear operator commutes with strong/Bochner integration of functions taking values in a Banach space. This note shows that Hille's theorem also holds in the setting of complete locally convex spaces.
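For reference, the classical Banach-space form of the theorem states, roughly, that if \(T\) is a closed linear operator and both \(f\) and \(T f\) are Bochner integrable, with \(f\) taking values in the domain \(D(T)\) almost everywhere, then

\[
  \int_{\Omega} f \, \mathrm{d}\mu \in D(T)
  \quad \text{and} \quad
  T \int_{\Omega} f \, \mathrm{d}\mu = \int_{\Omega} T f \, \mathrm{d}\mu .
\]

The article establishes this commutation property for functions taking values in complete locally convex spaces.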

Published on Tuesday 1 October 2024 at 13:00 UTC #publication #real-anal-exch #functional-analysis #hille-theorem

Autoencoders in function space

Autoencoders in function space

Justin Bunker, Mark Girolami, Hefin Lambley, Andrew Stuart and I have just uploaded a preprint of our paper “Autoencoders in function space” to the arXiv.

Abstract. Autoencoders have found widespread application, in both their original deterministic form and in their variational formulation (VAEs). In scientific applications it is often of interest to consider data that are comprised of functions; the same perspective is useful in image processing. In practice, discretisation (of differential equations arising in the sciences) or pixellation (of images) renders problems finite dimensional, but conceiving first of algorithms that operate on functions, and only then discretising or pixellating, leads to better algorithms that smoothly operate between different levels of discretisation or pixellation. In this paper function-space versions of the autoencoder (FAE) and variational autoencoder (FVAE) are introduced, analysed, and deployed. Well-definedness of the objective function governing VAEs is a subtle issue, even in finite dimension, and more so on function space. The FVAE objective is well defined whenever the data distribution is compatible with the chosen generative model; this happens, for example, when the data arise from a stochastic differential equation. The FAE objective is valid much more broadly, and can be straightforwardly applied to data governed by differential equations. Pairing these objectives with neural operator architectures, which can thus be evaluated on any mesh, enables new applications of autoencoders to inpainting, superresolution, and generative modelling of scientific data.
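For readers unfamiliar with the objective in question: in finite dimensions the VAE is trained by maximising the evidence lower bound (ELBO),

\[
  \log p_{\theta}(u) \;\geq\; \mathbb{E}_{q_{\phi}(z \mid u)} \bigl[ \log p_{\theta}(u \mid z) \bigr] - \mathrm{KL} \bigl( q_{\phi}(z \mid u) \,\big\|\, p(z) \bigr) ,
\]

and the subtlety addressed by the paper is whether a meaningful analogue of this bound survives when the data \(u\) live in an infinite-dimensional function space, which is where compatibility between the data distribution and the generative model becomes essential.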

Published on Monday 5 August 2024 at 12:00 UTC #preprint #bunker #girolami #lambley #stuart #autoencoders