Tim Sullivan


Linear conditional expectation in Hilbert space

Ilja Klebanov, Björn Sprungk, and I have just uploaded a preprint of our recent work “The linear conditional expectation in Hilbert space” to the arXiv. In this paper, we study the best approximation \(\mathbb{E}^{\mathrm{A}}[U|V]\) of the conditional expectation \(\mathbb{E}[U|V]\) of a \(\mathcal{G}\)-valued random variable \(U\) conditional upon an \(\mathcal{H}\)-valued random variable \(V\), where “best” means \(L^{2}\)-optimality within the class \(\mathrm{A}(\mathcal{H}; \mathcal{G})\) of affine functions of the conditioning variable \(V\). This approximation is a powerful one and lies at the heart of the Bayes linear approach to statistical inference, but its analytical properties, especially for \(U\) and \(V\) taking values in infinite-dimensional spaces \(\mathcal{G}\) and \(\mathcal{H}\), are only partially understood; this article aims to rectify that.
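In the finite-dimensional setting this best affine approximation has the familiar closed form \(\mathbb{E}^{\mathrm{A}}[U|V=v] = \mathbb{E}[U] + \operatorname{Cov}(U, V) \operatorname{Cov}(V, V)^{\dagger} (v - \mathbb{E}[V])\). The following NumPy sketch (with purely illustrative data and variable names, not code from the paper) estimates this affine map from samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative): U depends affinely on V plus noise,
# so the linear conditional expectation recovers the true affine map.
n, dV, dU = 2000, 3, 2
V = rng.normal(size=(n, dV))
A_true = rng.normal(size=(dU, dV))
U = V @ A_true.T + 0.1 * rng.normal(size=(n, dU))

# Empirical means and centred (cross-)covariances.
mU, mV = U.mean(axis=0), V.mean(axis=0)
Uc, Vc = U - mU, V - mV
C_VV = Vc.T @ Vc / n          # Cov(V, V)
C_UV = Uc.T @ Vc / n          # Cov(U, V)

def lce(v):
    """Best affine estimate E^A[U | V = v] = E[U] + C_UV C_VV^+ (v - E[V])."""
    return mU + C_UV @ np.linalg.pinv(C_VV) @ (v - mV)

print(lce(mV))   # at v = E[V] this is exactly the unconditional mean E[U]
print(mU)
```

The pseudoinverse is used because \(\operatorname{Cov}(V, V)\) need not be invertible; the infinite-dimensional analogue of this issue is one of the points the paper addresses.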

Abstract. The linear conditional expectation (LCE) provides a best linear (or rather, affine) estimate of the conditional expectation and hence plays an important rôle in approximate Bayesian inference, especially the Bayes linear approach. This article establishes the analytical properties of the LCE in an infinite-dimensional Hilbert space context. In addition, working in the space of affine Hilbert–Schmidt operators, we establish a regularisation procedure for this LCE. As an important application, we obtain a simple alternative derivation and intuitive justification of the conditional mean embedding formula, a concept widely used in machine learning to perform the conditioning of random variables by embedding them into reproducing kernel Hilbert spaces.

Published on Friday 28 August 2020 at 09:00 UTC #preprint #tru2 #bayesian #rkhs #mean-embedding #klebanov #sprungk

Adaptive reconstruction of monotone functions

Luc Bonnet, Jean-Luc Akian, Éric Savin, and I have just uploaded a preprint of our recent work “Adaptive reconstruction of imperfectly-observed monotone functions, with applications to uncertainty quantification” to the arXiv. In this work, motivated by the computational needs of the optimal uncertainty quantification (OUQ) framework, we present and develop an algorithm for reconstructing a monotone function \(F\) given the ability to interrogate \(F\) pointwise, subject to partially controllable one-sided observational errors of the type that one would typically encounter if the observations were to arise from a numerical optimisation routine.

Abstract. Motivated by the desire to numerically calculate rigorous upper and lower bounds on deviation probabilities over large classes of probability distributions, we present an adaptive algorithm for the reconstruction of increasing real-valued functions. While this problem is similar to the classical statistical problem of isotonic regression, we assume that the observational data arise from optimisation problems with partially controllable one-sided errors, and this setting alters several characteristics of the problem and opens natural algorithmic possibilities. Our algorithm uses imperfect evaluations of the target function to direct further evaluations of the target function either at new sites in the function's domain or to improve the quality of evaluations at already-evaluated sites. We establish sufficient conditions for convergence of the reconstruction to the ground truth, and apply the method both to synthetic test cases and to a real-world example of uncertainty quantification for aerodynamic design.
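As a rough illustration of the setting described above, the following toy sketch (a simplified model of my own, not the algorithm from the paper) queries an oracle that only ever over-estimates a monotone target \(F\), with an error that tightens as more budget is spent, and alternates between exploring new sites and refining existing evaluations:

```python
import numpy as np

def F(x):
    # Hypothetical monotone ground truth (illustrative only).
    return np.tanh(3 * x)

def noisy_eval(x, budget):
    # One-sided, partially controllable error: we only ever over-estimate F(x),
    # and a larger budget tightens the over-estimate (an assumed error model).
    return F(x) + 1.0 / (1.0 + budget)

# Toy adaptive loop: alternate between (a) evaluating F at a new site in the
# widest unexplored gap and (b) improving the loosest existing evaluation.
sites = [0.0, 1.0]
budgets = {x: 1.0 for x in sites}
values = {x: noisy_eval(x, budgets[x]) for x in sites}

for step in range(20):
    if step % 2 == 0:
        ordered = sorted(sites)
        gaps = np.diff(ordered)
        i = int(np.argmax(gaps))
        x_new = ordered[i] + 0.5 * gaps[i]
        sites.append(x_new)
        budgets[x_new] = 1.0
        values[x_new] = noisy_eval(x_new, budgets[x_new])
    else:
        x_worst = max(sites, key=lambda x: 1.0 / (1.0 + budgets[x]))
        budgets[x_worst] *= 2.0
        values[x_worst] = noisy_eval(x_worst, budgets[x_worst])

# The resulting upper estimates, sorted by site, approximate F from above.
for x in sorted(sites):
    print(f"{x:.3f}  {values[x]:.3f}  (true {F(x):.3f})")
```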

Published on Monday 13 July 2020 at 10:00 UTC #preprint #daad #ouq #isotonic #bonnet #akian #savin

A rigorous theory of conditional mean embeddings

Ilja Klebanov, Ingmar Schuster, and I have just uploaded a preprint of our recent work “A rigorous theory of conditional mean embeddings” to the arXiv. In this work we take a close mathematical look at the method of conditional mean embedding. In this approach to non-parametric inference, a random variable \(Y \sim \mathbb{P}_{Y}\) in a set \(\mathcal{Y}\) is represented by its kernel mean embedding, the reproducing kernel Hilbert space element

\( \displaystyle \mu_{Y} = \int_{\mathcal{Y}} \psi(y) \, \mathrm{d} \mathbb{P}_{Y} (y) \in \mathcal{G}, \)

and conditioning with respect to an observation \(x\) of a related random variable \(X \sim \mathbb{P}_{X}\) in a set \(\mathcal{X}\) with RKHS \(\mathcal{H}\) is performed using the formula

\( \displaystyle \mu_{Y|X = x} = \mu_Y + (C_{XX}^{\dagger} C_{XY})^\ast \, (\varphi(x) - \mu_X) . \)

Here \(\psi \colon \mathcal{Y} \to \mathcal{G}\) and \(\varphi \colon \mathcal{X} \to \mathcal{H}\) are the canonical feature maps and the \(C\)'s denote the appropriate centred (cross-)covariance operators of the embedded random variables \(\psi(Y)\) in \(\mathcal{G}\) and \(\varphi(X)\) in \(\mathcal{H}\).
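When the feature spaces are finite-dimensional, every object in this formula is an ordinary vector or matrix, which makes the conditioning step easy to write out explicitly. The sketch below (the feature maps, data, and names are illustrative assumptions, not taken from the paper) forms the empirical centred covariance operators and applies the formula above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative finite-dimensional feature maps (assumptions, not from the paper):
phi = lambda x: np.array([1.0, x, x**2])    # feature map for X, dim(H) = 3
psi = lambda y: np.array([y, np.sin(y)])    # feature map for Y, dim(G) = 2

# Samples from a joint distribution of (X, Y).
n = 5000
X = rng.normal(size=n)
Y = 2.0 * X + 0.3 * rng.normal(size=n)

Phi = np.stack([phi(x) for x in X])   # n x dim(H)
Psi = np.stack([psi(y) for y in Y])   # n x dim(G)

mu_X, mu_Y = Phi.mean(axis=0), Psi.mean(axis=0)
Phic, Psic = Phi - mu_X, Psi - mu_Y

# Centred covariance operators as matrices; C_XX is singular here (the constant
# feature is centred away), which is why the pseudoinverse appears in the formula.
C_XX = Phic.T @ Phic / n              # dim(H) x dim(H)
C_XY = Phic.T @ Psic / n              # cross-covariance, maps G -> H

def cme(x):
    """mu_{Y|X=x} = mu_Y + (C_XX^+ C_XY)^* (phi(x) - mu_X)."""
    return mu_Y + (np.linalg.pinv(C_XX) @ C_XY).T @ (phi(x) - mu_X)

print(cme(1.0))   # embedded estimate of the law of Y given the observation X = 1
```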

Our article aims to provide rigorous mathematical foundations for this attractive but apparently naïve approach to conditional probability, and hence to Bayesian inference.

Abstract. Conditional mean embeddings (CME) have proven themselves to be a powerful tool in many machine learning applications. They allow the efficient conditioning of probability distributions within the corresponding reproducing kernel Hilbert spaces (RKHSs) by providing a linear-algebraic relation for the kernel mean embeddings of the respective probability distributions. Both centered and uncentered covariance operators have been used to define CMEs in the existing literature. In this paper, we develop a mathematically rigorous theory for both variants, discuss the merits and problems of either, and significantly weaken the conditions for applicability of CMEs. In the course of this, we demonstrate a beautiful connection to Gaussian conditioning in Hilbert spaces.

Published on Tuesday 3 December 2019 at 07:00 UTC #preprint #mathplus #tru2 #rkhs #mean-embedding #klebanov #schuster

Geodesic analysis in Kendall’s shape space

Esfandiar Nava-Yazdani, Christoph von Tycowicz, Hans-Christian Hege, and I have just uploaded an updated preprint of our work “Geodesic analysis in Kendall's shape space with epidemiological applications” (previously entitled “A shape trajectories approach to longitudinal statistical analysis”) to the arXiv. This work is part of the ECMath / MATH+ project CH-15 “Analysis of Empirical Shape Trajectories”.

Abstract. We analytically determine Jacobi fields and parallel transports and compute geodesic regression in Kendall's shape space. Using the derived expressions, we can fully leverage the geometry via Riemannian optimization and thereby reduce the computational expense by several orders of magnitude. The methodology is demonstrated by performing a longitudinal statistical analysis of epidemiological shape data. As an example application we have chosen 3D shapes of knee bones, reconstructed from image data of the Osteoarthritis Initiative (OAI). Comparing subject groups with incident and developing osteoarthritis versus normal controls, we find clear differences in the temporal development of femur shapes. This paves the way for early prediction of incident knee osteoarthritis, using geometry data alone.

Published on Monday 1 July 2019 at 08:00 UTC #preprint #ch15 #shape-trajectories #nava-yazdani #von-tycowicz #hege

Comments on A Bayesian conjugate gradient method

I have just uploaded a preprint of “Comments on the article ‘A Bayesian conjugate gradient method’” to the arXiv. This note discusses the recent paper “A Bayesian conjugate gradient method” in Bayesian Analysis by Jon Cockayne, Chris Oates, Ilse Ipsen, and Mark Girolami, and is an invitation to a rejoinder from the authors.

Abstract. The recent article “A Bayesian conjugate gradient method” by Cockayne, Oates, Ipsen, and Girolami proposes an approximately Bayesian iterative procedure for the solution of a system of linear equations, based on the conjugate gradient method, that gives a sequence of Gaussian/normal estimates for the exact solution. The purpose of the probabilistic enrichment is that the covariance structure is intended to provide a posterior measure of uncertainty or confidence in the solution mean. This note gives some comments on the article, poses some questions, and suggests directions for further research.
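For reference, here is a minimal NumPy implementation of the classical, deterministic conjugate gradient iteration on which the method under discussion builds; the Bayesian enrichment itself (the Gaussian belief and its covariance) is not reproduced here.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, maxiter=None):
    """Classical conjugate gradient for a symmetric positive-definite matrix A.

    This is only the deterministic baseline: the Bayesian CG method of
    Cockayne, Oates, Ipsen, and Girolami additionally propagates a Gaussian
    belief whose mean follows a CG-like recursion and whose covariance is
    intended to quantify uncertainty in the solution.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(maxiter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate update of the search direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # approximately [0.0909, 0.6364]
```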

Published on Wednesday 26 June 2019 at 08:00 UTC #preprint #prob-num