## Errata for Introduction to Uncertainty Quantification

This page lists corrections and clarifications to the text of *Introduction to Uncertainty Quantification*, published in 2015 by Springer as volume 63 of the *Texts in Applied Mathematics* series.
Many thanks to all those who have pointed out these mistakes;
please get in touch if you spot more.

- Introduction
- Measure and Probability Theory
- Banach and Hilbert Spaces
- Optimization Theory
- Measures of Information and Uncertainty
- Bayesian Inverse Problems
- Filtering and Data Assimilation
- Orthogonal Polynomials and Applications
- Numerical Integration
- Sensitivity Analysis and Model Reduction
- Spectral Expansions
- Stochastic Galerkin Methods
- Non-Intrusive Methods
- Distributional Uncertainty

### Chapter 1: Introduction

- **p.3:** In the second display, half-way down the page, the velocity field given by Darcy's law should be \( v = - \kappa \nabla u\), i.e. the minus sign is missing.
- **p.6:** In the second paragraph, “August” should be “august”.

### Chapter 2: Measure and Probability Theory

- **p.14:** Just before the statement of Theorem 2.10, a closing parenthesis “)” is missing.

### Chapter 3: Banach and Hilbert Spaces

- **p.37:** Example 3.3(c) should begin “The analogous inner product on the space \( \mathbb{K}^{m \times n} \) of...”

### Chapter 4: Optimization Theory

- **p.65:** In Theorem 4.17 (Jensen's inequality), it is implicit that the set \(K \subseteq \mathcal{X}\) is a convex set, otherwise it makes no sense to claim that \(f \colon K \to \mathbb{R} \cup \{ \pm \infty \}\) is a convex function — and, worse, \(\mathbb{E}_{\mu}[X]\) might not lie in \(K\) even though all of \(X\)'s values lie in \(K\). Incidentally, the definition of \(\mathbb{E}_{\mu}[X]\) given here — that \(\langle \ell \vert \mathbb{E}_{\mu}[X] \rangle = \mathbb{E}_{\mu}[\langle \ell \vert X \rangle]\) for all \(\ell \in \mathcal{X}'\) — corresponds to the weak/Pettis integral.
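The inequality itself, \(f(\mathbb{E}_{\mu}[X]) \leq \mathbb{E}_{\mu}[f(X)]\), is easy to spot-check numerically. The sketch below takes \(K = \mathbb{R}\) (which is trivially convex) and \(f(x) = x^{2}\); both choices are ours, purely for illustration:

```python
import numpy as np

# Monte Carlo spot-check of Jensen's inequality f(E[X]) <= E[f(X)]
# for a convex f on a convex set K.  Here K = R and f(x) = x**2 are
# illustrative choices, not taken from the book.
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)   # samples of X

def f(t):
    return t ** 2              # a convex function on R

lhs = f(x.mean())              # f(E[X]), with E[X] estimated by the sample mean
rhs = f(x).mean()              # E[f(X)], estimated by the sample mean of f(X)
assert lhs <= rhs              # Jensen's inequality (sharp gap here: Var[X] > 0)
```

For this choice of \(f\), the gap \(\mathbb{E}[X^{2}] - (\mathbb{E}[X])^{2}\) is exactly the variance of \(X\), so the inequality is strict whenever \(X\) is non-degenerate.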

### Chapter 5: Measures of Information and Uncertainty

- **pp.81 and 88:** Definition 5.4 defines the Kullback–Leibler divergence from \(\mu\) to \(\nu\) for \(\sigma\)-finite measures \(\mu\) and \(\nu\), but Exercise 5.3 only checks non-negativity for the case that \(\mu\) and \(\nu\) are probability measures. Naturally, the proof is much the same for the \(\sigma\)-finite case.
- **p.85:** Proposition 5.12 should have the 2 inside the square root.
- **p.89:** As in Proposition 5.12, Exercise 5.7 should have the 2 inside the square root.
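In the discrete probability case, the non-negativity checked by Exercise 5.3 reduces to Gibbs' inequality \(\sum_{i} \mu_{i} \log (\mu_{i} / \nu_{i}) \geq 0\), which can be spot-checked directly (the random vectors below are illustrative only):

```python
import numpy as np

# Spot-check D_KL(mu || nu) >= 0 for discrete probability vectors,
# using the elementary formula sum_i mu_i * log(mu_i / nu_i).
rng = np.random.default_rng(1)

def kl(mu, nu):
    """Kullback-Leibler divergence between strictly positive prob. vectors."""
    return float(np.sum(mu * np.log(mu / nu)))

for _ in range(100):
    mu = rng.random(5); mu /= mu.sum()   # random probability vector
    nu = rng.random(5); nu /= nu.sum()
    assert kl(mu, nu) >= -1e-12          # non-negativity, up to rounding
    assert abs(kl(mu, mu)) < 1e-12       # and D_KL(mu || mu) = 0
```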

### Chapter 6: Bayesian Inverse Problems

- **p.95:** The second displayed equation is missing an adjoint/transpose on the second appearance of \(K\). It should read \(\mathbb{E}[(\hat{u} - u) \otimes (\hat{u} - u)] = K \mathbb{E}[\eta \otimes \eta] K^{\ast} = (A^{\ast} Q^{-1} A)^{-1}\).
- **p.101:** In the sixth line, in the definition of the measure \(\nu\), the “\(\mathbb{Q}_{0}\)” should be “\(\mathbb{Q}\)”.
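The corrected identity can be verified in finite dimensions, assuming (as in the usual weighted least-squares setting, which we take as a stand-in for the book's setup) that \(K = (A^{\ast} Q^{-1} A)^{-1} A^{\ast} Q^{-1}\) and \(\mathbb{E}[\eta \otimes \eta] = Q\):

```python
import numpy as np

# Check K E[eta eta^T] K^T = (A^T Q^{-1} A)^{-1} with random matrices,
# assuming K is the weighted least-squares estimator and the noise
# covariance is Q.  (Notation is our reconstruction of the setting.)
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 3))
B = rng.normal(size=(6, 6))
Q = B @ B.T + 6 * np.eye(6)            # a symmetric positive-definite covariance
Qinv = np.linalg.inv(Q)

C = np.linalg.inv(A.T @ Qinv @ A)      # (A^T Q^{-1} A)^{-1}
K = C @ A.T @ Qinv                     # weighted least-squares estimator
assert np.allclose(K @ Q @ K.T, C)     # the corrected identity
```

Algebraically, \(K Q K^{\ast} = C A^{\ast} Q^{-1} Q Q^{-1} A C = C (A^{\ast} Q^{-1} A) C = C\), which is why the adjoint on the second \(K\) matters.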

### Chapter 7: Filtering and Data Assimilation

- **p.115:** In the fifth bullet point, the observational noise vector \(\eta_{k}\) should have covariance \(R_{k}\) not \(Q_{k}\).
- **p.116:** At the bottom of the page, the argmin should be over \(z_{k} \in \mathcal{X}^{k + 1}\), not over \(z_{k} \in \mathcal{X}\).
- **p.117:** In equation (7.6), recall that \(m_{0}\) is the mean of the random initial condition of the system at time \(t_{0}\), as defined two pages previously but not used until now.
- **p.119:** In the second line of the variational derivation of the prediction step, “\(\hat{x}_{m|k-1}\)” should be “\(\hat{x}_{k|k-1}\)”. Two lines later, the reference to a “\(k\)-tuple of states” should refer to a “\((k + 1)\)-tuple of states”.
- **p.122:** Just before the paragraph on the Kálmán gain, the reference to equation (7.13) should be a reference to equation (7.12).

### Chapter 8: Orthogonal Polynomials and Applications

- **p.133:** In the second paragraph, “taken and as the primary definition” should be “taken as the primary definition”.
- **p.135:** The integral at the top of the page expressing orthogonality for the Jacobi polynomials is missing the weight function \((1 - x)^{\alpha} (1 + x)^{\beta}\) before the \(\mathrm{d} x\). Similarly, the \((1 - x)^{\beta}\) in the text just before the integral should be \((1 + x)^{\beta}\). The appearances of the weight function in Table 8.2 on p.162 are correct.
- **p.138:** In the proof of Lemma 8.4, delete the words “this is” after “By Sylvester's criterion,”.
- **p.141:** In the definition of \(\beta_{0}\), some readers might prefer the explicit statement of the integrand, i.e. \(\beta_{0} = \int_{\mathbb{R}} 1 \, \mathrm{d} \mu\).
- **p.152:** In the middle of the page, \(H^{k}(\mathcal{X}, \mu)\) should be \(H^{k}(I, \mu)\).
- **p.162:** The normalisation constant for the Chebyshev polynomials of the first kind is incorrect. It should be \(\pi\) for \(n = 0\) and \(\pi / 2\) for \(n > 0\), i.e. \(\| q_{n} \|_{L^{2}(\mu)}^{2} = (\pi / 2) ( 1 + \delta_{0 n} )\).
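The corrected Chebyshev normalisation is easy to confirm with Gauss–Chebyshev quadrature, which integrates polynomials against the weight \((1 - x^{2})^{-1/2}\) exactly (up to degree \(2N - 1\) with \(N\) nodes); the sketch below is an independent check, not the book's computation:

```python
import numpy as np

# Verify ||T_n||^2 = pi for n = 0 and pi/2 for n > 0, where T_n is the
# Chebyshev polynomial of the first kind and the weight is (1-x^2)^(-1/2).
# Gauss-Chebyshev quadrature: nodes cos((2k-1)pi/(2N)), equal weights pi/N.
N = 50
k = np.arange(1, N + 1)
nodes = np.cos((2 * k - 1) * np.pi / (2 * N))
w = np.pi / N

def chebyshev_T(n, x):
    return np.cos(n * np.arccos(x))    # T_n(cos t) = cos(n t)

for n in range(5):
    norm2 = w * np.sum(chebyshev_T(n, nodes) ** 2)
    expected = (np.pi / 2) * (1 + (n == 0))    # pi if n = 0, pi/2 otherwise
    assert abs(norm2 - expected) < 1e-10
```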

### Chapter 9: Numerical Integration

- **p.168:** At the top of the page, after the displayed equation, “\(h = \tfrac{1}{n}\)” should be “\(h = \tfrac{b - a}{n}\)”.
- **p.176:** Just before the discussion of sparse quadrature formulae, “and ‘using’ derivative” should be “and ‘using’ one derivative”.
- **p.177:** In the multi-line displayed equation, the subscript \(\ell\) in \(Q_{\ell - i + 1}^{(1)}\) should be \(\ell = 2\).
- **pp.179–181:** The discussion of the variance-based error bound for the Monte Carlo estimator is, of course, predicated on the assumption that \(f(X)\) has finite variance.
- **p.180:** In the caption of Figure 9.2, “\(\mathbb{E}[(a + X^{(1)})^{-1}]\)” should be “\(\mathbb{E}[(a - X^{(1)})^{-1}]\)”.
- **p.185:** At the beginning of the second paragraph on Multi-Level Monte Carlo, “have at our disposal hierarchy” should be “have at our disposal a hierarchy”.
- **p.188:** The definition of the Hardy–Krause variation of \(f \colon [0, 1]^{d} \to \mathbb{R}\) should make no mention of \(s\); it is simply the sum of all the Vitali variations \(V^{\mathrm{Vit}}(f|_{F})\) where \(F\) runs over all faces of \([0, 1]^{d}\), with dimension between \(1\) and \(d\) inclusive.
- **p.190:** In Theorem 9.23, in the second displayed equation, \(x_{N}\) should be \(x_{n}\).
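When \(f(X)\) does have finite variance, the Monte Carlo error shrinks like \(\sqrt{\operatorname{Var}[f(X)] / N}\). The sketch below estimates \(\mathbb{E}[(a - X^{(1)})^{-1}]\) in the corrected form of the Figure 9.2 caption, with \(X^{(1)} \sim \mathrm{Unif}(0, 1)\) and \(a = 2\) (the value of \(a\) is our choice, not taken from the figure):

```python
import numpy as np

# Monte Carlo estimate of E[(a - X)^{-1}], X ~ Uniform(0, 1), a = 2.
# The exact value is int_0^1 (a - x)^{-1} dx = log(a / (a - 1)).
rng = np.random.default_rng(3)
a = 2.0
exact = np.log(a / (a - 1.0))          # = log 2 here

for N in (10**2, 10**4, 10**6):
    x = rng.random(N)
    est = np.mean(1.0 / (a - x))       # N-sample Monte Carlo estimator
    print(N, abs(est - exact))         # error decays roughly like N^(-1/2)
```

The integrand is bounded on \([0, 1]\) for \(a = 2\), so the finite-variance hypothesis flagged above is satisfied and the \(N^{-1/2}\) rate applies.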

### Chapter 10: Sensitivity Analysis and Model Reduction

- **p.200:** The last displayed equation in the proof should end with a full stop / period.
- **p.205:** After equation (10.5), the next sentence should read “Equation (10.5) can be re-written as \(\frac{\mathrm{d} q}{\mathrm{d} \theta} (\bar{u}, \bar{\theta}) = \lambda \frac{\partial F}{\partial \theta} (\bar{u}, \bar{\theta}) + \frac{\partial q}{\partial \theta} (\bar{u}, \bar{\theta})\)” instead of being a reference to and rearrangement of equation (10.4).
- **p.211:** In the first line of the statement of Theorem 10.15, “\(i \subseteq \mathcal{N}\)” should read “\(I \subseteq \mathcal{N}\)”. Later in the statement of the same theorem, in equation (10.14), the sum over \(I \subseteq \mathcal{D}\) should also be a sum over \(I \subseteq \mathcal{N}\).

### Chapter 11: Spectral Expansions

- **p.233:** Just before equation (11.4), there should be a comma between “\(U\)” and “defined”.
- **pp.241–242:** At the bottom of p.241, after taking the \(L^{2}(\nu)\) inner product with \(\Phi_{\ell}\), the right-hand side of the equation should be \(v_{\ell} \langle \Phi_{\ell}^{2} \rangle_{\nu}\) instead of \(v_{\ell} \langle \Psi_{\ell}^{2} \rangle_{\nu}\). This mistake is carried over the page: the denominator is \(\langle \Phi_{\ell}^{2} \rangle_{\nu}\) not \(\langle \Psi_{\ell}^{2} \rangle_{\nu}\). The denominator in the sum for \(u_{\ell}\) is correct as is, i.e. \(\langle \Psi_{\ell}^{2} \rangle_{\mu}\).

### Chapter 12: Stochastic Galerkin Methods

- **p.256:** At the top of the page, “multiplication can fail to commutative” should be “multiplication can fail to be commutative”.
- **p.257:** In the third line of Section 12.2, “the approach is as simple is multiplying” should be “the approach is as simple as multiplying”.
- **p.263:** At the top of the page, “uniqueness of solutions problems like” should be “uniqueness of solutions to problems like”.
- **p.267:** On the third line, the Galerkin solution should be denoted \(u = u^{(M)}\), not \(u = u_{\Gamma}\). Also, in the second displayed equation, there is an extra right angle bracket just before the word “for”.
- **p.269:** In the final paragraph, which begins the discussion of stochastic Galerkin projection, it would have been clearer to say explicitly that \(\Psi_{1}, \dots, \Psi_{K}\) are the chosen polynomial chaos basis (or other orthogonal basis) of \(\mathcal{S}_{K}\).

### Chapter 13: Non-Intrusive Methods

- **p.278:** In the footnote at the bottom of the page, the sum should read “\(U(t, x; \theta) = \sum_{k \in \mathbb{N}_{0}} u_{k}(t, x) \Psi_{k}(\theta)\)” instead of “\(U(t, x; \theta) = \sum_{k \in \mathbb{N}_{0}} (t, x) \Psi_{k}(\theta)\)”.
- **p.281:** In Remark 13.3(a), “the approximation the stochastic modes” should read “the approximation of the stochastic modes”.
- **p.286:** In the middle of the page, “has with the undesirable property” should read “has the undesirable property”.

### Chapter 14: Distributional Uncertainty

- **p.299:** On the fifth line of Section 14.3, “particular” should be “particularly”.
- **p.308:** In the statement of Theorem 14.19, on the line after the definition of \(\mathcal{A}\), the domain of \(\varphi_{k, j}\) should be \(\mathcal{X}_{k}\) and not \(\mathcal{X}\).
- **p.315:** “not just no impact” should be “not just little impact”.
- **p.315:** There is an extra closing parenthesis at the end of “(and hence pass to a smaller feasible set \(\mathcal{A}' \subsetneq \mathcal{A}\))”.