Theory and Applications of Manifold Interpolation for Learning Empirical Green's Functions
This dissertation develops two complementary, mesh-independent frameworks for data-driven discovery and interpolation of empirical Green's functions for one-dimensional parameterized linear operators. It also establishes theoretical groundwork for manifold interpolation in an infinite-dimensional setting and suggests future research in that direction.

Chapter 2 introduces $\mathbf{nsegf}$, a discrete approach requiring only input–output pairs $\{f_i, u_i\}$. We show that the matrix $G \in \mathbb{R}^{N \times N}$ representing the empirical Green's function integral operator can be recovered by the least-squares formula $G = UF^+W^{-1}$ without adjoint solves, making the method applicable to non-self-adjoint operators. This also avoids the need to train neural networks, as is done in our approach in Chapter 3. We then interpolate the low-rank SVD factors $(U, \Sigma, V)$ across parameters by lifting the orthonormal bases onto the finite-dimensional compact Stiefel manifold $S_K(\mathbb{R}^n)$, performing polynomial interpolation in the tangent space, and retracting via a $QR$-based map.

Chapter 3 presents $\mathbf{chebgreen}$, which learns continuous Green's functions via Rational Neural Networks and computes high-accuracy Singular Value Expansions using a Python implementation of Chebfun. By storing the orthonormal bases from the SVE as “quasimatrices” in the Hilbert space $H = (L^2(\Omega))^K$, we generalize the interpolation to the infinite-dimensional Hilbert–Stiefel manifold, establish its injectivity radius, and derive analogous error bounds. Numerical experiments on Poisson, advection–diffusion, Airy, and fractional Laplacian problems achieve sub-percent accuracy with just 100 samples, even under 50% output noise. Both frameworks drastically reduce data requirements compared to prior neural-network-only methods.
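The least-squares recovery $G = UF^+W^{-1}$ described for Chapter 2 can be illustrated in a few lines of NumPy. This is a minimal sketch under our own assumptions, not the dissertation's code: the grid size, number of samples, trapezoidal quadrature weights $W$, and the synthetic test kernel are all illustrative choices.

```python
import numpy as np

N, m = 50, 200                       # grid points, number of input-output pairs (assumed values)
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]

# Trapezoidal quadrature weights on the grid (one common choice of W)
w = np.full(N, h)
w[0] = w[-1] = h / 2
W = np.diag(w)

# Synthetic ground-truth kernel for testing (illustrative, not from the dissertation)
G_true = np.exp(-np.abs(x[:, None] - x[None, :]))

# Forcing functions and responses: u_i = G W f_i discretizes u(x) = ∫ G(x,y) f(y) dy
F = np.random.default_rng(0).standard_normal((N, m))
U = G_true @ W @ F

# Least-squares recovery G = U F^+ W^{-1}: no adjoint solves are needed,
# so the operator need not be self-adjoint
G = U @ np.linalg.pinv(F) @ np.linalg.inv(W)

err = np.linalg.norm(G - G_true) / np.linalg.norm(G_true)
print(err)  # near machine precision when F has full row rank
```

With more samples than grid points ($m > N$), the random matrix $F$ has full row rank almost surely, so $FF^+ = I_N$ and the recovery is exact up to floating-point error.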
Furthermore, both frameworks allow Green's functions to be learned for a broader class of problems than previous approaches, since the associated differential operator is not required to be self-adjoint. Chapter 3 also develops a theoretical framework for interpolation using quasimatrices and establishes rigorous error bounds. Chapter 4 suggests future directions for Hilbert manifold interpolation theory and applications beyond Green's functions: it considers subspace interpolation problems on Hilbert Grassmannian manifolds and lays out a framework for functional principal component analysis using Hilbert–Stiefel manifold interpolation.
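The manifold-interpolation step used throughout (lift orthonormal bases to a tangent space, interpolate there, retract back) can be sketched in its finite-dimensional Stiefel form. This is a hedged illustration, not the dissertation's exact maps: the projection-based lift and the $QR$-based retraction below are one standard pairing, and the synthetic bases and Lagrange interpolation nodes are our own assumptions.

```python
import numpy as np

def tangent_project(X, Y):
    """Project an ambient matrix Y onto the tangent space of the Stiefel manifold at X."""
    S = X.T @ Y
    return Y - X @ (S + S.T) / 2

def qr_retract(X, Delta):
    """QR-based retraction: orthonormalize X + Delta back onto the manifold."""
    Q, R = np.linalg.qr(X + Delta)
    return Q * np.sign(np.diag(R))   # sign fix makes the QR factorization unique

rng = np.random.default_rng(1)
n, K = 40, 5                          # ambient dimension, number of basis columns (assumed)
params = np.array([0.0, 0.5, 1.0])    # sample parameter values (illustrative)

# Stand-ins for orthonormal SVD bases computed at each parameter value
bases = [np.linalg.qr(rng.standard_normal((n, K)))[0] for _ in params]

# Lift each sample to the tangent space at a reference point, then
# interpolate the tangent vectors polynomially (Lagrange form)
X0 = bases[0]
deltas = [tangent_project(X0, B - X0) for B in bases]

t = 0.25                              # query parameter
coeffs = [np.prod([(t - params[j]) / (params[i] - params[j])
                   for j in range(len(params)) if j != i])
          for i in range(len(params))]
Delta_t = sum(c * D for c, D in zip(coeffs, deltas))

# Retract the interpolated tangent vector back onto the Stiefel manifold
X_t = qr_retract(X0, Delta_t)
ortho_err = np.linalg.norm(X_t.T @ X_t - np.eye(K))
print(ortho_err)
```

The key property the retraction guarantees is that the interpolated basis is again exactly orthonormal, which entrywise interpolation of the bases alone would not preserve.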