Fast Kernel Matrix Approximations By Series Expansions

Kernel functions are used in a variety of scientific settings to measure relationships or interactions between elements of a vector space. For example, in machine learning, kernel functions are often used to describe similarity or covariance between data points in a feature space. In physics, one method for solving partial differential equations is to reformulate them as boundary integral equations, where the integrand contains a kernel function related to the associated Green's function. A common challenge in computational approaches to problems involving kernel functions is the efficient handling of the associated kernel matrices, whose entries are evaluations of the kernel function on pairs of data points. These matrices are typically dense, and generic linear algebra routines naively scale quadratically with the amount of data for matrix-vector multiplies and cubically for solving linear systems. Since applications typically deal with large amounts of data (e.g., training points in machine learning, quadrature nodes for integrals on complex and/or high-dimensional surfaces), there is a critical need for efficient techniques for handling kernel matrices. Much research in recent decades has been devoted to developing such techniques for a variety of applications. A popular approach is to leverage the fact that kernel matrices tend to be rank structured, which can be exploited to generate compressed representations. Another common tactic in the literature is to take advantage of properties of the kernel function itself, such as the existence of a reproducing property or a helpful series expansion. A nice aspect of research in this area is the opportunity for cross-pollination between the different scientific disciplines connected by their use of kernel matrices.
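The two observations above, quadratic cost and latent rank structure, can be seen in a few lines. The following is a minimal illustrative sketch (not from the dissertation), assuming the 1/r kernel associated with the 3D Laplace Green's function and two well-separated point clusters; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated clusters of points in R^3.
n = 300
X = rng.standard_normal((n, 3))          # "sources" near the origin
Y = rng.standard_normal((n, 3)) + 10.0   # "targets" far away

# Dense kernel matrix for k(x, y) = 1 / |x - y| (the 3D Laplace Green's
# function, up to a constant). Building and storing it costs O(n^2).
K = 1.0 / np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)

# A naive matrix-vector multiply touches all n^2 entries.
v = rng.standard_normal(n)
w = K @ v

# Yet the singular values decay rapidly for separated clusters: the block
# is numerically low rank, which is what compression schemes exploit.
s = np.linalg.svd(K, compute_uv=False)
numerical_rank = int((s > 1e-10 * s[0]).sum())
```

The numerical rank here is far smaller than n, so a low-rank factorization can apply the same block in far fewer operations.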
The primary topic of this dissertation is the development of new techniques for kernel matrix approximation that advance the state of the art in accuracy and in time and space efficiency. Inspired by kernel techniques in computational physics, we focus heavily on analytic methods for low-rank kernel matrix approximation, and introduce new techniques for automatic generation of series expansions for a broad set of kernel functions based on numerical interpolation and automatic differentiation. These methods can be combined with data-dependent algebraic techniques to achieve factorizations that compare favorably to modern techniques with respect to rank, accuracy, and time/space costs. In Chapter 3 we introduce the Harmonic Decomposition Factorization (HDF), which uses a Chebyshev series expansion to derive a harmonic decomposition for any isotropic kernel function. In addition, we show that the form of this decomposition allows for efficient additional data-dependent compression -- the resulting low-rank approximations to kernel matrices have near-optimal rank/error tradeoffs in a variety of settings. In Chapter 4 we address kernel matrices with hierarchical rank structure and develop a new technique called the Fast Kernel Transform (FKT), which uses automatic differentiation to generate a harmonic decomposition from a Taylor expansion. As hierarchical factorizations generally compress interactions between well-separated sets of points, we use a Taylor approximation whose accuracy increases with the distance between sets of points. In addition, we show that both the data-dependent compression step described in Chapter 3 and a new lossless data-independent compression step can be applied to improve the rank/error tradeoff of the FKT.
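The HDF and FKT themselves are developed in Chapters 3 and 4; as a generic illustration of how an expansion of an isotropic kernel yields a separable, and hence low-rank, factorization, the sketch below uses Chebyshev interpolation in one dimension. This is a standard interpolation-based construction, not the dissertation's method, and all names are illustrative.

```python
import numpy as np

def cheb_nodes(a, b, p):
    """p Chebyshev nodes on the interval [a, b]."""
    t = np.cos((2 * np.arange(p) + 1) * np.pi / (2 * p))
    return 0.5 * (a + b) + 0.5 * (b - a) * t

def lagrange_basis(nodes, x):
    """Matrix L with L[i, j] = ell_j(x[i]), the j-th Lagrange polynomial."""
    L = np.ones((len(x), len(nodes)))
    for j, nj in enumerate(nodes):
        for m, nm in enumerate(nodes):
            if m != j:
                L[:, j] *= (x - nm) / (nj - nm)
    return L

def kernel(x, y):
    # Isotropic kernel k(x, y) = 1 / |x - y| on separated 1D intervals.
    return 1.0 / np.abs(x[:, None] - y[None, :])

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200)    # sources in [0, 1]
y = rng.uniform(3.0, 4.0, 200)    # targets in [3, 4] (well separated)

# Interpolate the kernel at p x p Chebyshev nodes:
#   k(x, y) ~ sum_ij ell_i(x) k(x_i, y_j) ell_j(y),
# a separable expansion, so K is approximated by a rank-p factorization.
p = 10
xn, yn = cheb_nodes(0.0, 1.0, p), cheb_nodes(3.0, 4.0, p)
Lx, Ly = lagrange_basis(xn, x), lagrange_basis(yn, y)

K = kernel(x, y)
K_approx = Lx @ kernel(xn, yn) @ Ly.T

rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

Applying the factored form costs O((n + m)p) per matrix-vector multiply instead of O(nm), and because the expansion is geometry-independent it can be generated once per pair of boxes in a hierarchical scheme.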
In Chapter 5 we shift focus to hierarchical factorizations which compress Green's function kernel matrices via interpolative decompositions -- we introduce a new technique for handling a special structure that arises when the boundaries of a boundary integral formulation contain multiple disconnected components. This factorization is highly parallelizable and allows for fast updating following small perturbations to the problem geometry. We have developed open-source implementations of all new techniques presented herein, and we provide experimental results to complement the theoretical developments.
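An interpolative decomposition approximates a matrix block using a subset of its actual columns as a basis, which is what makes the updating and parallelization above natural. The following is a minimal pivoted-QR-based sketch of a column ID applied to a well-separated kernel block; it is a standard construction for illustration, not the dissertation's implementation, and all names are hypothetical.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def interp_decomp(A, tol=1e-8):
    """Column interpolative decomposition A ~ A[:, cols] @ P via pivoted QR."""
    Q, R, piv = qr(A, mode="economic", pivoting=True)
    d = np.abs(np.diag(R))
    k = int((d > tol * d[0]).sum())          # numerical rank at this tolerance
    # Interpolation coefficients: R11^{-1} R12 expresses the remaining
    # columns in terms of the k "skeleton" columns.
    T = solve_triangular(R[:k, :k], R[:k, k:])
    P = np.zeros((k, A.shape[1]))
    P[:, piv[:k]] = np.eye(k)
    P[:, piv[k:]] = T
    return piv[:k], P

# Kernel block between two separated point sets, with k(x, y) = 1 / |x - y|
# as for the 3D Laplace Green's function.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (300, 3))
Y = rng.uniform(0, 1, (300, 3)) + np.array([4.0, 0.0, 0.0])
K = 1.0 / np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)

cols, P = interp_decomp(K, tol=1e-8)
rel_err = np.linalg.norm(K - K[:, cols] @ P) / np.linalg.norm(K)
```

Because the basis consists of actual columns (i.e., actual points), a small perturbation of the geometry only invalidates the pieces of the factorization that touch the perturbed points, which is one reason IDs suit updatable hierarchical solvers.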

169 pages


Committee Chair

Damle, Anil

Committee Member

Bindel, David S.
Kleinberg, Robert David

Degree Discipline

Computer Science

Degree Name

Ph.D., Computer Science

Degree Level

Doctor of Philosophy

dissertation or thesis
