A Scalable and Flexible Framework for Gaussian Processes via Matrix-Vector Multiplication

Abstract

Gaussian processes (GPs) exhibit a classic tension of many machine learning methods: they possess desirable modelling capabilities yet suffer from important practical limitations. In many instances, GPs offer well-calibrated uncertainty estimates, interpretable predictions, and the ability to encode prior knowledge. These properties have made them an indispensable tool for black-box optimization, time series forecasting, and high-risk applications like health care. Despite these benefits, GPs are typically not applied to datasets with more than a few thousand data points. This is in part due to an inference procedure that requires matrix inverses, determinants, and other expensive operations. Moreover, specialty models often require significant implementation effort. This thesis aims to alleviate these practical concerns through a single, simple design decision. Taking inspiration from neural network libraries, we construct GP inference algorithms using only matrix-vector multiplications (MVMs) and other linear operations. This MVM-based approach simultaneously addresses several of these practical concerns: it reduces asymptotic complexity, effectively utilizes GPU hardware, and yields straightforward implementations for many specialty GP models. Each chapter of this thesis addresses a different aspect of Gaussian process inference. Chapter 3 introduces an MVM method for training Gaussian process regression models (i.e., optimizing kernel/likelihood hyperparameters), unifying several existing methods into a highly parallel and stable algorithm. Chapter 4 focuses on making predictions with Gaussian processes: a memory-efficient cache, computable through MVMs, significantly reduces the cost of computing predictive distributions. Chapter 5 introduces a multi-purpose MVM algorithm that can be used to draw samples from GP posteriors and perform approximate Gaussian process inference. All three methods offer speedups ranging from 4x to 40x. Importantly, applying any of these algorithms to a specialty model (e.g., multitask GPs and scalable approximations) requires only a matrix-vector multiplication routine that exploits the covariance structure afforded by the model. The MVM methods from this thesis form the building blocks of GPyTorch, an open-source GP library designed for scalability and simplicity of implementation. In the final chapter, we evaluate GPyTorch models on several large-scale regression datasets. Using the proposed MVM methods, we apply exact Gaussian processes to datasets two orders of magnitude larger than previously reported: up to 1 million data points.
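To illustrate the core idea of MVM-based inference, the following is a minimal sketch, not taken from the thesis or from GPyTorch, of solving the GP training system (K + sigma^2 I) v = y with the method of conjugate gradients, which accesses the kernel matrix only through matrix-vector products. The thesis develops a more sophisticated batched and preconditioned variant of this idea; all function and variable names here are illustrative.

import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    # Squared-exponential kernel: K[i, j] = exp(-||x_i - x_j||^2 / (2 * lengthscale^2))
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq_dists / lengthscale ** 2)

def conjugate_gradients(matvec, b, tol=1e-6, max_iters=1000):
    # Solve A x = b for a symmetric positive-definite A, touching A only
    # through the matvec callable.
    x = np.zeros_like(b)
    r = b - matvec(x)          # residual
    p = r.copy()               # search direction
    rs_old = r @ r
    for _ in range(max_iters):
        Ap = matvec(p)         # the only operation involving A
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy GP regression: the posterior mean at the training inputs is
# K (K + noise * I)^{-1} y.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

K = rbf_kernel(X)
noise = 0.1 ** 2
weights = conjugate_gradients(lambda v: K @ v + noise * v, y)
posterior_mean = K @ weights

Because the kernel matrix enters only through the matvec closure, a structured or approximate kernel (e.g., a Kronecker-structured multitask covariance or an inducing-point approximation) can be swapped in by replacing that one routine, which is the flexibility argument the abstract makes.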

Description

212 pages

Date Issued

2020-08

Keywords

Bayesian nonparametrics; Conjugate gradients; Gaussian processes; GPU acceleration; Krylov subspaces; Regression

Committee Chair

Weinberger, Kilian Quirin

Committee Member

Sridharan, Karthik
Wilson, Andrew Gordon

Degree Discipline

Computer Science

Degree Name

Ph.D., Computer Science

Degree Level

Doctor of Philosophy

Rights

Attribution 4.0 International

Types

dissertation or thesis
