Cornell University Library

eCommons


Structured matrix recovery and approximation from matrix-vector products

File(s)
Halikias_cornellgrad_0058F_14955.pdf (9 MB)
Permanent Link(s)
https://doi.org/10.7298/y19r-nh29
https://hdl.handle.net/1813/117505
Collections
Cornell Theses and Dissertations
Author
Halikias, Diana
Abstract

Algorithms and theory are developed for the task of recovering a structured, unknown matrix $A$ from matrix-vector products $x \mapsto Ax$ and $y \mapsto A^\top y$. Upper and lower bounds on the number of matrix-vector products necessary to recover many common structured matrix families, such as tridiagonal, Toeplitz, Hankel, low-rank, hierarchical, and sparse matrices, are provided. For sparse and hierarchical matrices, the more general problem of near-optimal approximation from matrix-vector products is analyzed. This research is motivated by the field of operator learning, where one wishes to recover an infinite-dimensional operator from only operator-function products. The problem of matrix recovery and approximation also encompasses important current research directions in numerical linear algebra, such as low-rank approximation and diagonal estimation. Algorithms and theory for hierarchical matrix recovery are extended to infinite dimensions for recovery of the Green's function of a uniformly elliptic PDE over a 3D domain with Lipschitz-smooth boundary subject to Dirichlet boundary conditions. We prove that one can accurately approximate the Green's function with relatively few input-output pairs of forcing terms and solutions by exploiting hierarchical low-rank structure and PDE theory. Throughout the work, it is a central question whether one can prove an accuracy guarantee for peeling algorithms, which recover hierarchical matrices from matrix-vector products using a recursive strategy that may result in a large accumulation of error. Upper bounds on the error of peeling-type algorithms are provided in both the finite and infinite-dimensional settings. In infinite dimensions, a combination of Green's function theory for elliptic PDEs and an adaptive adjustment of the target rank at each hierarchical level are needed. 
In finite dimensions, the choice of low-rank approximation algorithm is shown to be crucial, and a perforated sketch structure based on CountSketch is also employed. In each setting, perturbation analysis for low-rank approximation techniques such as the randomized SVD and generalized Nyström method is provided. The first theoretical guarantees for the accuracy of peeling algorithms are presented, as well as the existence of hard cases for which these algorithms accumulate uncontrollable error. Lastly, motivated by realistic settings in PDE learning and numerical linear algebra, the assumption of access to the action of the transpose of a matrix, $y \mapsto A^\top y$, and the adjoint of an operator is removed. Transpose-free techniques for matrix recovery and least squares problems are investigated, and transpose data is shown to be essential to each of these problems.
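To illustrate the recovery problem the abstract describes, here is a minimal sketch (an assumed example, not the thesis's algorithm): a tridiagonal matrix can be recovered exactly from just three matrix-vector products using probing vectors, because columns whose indices agree mod 3 have disjoint row supports.

```python
import numpy as np

# Illustrative sketch (assumption: not taken from the thesis itself).
# Recover an n x n tridiagonal matrix A from three products x -> Ax.
# Columns j and j+3 have disjoint row supports ({j-1, j, j+1} vs.
# {j+2, j+3, j+4}), so summing indicator vectors over one residue
# class mod 3 reveals a third of the nonzero entries per product.

rng = np.random.default_rng(0)
n = 10
A = (np.diag(rng.standard_normal(n))
     + np.diag(rng.standard_normal(n - 1), 1)
     + np.diag(rng.standard_normal(n - 1), -1))

matvec = lambda x: A @ x  # black-box access to x -> Ax

recovered = np.zeros((n, n))
for k in range(3):
    v = np.zeros(n)
    v[k::3] = 1.0                      # probe columns j with j % 3 == k
    y = matvec(v)
    for j in range(k, n, 3):           # supports within this class are disjoint
        for i in range(max(0, j - 1), min(n, j + 2)):
            recovered[i, j] = y[i]

assert np.allclose(recovered, A)       # exact recovery from 3 products
```

The same counting argument gives the lower bound flavor studied in the thesis: a tridiagonal matrix has $O(n)$ unknowns, and each product supplies $n$ numbers, so $O(1)$ products suffice.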

Description
269 pages
Date Issued
2025-05
Keywords
hierarchical matrices • low-rank matrices • matrix recovery • numerical linear algebra • operator learning • PDE learning
Committee Chair
Townsend, Alex
Committee Member
Damle, Anil
Saloff-Coste, Laurent
Degree Discipline
Mathematics
Degree Name
Ph.D., Mathematics
Degree Level
Doctor of Philosophy
Type
dissertation or thesis
Link(s) to Catalog Record
https://newcatalog.library.cornell.edu/catalog/16938245
