COMPUTATIONALLY EFFICIENT AND ROBUST METHODS FOR LARGE-SCALE OPTIMIZATION AND SCIENTIFIC COMPUTING
Abstract
This thesis is concerned with the design and analysis of computationally efficient algorithms for large-scale optimization and scientific computing. It addresses two primary challenges: on the one hand, expanding our understanding of the behavior of first-order optimization methods, which enjoy widespread success on challenging problems such as training deep neural networks; on the other, designing algorithms for fundamental problems in scientific computing that carefully balance the tradeoff between communication and statistical efficiency arising in distributed environments.

We start by studying the performance of iterative schemes for computing eigenvectors of symmetric matrices when the approximation quality is measured using a “row-wise” variation of the commonly used subspace distance, which is a more suitable error proxy for spectral algorithms used in certain graph problems. We obtain improved convergence rates and propose novel stopping criteria that enable substantial computational savings.

We next turn to the problem of computing eigenvectors in distributed environments, wherein multiple views of a data matrix are scattered across a network of machines communicating with a central coordinator. Reducing communication is essential for high performance, but common solution-aggregation heuristics for this problem fail due to the invariance of eigenvectors to rotations and reflections. We introduce a communication-efficient algorithm for distributed eigenvector estimation that enjoys deterministic error guarantees and nearly optimal statistical performance when applied to distributed principal component analysis (PCA).

Distributed algorithms are prone to failure in the presence of adversaries and silent or soft errors, which can change a fraction of responses in the network arbitrarily. We address this shortcoming in the context of eigenvector estimation by designing an outlier-robust extension of our communication-efficient method that can tolerate a constant fraction of “malicious” nodes; in contrast, existing approaches break down in the presence of just a few corruptions.

We next study robust signal recovery in the blind deconvolution problem. We show that nonsmooth recovery formulations for this problem exhibit favorable local conditioning under mild assumptions and are robust to outliers or gross corruptions, properties which are not shared by their smooth counterparts. As a result, we prove that first-order methods for robust signal recovery converge rapidly at a dimension-independent rate when initialized in a neighborhood of the solution set. We complement our analysis with a spectral initialization method that leads to a globally convergent two-stage method.

Finally, we present a superlinearly convergent first-order method for nonsmooth optimization, inspired by Newton’s method for solving nonlinear equations. In contrast to prior work, our method can be applied to problems with nonisolated solution sets and allows for the use of automatic differentiation oracles, which are common in modern machine learning. We also develop a numerically efficient implementation that outperforms standard “low-complexity” iterative methods on a suite of signal processing problems.
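Illustrative code sketches

The sketches below are not taken from the thesis; they are minimal NumPy illustrations of the ideas summarized in the abstract, with all function names and parameters hypothetical. The first sketch shows one natural instantiation of a “row-wise” subspace distance: the largest row norm of the residual between two orthonormal bases after the best orthogonal (Procrustes) alignment, which removes the rotation and reflection ambiguity of eigenvectors.

    import numpy as np

    def rowwise_distance(V_hat, V):
        """Row-wise (2 -> infinity) distance between the subspaces spanned
        by the n-by-k orthonormal bases V_hat and V, minimized over
        orthogonal alignment. Hypothetical illustration, not the thesis code."""
        # Orthogonal Procrustes: Q minimizing ||V_hat @ Q - V||_F.
        U, _, Wt = np.linalg.svd(V_hat.T @ V, full_matrices=False)
        Q = U @ Wt
        # Largest row norm of the aligned residual.
        return np.max(np.linalg.norm(V_hat @ Q - V, axis=1))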
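The abstract notes that naive averaging of locally computed eigenvectors fails because eigenvectors are only defined up to rotations and reflections. A common remedy, sketched below under the assumption that each machine returns an n-by-k orthonormal basis, is to align every local estimate to a shared reference with an orthogonal Procrustes step before averaging; the thesis's communication-efficient algorithm may differ in its details.

    import numpy as np

    def aggregate_eigenvectors(local_estimates):
        """Average local top-k eigenvector estimates after aligning each to
        a common reference basis. Hypothetical sketch of alignment-then-
        averaging; not the thesis's exact procedure."""
        ref = local_estimates[0]  # arbitrary reference basis
        aligned = []
        for V in local_estimates:
            # Best rotation/reflection mapping V onto the reference.
            U, _, Wt = np.linalg.svd(V.T @ ref, full_matrices=False)
            aligned.append(V @ (U @ Wt))
        mean = np.mean(aligned, axis=0)
        # Re-orthonormalize the averaged basis (thin QR).
        Q, _ = np.linalg.qr(mean)
        return Q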
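For the outlier-robust extension, one classical aggregator that tolerates a constant fraction of arbitrarily corrupted responses is the geometric median, computable by Weiszfeld iterations. The sketch below applies it to flattened (and pre-aligned) local estimates; it illustrates the robustness principle only, not the specific method developed in the thesis.

    import numpy as np

    def geometric_median(points, iters=100, eps=1e-9):
        """Weiszfeld iterations for the geometric median of the rows of
        `points` (one flattened local estimate per row). The geometric
        median is robust to a constant fraction of adversarial rows."""
        z = np.mean(points, axis=0)  # initialize at the (non-robust) mean
        for _ in range(iters):
            d = np.linalg.norm(points - z, axis=1)
            w = 1.0 / np.maximum(d, eps)  # guard against division by zero
            z_new = (w[:, None] * points).sum(axis=0) / w.sum()
            if np.linalg.norm(z_new - z) <= eps:
                break
            z = z_new
        return z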
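Finally, the rapid local convergence of first-order methods on nonsmooth recovery formulations can be illustrated with the Polyak subgradient method applied to an ell_1 blind deconvolution loss. The data model and names below are assumptions for the sketch: measurements b_i = <l_i, w*><r_i, x*> with measurement matrices L and R, and the minimal loss value taken to be zero.

    import numpy as np

    def polyak_subgradient(L, R, b, w, x, iters=500):
        """Polyak subgradient method on the robust loss
        f(w, x) = mean_i |<l_i, w> <r_i, x> - b_i|, assuming min value 0.
        Hypothetical sketch of the kind of method analyzed in the thesis."""
        m = b.shape[0]
        for _ in range(iters):
            u, v = L @ w, R @ x          # bilinear measurements of the iterate
            r = u * v - b                # residuals
            s = np.sign(r)
            # Subgradients of f with respect to w and x.
            gw = L.T @ (s * v) / m
            gx = R.T @ (s * u) / m
            f = np.abs(r).mean()
            denom = gw @ gw + gx @ gx
            if f == 0 or denom == 0:
                break
            step = f / denom             # Polyak step with optimal value 0
            w, x = w - step * gw, x - step * gx
        return w, x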
Committee
Damle, Anil
Benson, Austin