Cornell University
Library

eCommons


Effects Of Conditioning On The Convergence Of Randomized Optimization Algorithms

File(s)
Leventhal, Dennis.pdf (592.18 KB)
Permanent Link(s)
https://hdl.handle.net/1813/14057
Collections
Cornell Theses and Dissertations
Author
Leventhal, Dennis
Abstract

The connection between the conditioning of a problem instance -- the sensitivity of a problem instance to perturbations in the input -- and the speed of certain iterative algorithms in solving that problem instance is a recurring topic of study in numerical analysis. This dissertation, consisting of three distinct parts, provides a further connection through the framework of randomized optimization algorithms. In Part I, we explore how randomization can improve the asymptotic convergence properties of simple, directional search-based optimization methods. Specifically, we develop a randomized, iterative scheme for estimating the Hessian matrix of a twice-differentiable function and analyze how this estimation technique can enhance a random directional search method. From there, we proceed to develop a conjugate-directional search method that incorporates estimated Hessian information without requiring direct use of gradients. In Part II, we turn our focus to randomized variants of two classical algorithms: coordinate descent methods for systems of linear equations and iterated projection methods for systems of linear inequalities. We demonstrate that, under appropriate randomization schemes, linear rates of convergence can be bounded (in expectation) in terms of natural linear-algebraic conditioning measures for these problems. By considering conditioning concepts induced by metric regularity and metric subregularity, we then expand upon these results by examining randomized projection algorithms for convex feasibility problems. Extensions to reflection-based algorithms are also discussed. Observing that convex feasibility problems can be reformulated as the problem of finding a common zero of maximal monotone operators, we proceed by studying the proximal point method in Part III.
Specifically, for the problem of finding a zero of a single maximal monotone operator, we show that metric subregularity of that operator is sufficient for linear convergence of the proximal point method, leading to a convergence rate in terms of the conditioning induced by the modulus of subregularity. This result is then generalized -- by considering randomized and averaged proximal point methods -- to obtain a convergence rate for the problem of finding a common zero of finitely many such operators.
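As a concrete illustration of the Part II setting, the following is a minimal sketch of a randomized iterated-projection (Kaczmarz-type) method for a consistent linear system Ax = b: each step samples one equation with probability proportional to its squared row norm and orthogonally projects the iterate onto that equation's hyperplane. The sampling rule and the expected linear convergence rate (governed by a Frobenius-norm condition measure of A) are standard for this scheme; the code is an illustrative sketch, not taken from the dissertation itself.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Randomized iterated-projection (Kaczmarz) method for Ax = b.

    Row i is sampled with probability ||a_i||^2 / ||A||_F^2, and the
    iterate is orthogonally projected onto the hyperplane
    {x : a_i . x = b_i}. For a consistent system, the expected squared
    error decays linearly at a rate tied to a condition measure of A.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.sum(A * A, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Orthogonal projection onto {x : a_i . x = b_i}
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# Small consistent system with a known solution
A = np.array([[3.0, 1.0], [1.0, 2.0], [0.0, 1.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true
x = randomized_kaczmarz(A, b)
```

Because each projection is non-expansive toward the solution set, the iterates converge for any starting point; randomizing the row choice is what makes the rate expressible in expectation via a natural conditioning measure rather than depending on a fixed sweep order.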
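For the Part III setting, here is a minimal sketch of the proximal point method for finding a zero of a maximal monotone operator T, written in terms of its resolvent (I + lam*T)^(-1). The concrete operator below, T(x) = x (for which the resolvent with lam = 1 is x -> x/2 and the unique zero is 0), is a hypothetical example chosen only so the linear convergence promised by metric subregularity is visible; it is not an example from the dissertation.

```python
import numpy as np

def proximal_point(prox, x0, iters=60):
    """Proximal point iteration x_{k+1} = (I + lam*T)^{-1}(x_k),
    where the resolvent map `prox` is supplied by the caller.
    When T is metrically subregular at its zero, the iterates
    converge linearly to that zero."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox(x)
    return x

# Example operator: T(x) = x (the subdifferential of 0.5*||x||^2).
# Its resolvent with lam = 1 is x -> x / 2, so each iteration halves
# the distance to the unique zero at the origin.
x = proximal_point(lambda x: x / 2.0, np.array([4.0, -3.0]), iters=60)
```

The halving per step in this toy case is exactly a linear rate; in general the contraction factor is controlled by the modulus of subregularity of T at its zero, which is the conditioning quantity the abstract refers to.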

Date Issued
2009-10-14
Type
dissertation or thesis
