
dc.contributor.author: Leventhal, Dennis
dc.date.accessioned: 2009-10-14T20:12:01Z
dc.date.available: 2014-10-14T06:24:16Z
dc.date.issued: 2009-10-14T20:12:01Z
dc.identifier.other: bibid: 6714458
dc.identifier.uri: https://hdl.handle.net/1813/14057
dc.description.abstract: The connection between the conditioning of a problem instance -- the sensitivity of a problem instance to perturbations in the input -- and the speed of certain iterative algorithms in solving that problem instance is a recurring topic of study in numerical analysis. This dissertation, consisting of three distinct parts, provides a further connection through the framework of randomized optimization algorithms.

In Part I, we explore how randomization can improve the asymptotic convergence properties of simple, directional search-based optimization methods. Specifically, we develop a randomized, iterative scheme for estimating the Hessian matrix of a twice-differentiable function, and analyze how this estimation technique can be used to enhance a random directional search method. From there, we develop a conjugate-directional search method that incorporates estimated Hessian information without requiring direct use of gradients.

In Part II, we turn our focus to randomized variants of two classical algorithms: coordinate descent methods for systems of linear equations and iterated projection methods for systems of linear inequalities. We demonstrate that, under appropriate randomization schemes, linear rates of convergence can be bounded (in expectation) in terms of natural linear-algebraic conditioning measures for these problems. By considering conditioning concepts induced by metric regularity and metric subregularity, we expand upon these results by examining randomized projection algorithms for convex feasibility problems. Extensions to reflection-based algorithms are also discussed.

Observing that convex feasibility problems can be reformulated as the problem of finding a common zero of maximal monotone operators, we study the proximal point method in Part III. Specifically, for the problem of finding a zero of a single maximal monotone operator, we show that metric subregularity of that operator is sufficient for linear convergence of the proximal point method, leading to a convergence rate in terms of the conditioning induced by the modulus of subregularity. This result is then generalized -- by considering randomized and averaged proximal point methods -- to obtain a convergence rate for the problem of finding a common zero of finitely many such operators.
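The randomized iterated-projection methods studied in Part II of the abstract can be illustrated with a minimal sketch. The following is not the dissertation's own code: it is a standard randomized Kaczmarz-style iteration for a consistent linear system Ax = b, with function name, parameters, and row-sampling weights chosen here for illustration. Each step projects the iterate onto a single hyperplane a_i . x = b_i, with row i drawn with probability proportional to ||a_i||^2, so the expected linear convergence rate is governed by a linear-algebraic conditioning measure of A.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Illustrative randomized iterated-projection sketch for Ax = b.

    Row i is sampled with probability ||a_i||^2 / ||A||_F^2, and the
    iterate is orthogonally projected onto the hyperplane a_i . x = b_i.
    The expected error contracts linearly at a rate depending on
    ||A||_F^2 / sigma_min(A)^2, a conditioning measure of the system.
    """
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A**2, axis=1)          # squared row norms ||a_i||^2
    probs = row_norms / row_norms.sum()       # conditioning-aware sampling
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.choice(len(b), p=probs)
        # orthogonal projection onto the i-th hyperplane
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# a consistent test system: the iterate converges toward x_true
A = np.random.default_rng(1).standard_normal((50, 10))
x_true = np.ones(10)
b = A @ x_true
x = randomized_kaczmarz(A, b)
```

For a well-conditioned Gaussian system like this one, a few thousand single-row projections already drive the error far below floating-point noise; for ill-conditioned A the same iteration still converges linearly in expectation, only at a rate degraded by the conditioning measure.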
dc.language.iso: en_US
dc.title: Effects Of Conditioning On The Convergence Of Randomized Optimization Algorithms
dc.type: dissertation or thesis

