SMOOTH QUASI-NEWTON METHODS FOR NONSMOOTH OPTIMIZATION
The success of Newton’s method for smooth optimization, when Hessians are available, motivated the idea of quasi-Newton methods, which approximate Hessians in response to changes in gradients and achieve superlinear convergence on smooth functions. Sporadic informal observations over several decades (and, more formally, recent work of Lewis and Overton) suggest that such methods also work surprisingly well on nonsmooth functions. This thesis explores this phenomenon from several perspectives. First, Powell’s fundamental 1976 convergence proof for the popular Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method on smooth convex functions in fact extends to some nonsmooth settings. Second, removing the influence of line search techniques and introducing line-search-free quasi-Newton approaches (including a version of Shor’s R-algorithm) shows, in particular, how repeated quasi-Newton updating at a single point can serve as a separation technique for convex sets. Third, an experimental comparison, in the nonsmooth setting, of the two most popular smooth quasi-Newton updates, BFGS and symmetric rank-one (SR1), emphasizes the power of the BFGS update.
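To make the abstract's central object concrete, here is a minimal sketch of the standard BFGS inverse-Hessian update that the thesis studies: given a step s (change in iterates) and y (change in gradients), the approximation H is revised so that the secant condition H_new @ y = s holds. The function name and the quadratic test problem below are illustrative assumptions, not taken from the thesis itself.

```python
import numpy as np

def bfgs_update(H, s, y):
    """One standard BFGS update of the inverse-Hessian approximation H,
    given step s and gradient change y (illustrative sketch)."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    # H_new = (I - rho s y^T) H (I - rho y s^T) + rho s s^T
    return (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
        + rho * np.outer(s, s)

# On a smooth quadratic f(x) = 0.5 x^T A x, gradient changes satisfy y = A s,
# and the updated H obeys the secant condition H_new @ y == s by construction.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
s = np.array([1.0, -0.5])
y = A @ s
H_new = bfgs_update(np.eye(2), s, y)
assert np.allclose(H_new @ y, s)
```

The same rank-two update formula is well defined whenever y @ s > 0, which is why it can be applied even when the objective is nonsmooth, the regime this thesis investigates.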
Operations research; Optimization; BFGS; Convex; Nonsmooth; Quasi-Newton
Lewis, Adrian S.
Frazier, Peter; Bindel, David S.
Ph.D., Operations Research
Doctor of Philosophy
Attribution 4.0 International
dissertation or thesis