Gradient Estimation and Variance Reduction in Stochastic and Deterministic Models

Author
Keane, Ronan
Abstract
Computers, computation, and data play an increasingly important role in scientific research and discovery. This is reflected in part by the rise of machine learning and artificial intelligence, which have become major areas of interest not only in computer science but in many other fields of study, and more generally by the trend toward larger, more complex, higher-capacity models. Stochastic models, and stochastic variants of existing deterministic models, have likewise become important research directions in various fields. For all of these types of models, gradient-based optimization remains the dominant paradigm for model fitting, control, and more. This dissertation considers unconstrained, nonlinear optimization problems, with a focus on the gradient itself, the key quantity that enables the solution of such problems.

In chapter 1, we introduce the notion of reverse differentiation, a term describing the body of techniques that enable the efficient computation of gradients. We cover relevant techniques in both the deterministic and stochastic cases, and present a new framework for calculating the gradient of problems that involve both deterministic and stochastic elements. The resulting gradient estimator can be applied in virtually any situation, including many where automatic differentiation alone fails because it omits the gradient terms arising from score functions.

In chapter 2, we analyze the properties of the gradient estimator, focusing on those properties typically assumed in convergence proofs of optimization algorithms. That chapter attempts to bridge some of the gap between what is assumed in a mathematical optimization proof and what must be proved to obtain a convergence result for a specific model or problem formulation.

Chapter 3 gives various examples of applying our new gradient estimator. We further explore the idea of working with piecewise continuous models, that is, models with distinct branches and if statements that determine which branch applies. We also discuss model elements that cause problems in gradient-based optimization, and how to reformulate a model to avoid such issues.

Lastly, chapter 4 presents a new optimal baseline for use in the variance reduction of gradient estimators involving score functions. We foresee that methodology becoming a key part of gradient estimation, as the presence of score functions is a central feature of our gradient estimator. In somewhat of a departure from the previous chapters, chapters 5 and 6 present two studies in transportation, one of the core emerging application areas that motivated this dissertation.
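To make the score-function terms mentioned above concrete, the following is a minimal sketch (illustrative, not code from the dissertation) of the classical score-function, or likelihood-ratio, gradient estimator. It differentiates E over x ~ N(theta, 1) of f(x) with respect to theta using the identity grad = E[f(x) * d/dtheta log p_theta(x)]; the Gaussian sampling distribution and the function names are assumptions made for this example.

    import numpy as np

    def score_function_gradient(theta, f, n_samples=100_000, rng=None):
        # Estimate d/dtheta E_{x ~ N(theta, 1)}[f(x)] via the score function:
        #   grad = E[f(x) * d/dtheta log p_theta(x)],
        # where the score of N(theta, 1) with respect to theta is (x - theta).
        rng = np.random.default_rng(0) if rng is None else rng
        x = rng.normal(loc=theta, scale=1.0, size=n_samples)
        score = x - theta
        return np.mean(f(x) * score)

    # Sanity check: E[x^2] = theta^2 + 1 under N(theta, 1), so the true
    # gradient at theta = 1.5 is 2 * theta = 3.
    print(score_function_gradient(1.5, lambda x: x**2))  # approx 3.0

Plain reverse-mode automatic differentiation applied to a Monte Carlo sample misses this term entirely, which is the failure mode the abstract alludes to.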
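Chapter 4's contribution concerns baselines, a control-variate technique for reducing the variance of such estimators. Because E[score] = 0, subtracting any constant b from f(x) leaves the estimator unbiased. The sketch below (again illustrative, using the classical variance-minimizing scalar baseline rather than the dissertation's new construction) shows the idea for the same Gaussian example.

    def baseline_gradient(theta, f, n_samples=100_000, rng=None):
        # Score-function estimator with a variance-reducing scalar baseline b.
        # Unbiasedness holds for any constant b since E[score] = 0; for a
        # scalar parameter, variance is minimized at
        #   b* = E[f(x) * score^2] / E[score^2].
        rng = np.random.default_rng(0) if rng is None else rng
        x = rng.normal(loc=theta, scale=1.0, size=n_samples)
        score = x - theta
        fx = f(x)
        b = np.mean(fx * score**2) / np.mean(score**2)
        # In practice b is often estimated on independent samples so that
        # reusing the same draws does not introduce a small bias.
        return np.mean((fx - b) * score)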
Description
237 pages
Date Issued
2022-05
Subject
automatic differentiation; baselines; gradient estimation; gradient-based optimization; score functions; stochastic gradient
Committee Chair
Gao, H. Oliver
Committee Member
Rand, Richard Herbert; Pender, Jamol J.; Samaranayake, Samitha
Degree Discipline
Systems Engineering
Degree Name
Ph. D., Systems Engineering
Degree Level
Doctor of Philosophy
Rights
Attribution 4.0 International
Rights URI
https://creativecommons.org/licenses/by/4.0/
Type
dissertation or thesis
Except where otherwise noted, this item's license is described as Attribution 4.0 International