Show simple item record

dc.contributor.author: Lee, Eric Hans
dc.date.accessioned: 2021-03-15T13:32:05Z
dc.date.available: 2021-03-15T13:32:05Z
dc.date.issued: 2020-12
dc.identifier.other: Lee_cornellgrad_0058F_12279
dc.identifier.other: http://dissertations.umi.com/cornellgrad:12279
dc.identifier.uri: https://hdl.handle.net/1813/103236
dc.description: 158 pages
dc.description.abstract: Global optimization, which seeks to identify a maximal or minimal point over a domain Omega, is a ubiquitous and well-studied problem in applied mathematics, computer science, statistics, operations research, and many other fields. The resulting body of global optimization research is vast, ranging from heuristic and metaheuristic-driven approaches such as evolutionary search to application-driven systems such as multi-level, multi-fidelity optimization of physical simulations. Global optimization's inherent hardness underlies this sheer variety of methods; absent any additional assumptions, obtaining an efficient certificate of global optimality is not possible. Consequently, there are no agreed-upon methods that exhibit robust, all-around performance, as there are in local optimization. Data-driven algorithms and models, spurred by recent advances in cheap computing and flexible, open-source software, have been growing in popularity over recent years. Bayesian optimization (BO) is one such instance of this trend in global optimization. Using its past evaluations, BO builds a probabilistic model of the objective function to guide optimization, and selects the next iterate through an acquisition function, which scores each point in the optimization domain based on its potential to decrease the objective function. BO has been observed to converge faster than competing classes of global optimization algorithms. This sample efficiency is BO's key strength, and makes it ideal for optimizing objective functions that are expensive to evaluate and potentially contaminated with noise. Key BO applications that meet these criteria include optimizing machine learning hyperparameters, calibrating physical simulations, and designing engineering systems. BO's performance is heavily influenced by its acquisition function, which must effectively balance exploration and exploitation to converge quickly.
Default acquisition functions such as expected improvement are greedy in the sense that they ignore how the current iteration will affect future ones. Typically, the BO exploration-exploitation trade-off is expressed in the context of a one-step optimal process: for the next iteration, choose the point that balances information quantity and quality. However, if we possess a pre-specified iteration budget h, we might instead choose the point that balances information quantity and quality over the next h steps. This non-myopic approach is aware of the remaining iterations and can balance the exploration-exploitation trade-off correspondingly. Non-myopic BO is the primary topic of this dissertation; we hope that making decisions according to a known iteration budget will improve upon the performance of classic BO, which is budget-agnostic.
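The one-step (greedy) BO loop the abstract describes can be sketched in a few lines. This is an illustrative sketch only, not the dissertation's actual method: the RBF kernel, its lengthscale, the grid-based acquisition maximization, and the toy quadratic objective are all assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, lengthscale=0.3):
    """Squared-exponential kernel between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and std. dev. at test points Xs, given data (X, y)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf_kernel(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """EI for minimization: expected reduction below the incumbent best value."""
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(f, grid, n_init=3, budget=10, seed=0):
    """Greedy (one-step) BO: each iteration picks the grid point maximizing EI.

    Note this loop ignores how many iterations remain -- exactly the
    budget-agnostic behavior the abstract contrasts with non-myopic BO.
    """
    rng = np.random.default_rng(seed)
    X = rng.choice(grid, size=n_init, replace=False)
    y = f(X)
    for _ in range(budget):
        mu, sigma = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X, y = np.append(X, x_next), np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

# Toy objective (an assumption for illustration): minimum at x = 0.6 on [0, 1].
f = lambda x: (x - 0.6) ** 2
x_best, y_best = bayes_opt(f, np.linspace(0.0, 1.0, 101))
```

A non-myopic variant would replace the `expected_improvement` score with one that accounts for the `budget - t` evaluations still remaining at step `t`.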
dc.language.iso: en
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Bayesian optimization
dc.subject: Gaussian process
dc.subject: Global optimization
dc.title: Budget-constrained Bayesian optimization
dc.type: dissertation or thesis
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Cornell University
thesis.degree.level: Doctor of Philosophy
thesis.degree.name: Ph. D., Computer Science
dc.contributor.chair: Bindel, David S.
dc.contributor.committeeMember: Birman, Ken
dc.contributor.committeeMember: Frazier, Peter
dcterms.license: https://hdl.handle.net/1813/59810
dc.identifier.doi: https://doi.org/10.7298/4ywa-w957



Except where otherwise noted, this item's license is described as Attribution 4.0 International