eCommons


Adaptive Bayes-Optimal Methods for Stochastic Search with Applications to Preference Learning

dc.contributor.author: Pallone, Stephen Nicholas
dc.contributor.chair: Frazier, Peter
dc.contributor.chair: Henderson, Shane G.
dc.contributor.committeeMember: Lewis, Adrian S.
dc.contributor.committeeMember: Joachims, Thorsten
dc.date.accessioned: 2018-04-26T14:17:18Z
dc.date.available: 2018-04-26T14:17:18Z
dc.date.issued: 2017-08-30
dc.description.abstract: Many fundamental problems in mathematics can be considered search problems, where one can make sequential queries to find a point with desired properties. This includes convex optimization, in which queries are points within the feasible region, and the corresponding subgradient implies a separating hyperplane that allows non-optimal points to be discarded. Search problems can be solved from a Bayes-optimal viewpoint, where a prior distribution is maintained over the search space quantifying the likelihood of the desired point's location. In this manner, queries and their responses allow this prior to be updated with new information, and the objective is to ask the best query to learn the location of the desired point as efficiently as possible. This framework can be adapted to solving variants of one-dimensional deterministic search, which includes work such as developing query strategies associated with parallel function evaluations, or analyzing a bisection-like method that sorts the roots of functions that must be evaluated together. Most significantly, this idea is used in multidimensional search in the application of learning the preferences of a single user. In this case, the queries are comparative questions, where multiple alternatives are presented to the user, and they choose the most preferred option. The preferences of the user are encoded in a linear classifier, where the user's utility for an alternative roughly corresponds to the dot product between this classifier and a feature vector. The response model allows answers to be contaminated by probabilistic noise, enabling the analysis of responses that violate mathematical principles such as transitivity. The objective is to minimize the expected posterior loss of this classifier by adaptively and sequentially selecting the best subset of alternatives to present to the user. We consider multiple forms of loss, most notably posterior entropy, and provide results characterizing these best subsets. We develop theory yielding tight bounds for measures of loss under the optimal policy, show how computationally tractable algorithms can be implemented, and analyze these policies in two distinct computational scenarios.
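The abstract's core idea — maintain a prior over the target's location, query where the response is most informative, and update by Bayes' rule even when answers are noisy — can be illustrated with a minimal probabilistic-bisection sketch. This is only an illustration of the general framework, not code from the dissertation; the function name, the uniform grid prior, and the symmetric noise probability `p_correct` are all assumptions made for this example.

```python
import numpy as np

def probabilistic_bisection(respond, p_correct=0.8, n_grid=1000, n_queries=60):
    """Locate an unknown point in [0, 1] from noisy directional queries.

    `respond(x)` reports whether the target lies to the right of x,
    but is only correct with probability `p_correct`.
    """
    grid = np.linspace(0.0, 1.0, n_grid)
    posterior = np.full(n_grid, 1.0 / n_grid)  # uniform prior over the target

    for _ in range(n_queries):
        # Query at the posterior median: under this symmetric noise model,
        # the median query maximizes the expected reduction in entropy.
        x = grid[np.searchsorted(np.cumsum(posterior), 0.5)]
        right = respond(x)
        # Bayes update: up-weight the indicated side, down-weight the other.
        lik = np.where(grid > x,
                       p_correct if right else 1.0 - p_correct,
                       1.0 - p_correct if right else p_correct)
        posterior = posterior * lik
        posterior /= posterior.sum()

    return grid[np.argmax(posterior)]
```

Because each response carries roughly `1 - H(p_correct)` bits of information, the posterior concentrates geometrically around the true point even though individual answers may be wrong — the same reasoning the abstract applies to noisy comparative questions in preference learning.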
dc.identifier.doi: https://doi.org/10.7298/X4SQ8XKG
dc.identifier.other: Pallone_cornellgrad_0058F_10430
dc.identifier.other: http://dissertations.umi.com/cornellgrad:10430
dc.identifier.other: bibid: 10361580
dc.identifier.uri: https://hdl.handle.net/1813/56903
dc.language.iso: en_US
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: preferences
dc.subject: Operations research
dc.subject: active learning
dc.subject: bayesian
dc.subject: bisection search
dc.subject: information theory
dc.title: Adaptive Bayes-Optimal Methods for Stochastic Search with Applications to Preference Learning
dc.type: dissertation or thesis
dcterms.license: https://hdl.handle.net/1813/59810
thesis.degree.discipline: Operations Research
thesis.degree.grantor: Cornell University
thesis.degree.level: Doctor of Philosophy
thesis.degree.name: Ph.D., Operations Research

Files

Original bundle
Name: Pallone_cornellgrad_0058F_10430.pdf
Size: 3.81 MB
Format: Adobe Portable Document Format