eCommons


Stochastic Optimization and Learning: An Adaptive and Resource-Efficient Approach

dc.contributor.author: Salgia, Sudeep
dc.contributor.chair: Zhao, Qing
dc.contributor.committeeMember: Wegkamp, Marten
dc.contributor.committeeMember: Acharya, Jayadev
dc.contributor.committeeMember: Krishnamurthy, Vikram
dc.date.accessioned: 2024-04-05T18:47:50Z
dc.date.available: 2024-04-05T18:47:50Z
dc.date.issued: 2023-08
dc.description: 501 pages
dc.description.abstract: This dissertation focuses on stochastic optimization and learning, where the underlying probabilistic models are unknown and the decision maker optimizes their actions over time through sequential interactions with the unknown environment.

The first part of this dissertation is on stochastic optimization, where the learner aims to optimize an unknown random loss function in expectation, the fundamental building block of training any machine learning model today. In the centralized setting, we study three classes of stochastic optimization problems, categorized by the structural assumptions on the unknown function. For stochastic convex optimization, we propose the first extension of the coordinate minimization (CM) approach to stochastic optimization. The proposed approach provides a universal framework for extending low-dimensional optimization routines to high-dimensional problems and inherits the scalability and parallelizability properties of CM. For kernel-based optimization, we propose the first algorithm with order-optimal regret guarantees, thereby closing the existing gap between upper and lower bounds. Thirdly, for neural-net-based optimization in contextual bandits, we explore the setting where the neural nets are equipped with a smooth activation function, in contrast to existing work that primarily focuses on ReLU neural nets.

The second part of this dissertation focuses on challenges unique to distributed stochastic optimization. For the problem of distributed linear bandits, we investigate the regret-communication trade-off by establishing information-theoretic lower bounds on the communication (in bits) required to achieve a sublinear regret order. We also develop an efficient algorithm that is the first to achieve both order-optimal regret and an order-optimal communication cost (in bits). We further extend the algorithm to stochastic convex optimization, where it continues to enjoy order-optimal regret and communication performance. Secondly, we study the impact of statistical heterogeneity among clients in a distributed kernel-based bandit framework. We adopt a personalization approach to tackle the heterogeneity among users and propose an algorithm that achieves order-optimal regret through a novel design that carefully balances personalized exploration with collaborative exploration.

The third part of the dissertation focuses on problems arising in active learning and active hypothesis testing. For the problem of online active learning for classifying streaming instances, we develop a disagreement-based learning algorithm for a general hypothesis space and noise model that incurs a bounded regret and has an order-optimal label complexity. We also study the problem of noisy group testing under unknown noise models, in contrast to existing studies that assume perfect knowledge of the probabilistic model of the noise. We propose a novel algorithm that is agnostic to the noise distribution and offers a sample complexity that adapts to the noise level and is order-optimal in both the population size and the error rate. In the last chapter, we consider the problem of uniformity testing of Lipschitz continuous distributions with bounded support. We propose a sequential test that adapts to the unknown $\ell_1$ distance to the uniform distribution, allowing quicker identification of more anomalous (non-uniform) distributions.
dc.identifier.doi: https://doi.org/10.7298/a3gx-wr09
dc.identifier.other: Salgia_cornellgrad_0058F_13730
dc.identifier.other: http://dissertations.umi.com/cornellgrad:13730
dc.identifier.uri: https://hdl.handle.net/1813/114753
dc.language.iso: en
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Adaptive
dc.subject: Bandits
dc.subject: Communication Efficient
dc.subject: Computationally Efficient
dc.subject: Machine Learning
dc.subject: Stochastic Optimization
dc.title: Stochastic Optimization and Learning: An Adaptive and Resource-Efficient Approach
dc.type: dissertation or thesis
dcterms.license: https://hdl.handle.net/1813/59810.2
thesis.degree.discipline: Electrical and Computer Engineering
thesis.degree.grantor: Cornell University
thesis.degree.level: Doctor of Philosophy
thesis.degree.name: Ph.D., Electrical and Computer Engineering

Files

Original bundle
Name: Salgia_cornellgrad_0058F_13730.pdf
Size: 5.41 MB
Format: Adobe Portable Document Format