Non-convex and Interactive Learning via Stochastic Optimization
dc.contributor.author | Sekhari, Ayush | |
dc.contributor.chair | Sridharan, Karthik | en_US |
dc.contributor.committeeMember | Kleinberg, Robert | en_US |
dc.contributor.committeeMember | Tardos, Eva | en_US |
dc.contributor.committeeMember | Sun, Wen | en_US |
dc.date.accessioned | 2023-03-31T16:38:13Z | |
dc.date.available | 2023-03-31T16:38:13Z | |
dc.date.issued | 2022-12 | |
dc.description | 512 pages | en_US |
dc.description.abstract | Advances in machine learning have led to many empirical breakthroughs in computer science, from developing state-of-the-art image classification models to defeating the world champion in the game of Go. From statistical learning (e.g., image recognition) to interactive learning (e.g., the game of Go), this success is driven primarily by a shift towards using over-parameterized non-convex models, such as deep neural networks, for learning. In practice, these high-dimensional models are trained by formulating the learning task as a stochastic optimization problem and solving it with simple first-order algorithms like Stochastic Gradient Descent (SGD). In the first part of the thesis, I will discuss my work on understanding why SGD succeeds in solving the high-dimensional stochastic non-convex optimization problems that arise in practice, and its implications for non-convex learning. It is widely believed that SGD works so well because it has an implicit bias that guides the algorithm towards solutions that generalize well. I will start by discussing the limitations of this approach to understanding SGD, and then present a new framework based on Lyapunov potentials that overcomes them. The framework exploits a deep connection between the rate at which gradient flow converges on the test loss, the geometric properties of the test loss, and an associated Lyapunov potential. In the second part of the thesis, I will discuss my work on understanding interactive learning through the lens of stochastic optimization. Much of the recent research in interactive learning, e.g., in MDPs with large state-action spaces, has been limited to the stochastic setting, where the underlying MDP is fixed throughout the interaction and the learner has a value function class that contains the optimal value function for that MDP. I will consider two interactive learning settings in which these assumptions do not hold: agnostic RL, and learning in an adversarially changing environment. For agnostic RL, I will discuss a statistically optimal learning algorithm for low-rank MDPs that uses auto-regressions both to estimate the value of a policy and to find the best policy; the coefficients of the auto-regressions are estimated by solving a stochastic non-convex optimization problem. For learning in adversarially changing environments, we provide a general approach to solving adversarial decision-making problems by reducing them to full-information online learning via a per-step minimax optimization problem. | en_US
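For readers outside the area, the sketch below shows the generic SGD update the abstract refers to, w ← w − η·g(w), where g(w) is an unbiased stochastic estimate of the gradient of the objective. It is a minimal illustration only; the toy least-squares objective, step size, and function names here are assumptions of this sketch, not models or methods from the thesis.

```python
import numpy as np

def sgd(grad_sample, w0, step_size=0.01, n_steps=1000, seed=0):
    """Generic SGD: repeatedly step against an unbiased stochastic gradient.

    grad_sample(w, rng) must return an unbiased estimate of the gradient
    of the (possibly non-convex) objective F at the point w.
    """
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    for _ in range(n_steps):
        w -= step_size * grad_sample(w, rng)
    return w

# Illustrative toy objective (an assumption of this sketch, not from the
# thesis): F(w) = E[(w . x - y)^2] with noisy linear data (x, y).
def toy_grad(w, rng):
    x = rng.normal(size=w.shape)
    y = x.sum() + 0.1 * rng.normal()   # ground-truth weights are all ones
    return 2.0 * (w @ x - y) * x       # gradient of the squared error at w

w_hat = sgd(toy_grad, w0=np.zeros(5))  # converges near the all-ones vector
```

The thesis's question, in these terms, is why iterates of this simple update tend to land at solutions with low test loss when F is high-dimensional and non-convex.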
dc.identifier.doi | https://doi.org/10.7298/2qzd-0525 | |
dc.identifier.other | Sekhari_cornellgrad_0058_13421 | |
dc.identifier.other | http://dissertations.umi.com/cornellgrad:13421 | |
dc.identifier.uri | https://hdl.handle.net/1813/112975 | |
dc.language.iso | en | |
dc.rights | Attribution-NonCommercial 4.0 International | * |
dc.rights.uri | https://creativecommons.org/licenses/by-nc/4.0/ | * |
dc.subject | Deep learning theory | en_US |
dc.subject | Generalization | en_US |
dc.subject | Interactive Learning | en_US |
dc.subject | Machine learning theory | en_US |
dc.subject | Stochastic optimization | en_US |
dc.title | Non-convex and Interactive Learning via Stochastic Optimization | en_US |
dc.type | dissertation or thesis | en_US |
dcterms.license | https://hdl.handle.net/1813/59810.2 | |
thesis.degree.discipline | Computer Science | |
thesis.degree.grantor | Cornell University | |
thesis.degree.level | Doctor of Philosophy | |
thesis.degree.name | Ph. D., Computer Science |
Files
Original bundle
- Name: Sekhari_cornellgrad_0058_13421.pdf
- Size: 3.66 MB
- Format: Adobe Portable Document Format