Cornell University Library

eCommons


PROCESSING NETWORK CONTROLS VIA DEEP REINFORCEMENT LEARNING

File(s)
Gluzman_cornellgrad_0058F_13000.pdf (2.04 MB)
Permanent Link(s)
https://doi.org/10.7298/35W4-WV61
https://hdl.handle.net/1813/111367
Collections
Cornell Theses and Dissertations
Author
Gluzman, Mark
Abstract

Novel advanced policy gradient (APG) algorithms, such as proximal policy optimization (PPO), trust region policy optimization, and their variations, have become the dominant reinforcement learning (RL) algorithms because of their ease of implementation and good practical performance. This dissertation is concerned with the theoretical justification and practical application of APG algorithms for solving processing network control optimization problems. Processing network control problems are typically formulated as Markov decision process (MDP) or semi-Markov decision process (SMDP) problems that have several features unconventional for RL: infinite state spaces, unbounded costs, and long-run average cost objectives. Policy improvement bounds play a crucial role in the theoretical justification of APG algorithms. In this thesis we refine existing bounds for MDPs with finite state spaces and prove novel policy improvement bounds for classes of MDPs and SMDPs used to model processing network operations. We consider two examples of processing network control problems and customize the PPO algorithm to solve them. First, we consider the control of parallel-server and multiclass queueing networks. Second, we consider the driver repositioning problem in a ride-hailing service system. For both examples the PPO algorithm with auxiliary modifications consistently generates control policies that outperform state-of-the-art heuristics.

Description
173 pages
Date Issued
2022-05
Keywords
multiclass queueing network • policy improvement bound • processing network • reinforcement learning • ride-hailing
Committee Chair
Dai, Jiangang
Committee Member
Vladimirsky, Alexander
Henderson, Shane
Degree Discipline
Applied Mathematics
Degree Name
Ph.D., Applied Mathematics
Degree Level
Doctor of Philosophy
Type
dissertation or thesis
Link(s) to Catalog Record
https://newcatalog.library.cornell.edu/catalog/15530021
