eCommons

 

PROCESSING NETWORK CONTROLS VIA DEEP REINFORCEMENT LEARNING

dc.contributor.author: Gluzman, Mark
dc.contributor.chair: Dai, Jiangang
dc.contributor.committeeMember: Vladimirsky, Alexander
dc.contributor.committeeMember: Henderson, Shane
dc.date.accessioned: 2022-07-15T13:54:39Z
dc.date.available: 2022-07-15T13:54:39Z
dc.date.issued: 2022-05
dc.description: 173 pages
dc.description.abstract: Novel advanced policy gradient (APG) algorithms, such as proximal policy optimization (PPO), trust region policy optimization, and their variations, have become the dominant reinforcement learning (RL) algorithms because of their ease of implementation and good practical performance. This dissertation is concerned with the theoretical justification and practical application of APG algorithms for solving processing network control optimization problems. Processing network control problems are typically formulated as Markov decision process (MDP) or semi-Markov decision process (SMDP) problems that have several features unconventional for RL: infinite state spaces, unbounded costs, and long-run average cost objectives. Policy improvement bounds play a crucial role in the theoretical justification of APG algorithms. In this thesis we refine existing bounds for MDPs with finite state spaces and prove novel policy improvement bounds for classes of MDPs and SMDPs used to model processing network operations. We consider two examples of processing network control problems and customize the PPO algorithm to solve them. First, we consider control of parallel-server systems and multiclass queueing networks. Second, we consider the driver repositioning problem in a ride-hailing service system. For both examples, the PPO algorithm with auxiliary modifications consistently generates control policies that outperform state-of-the-art heuristics.
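The abstract's central algorithm, PPO, optimizes a clipped surrogate objective that limits how far each update moves the policy from the one that collected the data. As a purely illustrative sketch (the standard per-sample clipped objective from the PPO literature, not code taken from the dissertation):

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Per-sample PPO clipped surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A).

    ratio: pi_new(a|s) / pi_old(a|s), the policy probability ratio.
    advantage: estimated advantage A(s, a) under the old policy.
    eps: clipping parameter (0.2 is a commonly used default).
    """
    clipped_ratio = max(1.0 - eps, min(1.0 + eps, ratio))
    # Taking the min yields a pessimistic lower bound on the unclipped
    # objective, discouraging updates that move too far from pi_old.
    return min(ratio * advantage, clipped_ratio * advantage)
```

With a positive advantage the objective caps the gain once the ratio exceeds 1 + eps; with a negative advantage it caps the ratio from below at 1 - eps, so in neither direction does the optimizer benefit from an excessively large policy change.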
dc.identifier.doi: https://doi.org/10.7298/35W4-WV61
dc.identifier.other: Gluzman_cornellgrad_0058F_13000
dc.identifier.other: http://dissertations.umi.com/cornellgrad:13000
dc.identifier.uri: https://hdl.handle.net/1813/111367
dc.language.iso: en
dc.subject: multiclass queueing network
dc.subject: policy improvement bound
dc.subject: processing network
dc.subject: reinforcement learning
dc.subject: ride-hailing
dc.title: PROCESSING NETWORK CONTROLS VIA DEEP REINFORCEMENT LEARNING
dc.type: dissertation or thesis
dcterms.license: https://hdl.handle.net/1813/59810.2
thesis.degree.discipline: Applied Mathematics
thesis.degree.grantor: Cornell University
thesis.degree.level: Doctor of Philosophy
thesis.degree.name: Ph. D., Applied Mathematics

Files

Original bundle
Name: Gluzman_cornellgrad_0058F_13000.pdf
Size: 2.04 MB
Format: Adobe Portable Document Format