Show simple item record

dc.contributor.author  Girard, Cory Jay
dc.date.accessioned  2018-10-23T13:33:13Z
dc.date.available  2018-10-23T13:33:13Z
dc.date.issued  2018-08-30
dc.identifier.other  Girard_cornellgrad_0058F_10925
dc.identifier.other  http://dissertations.umi.com/cornellgrad:10925
dc.identifier.other  bibid: 10489635
dc.identifier.uri  https://hdl.handle.net/1813/59539
dc.description.abstract  In the existing literature on the dynamic control of service systems, a decision-maker seeks to optimize a single performance metric over a given time horizon. However, in many settings, the decision-maker may be interested in multiple performance metrics. Take, for instance, the problem of assigning cross-trained hospital staff to two classes of patients: low-priority and high-priority. In the typical framework, this problem could be modelled as a Markov Decision Process (MDP), in which the performance metric to be minimized is a weighted combination of the expected waiting times of the two classes. However, we argue that a more natural approach is to consider the constrained problem: minimizing the expected waiting time of lower-priority patients while keeping that of higher-priority patients under a given target, $V$. In particular, we concern ourselves with uncovering structural properties of this problem. These properties imply the existence of simple optimal policies that are easy to implement in practice. We formulate this problem (the parallel setting), as well as a related problem in which customers undergo two phases of service in series (the tandem setting), as Constrained Markov Decision Processes (CMDPs). We present a general framework for solving two-class CMDPs, showing that they can be solved by using the Lagrangian dual to specify a particular unconstrained problem. If an appropriate Lagrange multiplier can be discerned, structural results from the resulting Lagrangian relaxation can be used to exploit structure in the original CMDP. We show that for both the parallel and tandem settings, the framework leads to simple threshold-like optimal policies. The results in each case are used to develop heuristics for analogous problems with abandonments, with applications to healthcare, call centers, and manufacturing systems. The efficacy of the heuristics is verified in each case via a detailed numerical study.
We then extend the results in the parallel case to handle multiple classes and constraints. Lastly, we consider a controlled, truncated birth-death chain motivated by optimal treatment prescription in the context of personalized medicine. In this model, states represent the patient's state of health, and treatments can be prescribed to influence improvement and/or deterioration of health. The problem of dynamically prescribing treatments at minimal cost while maintaining a given level of health is modelled as a two-cost CMDP. Rather than employing the more general methods developed earlier, we decompose the state space and consider an alternative Lagrangian relaxation involving two simpler subproblems. We obtain structural results for this problem by showing that optimal solutions to these subproblems can be combined into a constrained-optimal policy. We then attempt to find conditions under which a monotone optimal policy exists under more general transition rates between states of health.
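The Lagrangian-relaxation idea described in the abstract can be illustrated on a toy version of the controlled birth-death chain: fold the health-penalty constraint into the cost via a multiplier and solve the resulting unconstrained MDP by value iteration, observing the threshold-like structure of the optimal policy. All states, transition probabilities, costs, and the multiplier value below are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

# Toy controlled birth-death chain: states 0..N index health (0 = worst).
# Action 1 (treat) biases transitions toward recovery at unit cost per
# period; action 0 is free.  The constrained goal would be to minimize
# discounted treatment cost subject to a bound on a discounted "poor
# health" penalty.  The Lagrangian relaxation folds the penalty into a
# single cost  c(s, a) = a + lam * 1{s <= SICK}  and solves an ordinary
# unconstrained MDP.  All numbers here are illustrative assumptions.
N, SICK, GAMMA, LAM = 6, 2, 0.95, 5.0

def step_probs(s, a):
    """Transition distribution {next_state: prob} for one birth-death move."""
    up = 0.6 if a == 1 else 0.2    # treatment raises the recovery rate
    down = 0.1 if a == 1 else 0.4  # and lowers the deterioration rate
    probs = {}
    for target, p in ((min(s + 1, N), up), (max(s - 1, 0), down), (s, 1.0 - up - down)):
        probs[target] = probs.get(target, 0.0) + p
    return probs

def lagrangian_vi(lam, iters=2000):
    """Value iteration on the Lagrangian relaxation; returns (V, policy)."""
    V = np.zeros(N + 1)
    for _ in range(iters):
        Q = np.empty((N + 1, 2))
        for s in range(N + 1):
            penalty = lam * (1.0 if s <= SICK else 0.0)
            for a in (0, 1):
                cont = sum(p * V[t] for t, p in step_probs(s, a).items())
                Q[s, a] = a + penalty + GAMMA * cont
        V = Q.min(axis=1)
    return V, Q.argmin(axis=1)

V, policy = lagrangian_vi(LAM)
# The relaxation yields a monotone, threshold-like policy: treat in the
# low-health states and do nothing once health is high enough.
print("policy (1 = treat):", policy.tolist())
```

In the full framework, the multiplier `lam` would then be tuned (e.g. by bisection) until the constrained performance target is met; here it is simply fixed to show the threshold structure of the relaxed problem's optimal policy.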
dc.language.iso  en_US
dc.subject  applied probability
dc.subject  Operations research
dc.subject  queueing theory
dc.subject  constrained optimization
dc.subject  dynamic programming
dc.subject  Markov decision processes
dc.subject  Optimal control
dc.title  Structural Results for Constrained Markov Decision Processes
dc.type  dissertation or thesis
thesis.degree.discipline  Operations Research
thesis.degree.grantor  Cornell University
thesis.degree.level  Doctor of Philosophy
thesis.degree.name  Ph. D., Operations Research
dc.contributor.chair  Lewis, Mark E.
dc.contributor.committeeMember  Lewis, Adrian S.
dc.contributor.committeeMember  Henderson, Shane G.
dcterms.license  https://hdl.handle.net/1813/59810
dc.identifier.doi  https://doi.org/10.7298/X4X63K54

