eCommons

Experimental Design For Partially Observed Markov Decision Processes

dc.contributor.author: Thorbergsson, Leifur
dc.contributor.chair: Hooker, Giles J.
dc.contributor.committeeMember: Turnbull, Bruce William
dc.contributor.committeeMember: Booth, James
dc.date.accessioned: 2015-01-07T20:58:57Z
dc.date.available: 2015-01-07T20:58:57Z
dc.date.issued: 2014-08-18
dc.description.abstract: This thesis considers the question of how to most effectively conduct experiments in Partially Observed Markov Decision Processes (POMDPs) so as to provide data that are most informative about a parameter of interest. Methods from Markov decision processes, especially dynamic programming, are introduced and then used in algorithms that maximize a relevant Fisher information. These algorithms are then applied to two POMDP examples. The methods developed can also be applied, via suitable discretization, to stochastic dynamical systems; we show what the resulting control policies look like for the Morris-Lecar neuron model and the Rosenzweig-MacArthur model, and present simulation results. We discuss how parameter dependence within these methods can be handled through the use of priors, and develop tools for updating control policies online. This is demonstrated on another stochastic dynamical system describing the growth dynamics of DNA template in a PCR model.
dc.identifier.other: bibid: 8793390
dc.identifier.uri: https://hdl.handle.net/1813/38999
dc.language.iso: en_US
dc.subject: Experimental Design
dc.subject: POMDP
dc.subject: Diffusion processes
dc.title: Experimental Design For Partially Observed Markov Decision Processes
dc.type: dissertation or thesis
thesis.degree.discipline: Statistics
thesis.degree.grantor: Cornell University
thesis.degree.level: Doctor of Philosophy
thesis.degree.name: Ph.D., Statistics
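
The abstract above describes choosing experimental controls by dynamic programming so as to maximize a Fisher information criterion. The sketch below is only a rough illustration of that general idea: a backward Bellman recursion over a fully observed, finite-state, finite-horizon Markov chain with a scalar parameter theta. The function name, its inputs, and the fully observed simplification are assumptions made here for illustration; they are not taken from the thesis, which treats the partially observed case and online policy updates.

```python
import numpy as np

def fisher_dp(P, dP, horizon):
    """Illustrative backward dynamic programming for an information-maximizing policy.

    P  : dict mapping each control u to an (S x S) transition matrix P(u; theta)
    dP : dict mapping each control u to the elementwise derivative of P(u; theta)
         with respect to the scalar parameter theta
    Returns (policy, value): policy[t, i] is the index of the control chosen in
    state i at time t; value[i] is the expected total Fisher information
    accumulated from state i at time 0.
    """
    controls = list(P.keys())
    S = next(iter(P.values())).shape[0]
    V = np.zeros(S)                                  # information-to-go
    policy = np.zeros((horizon, S), dtype=int)
    for t in reversed(range(horizon)):
        Q = np.zeros((len(controls), S))
        for k, u in enumerate(controls):
            # One-step Fisher information of the transition out of each state:
            # sum_j (d/dtheta P_ij)^2 / P_ij, treating 0/0 terms as 0.
            with np.errstate(divide="ignore", invalid="ignore"):
                step_info = np.nan_to_num(dP[u] ** 2 / P[u]).sum(axis=1)
            Q[k] = step_info + P[u] @ V              # Bellman backup
        policy[t] = Q.argmax(axis=0)                 # greedy control per state
        V = Q.max(axis=0)
    return policy, V
```

In this simplified setting the per-step information of a transition adds across time, so the usual value-iteration backup applies with information-to-go playing the role of reward-to-go; the partially observed and prior-averaged versions studied in the thesis require additional machinery not shown here.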

Files

Original bundle
Name: lt274.pdf
Size: 592.44 KB
Format: Adobe Portable Document Format