Show simple item record

dc.contributor.author: Jayasuriya, Suren
dc.date.accessioned: 2017-04-04T20:28:02Z
dc.date.available: 2018-12-08T07:01:45Z
dc.date.issued: 2017-01-30
dc.identifier.other: Jayasuriya_cornellgrad_0058F_10076
dc.identifier.other: http://dissertations.umi.com/cornellgrad:10076
dc.identifier.other: bibid: 9906087
dc.identifier.uri: https://hdl.handle.net/1813/47840
dc.description.abstract: Computational cameras with sensor hardware co-designed with computer vision and graphics algorithms are an exciting recent trend in visual computing. In particular, most of these new cameras capture the plenoptic function of light, a multidimensional function of radiance for light rays in a scene. Such plenoptic information can be used for a variety of tasks including depth estimation, novel view synthesis, and inferring physical properties of a scene that the light interacts with. In this thesis, we present multimodal plenoptic imaging, the simultaneous capture of multiple plenoptic dimensions, using Angle Sensitive Pixels (ASP), custom CMOS image sensors with embedded per-pixel diffraction gratings. We extend ASP models for plenoptic image capture, and showcase several computer vision and computational imaging applications. First, we show how high resolution 4D light fields can be recovered from ASP images, using both a dictionary-based machine learning method as well as deep learning. We then extend ASP imaging to include the effects of polarization, and use this new information to image stress-induced birefringence and remove specular highlights from light field depth mapping. We explore the potential of ASPs for time-of-flight imaging, and introduce the depth field, a combined representation of time-of-flight depth with plenoptic spatio-angular coordinates, which is used for applications in robust depth estimation. Finally, we leverage ASP optical edge filtering as a low-power front end for an embedded deep learning imaging system. We also present two technical appendices: a study of using deep learning with energy-efficient binary gradient cameras, and a design flow to enable agile hardware design for computational image sensors in the future.
dc.language.iso: en_US
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/4.0/
dc.subject: Electrical engineering
dc.subject: CMOS image sensors
dc.subject: computer vision
dc.subject: neural networks
dc.subject: plenoptic imaging
dc.subject: signal processing
dc.subject: Computer science
dc.subject: Optics
dc.subject: machine learning
dc.title: Plenoptic Imaging and Vision using Angle Sensitive Pixels
dc.type: dissertation or thesis
thesis.degree.discipline: Electrical and Computer Engineering
thesis.degree.grantor: Cornell University
thesis.degree.level: Doctor of Philosophy
thesis.degree.name: Ph. D., Electrical and Computer Engineering
dc.contributor.chair: Molnar, Alyosha Christopher
dc.contributor.committeeMember: Marschner, Stephen Robert
dc.contributor.committeeMember: Apsel, Alyssa B.
dcterms.license: https://hdl.handle.net/1813/59810
dc.identifier.doi: https://doi.org/10.7298/X4ZP444T



Except where otherwise noted, this item's license is described as Attribution-NonCommercial-ShareAlike 4.0 International