
dc.contributor.author: Veit, Andreas
dc.date.accessioned: 2018-10-23T13:23:41Z
dc.date.available: 2018-10-23T13:23:41Z
dc.date.issued: 2018-05-30
dc.identifier.other: Veit_cornellgrad_0058F_10825
dc.identifier.other: http://dissertations.umi.com/cornellgrad:10825
dc.identifier.other: bibid: 10489579
dc.identifier.uri: https://hdl.handle.net/1813/59494
dc.description.abstract: In recent years, the field of computer vision has seen a series of major advances, made possible by rapid developments in algorithms, data collection, and computing infrastructure. As a result, vision systems have begun to be broadly adopted in everyday applications. Progress has been particularly promising in image recognition, where algorithms now often match human performance. Nevertheless, vision systems still largely fall behind humans in their ability to understand the complexities of the visual world and its apparent contradictions. For example, an image can carry different meanings for different people in different contexts. However, being often limited to a single point of view, vision systems tend to focus on the meaning that dominates the training data. In this dissertation, we address this limitation by building conditional vision models that can learn from multiple points of view and adapt their results to account for different conditions. First, we address the related tasks of image tagging and tag-based image retrieval. In particular, we build a system that takes into account the fact that people may associate different meanings with certain images and tags. Thus, the system can personalize outputs for ambiguous tags such as #rock, which could refer to a music genre, a geological object, or even outdoor climbing. Further, we focus on the task of image-based similarity search. Specifically, we design a system that can understand multiple notions of similarity. For example, when searching for items related to an input image of a shoe, users might be interested in shoes of similar color, style, or intended activity. By capturing the multitude of aspects in terms of which objects can be compared, our system can find the right set of related items. Lastly, we explore how the underlying convolutional networks themselves can be made aware of the context in which they are used. In a study, we first develop a new understanding of the roles that individual layers take on in modern convolutional networks. Then, we leverage these insights to design a network that can adaptively define its own topology conditioned on the input image, increasing both accuracy and efficiency.
dc.language.iso: en_US
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/4.0/
dc.subject: computer vision
dc.subject: machine learning
dc.subject: Computer science
dc.title: Learning Conditional Models for Visual Perception
dc.type: dissertation or thesis
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Cornell University
thesis.degree.level: Doctor of Philosophy
thesis.degree.name: Ph.D., Computer Science
dc.contributor.chair: Belongie, Serge J.
dc.contributor.committeeMember: Kleinberg, Jon M.
dc.contributor.committeeMember: Naaman, Mor
dcterms.license: https://hdl.handle.net/1813/59810
dc.identifier.doi: https://doi.org/10.7298/X44T6GKG


