Cornell University Library

eCommons


Active Vision and Perception

File(s)
Gemerek_cornellgrad_0058F_12178.pdf (14.33 MB)
Permanent Link(s)
https://doi.org/10.7298/gtcf-yy18
https://hdl.handle.net/1813/102961
Collections
Cornell Theses and Dissertations
Author
Gemerek, Jake
Abstract

Active vision and perception for resource-constrained autonomous vehicles, such as small ground robots and quadrotors, are limited by the allowable algorithmic complexity and by the need for fast reaction times. For an autonomous mobile robot to safely and reliably perform a useful task or behavior, real-time visual perception that informs a controller with a fast reaction time is needed. This dissertation covers new research developments in the areas of active vision, planning, and control for directional sensors, with a focus on event-cameras and RGB cameras. Event-cameras, also known as neuromorphic cameras, are biologically inspired visual sensors that measure local changes in light intensity, mitigating latency and redundant data. Several high-level active vision algorithms, interfaced with autonomous vehicle controllers, are developed for event-cameras and quantitatively compared to analogous RGB camera algorithms in terms of both accuracy and computational cost. In particular, motion-based perception algorithms for object recognition and tracking, action recognition, and depth estimation are developed for use on a moving quadrotor tasked with reacting to the perceived environment. Novel active vision algorithms for RGB cameras are also developed in which an autonomous ground vehicle or quadrotor interacts with a human target of interest, using novel action recognition and tracking perception capabilities paired with new control methods for target following. Furthermore, a novel occlusion-avoiding path planning algorithm that is applicable to both event-cameras and RGB cameras is developed. The proposed method computes a closed-form collection of subsets of the sensor's configuration space, referred to as visibility regions, that quantify the visibility of targets subject to the sensor field-of-view geometry and line-of-sight visibility.
This method is quantitatively compared to several existing sensor path planning methods in terms of analytical computational complexity, experimental path performance, and experimental computational cost. The results of this work enable active vision, perception, and planning for resource-constrained mobile robots equipped with directional sensors such as an event-camera or RGB camera.
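The dissertation defines visibility regions in closed form over the sensor's configuration space; as a rough illustration of the two visibility conditions the abstract names (field-of-view geometry and line-of-sight visibility), a minimal 2-D point check might look like the sketch below. All function names, the polygonal-obstacle model, and the parameters are illustrative assumptions, not the author's actual formulation.

```python
import math

def segments_intersect(p1, p2, p3, p4):
    """Return True if segment p1-p2 properly crosses segment p3-p4."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def target_visible(sensor_pos, heading, half_fov, max_range, target, obstacles):
    """A target is visible if it lies within sensor range, inside the angular
    field of view, and the sensor-to-target ray clears every obstacle edge."""
    dx, dy = target[0] - sensor_pos[0], target[1] - sensor_pos[1]
    if math.hypot(dx, dy) > max_range:
        return False
    # Signed angular offset between the sensor heading and the bearing to target.
    bearing = math.atan2(dy, dx)
    offset = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    if abs(offset) > half_fov:
        return False
    # Line-of-sight test against obstacle edges given as point pairs.
    return not any(segments_intersect(sensor_pos, target, a, b)
                   for a, b in obstacles)

# A sensor at the origin facing +x with a 90-degree field of view: a target
# 5 m ahead is visible until a wall segment is placed between them.
print(target_visible((0, 0), 0.0, math.radians(45), 10.0, (5, 0), []))
print(target_visible((0, 0), 0.0, math.radians(45), 10.0, (5, 0),
                     [((3, -1), (3, 1))]))
```

A closed-form visibility region, as described in the abstract, would instead characterize the full set of sensor configurations for which such a check succeeds, rather than testing one configuration at a time.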

Description
164 pages
Date Issued
2020-08
Keywords
Autonomous Systems • Computer Vision • Neuromorphic Perception • Robot Path Planning • Visual Perception
Committee Chair
Ferrari, Silvia
Committee Member
Campbell, Mark
Weinberger, Kilian Quirin
Degree Discipline
Mechanical Engineering
Degree Name
Ph. D., Mechanical Engineering
Degree Level
Doctor of Philosophy
Type
dissertation or thesis
Link(s) to Catalog Record
https://catalog.library.cornell.edu/catalog/13277845
