Perception-Enabled Planning for Autonomous Systems
As we move toward fully autonomous robotic systems, a key challenge is integrating state-of-the-art perception techniques from machine learning and computer vision with decision making for robots. While the deep learning community has made tremendous strides, it remains challenging to apply these technological advances to push the boundaries of autonomous planning. In this thesis, I will discuss my research, which aims to bridge advances in computer vision to practical applications in robotics. In the first part of this thesis, I will present a pedestrian motion model that combines high-level decision making with low-level motion patterns. Next, I will present my work on managing scene uncertainties, with a focus on semantic surface types; Next-Best-View measurements are taken to iteratively reduce uncertainty for a multi-hypothesis planner. Then, I will present my work on planning through occluded spaces: a GAN-based image inpainting network fills in unknown space for planning, and I demonstrate the efficacy of my method compared to the traditional paradigm for managing occluded spaces. In the last part, I will discuss a planner that leverages instance segmentation and ordinal information to extend traditional metric planners to long-distance planning.
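The Next-Best-View loop mentioned above can be made concrete with a minimal sketch. The following is an illustrative example rather than the thesis's implementation: it greedily selects the candidate viewpoint whose measurement is expected to most reduce entropy over a discrete belief about surface-type hypotheses. The candidate views, the sensor-model matrices, and all function names here are hypothetical.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a discrete belief vector."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior(belief, likelihoods):
    """Observation probabilities and posteriors under a sensor model.

    belief: (H,) prior over surface-type hypotheses.
    likelihoods: (O, H) matrix with entries P(observation o | hypothesis h).
    Returns (O,) observation marginals and (O, H) posterior beliefs.
    """
    joint = likelihoods * belief          # (O, H): P(o, h)
    p_obs = joint.sum(axis=1)             # (O,):   P(o)
    posteriors = joint / p_obs[:, None]   # (O, H): P(h | o)
    return p_obs, posteriors

def next_best_view(belief, sensor_models):
    """Greedily pick the view with the largest expected entropy reduction.

    sensor_models: dict mapping view id -> (O, H) likelihood matrix.
    """
    h0 = entropy(belief)
    best_view, best_gain = None, -np.inf
    for view, lik in sensor_models.items():
        p_obs, posts = expected_posterior(belief, lik)
        # Expected posterior entropy, averaged over possible observations.
        expected_h = np.sum(p_obs * np.array([entropy(q) for q in posts]))
        gain = h0 - expected_h
        if gain > best_gain:
            best_view, best_gain = view, gain
    return best_view, best_gain

# Toy example: three surface-type hypotheses, two candidate viewpoints.
belief = np.array([0.5, 0.3, 0.2])
sensor_models = {
    "view_A": np.array([[0.8, 0.1, 0.1],   # informative view
                        [0.1, 0.8, 0.1],
                        [0.1, 0.1, 0.8]]),
    "view_B": np.array([[0.4, 0.3, 0.3],   # nearly uninformative view
                        [0.3, 0.4, 0.3],
                        [0.3, 0.3, 0.4]]),
}
print(next_best_view(belief, sensor_models))  # selects view_A
```

After each selected measurement, the belief would be updated with the observed row's posterior and the selection repeated, which is the iterative uncertainty reduction the abstract describes; how views are parameterized and how the belief feeds the multi-hypothesis planner is specific to the thesis and not captured here.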