eCommons


PERCEPTION FOR AUTONOMOUS VEHICLES IN CHALLENGING WEATHERS AND OCCLUDED ENVIRONMENTS

Access Restricted

Access to this document is restricted. Some items have been embargoed at the request of the author, but will be made publicly available after the "No Access Until" date.


No Access Until

2024-09-05
Abstract

Self-driving cars must detect other traffic participants, such as vehicles and pedestrians, in 3D in order to plan safe routes and avoid collisions. State-of-the-art 3D object detectors based on deep learning have shown promising accuracy, but they are prone to overfitting to domain idiosyncrasies, making them fail in new environments and across weather conditions: a serious problem for the robustness of self-driving cars. Additionally, object detection under occlusion remains a challenge.

In this dissertation, I first present a novel learning approach that reduces the gap between domains by fine-tuning detectors on high-quality pseudo-labels in the target domain: pseudo-labels that are automatically generated after driving by replaying previously recorded driving sequences. In these replays, object tracks are smoothed forward and backward in time, and detections are interpolated and extrapolated, crucially leveraging future information to catch hard cases such as detections missed due to occlusion or long range.

I then present a new dataset that enables robust autonomous driving via a novel data collection process: data is repeatedly recorded along a 15 km route under diverse scene (urban, highway, rural, campus), weather (snow, rain, sun), time (day/night), and traffic (pedestrians, cyclists, and cars) conditions. The dataset includes road and object annotations that use amodal masks to capture partial occlusions, as well as 3D bounding boxes. We demonstrate the uniqueness of this dataset by analyzing the performance of baselines on amodal segmentation of roads and objects.

Finally, I present a novel approach that leverages repeated traversals to improve 3D object detection. I propose a simple and effective method for combining visibility with LiDAR scans from the current and past traversals, along with a per-class past-occupancy prediction task. We argue that these additional inputs and prediction tasks add important information for detecting occluded objects.

Description

116 pages

Date Issued

2023-08

Publisher

Keywords

Autonomous; Detection; Machine Learning; Perception; Tracking; Vision

Committee Chair

Campbell, Mark

Committee Member

Hariharan, Bharath
Kress-Gazit, Hadas
Weinberger, Kilian

Degree Discipline

Mechanical Engineering

Degree Name

Ph.D., Mechanical Engineering

Degree Level

Doctor of Philosophy

Rights

Attribution 4.0 International

Types

dissertation or thesis
