
dc.contributor.author: Chipka, Jordan Bradford
dc.date.accessioned: 2019-04-02T14:01:24Z
dc.date.available: 2019-04-02T14:01:24Z
dc.date.issued: 2018-12-30
dc.identifier.other: Chipka_cornellgrad_0058F_11247
dc.identifier.other: http://dissertations.umi.com/cornellgrad:11247
dc.identifier.other: bibid: 10758134
dc.identifier.uri: https://hdl.handle.net/1813/64994
dc.description.abstract: Urban environments offer a challenging scenario for autonomous driving. Globally localizing information, such as a GPS signal, can be unreliable due to signal shadowing and multipath errors. Detailed a priori maps of the environment with sufficient information for autonomous navigation typically require driving the area multiple times to collect large amounts of data, substantial post-processing of that data to obtain the map, and ongoing maintenance of the map as the environment changes. This dissertation addresses the issue of autonomous driving in an urban environment by investigating algorithms and an architecture that enable fully functional autonomous driving with limited information.

In Chapter 2, an algorithm is developed to autonomously navigate urban roadways with little to no reliance on an a priori map or GPS. Localization is performed with an extended Kalman filter using odometry, compass, and sparse landmark measurement updates. Navigation is accomplished by a compass-based navigation control law. Key results from Monte Carlo studies show success rates of urban navigation under different environmental conditions. Experiments validate the simulated results and demonstrate that, for given test conditions, an expected range can be found for a given success rate.

Chapter 3 develops an approach to detecting and estimating key roadway features, which are then used to build an understanding of the static scene around the vehicle. The study focuses on intersections, given their complexity relative to other scenes and their importance to navigation. Using a test vehicle equipped with a vision system, odometry and vision data are collected for a variety of intersections under diverse conditions. Experimental results are then obtained using computer vision and estimation techniques. These results demonstrate the ability to probabilistically infer key features of an intersection, in real time, as the vehicle approaches it.

In separate earlier research, detailed in Chapter 1, a novel meso-scale hydraulic actuator characterization test platform, termed a Linear Hydraulic Actuator Characterization Device (LHACD), is developed. The LHACD is a hydraulic linear dynamometer that experimentally validates the performance and energetic characteristics of hydraulic actuators. It is applied to testing McKibben artificial muscles and used to show the energy savings achieved by a variable recruitment muscle control scheme. For instance, the McKibben muscles' quasi-static force-stroke capabilities, as well as the power savings of the variable recruitment control scheme, are measured and presented. The development and fabrication of this highly versatile characterization test platform for hydraulic actuators are also described, along with characterization and efficiency study results.
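The abstract's localization approach fuses odometry and compass measurements in an extended Kalman filter. As a purely illustrative sketch of that general technique (not the dissertation's implementation — the actual filter also incorporates sparse landmark updates, and all state layout, models, and noise values below are assumptions), a minimal planar EKF with an odometry prediction step and a compass heading update might look like:

```python
import numpy as np

# State: [x, y, theta] in a planar world frame.
# Motion model: unicycle odometry (speed v, yaw rate w over interval dt).
# Measurement: compass heading z (radians). All values illustrative.

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def predict(x, P, v, w, dt, Q):
    """EKF prediction: propagate state and covariance with odometry."""
    th = x[2]
    x_new = x + np.array([v * np.cos(th) * dt,
                          v * np.sin(th) * dt,
                          w * dt])
    x_new[2] = wrap(x_new[2])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                  [0.0, 1.0,  v * np.cos(th) * dt],
                  [0.0, 0.0, 1.0]])
    return x_new, F @ P @ F.T + Q

def update_compass(x, P, z, R):
    """EKF update: fuse a compass heading measurement z with variance R."""
    H = np.array([[0.0, 0.0, 1.0]])         # measures theta directly
    y = wrap(z - x[2])                      # innovation, angle-wrapped
    S = H @ P @ H.T + R                     # innovation covariance (1x1)
    K = P @ H.T / S                         # Kalman gain
    x_new = x + (K * y).ravel()
    x_new[2] = wrap(x_new[2])
    return x_new, (np.eye(3) - K @ H) @ P
```

In a loop, `predict` would run at the odometry rate and `update_compass` whenever a heading measurement arrives; the compass update bounds heading drift that dead reckoning alone would accumulate.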
dc.language.iso: en_US
dc.subject: Estimation
dc.subject: sensing
dc.subject: computer vision
dc.subject: Mechanical engineering
dc.subject: autonomous driving
dc.subject: Aerospace engineering
dc.subject: Robotics
dc.title: Estimation and Navigation Methods with Limited Information for Autonomous Urban Driving
dc.type: dissertation or thesis
thesis.degree.discipline: Aerospace Engineering
thesis.degree.grantor: Cornell University
thesis.degree.level: Doctor of Philosophy
thesis.degree.name: Ph. D., Aerospace Engineering
dc.contributor.chair: Campbell, Mark
dc.contributor.committeeMember: Kress Gazit, Hadas
dc.contributor.committeeMember: Selman, Bart
dcterms.license: https://hdl.handle.net/1813/59810
dc.identifier.doi: https://doi.org/10.7298/nt50-1997

