Learning to Manipulate Novel Objects for Assistive Robots

Author
Sung, Jaeyong
Abstract
The ability to reason about different modalities of information, for the purpose of physical interaction with objects, is a critical skill for assistive robots. For a robot to assist us in our daily lives, it is not feasible to train it separately on every task and every instance of the objects found in human environments. Robots will have to generalize their skills by jointly reasoning over various sensor modalities such as vision, language, and haptic feedback. This is an extremely challenging problem because each modality has intrinsically different statistical properties. Moreover, even with expert knowledge, manually designing joint features between such disparate modalities is difficult.
In this dissertation, we focus on developing learning algorithms for robots that model tasks involving interactions with various objects in unstructured human environments, especially tasks on novel objects and scenarios that involve sequences of complicated manipulation. To this end, we develop algorithms that learn shared representations of multimodal data and model full sequences of complex motions. We demonstrate our approach on several different applications: understanding human activities in unstructured environments, synthesizing manipulation sequences for under-specified tasks, manipulating novel appliances, and manipulating objects with haptic feedback.
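The dissertation itself details the models; purely as an illustration of the idea of a shared multimodal representation, the sketch below shows one common formulation: separate encoders project features from two modalities (here labeled "vision" and "language") into a joint embedding space, trained with a margin-based ranking loss so that matched pairs score higher than mismatched ones. All layer sizes, names, and the specific loss are illustrative assumptions, not the dissertation's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    # Maps one modality's feature vector into the shared embedding space.
    def __init__(self, in_dim, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-length embeddings

def ranking_loss(a, b, margin=0.2):
    # Encourage each matched pair (a_i, b_i) to be more similar than
    # any mismatched pair in the batch, by at least `margin`.
    sim = a @ b.t()                      # pairwise cosine similarities
    pos = sim.diag().unsqueeze(1)        # similarities of the true pairs
    mask = 1.0 - torch.eye(sim.size(0))  # zero out the diagonal (true pairs)
    hinge = F.relu(margin + sim - pos) * mask
    return hinge.sum() / mask.sum()

# Toy usage: 8 paired samples of hypothetical "vision" features (e.g.,
# point-cloud descriptors) and "language" features (e.g., instruction
# embeddings), each projected into the same 64-dimensional space.
vision_enc, lang_enc = ModalityEncoder(32), ModalityEncoder(16)
v = vision_enc(torch.randn(8, 32))
l = lang_enc(torch.randn(8, 16))
ranking_loss(v, l).backward()

Because both encoders map into the same normalized space, new object instances can be compared across modalities at test time, which is one way such a shared representation supports generalization to novel objects.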
Date Issued
2017-05-30
Subject
Machine Learning; Multimodal Data; Robotic Manipulation; Robot Learning; Artificial Intelligence; Deep Learning; Computer Science; Robotics
Committee Chair
Saxena, Ashutosh
Committee Member
Salisbury, J. Kenneth; Selman, Bart; Guimbretière, François; Marschner, Steve
Degree Discipline
Computer Science
Degree Name
Ph.D., Computer Science
Degree Level
Doctor of Philosophy
Type
dissertation or thesis