Show simple item record

dc.contributor.author Jain, Ashesh
dc.identifier.other bibid: 9596994
dc.description.abstract Leveraging human knowledge to train robots is a core problem in robotics. In the near future we will see humans interacting with agents such as assistive robots, cars, and smart houses. Agents that can elicit and learn from such interactions will find use in many applications. Previous works have proposed methods for learning low-level robotic controls or motion primitives from (near-)optimal human signals. In many applications such signals are not naturally available, and optimal human signals are difficult to elicit from non-expert users at scale. Understanding and learning user preferences from weak signals is therefore of great importance. To this end, in this dissertation we propose interactive learning systems that allow robots to learn by interacting with humans. We develop interaction methods that are natural to the end-user, and algorithms that learn from sub-optimal interactions. Furthermore, the interactions between humans and robots have complex spatio-temporal structure. Inspired by the recent success of powerful function approximators based on deep neural networks, we propose a generic framework for modeling such interactions with structured Recurrent Neural Networks. We demonstrate applications of our work in real-world scenarios with assistive robots and cars. This work also establishes state-of-the-art results on several existing benchmarks.
dc.subject Machine Learning
dc.subject Computer Vision
dc.title Learning From Natural Human Interactions For Assistive Robots
dc.type dissertation or thesis, Doctor of Philosophy (Ph. D.), Computer Science
dc.contributor.committeeMember Kleinberg, Robert David
dc.contributor.committeeMember James, Douglas Leonard
