Cornell University Library
eCommons

Human-Robot and Multi-Autonomous Agent Collaborations in Cyber-physical Environments

File(s)
Menezes_cornell_0058O_11985.pdf (9.98 MB)
Permanent Link(s)
http://doi.org/10.7298/q08y-r779
https://hdl.handle.net/1813/115628
Collections
Cornell Theses and Dissertations
Author
Menezes, Jovan
Abstract

Research on human-robot teams in robotics, mechanical engineering, and computer science requires evaluating software and algorithms in complex real-world scenarios that are difficult to replicate in laboratory environments. Conversely, field experiments in natural settings lack the detailed measurements needed to compare and validate performance, and pre-recorded real-world datasets are of limited use for assessing perception, control, and decision strategies. Moreover, the cost and logistics of involving large numbers of humans or robots make such experiments impractical, especially when factors such as environmental conditions disrupt testing. To address these issues, this work presents a cyber-physical framework that serves as a testbed for such research. The framework combines humans with virtual and real robots in simulated photo-realistic environments using motion capture technology, virtual reality (VR), wearable sensors, and physics-based simulations of the robot platforms. This creates an extended reality (XR) testbed in which humans and real robots experience virtual worlds with real-time visual feedback and interaction. The movements and actions of real human and robot agents are transferred from the physical laboratory setting to a synthetic virtual environment using VR coupled with 3D body tracking and motion capture systems. This process generates avatars that replicate the behavior of the real agents in real time and allows those agents to receive feedback from the virtual world. The synthetic environments are built to narrow the gap between reality and simulation, allowing the inclusion of autonomous agents with multi-modal sensor suites.
The potential of the framework is demonstrated through three experiments which showcase interactions between agents in different domains, leveraging the advantages of both real-world and simulation experimentation to complement and enhance each other.
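The core loop the abstract describes — a tracked pose captured in the laboratory, mapped into the synthetic world frame, and applied to an avatar that mirrors the real agent — can be sketched roughly as follows. All names here are hypothetical illustrations, the pose is simplified to position plus yaw, and the world-frame mapping is reduced to a scale-and-offset; the thesis's actual pipeline uses full motion capture, VR, and physics-based simulation.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Hypothetical simplified pose: position (x, y, z) plus heading (yaw).
    # A real motion-capture stream would carry full 6-DoF rigid-body data.
    x: float
    y: float
    z: float
    yaw: float

def mocap_to_world(pose: Pose, scale: float = 1.0,
                   origin: tuple = (0.0, 0.0, 0.0)) -> Pose:
    """Map a motion-capture-frame pose into the synthetic world frame.
    A real system would apply a calibrated rigid-body transform instead
    of this illustrative scale-and-translate."""
    ox, oy, oz = origin
    return Pose(pose.x * scale + ox,
                pose.y * scale + oy,
                pose.z * scale + oz,
                pose.yaw)

class Avatar:
    """Virtual stand-in that replicates a tracked human or robot agent."""
    def __init__(self) -> None:
        self.pose = Pose(0.0, 0.0, 0.0, 0.0)

    def update(self, pose: Pose) -> None:
        self.pose = pose

def step(avatar: Avatar, mocap_pose: Pose) -> Pose:
    """One tick of the capture-to-avatar loop: transform the tracked
    pose into the world frame and apply it to the avatar."""
    avatar.update(mocap_to_world(mocap_pose, scale=1.0,
                                 origin=(10.0, 0.0, 0.0)))
    return avatar.pose
```

In the full framework this loop would run per frame, with the avatar's updated pose also driving the feedback (visual rendering, simulated sensor returns) sent back to the real agent.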

Description
65 pages
Date Issued
2023-12
Keywords
Cyber-Physical Environments • Digital Twin • Human-Robot Collaboration • Multi-Agent Control • Multi-Modal Perception • Virtual Reality
Committee Chair
Ferrari, Silvia
Committee Member
Hariharan, Bharath
Napp, Nils
Degree Discipline
Mechanical Engineering
Degree Name
M.S., Mechanical Engineering
Degree Level
Master of Science
Rights
Attribution 4.0 International
Rights URI
https://creativecommons.org/licenses/by/4.0/
Type
dissertation or thesis
Link(s) to Catalog Record
https://newcatalog.library.cornell.edu/catalog/16454798
