Developing Fair Resource Allocation Behaviors for Robots
The goal of this dissertation is to design and develop behaviors that enable robots to distribute resources justly and impartially within group settings. For robots to be deployed in groups and teams, they must be capable of making fair decisions, yet this capability remains overlooked by the human-robot interaction (HRI) community.

In this work, I first provide a systematic review of the studies exploring fairness across the top robotics conferences and journals. These studies reveal five key themes (resource allocation, cheating, accuracy, social norms, and causing harm) in how fairness is conceptualized within HRI. I also discuss the ways in which fairness has been shown to influence human-robot collaboration, and I offer theoretical and design suggestions that recommend a dynamic perspective of fairness rather than a static one.

Second, I provide empirical evidence that the way in which a machine distributes resources across a group, a behavior I refer to as machine allocation behavior, affects the interpersonal relationships between group members. The experiment uses Co-Tetris, a platform I developed that follows a standard Tetris game but allows multiple individuals to work together to score as many points as possible while a separate group member (the allocator) decides who controls the falling Tetris blocks at each turn. By manipulating the agency of the allocator (human vs. AI) and the number of Tetris blocks an individual receives, I explore how performance, interpersonal perceptions, and perceived fairness change as a function of machine allocation behavior.

Third, I introduce a novel multi-armed bandit algorithm with fairness constraints to investigate the scenario in which a robot must distribute resources among human group members, and I show that fairness in resource allocation can influence the trust of weaker-performing individuals without affecting the overall performance of the group. The algorithm constrains, via constraint rates, the number of resources any group member can receive over the course of a task. To evaluate the algorithm, I conducted an experimental study using the Co-Tetris platform in which participants were tasked with achieving the highest possible group score. Group members were informed that an AI algorithm was in charge of determining how many turns a player gets to control the falling Tetris blocks. I manipulated the level of resources an individual could receive using three different constraint rates (UCB 25%, UCB 33%, UCB 50%), and I used the platform together with survey responses to capture performance data as well as perceptions of trust and fairness. A sketch of this fairness-constrained allocation scheme appears below.

Overall, I show that reasoning about fairness is essential for robots in groups. The way in which a robot distributes resources across group members shapes not only the level of trust a user places in the system but also the way in which they see other group members.
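To make the constrained-bandit idea concrete, the following is a minimal Python sketch of one plausible reading of the algorithm: a standard UCB1 index over players, with the constraint rate implemented as a minimum share of turns each player is guaranteed. The class name FairUCBAllocator, the floor-style interpretation of the constraint rate, and the reward signal (points scored on a turn) are illustrative assumptions, not the exact formulation developed in the dissertation.

    import math
    import random

    class FairUCBAllocator:
        # Sketch of a fairness-constrained UCB1 allocator: each player
        # (arm) is guaranteed a minimum share of turns, given by the
        # constraint rate min_rate (e.g., 0.25, 0.33, 0.50).
        def __init__(self, n_players, min_rate):
            assert n_players * min_rate <= 1.0, "constraint rates must be feasible"
            self.n = n_players
            self.min_rate = min_rate
            self.counts = [0] * n_players    # turns allocated to each player
            self.means = [0.0] * n_players   # running mean reward per player
            self.t = 0                       # total turns allocated so far

        def select(self):
            # Choose which player controls the next falling block.
            self.t += 1
            # Give every player one turn first to initialize estimates.
            for i in range(self.n):
                if self.counts[i] == 0:
                    return i
            # Fairness constraint: any player who has fallen below their
            # guaranteed share of turns receives the next turn.
            lagging = [i for i in range(self.n)
                       if self.counts[i] < self.min_rate * self.t]
            if lagging:
                return min(lagging, key=lambda i: self.counts[i])
            # Otherwise allocate the turn by the standard UCB1 index.
            def ucb(i):
                return self.means[i] + math.sqrt(2.0 * math.log(self.t) / self.counts[i])
            return max(range(self.n), key=ucb)

        def update(self, player, reward):
            # Record the points the player scored on their turn.
            self.counts[player] += 1
            self.means[player] += (reward - self.means[player]) / self.counts[player]

    # Illustrative use: three players, each guaranteed at least 25% of turns.
    allocator = FairUCBAllocator(n_players=3, min_rate=0.25)
    for _ in range(300):
        player = allocator.select()
        points = random.random()   # stand-in for points scored this turn
        allocator.update(player, points)

In this sketch the constraint check runs before the UCB index, so exploitation of a strong player can never starve another player's guaranteed share; raising the constraint rate toward 1/n pushes the allocation toward equality regardless of estimated player skill.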