Title: Development of a Bias-Aware Algorithm for the Analysis and Perception of Children’s Facial Expressions by Autonomous Agents
Date: Tuesday, January 31, 2023
Time: 8:00am – 10:00am
Location: Zoom Link
Meeting ID: 936 4850 8890
Passcode: 576950
De’Aira Bryant
Computer Science Ph.D. Student
School of Interactive Computing
Georgia Institute of Technology
Committee:
Ayanna Howard (Advisor) – School of Interactive Computing, Georgia Institute of Technology / College of Engineering, Ohio State University
Charles Isbell – School of Interactive Computing, Georgia Institute of Technology
Sonia Chernova – School of Interactive Computing, Georgia Institute of Technology
Jason Borenstein – School of Public Policy, Georgia Institute of Technology
Tom Williams – Department of Computer Science, Colorado School of Mines
Abstract
The field of human-robot interaction (HRI) has made great strides toward designing autonomous agents that operate in real-world environments with humans. Potential applications span the sectors of education, healthcare, hospitality, and more. These agents are often complex intelligent systems that rely on various perception algorithms to interact with their environment. Prior work in artificial intelligence (AI) has shown that human-centered perception algorithms are susceptible to producing biased output, leading to ethical concerns around fairness, privacy, and safety. Measuring and mitigating bias in AI systems has since become one of the most pressing challenges in computer science. Yet, very little work has examined bias with respect to autonomous agents or how the perpetuation of bias via autonomous agents affects HRI.
This thesis examines the effects of human, data, and algorithmic bias on HRI through the lens of facial expression recognition (FER) and presents bias-aware techniques to facilitate more effective HRI. First, we analyze human bias through an examination of normative expectations for expressive autonomous agents, considering factors such as robot race, gender, and embodiment. These expectations help inform the design processes needed to develop effective agents. Next, we investigate data and algorithmic bias in FER systems for populations with scarce data. Subsequently, we develop improved techniques for modeling facial expression perception and benchmarking FER algorithms. We then propose the application of a semi-supervised machine learning technique, self-learning, as a bias-aware strategy for FER development and present preliminary results from a pilot experiment. Finally, we propose a validation experiment to assess human perceptions of fairness during an interaction with an autonomous agent using either a standard or a bias-aware FER algorithm, thereby improving our understanding of the implications of bias for HRI.
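The self-learning strategy named above (often called self-training or pseudo-labeling) can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the thesis's method: the toy feature vectors, the confidence threshold, and the use of scikit-learn's generic SelfTrainingClassifier wrapper stand in for FER-specific data and models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

# Toy stand-in for facial-expression feature vectors: two well-separated
# clusters, e.g. "happy" vs. "sad" embeddings (illustrative only).
X = np.vstack([rng.normal(-2.0, 1.0, (100, 5)),
               rng.normal(2.0, 1.0, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

# Simulate label scarcity: hide ~90% of labels (-1 marks unlabeled data),
# keeping a handful of labeled examples per class.
y_partial = y.copy()
hide = rng.random(200) < 0.9
hide[:5] = False
hide[100:105] = False
y_partial[hide] = -1

# Self-learning: fit on the labeled subset, pseudo-label unlabeled samples
# the model is confident about (probability >= threshold), then refit,
# repeating until no new samples cross the threshold.
model = SelfTrainingClassifier(LogisticRegression(), threshold=0.8)
model.fit(X, y_partial)
print(model.score(X, y))
```

The appeal for data-scarce populations is that the loop lets a small labeled seed set bootstrap itself with confident pseudo-labels from unlabeled data, though the confidence threshold must be chosen carefully to avoid reinforcing the very biases under study.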