Cornell University Library
eCommons

AI Explainability in the Global South: Towards an Inclusive Praxis for Emerging Technology Users

File(s)
Okolo_cornellgrad_0058F_13866.pdf (15.68 MB)
Permanent Link(s)
https://doi.org/10.7298/c1dw-j313
https://hdl.handle.net/1813/114719
Collections
Cornell Theses and Dissertations
Author
Okolo, Chinasa
Abstract

As researchers and technology companies rush to develop artificial intelligence (AI) applications that aid the health of marginalized communities, it is critical to consider the needs of community health workers (CHWs), who will increasingly be expected to operate tools that incorporate these technologies. My work in this dissertation shows that these users have low levels of AI knowledge, form incorrect mental models about how AI works, and at times may trust algorithmic decisions more than their own. This is concerning, given that AI applications targeting the work of CHWs are already in active development, and early deployments in low-resource healthcare settings have already reported failures that created additional workflow inefficiencies and inconvenienced patients. Explainable AI (XAI) can help avoid such pitfalls, but nearly all prior work has focused on users who live in relatively resource-rich settings (e.g., the US and Europe) and who arguably have substantially more experience with digital technologies overall and AI systems in particular. Taken together, my dissertation aims to aid AI practitioners (designers, developers, researchers, etc.) in building tools accessible to users with limited AI knowledge who are situated in resource-constrained environments in the Global South. Chapter 3 of this dissertation begins by characterizing the knowledge and perceptions CHWs hold regarding AI. Given CHWs' misconceptions about AI, XAI could potentially help address this issue. However, there is currently little XAI research focused on the Global South and on novice AI users, which could limit how researchers make AI understandable to users such as CHWs. To work towards making AI more explainable for users within the Global South, Chapter 4 presents a literature review of XAI research within this region, highlighting unique factors that could hinder the implementation of these methods by AI practitioners for real-world use.
Given how little of the XAI research I found engages with users in the Global South, Chapter 5 details my efforts in designing interactive prototypes with CHWs to understand which aspects of model decision-making need to be explained and how they can be explained most effectively. To understand how researchers make AI tools understandable for users like CHWs, Chapter 6 examines how AI practitioners identify problems to address, leverage participatory methods, and consider explainability in their work.

Description
262 pages
Date Issued
2023-08
Keywords
AI for Social Good • Artificial Intelligence • Community Health Workers • Explainable AI • Global Health • Human-Centered Design
Committee Chair
Dell, Nicola
Committee Member
Hariharan, Bharath
Vashistha, Aditya
Choudhury, Tanzeem
Degree Discipline
Computer Science
Degree Name
Ph.D., Computer Science
Degree Level
Doctor of Philosophy
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International
Rights URI
https://creativecommons.org/licenses/by-nc-nd/4.0/
Type
dissertation or thesis
Link(s) to Catalog Record
https://newcatalog.library.cornell.edu/catalog/16219327