AI Explainability in the Global South: Towards an Inclusive Praxis for Emerging Technology Users
As researchers and technology companies rush to develop artificial intelligence (AI) applications that aid the health of marginalized communities, it is critical to consider the needs of community health workers (CHWs), who will increasingly be expected to operate tools that incorporate these technologies. My work in this dissertation shows that these users have low levels of AI knowledge, form incorrect mental models of how AI works, and at times trust algorithmic decisions more than their own. This is concerning given that AI applications targeting CHWs' work are already in active development, and early deployments in low-resource healthcare settings have already reported failures that created additional workflow inefficiencies and inconvenienced patients. Explainable AI (XAI) can help avoid such pitfalls, but nearly all prior work has focused on users who live in relatively resource-rich settings (e.g., the US and Europe) and who arguably have substantially more experience with digital technologies in general and AI systems in particular. Overall, my dissertation aims to help AI practitioners (designers, developers, researchers, etc.) build tools that are accessible to users with limited AI knowledge who are situated in resource-constrained environments in the Global South. Chapter 3 of this dissertation begins by characterizing the knowledge and perceptions CHWs hold regarding AI. Given CHWs' misconceptions about AI, XAI could help address this gap. However, little XAI research currently focuses on the Global South or on novice AI users, which limits how researchers can make AI understandable to users such as CHWs. To work towards making AI more explainable for users within the Global South, Chapter 4 presents a literature review of XAI research within this region, highlighting unique factors that could hinder AI practitioners from implementing these methods for real-world use.
Given how little of the XAI research I found engages with users in the Global South, Chapter 5 details my efforts in designing interactive prototypes with CHWs to understand which aspects of model decision-making need to be explained and how they can be explained most effectively. Finally, to understand how researchers make AI tools understandable for users like CHWs, Chapter 6 examines how AI practitioners identify problems to address, leverage participatory methods, and consider explainability in their work.