Participation and Transparency in AI System Design and Integration
As AI systems proliferate across institutions and organizations, questions of stakeholder participation and algorithmic transparency grow in importance and urgency. Yet few in-depth empirical studies of real-world AI design and governance processes exist from which to draw lessons for the future state of practice. In this dissertation, I present an in-depth qualitative analysis of the design processes and transparency practices surrounding Technology-Assisted Review, or TAR: an AI-driven workflow that has been in use in the U.S. civil justice system for over a decade. Through extensive interviews with computer scientists, lawyers, and judges, as well as archival analysis of government research and U.S. civil court documents, I uncover a model of AI participatory design previously unrecognized in the literature. I also uncover a cautionary tale about what is likely to occur when AI governance is not treated as a design problem on the same order as AI system architecture or workflow. Leveraging these insights, I propose a move beyond current formulations of “human-centered design,” which focus on individual preferences, beliefs, and values, toward an institution-centered approach. This approach treats AI design as an integrative task anchored in a deep analysis of the governing norms, precedents, and structures that constitute the institutions into which AI systems are to be embedded.