The Algorithmic Reassembling of Expertise: Credible Knowledge and Machine Learning in Medical Imaging
This dissertation provides an ethnographic account of an emerging mode of knowledge production: the cultivation of credible machine learning (ML) models to perform expert tasks, with a focus on image-based diagnostics in radiology within the rapidly growing Chinese medical artificial intelligence (AI) industry. By “learning” hidden patterns from large quantities of individual examples derived from previous expert judgments or decisions, ML models “reassemble” human expertise into a new algorithmic form. While such models are often claimed to “outperform” human experts, their mode of operation remains ultimately indecipherable to human logic. This raises critical questions: How does human expertise get translated into algorithmic models? And how is the credibility of such algorithmic knowledge established among stakeholders? Drawing on multi-sited ethnography at two Chinese medical AI startups and in-depth interviews with various actors, including medical annotators, ML engineers, and radiologists, this dissertation examines the day-to-day practices in three core phases of the medical AI pipeline: data annotation, model training, and clinical application. Adopting a relational view of expertise as enactments within specific sociotechnical networks, I develop conceptual tools in each empirical chapter to capture the distinctive dynamics of credible knowledge as the expertise network is reassembled through algorithmic involvement. These conceptualizations, including knowledge inscription, the code-data equilibrium, and human-machine alignment, all highlight the complex social, institutional, and material contexts that shape AI’s credibility. Overall, this dissertation elaborates the notion of the “algorithmic reassembling of expertise” by exploring how expertise is performed and enacted when ML algorithms are involved in various situations.
In all three phases of the AI pipeline examined, the sociotechnical networks enabling expert work are reassembled in different ways, and “medical expertise” is re-enacted at various human-algorithmic interfaces, distinct from regular clinical work. I argue that the performance of an expert task by an ML model occurs in a specific sociotechnical network, one that only partially overlaps with, and remains distinct from, that of human experts. The model’s credibility is contingent upon the network in which it is situated and cannot be reduced to its intrinsic, technical properties.