Interpretable, Robust, and Controllable Machine Learning Methods for Medical Imaging

File(s)
Wang_cornellgrad_0058F_14198.pdf (24.24 MB)
Permanent Link(s)
https://doi.org/10.7298/tvwv-5306
https://hdl.handle.net/1813/116023
Collections
Cornell Theses and Dissertations
Author
Wang, Alan
Abstract

Machine learning (ML) algorithms fueling the advancements in artificial intelligence are leading to breakthroughs in medical image analysis. These algorithms enable fast and scalable automation of labor-intensive tasks like image registration and image reconstruction, while also showing promise on more complex, higher-level tasks like diagnosis and prognosis. At the same time, the healthcare arena that AI seeks to transform is formidable: healthcare is not only practiced by domain experts (e.g., doctors and radiologists) who undergo years of training, but is also a high-stakes setting where safety and trust are critical. There is thus a need for reliable and trustworthy ML in this arena that can interface with humans, perform well under varying conditions, and accept user input and feedback. This thesis presents several ML methods that approach reliability and trustworthiness along three directions: interpretability, robustness, and controllability.

The first method, HyperRecon, is a controllable image reconstruction method that leverages a "hypernetwork" to efficiently generate multiple plausible reconstructions at test time, each consistent with the data but visually diverse. The model can present the user with different reconstructions that are efficiently examined by "turning a knob," empowering the user to choose the solution they deem most appropriate for their specific real-world use case.

The second method, KeyMorph, is an interpretable, robust, and controllable image registration method that uses a deep neural network to extract corresponding keypoints in a pair of images and then solves in closed form for the transformation that aligns them. This approach yields not only a more interpretable and controllable registration via the keypoints, but also a more robust registration that is less sensitive to large initial misalignments.

The third method, the Nadaraya-Watson Head, is an interpretable and well-calibrated image classification method that can be seen as a "soft" version of a nearest-neighbors classifier: it makes predictions via comparisons with examples in the training dataset. Beyond interpretability and calibration, the model can further be leveraged to learn "invariant" representations of images drawn from multiple environments (e.g., hospitals) for robust domain generalization, starting from rigorous, causally informed assumptions about the data-generating process.

Finally, the thesis culminates in a framework for interpretability in machine learning and medical imaging. Unlike the preceding chapters, this chapter is not methodological in nature. Motivated by a perceived murkiness in what interpretability means, the framework formalizes the goals one seeks to address when interpretability is sought, and in so doing enables a step-by-step guide to approaching interpretability in this context. Overall, it aims to provide practical and didactic information for model designers and practitioners, inspire developers of models in the medical imaging field to reason more deeply about what interpretability achieves, and suggest future directions for interpretability research.
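KeyMorph's closed-form step can be illustrated with the affine case: once corresponding keypoints are extracted, the aligning transform is a linear least-squares solve with no iterative optimization. A minimal 2D sketch under that assumption (the learned keypoint detector is omitted; all names and data here are illustrative, not the thesis's actual implementation):

```python
import numpy as np

def closed_form_affine(moving_pts, fixed_pts):
    """Least-squares affine transform mapping moving keypoints onto fixed
    keypoints, solved in closed form using homogeneous coordinates."""
    n = moving_pts.shape[0]
    P_h = np.hstack([moving_pts, np.ones((n, 1))])       # (n, d+1) homogeneous
    # Solve min_X ||P_h X - Q||^2; each column of X maps to one fixed coordinate
    X, *_ = np.linalg.lstsq(P_h, fixed_pts, rcond=None)  # (d+1, d)
    return X.T                                           # (d, d+1): [linear | translation]

def apply_affine(A, pts):
    """Apply a (d, d+1) affine matrix to an (n, d) array of points."""
    n = pts.shape[0]
    return np.hstack([pts, np.ones((n, 1))]) @ A.T

# Toy usage: recover a known rotation + translation from four keypoint pairs
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([2.0, -1.0])
moving = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
fixed = moving @ R.T + t
A = closed_form_affine(moving, fixed)
```

Because the transform is a deterministic function of the keypoints, the registration is inspectable (the keypoints can be visualized) and controllable (editing a keypoint directly changes the solution), which is the interpretability argument made above.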
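The Nadaraya-Watson Head described above is, at heart, a kernel-weighted ("soft") nearest-neighbors classifier: the prediction for a query is a similarity-weighted average of the labels of training examples, and the weights reveal exactly which examples drove the prediction. A minimal sketch of that idea (the features, bandwidth, and data here are hypothetical stand-ins, not the thesis's actual architecture):

```python
import numpy as np

def nadaraya_watson_predict(query, support_feats, support_labels, bandwidth=1.0):
    """Soft nearest-neighbors: softmax over negative squared distances to
    support (training) features, then a weighted average of one-hot labels."""
    # Squared Euclidean distance from the query to every support feature
    d2 = np.sum((support_feats - query) ** 2, axis=1)
    # Gaussian-kernel weights, normalized to sum to 1 (a softmax)
    logits = -d2 / (2 * bandwidth ** 2)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    # Class probabilities as the weight-averaged one-hot labels
    n_classes = support_labels.max() + 1
    onehot = np.eye(n_classes)[support_labels]
    return w @ onehot

# Toy usage: two well-separated clusters of training features
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
probs = nadaraya_watson_predict(np.array([0.05, 0.05]), feats, labels)
```

The interpretability claim follows from the construction: inspecting `w` shows which training examples a prediction leaned on, and the output is a proper probability distribution, which is also where the calibration benefit comes from.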

Description
218 pages
Date Issued
2024-05
Keywords
Controllability • Deep Learning • Interpretability • Machine Learning • Medical Imaging • Robustness
Committee Chair
Sabuncu, Mert
Committee Member
Acharya, Jayadev
Xu, Chunhui
Degree Discipline
Electrical and Computer Engineering
Degree Name
Ph. D., Electrical and Computer Engineering
Degree Level
Doctor of Philosophy
Type
dissertation or thesis
Link(s) to Catalog Record
https://newcatalog.library.cornell.edu/catalog/16575590