Reliable Deep Learning with Application to Digital Histopathology Image Analysis
Deep learning has achieved tremendous success over the past decade, pushing the limits in application domains such as computer vision and natural language processing. Despite these advances, recent work has demonstrated potential risks associated with modern neural networks, undermining the reliability of deep learning systems in real-world applications. In this thesis, we consider several challenges associated with the reliable application of neural networks. Specifically, the broad notion of reliability is broken down into two aspects. First, typical deep learning systems are prone to overfitting to the noisy labels commonly present in large-scale datasets, leading to sub-optimal performance. To alleviate this problem, we propose a novel loss function and demonstrate its robustness against label noise. Second, prior work has highlighted shortcomings in the uncertainty quantification of neural networks, which can significantly hamper the interpretability of their predictions. We discuss several strategies for obtaining neural networks with better-calibrated uncertainty estimates. Lastly, as a case study, we apply deep learning to a large-scale whole-slide histopathology image classification task and demonstrate the effectiveness of such a system for a real-world medical application.