Statistical Inference for Machine Learning: Feature Importance, Uncertainty Quantification and Interpretation Stability
dc.contributor.author | Zhou, Zhengze
dc.contributor.chair | Hooker, Giles J.
dc.contributor.committeeMember | Weinberger, Kilian Quirin
dc.contributor.committeeMember | Udell, Madeleine Richards
dc.date.accessioned | 2021-09-09T17:41:12Z
dc.date.available | 2021-09-09T17:41:12Z
dc.date.issued | 2021-05
dc.description | 171 pages
dc.description.abstract | Machine learning has become ubiquitous in many areas, including high-stakes applications such as autonomous driving, financial forecasting and clinical decision making. However, many models are complex in nature and act as "black boxes," providing predictions but little insight into how they were arrived at. In this thesis, we present work from three different perspectives toward a better understanding of several types of machine learning models through the lens of statistical inference.

Tree-based methods, including decision trees, random forests and gradient boosting machines, are a popular class of nonparametric statistical models, widely used owing to their flexibility and strong predictive performance. Many practitioners rely on some form of feature importance measure to examine model behavior. Split-improvement variable importance measures, however, have been shown to be biased toward increasing the importance of features with more potential splits. We propose a modification for random forests and other tree-based methods: by appropriately incorporating split-improvement as measured on out-of-sample data, this bias can be corrected, yielding better summaries and screening tools.

Our next study focuses on understanding statistical properties and quantifying uncertainty for ensemble models. Tree-based ensembles like random forests remain one popular option for which several important theoretical advances have been made in recent years by drawing on the connection between their natural subsampled structure and the classical theory of U-statistics. Unfortunately, the procedures for estimating predictive variance that result from these studies are plagued by severe bias and extreme computational overhead. Here, we argue that the root of these problems lies in the structure of the resamples themselves. We develop a general framework for analyzing the asymptotic behavior of V-statistics, demonstrating asymptotic normality under precise regularity conditions and establishing previously unreported connections to U-statistics. Importantly, these findings yield a natural and efficient means of estimating the variance of a conditional expectation, a problem of wide interest across multiple scientific domains that also lies at the heart of uncertainty quantification for supervised learning ensembles. As an application, we use this result to design a stopping rule for determining the ideal tree depth in model distillation.

Lastly, we investigate the stability of model explanations. Post hoc explanations based on perturbations are a widely used approach to interpreting a machine learning model after it has been built. This class of methods has been shown to exhibit large instability, posing serious challenges to its effectiveness and harming user trust. We propose a new algorithm, S-LIME, which uses a hypothesis testing framework based on the central limit theorem to determine the number of perturbation points needed to guarantee stability of the resulting explanation. Experiments on both simulated and real-world data sets demonstrate the effectiveness of our method. (Illustrative code sketches of these three ideas follow the metadata record below.)
dc.identifier.doi | https://doi.org/10.7298/6cj0-wd06
dc.identifier.other | Zhou_cornellgrad_0058F_12499
dc.identifier.other | http://dissertations.umi.com/cornellgrad:12499
dc.identifier.uri | https://hdl.handle.net/1813/109830
dc.language.iso | en
dc.rights | Attribution 4.0 International
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/
dc.subject | Decision Trees
dc.subject | Ensembles
dc.subject | Feature Importance
dc.subject | Model Interpretation
dc.subject | Stability
dc.subject | Uncertainty Quantification
dc.title | Statistical Inference for Machine Learning: Feature Importance, Uncertainty Quantification and Interpretation Stability
dc.type | dissertation or thesis
dcterms.license | https://hdl.handle.net/1813/59810
thesis.degree.discipline | Statistics
thesis.degree.grantor | Cornell University
thesis.degree.level | Doctor of Philosophy
thesis.degree.name | Ph. D., Statistics
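The abstract's first contribution is debiasing split-improvement importance by scoring splits on out-of-sample data. The sketch below re-measures each split's variance reduction of a fitted scikit-learn tree on held-out data; it is a simplified illustration of that idea, not the thesis's estimator, and the function name `heldout_split_importance` and all defaults are assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

def heldout_split_importance(tree, X_val, y_val):
    """Accumulate per-feature variance reductions, re-measured on held-out data."""
    t = tree.tree_
    imp = np.zeros(X_val.shape[1])
    node_rows = {0: np.arange(X_val.shape[0])}  # validation rows reaching each node
    stack = [0]
    while stack:
        node = stack.pop()
        idx = node_rows[node]
        left, right = t.children_left[node], t.children_right[node]
        if left == -1 or idx.size == 0:  # leaf, or no held-out data reaches node
            continue
        go_left = X_val[idx, t.feature[node]] <= t.threshold[node]
        node_rows[left], node_rows[right] = idx[go_left], idx[~go_left]
        stack += [left, right]
        if node_rows[left].size == 0 or node_rows[right].size == 0:
            continue
        # weighted decrease in variance, evaluated on held-out responses
        dec = (idx.size * y_val[idx].var()
               - node_rows[left].size * y_val[node_rows[left]].var()
               - node_rows[right].size * y_val[node_rows[right]].var())
        imp[t.feature[node]] += dec
    return imp / X_val.shape[0]

X, y = make_regression(n_samples=2000, n_features=10, n_informative=3, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
fitted = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr)
print(heldout_split_importance(fitted, X_val, y_val))
```

On held-out data, splits on uninformative features tend to score near zero or negative, which is the debiasing effect the abstract describes for features with many potential splits.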
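The second contribution centers on estimating the variance of a conditional expectation. As context only, here is the classical nested Monte Carlo estimator of Var_X(E[Y | X]) with the standard correction for inner-sample noise; the thesis develops a more efficient V-statistics-based estimator, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

def var_of_conditional_mean(simulate, n_outer=2000, n_inner=50):
    """Nested Monte Carlo estimate of Var_X(E[Y | X])."""
    inner_means = np.empty(n_outer)
    inner_vars = np.empty(n_outer)
    for i in range(n_outer):
        y = simulate(n_inner)            # n_inner draws of Y sharing one X
        inner_means[i] = y.mean()
        inner_vars[i] = y.var(ddof=1)
    # Var(inner mean) = Var(E[Y|X]) + E[Var(Y|X)] / n_inner,
    # so subtract the inner-noise term to debias the outer variance.
    return inner_means.var(ddof=1) - inner_vars.mean() / n_inner

# Toy model Y = X + eps with X, eps ~ N(0, 1), so Var(E[Y|X]) = Var(X) = 1.
def simulate(m):
    x = rng.normal()
    return x + rng.normal(size=m)

print(var_of_conditional_mean(simulate))  # should be close to 1
```

For ensembles, the inner expectation plays the role of the prediction averaged over resamples, which is why this quantity sits at the heart of uncertainty quantification for supervised learning ensembles.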
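Finally, S-LIME's core idea is to keep generating perturbation points until a central-limit-theorem-based test certifies that the reported explanation is stable. The sketch below distills that to deciding whether one feature's score reliably exceeds another's; `stable_ranking`, the batch sizes and the toy score distribution are illustrative assumptions, not the S-LIME implementation.

```python
import numpy as np
from scipy import stats

def stable_ranking(sampler, alpha=0.05, batch=200, max_n=20000):
    """sampler(n) -> (n, 2) array of paired feature scores from n perturbations."""
    scores = sampler(batch)
    while True:
        diff = scores[:, 0] - scores[:, 1]
        se = diff.std(ddof=1) / np.sqrt(diff.size)
        z = diff.mean() / se if se > 0 else np.inf
        if abs(z) > stats.norm.ppf(1 - alpha / 2):
            return diff.mean(), diff.size        # ranking certified stable
        if scores.shape[0] >= max_n:
            return diff.mean(), diff.size        # budget exhausted, inconclusive
        scores = np.vstack([scores, sampler(batch)])  # draw more perturbations

# Toy example: feature A's score is only slightly above feature B's,
# so more perturbations are needed before the ranking stabilizes.
rng = np.random.default_rng(0)
sampler = lambda n: rng.normal(loc=[0.30, 0.27], scale=0.5, size=(n, 2))
print(stable_ranking(sampler))
```

The design choice mirrors the abstract: rather than fixing the number of perturbations in advance, the test adapts it to the difficulty of the instance being explained.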
Files
Original bundle
- Name: Zhou_cornellgrad_0058F_12499.pdf
- Size: 1.84 MB
- Format: Adobe Portable Document Format