Show simple item record

dc.contributor.author Risk, Benjamin
dc.identifier.other bibid: 9333121
dc.description.abstract This dissertation explores dependence patterns using a range of statistical methods: from estimating latent factors in multivariate analysis to mixed modeling of spatially and temporally dependent data. The methods may be applied to many scientific problems and types of data, but here we focus on the application to functional magnetic resonance imaging (fMRI). In the first chapter, we examine differences between independent component analyses (ICAs) arising from different assumptions, measures of dependence, and starting points of the algorithms. ICA is a popular method with diverse applications including artifact removal in electrophysiology data, feature extraction in microarray data, and identifying brain networks in fMRI. ICA can be viewed as a generalization of principal component analysis (PCA) that takes into account higher-order cross-correlations. Whereas the PCA solution is unique, there are many ICA methods, whose solutions may differ. Infomax, FastICA, and JADE are commonly applied to fMRI studies, with FastICA being arguably the most popular. A previous study demonstrated that ProDenICA outperformed FastICA in simulations with two components. We introduce the application of ProDenICA to simulations with more components and to fMRI data. ProDenICA was more accurate in simulations, and we identified differences between biologically meaningful ICs from ProDenICA versus other methods in the fMRI analysis. ICA methods require non-convex optimization, yet current practices do not recognize the importance of, nor adequately address sensitivity to, initial values. We found that local optima led to dramatically different estimates in both simulations and group ICA of fMRI, and we provide evidence that the global optimum from ProDenICA is the best estimate.
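The sensitivity of ICA to initial values described above can be illustrated with a minimal sketch (ours, not the dissertation's code), using scikit-learn's FastICA on simulated non-Gaussian sources and re-running the algorithm from several arbitrary random starting points:

```python
# Minimal illustration (not from the dissertation) of ICA's dependence
# on starting values: FastICA is run from several random initializations
# on the same mixed data, and local optima can yield different unmixing
# estimates. The source distributions and seeds are arbitrary choices.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n, k = 2000, 3

# Three non-Gaussian sources: heavy-tailed, bounded, and bimodal.
S = np.column_stack([
    rng.laplace(size=n),
    rng.uniform(-1, 1, size=n),
    np.sign(rng.standard_normal(n)) + 0.3 * rng.standard_normal(n),
])
A = rng.standard_normal((k, k))   # random mixing matrix
X = S @ A.T                        # observed mixtures

# One ICA fit per random initialization; comparing the resulting
# component estimates across seeds exposes sensitivity to local optima.
estimates = [
    FastICA(n_components=k, random_state=seed, max_iter=1000).fit_transform(X)
    for seed in range(5)
]
```

Comparing the columns of the different `estimates` (up to sign and permutation) is one simple way to probe whether the optimizer has reached distinct local optima.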
We applied a modification of the Hungarian (Kuhn-Munkres) algorithm to match ICs from multiple estimates, thereby gaining novel insights into how brain networks vary in their sensitivity to initial values and ICA method. The manuscript resulting from this research is co-authored by David Matteson, David Ruppert, Ani Eloyan (Johns Hopkins University), and Brian Caffo (Johns Hopkins University). In the second chapter, we develop a new approach for dimension reduction and latent variable estimation by maximizing a non-Gaussian likelihood. ICA is popular in many applications, including cognitive neuroscience and signal processing. Due to computational constraints, principal component analysis is used for dimension reduction prior to ICA (PCA-ICA), which could remove important information. To address this issue, we propose likelihood component analysis (LCA), in which dimension reduction and latent variable estimation are achieved simultaneously by maximizing a likelihood with Gaussian and non-Gaussian components. We present a parametric model using the logistic density and a semi-parametric version using tilted Gaussians with cubic B-splines. We implement an algorithm scalable to datasets common in applications (e.g., hundreds of thousands of observations across hundreds of variables with dozens of latent components). In simulations, our methods recover latent components that are discarded by PCA-ICA methods. PCA-ICA is a popular technique to identify artifacts in fMRI. We apply our method to an experiment from the Human Connectome Project with state-of-the-art temporal and spatial resolution, and identify an artifact using LCA that was missed by PCA-ICA. Our results suggest that likelihood component analysis can detect novel signals in neuroimaging. The third chapter is a departure from the previous topics as it develops a model with Gaussian assumptions.
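The component-matching step described above can be sketched generically. This is an illustration of Hungarian matching on absolute correlation, not the authors' implementation; the function name `match_components` and the cost choice are our assumptions. ICA leaves the sign and order of components unidentified, so matching on |correlation| is a natural cost:

```python
# Generic sketch of matching independent components from two ICA runs
# with the Hungarian (Kuhn-Munkres) algorithm, as implemented by
# scipy.optimize.linear_sum_assignment. Hypothetical helper, not the
# dissertation's code.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_components(S1, S2):
    """Match the columns of S2 to those of S1.

    S1, S2: (n_samples, k) arrays of estimated components.
    Returns the column order for S2 and the matched |correlations|.
    """
    k = S1.shape[1]
    # |corr| between every pair of components from the two runs.
    C = np.abs(np.corrcoef(S1, S2, rowvar=False)[:k, k:])
    row, col = linear_sum_assignment(-C)  # maximize total |corr|
    return col, C[row, col]

# Example: a column permutation with sign flips is recovered exactly.
rng = np.random.default_rng(1)
S1 = rng.standard_normal((500, 4))
S2 = -S1[:, [2, 0, 3, 1]]          # permuted, sign-flipped copy
order, scores = match_components(S1, S2)
```

Here `S2[:, order]` aligns with `S1` column by column, with each matched pair having |correlation| near 1.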
Functional magnetic resonance imaging (fMRI) can be used to locate which areas of the brain are activated by thoughts and/or behaviors. In order to assess activation, fMRI data are analyzed by fitting univariate models at every location in the brain, which is called the massive univariate approach. Prior to fitting these models, fMRI data are smoothed for two reasons: to increase the power to detect activated locations and to increase the overlap of corresponding features. However, this decreases the precision with which activation is localized. There is no clear answer to how much smoothing should be used. Moreover, technological improvements that increase the resolution of fMRI data cannot be used to increase the resolution of localization if too much smoothing is used. We propose a spatiotemporal mixed model that chooses smoothing in a principled manner that balances its costs and benefits. The model includes a vertex random effect common to all subjects that captures local deviations from regional activation, which obviates the need for smoothing to increase power. The model also includes a subject-vertex random effect that allows subject-specific deviations from the population-level activation, which obviates the need for smoothing to increase the overlap between features in different subjects. We apply our method to high resolution (2 x 2 x 2 mm) and high frequency (0.72 seconds between scans) fMRI data from the Human Connectome Project and demonstrate the ability to automate smoothing via a unified spatiotemporal mixed model involving a covariance matrix with dimensions 326 million by 326 million.
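The random-effects structure described above can be sketched in hedged form; the notation below is our own choice, and the dissertation's exact parameterization may differ:

```latex
% Hypothetical notation: subject i, vertex v, scan t; r(v) is the
% region containing vertex v. Not the dissertation's exact model.
y_{ivt} = \mathbf{x}_{t}^{\top}\boldsymbol{\beta}_{r(v)}
        + b_{v} + b_{iv} + \varepsilon_{ivt},
\qquad
b_{v} \sim N\!\left(0, \sigma_{b}^{2}\right), \quad
b_{iv} \sim N\!\left(0, \sigma_{s}^{2}\right),
```

where $\boldsymbol{\beta}_{r(v)}$ captures regional activation, $b_{v}$ is the vertex random effect common to all subjects (local deviation from regional activation), $b_{iv}$ is the subject-vertex random effect (subject-specific deviation), and $\varepsilon_{ivt}$ is a spatiotemporally correlated error term.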
dc.subject dimension reduction
dc.subject spatiotemporal dependence
dc.subject functional magnetic resonance imaging
dc.title Topics In Independent Component Analysis, Likelihood Component Analysis, And Spatiotemporal Mixed Modeling
dc.type dissertation or thesis
thesis.degree.name Ph. D., Statistics
