Cornell University Library

eCommons


Latent Variable Based Methods for Matrix Clustering and Generalized Linear Models

Access Restricted

Access to this document is restricted. Some items have been embargoed at the request of the author, but will be made publicly available after the "No Access Until" date.

During the embargo period, you may request access to the item by clicking the link to the restricted file(s) and completing the request form. If we have contact information for a Cornell author, we will contact the author and request permission to provide access. If we do not have contact information for a Cornell author, or the author denies or does not respond to our inquiry, we will not be able to provide access. For more information, review our policies for restricted content.

File(s)
Lee_cornellgrad_0058F_14592.pdf (1.53 MB)
No Access Until
2026-09-03
Permanent Link(s)
https://doi.org/10.7298/hkb5-9330
https://hdl.handle.net/1813/116503
Collections
Cornell Theses and Dissertations
Author
Lee, Inbeom
Abstract

Despite modern advances, it is still impossible to observe every relevant factor in a phenomenon. It is therefore important to acknowledge, and take advantage of, latent variables when building statistical models. This dissertation introduces two problems of statistical interest and develops novel methods that incorporate latent variables to better address them. The first problem concerns high-dimensional matrix clustering. Matrix-valued data have become increasingly prevalent in many applications. Most existing clustering methods for this type of data are tailored to the mean model and do not account for the dependence structure of the features, which can be very informative, especially in high-dimensional settings or when mean information is unavailable. To extract the information in the dependence structure for clustering, we propose a new latent variable model for the features arranged in matrix form, with unknown membership matrices representing the clusters of the rows and columns. Under this model, we further propose a class of hierarchical clustering algorithms that use the difference of a weighted covariance matrix as the dissimilarity measure. Theoretically, we show that under mild conditions our algorithm attains clustering consistency in the high-dimensional setting. While this consistency result holds for our algorithm with a broad class of weighted covariance matrices, the conditions under which it holds depend on the choice of the weight. To investigate how the weight affects the theoretical performance of the algorithm, we establish the minimax lower bound for clustering under our latent variable model in terms of a cluster separation metric. Given these results, we identify the optimal weight, in the sense that using it guarantees that our algorithm is minimax rate-optimal. The practical implementation of the algorithm with the optimal weight is also discussed.
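As a rough, hypothetical sketch of the idea (not the dissertation's exact algorithm), one can cluster features hierarchically using distances between rows of a weighted covariance matrix. The weight vector `w` below is a placeholder (uniform weights) standing in for the optimal weight derived in the thesis, and the data are simulated for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def weighted_cov_dissimilarity(X, w):
    """Pairwise feature dissimilarities from a weighted covariance matrix.

    X: (n, p) data matrix; w: length-n nonnegative sample weights.
    The dissimilarity between features j and k is taken as the Euclidean
    distance between rows j and k of the weighted covariance matrix, a
    simplified stand-in for the dissertation's dissimilarity measure.
    """
    Xc = X - np.average(X, axis=0, weights=w)
    S = (Xc * w[:, None]).T @ Xc / w.sum()       # weighted covariance (p x p)
    return np.linalg.norm(S[:, None, :] - S[None, :, :], axis=2)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
X[:, :3] += rng.standard_normal((100, 1))        # shared factor: features 0-2 cluster

D = weighted_cov_dissimilarity(X, np.ones(100))  # uniform weights as placeholder
condensed = D[np.triu_indices_from(D, k=1)]      # condensed form expected by scipy
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
```

With this toy construction, the three features sharing a latent factor end up in one cluster and the independent features in the other, even though all features have mean zero, illustrating how dependence alone can drive the clustering.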
Simulation studies show that our algorithm outperforms existing methods in terms of the adjusted Rand index (ARI). The method is applied to a genomic dataset and yields meaningful interpretations. The second problem concerns multivariate response generalized linear models with hidden variables. Hidden variables introduce estimation bias and are often difficult to handle, especially in non-linear models. Motivated by this practical issue, we study the multivariate response generalized linear model with hidden variables, where $Y \in \mathbb{R}^M$ is an $M$-dimensional response variable, $X \in \mathbb{R}^p$ is a $p$-dimensional vector of observed covariates, and $Z \in \mathbb{R}^K$ is a $K$-dimensional vector of hidden variables. $\Theta \in \mathbb{R}^{M \times p}$ and $B \in \mathbb{R}^{M \times K}$ are coefficient matrices corresponding to the observed covariates and the hidden variables, respectively, and $\Theta_m$ and $B_m$ denote the $m$-th rows of the two matrices. The multivariate response generalized linear model is represented as $M$ separate generalized linear models, where the $m$-th model characterizes the relationship between the conditional mean of the $m$-th response variable $Y_m$ and the linear predictor $\Theta_m X + B_m Z$. We consider the regime where $p$ and $M$ grow with the sample size $n$, but where $K \leq p \leq n$ and $K \leq M$ hold. We propose a two-step method, G-HIVE, that estimates $\Delta := P_B^{\perp}\Theta$, the projection of $\Theta$ onto the orthogonal complement of the column space of $B$. Estimating $\Delta$ is meaningful because the quantity $\Delta X$ captures the effect of $X$ on $Y$ that cannot be explained through the hidden variables $Z$; it is referred to as the partial direct effect of $X$ on $Y$ in the mediation analysis literature.
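The estimand $\Delta = P_B^{\perp}\Theta$ can be illustrated with a small numerical sketch. The dimensions and matrices below are arbitrary illustrations, not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(1)
M, p, K = 8, 5, 2                    # illustrative sizes: M responses, p covariates, K hidden
Theta = rng.standard_normal((M, p))  # coefficients of observed covariates
B = rng.standard_normal((M, K))      # coefficients of hidden variables

# Projection onto the orthogonal complement of col(B), assuming B has
# full column rank: P_B_perp = I - B (B^T B)^{-1} B^T.
P_B_perp = np.eye(M) - B @ np.linalg.solve(B.T @ B, B.T)
Delta = P_B_perp @ Theta             # the estimand Delta = P_B^perp Theta
```

By construction $B^T \Delta = 0$: the component of $\Theta$ lying in the column space of $B$, i.e. the part confounded with the hidden variables, is removed, which is what makes $\Delta X$ interpretable as a partial direct effect.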
G-HIVE first incorporates an estimating equations step (EE step for short) that involves a reweighted residual scheme to obtain $\hat{F}_m$, the best estimate of $\Theta_m$ without accounting for the hidden variables. We also obtain estimates of the residuals $\hat{\epsilon}_m$ using the best linear predictor $\hat{F}_m X$ from this first step. Second, a PCA step is applied to the second moment of the residuals, $\hat{\epsilon}\hat{\epsilon}^T/n$, to extract information on $B$. Specifically, $\hat{P}_B^{\perp}$, an estimate of the projection matrix onto the space orthogonal to the column space of $B$, is constructed. Finally, $\hat{\Delta} = \hat{P}_B^{\perp}\hat{F}$ is formed from the estimates obtained in the two steps. Error bounds on $\|\hat{\Delta}-\Delta\|_F/\sqrt{M}$ are established, showing that this quantity is stochastically bounded under reasonable assumptions in the above regime for $n$, $p$, $M$, and $K$. Simulation results show that G-HIVE can outperform the naive comparison method in several settings, and real data analyses show that G-HIVE can be feasibly implemented with reasonable results.
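A minimal sketch of the two-step idea in the special case of an identity link (ordinary multivariate linear regression), with all dimensions and data simulated for illustration. The dissertation's actual EE step with reweighted residuals for general link functions is more involved; this only shows the regress-then-PCA-then-project structure:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, M, K = 500, 4, 10, 2                       # illustrative sizes
X = rng.standard_normal((n, p))                  # observed covariates
Z = rng.standard_normal((n, K))                  # hidden variables
Theta = rng.standard_normal((M, p))
B = rng.standard_normal((M, K))
Y = X @ Theta.T + Z @ B.T + 0.1 * rng.standard_normal((n, M))

# Step 1 (EE step, identity link): regress each response on X alone.
# The residuals still carry the hidden-variable signal Z B^T.
F_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T   # (M, p)
eps = Y - X @ F_hat.T                            # (n, M) residuals

# Step 2 (PCA step): the top-K eigenvectors of the residual second
# moment eps^T eps / n approximately span col(B).
S = eps.T @ eps / n
B_hat = np.linalg.eigh(S)[1][:, -K:]             # orthonormal (M, K)
P_perp = np.eye(M) - B_hat @ B_hat.T             # estimated projection

Delta_hat = P_perp @ F_hat                       # final two-step estimate

# Compare to the truth Delta = P_B^perp Theta and to the naive estimate
# F_hat, which ignores the hidden variables entirely.
Delta = (np.eye(M) - B @ np.linalg.solve(B.T @ B, B.T)) @ Theta
err = np.linalg.norm(Delta_hat - Delta) / np.sqrt(M)
err_naive = np.linalg.norm(F_hat - Delta) / np.sqrt(M)
```

In this toy setting `err` comes out well below `err_naive`, since the naive coefficient matrix retains the component of $\Theta$ lying in the column space of $B$ that the projection step removes.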

Description
176 pages
Date Issued
2024-08
Keywords
Clustering • Generalized Linear Model • Latent Variable Model
Committee Chair
Ning, Yang
Committee Member
Bunea, Florentina
Wang, Yu-hsuan
Degree Discipline
Statistics
Degree Name
Ph. D., Statistics
Degree Level
Doctor of Philosophy
Rights
Attribution 4.0 International
Rights URI
https://creativecommons.org/licenses/by/4.0/
Type
dissertation or thesis
Link(s) to Catalog Record
https://newcatalog.library.cornell.edu/catalog/16611932
