STRUCTURED LATENT FACTOR MODELS: IDENTIFIABILITY, ESTIMATION, INFERENCE AND PREDICTION
This work first introduces a novel estimation method, called LOVE, of the entries and structure of a loading matrix $A$ in a latent factor model $X = AZ + E$, for an observable random vector $X \in \mathbb{R}^p$, with correlated unobservable factors $Z\in\mathbb{R}^K$, with $K$ unknown, and uncorrelated noise $E$. Each row of $A$ is scaled and allowed to be sparse. In order to identify the loading matrix $A$, we require the existence of pure variables: components of $X$ that are associated, via $A$, with one and only one latent factor. Although the number of factors $K$, the number of pure variables, and their locations are all unknown, we only require a mild condition on the covariance matrix of $Z$, and a minimum of only two pure variables per latent factor, to show that $A$ is uniquely defined, up to signed permutations. Our proofs of model identifiability are constructive, and lead to our novel estimation method of the number of factors and of the set of pure variables from a sample of size $n$ of observations on $X$. This is the first step of our LOVE algorithm, which is optimization-free and has low computational complexity of order $p^2$. The second step of LOVE is an easily implementable linear program that estimates $A$. We prove that the resulting estimator is near minimax-rate optimal for $A$, with respect to the $\|\cdot\|_{\infty, q}$ loss, for $q \geq 1$, up to logarithmic factors in $p$, and that it can be minimax-rate optimal in many cases of interest. When there is an additional response $Y\in\mathbb{R}$ that is also generated from the same latent factor $Z\in \mathbb{R}^K$, with unknown $K < p$, the second part of this work studies both the inference on the regression coefficient $\beta\in \mathbb{R}^K$ relating $Y$ to $Z$, and the prediction of the response $Y$. For developing inferential tools, we construct computationally efficient estimators of $\beta$, along with estimators of other important model parameters.
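The pure-variable screening idea behind the first step of LOVE can be illustrated at the population level. The sketch below is a stylized simplification, not the paper's exact procedure: it assumes each row of $A$ is scaled so that pure variables attain the largest off-diagonal covariance magnitudes, and it uses a single hypothetical tolerance `delta` in place of the data-driven calibration a sample version requires.

```python
import numpy as np

def pure_variable_screen(Sigma, delta):
    """Stylized population-level sketch of pure-variable screening.

    Sigma : (p, p) covariance matrix of X.
    delta : tolerance (hypothetical; a sample version needs a
            data-driven choice).

    Returns clusters of estimated pure variables; the number of
    clusters serves as an estimate of the number of factors K.
    """
    p = Sigma.shape[0]
    off = np.abs(Sigma - np.diag(np.diag(Sigma)))
    M = off.max(axis=1)                 # M_i = max_{j != i} |Sigma_ij|
    # under the stylized scaling, pure variables attain the largest M_i
    pure = [i for i in range(p) if M[i] >= M.max() - delta]
    clusters, used = [], set()
    for i in pure:
        if i in used:
            continue
        # group i with the pure variables nearly attaining its maximum
        group = [j for j in pure if j == i or off[i, j] >= M[i] - delta]
        clusters.append(sorted(group))
        used.update(group)
    return clusters
```

For instance, with two factors, two pure variables per factor, and one mixed variable, the screen recovers the two pure-variable clusters and hence $K = 2$; the second step of LOVE would then estimate the remaining rows of $A$ by a linear program, which is omitted here.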
We benchmark the estimation of $\beta$ by first establishing its $\ell_2$-norm minimax lower bound, and show that our proposed estimator $\widehat \beta$ is minimax-rate adaptive. Our main contribution is a unified analysis of the component-wise Gaussian asymptotic distribution of $\widehat \beta$ and, especially, the derivation of a closed-form expression of its asymptotic variance, together with consistent variance estimators. The resulting inferential tools can be used when both $K$ and $p$ are independent of the sample size $n$, as well as when either or both vary with $n$, while allowing for $p > n$. This complements the only existing asymptotic normality results, obtained for a particular case of the model under consideration in the regime $K = O(1)$ and $p \rightarrow\infty$, but without a variance estimate. For predicting $Y$, we also provide a finite-sample prediction risk analysis of a class of linear predictors. Our primary contribution is in establishing finite-sample risk bounds for prediction with the ubiquitous Principal Component Regression (PCR) method, under the factor regression model, with the number of principal components adaptively selected from the data -- a form of theoretical guarantee that is surprisingly lacking from the PCR literature. To accomplish this, we prove a master theorem that establishes a risk bound for a large class of predictors, including the PCR predictor as a special case. This approach has the benefit of providing a unified framework for the analysis of a wide range of linear prediction methods under the factor regression setting. In particular, we use our main theorem to recover known risk bounds for the minimum-norm interpolating predictor, which has received renewed attention in the past two years, and for a prediction method tailored to a subclass of factor regression models with identifiable parameters.
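A minimal sketch of the PCR predictor under the factor regression setting is given below. The eigenvalue-ratio rule for choosing the number of components is one common heuristic, standing in for the data-adaptive selection analyzed in the text, which may differ; the function names and the simulated example are illustrative only.

```python
import numpy as np

def select_k(X, k_max):
    """Toy eigenvalue-ratio rule for picking the number of components:
    choose the index where consecutive singular values drop the most.
    A stand-in for the adaptive selection studied in the text."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    ratios = s[:k_max] / s[1:k_max + 1]
    return int(np.argmax(ratios)) + 1

def pcr_predict(X, y, X_new, k):
    """Principal Component Regression with k components: center X,
    project onto the top-k principal directions, regress y on the
    resulting scores, and predict at X_new."""
    mu, ybar = X.mean(axis=0), y.mean()
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V_k = Vt[:k].T                          # top-k principal directions
    theta, *_ = np.linalg.lstsq(Xc @ V_k, y - ybar, rcond=None)
    return ybar + (X_new - mu) @ V_k @ theta
```

On data generated from a factor regression model $X = ZA^\top + E$, $Y = Z^\top\beta$ with a strong two-factor signal, the ratio rule picks $k = 2$ and the PCR predictor closely tracks $Z_{\text{new}}^\top \beta$.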
To address the problem of selecting among a set of candidate predictors, we analyze a simple model selection procedure based on data splitting, and establish an oracle inequality under the factor model showing that the performance of the selected predictor is close to that of the best candidate.
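The data-splitting selector admits a short generic sketch. Everything here is illustrative rather than the paper's exact specification: the split fraction, the squared-error validation loss, and the two toy candidates (a constant predictor and ordinary least squares) are assumptions made for the example.

```python
import numpy as np

def split_select(candidates, X, y, frac=0.5, seed=0):
    """Generic data-splitting model selection sketch: fit each
    candidate on one half of the data, then pick the one with the
    smallest squared error on the held-out half.

    candidates : list of callables fit(X_train, y_train) -> predict,
                 where predict maps X_val to predictions.
    Returns (index of selected candidate, validation risks).
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(frac * len(y))
    tr, val = idx[:cut], idx[cut:]
    risks = []
    for fit in candidates:
        predict = fit(X[tr], y[tr])
        risks.append(float(np.mean((predict(X[val]) - y[val]) ** 2)))
    return int(np.argmin(risks)), risks
```

When the data are genuinely linear in $X$, the held-out risk of the least-squares candidate is far below that of the constant predictor, so the selector returns the better candidate, in line with the oracle inequality's guarantee.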