## Shrinkage Estimation For Penalised Regression, Loss Estimation And Topics On Largest Eigenvalue Distributions.

#####
**Author**

Narayanan, Rajendran

#####
**Abstract**

The dissertation can be broadly classified into four projects, presented in four chapters: (a) Stein estimation for l1-penalised regression and model selection, (b) loss estimation for model selection, (c) largest eigenvalue distributions of random matrices, and (d) the maximum domain of attraction of the Tracy-Widom distribution. In the first project, we construct Stein-type shrinkage estimators for the coefficients of a linear model, based on a convex combination of the Lasso and the least squares estimator. Since the Lasso constraint set is a closed and bounded polyhedron (a cross-polytope), we observe that under a general quadratic loss function the Lasso solution can be treated as a metric projection of the least squares estimator onto the constraint set. We derive analytical expressions for the decision-theoretic risk difference between the proposed Stein-type estimators and the Lasso, and establish data-based, verifiable conditions for risk gains of the proposed estimator over the Lasso. Following Stein's Unbiased Risk Estimation (SURE) framework, we further derive expressions for unbiased estimates of the prediction error for selecting the optimal tuning parameter. In the second project, we consider the following problem. For a random vector X, estimation of the unknown location parameter θ using an estimator d(X) is often accompanied by a loss function L(d(X), θ). The performance of such an estimator is usually evaluated using the risk of d(X). We consider estimating the loss function using an estimator λ(X) that is conditional on the actual observations, as opposed to an average over the sampling distribution of d(X). In this context, we consider estimating the loss function when the unknown mean vector θ of a multivariate normal distribution with an arbitrary covariance matrix is estimated using both the MLE and a shrinkage estimator.
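The first project's convex-combination idea can be illustrated with a minimal sketch. Under an orthonormal design the Lasso solution reduces to soft-thresholding of the OLS coefficients, so both ingredients of the combination have closed forms. The mixing weight `alpha` and penalty `lam` below are hypothetical placeholders; the dissertation's actual Stein-type estimators derive a data-based weight, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5

# Orthonormal design: columns of Q from a QR decomposition, so X.T @ X = I
# and the Lasso solution is soft-thresholding of the OLS coefficients.
X, _ = np.linalg.qr(rng.standard_normal((n, p)))
beta_true = np.array([3.0, 1.5, 0.0, 0.0, 2.0])
y = X @ beta_true + 0.5 * rng.standard_normal(n)

beta_ols = X.T @ y  # OLS estimate under the orthonormal design

lam = 0.1  # hypothetical penalty level
beta_lasso = np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam, 0.0)

alpha = 0.7  # hypothetical mixing weight (not the data-based weight of the thesis)
beta_combined = alpha * beta_lasso + (1 - alpha) * beta_ols
```

The combined estimator shrinks less aggressively than the Lasso while retaining some of its shrinkage toward sparsity; choosing `alpha` and `lam` from the data (e.g. via a SURE criterion) is where the decision-theoretic analysis enters.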
We derive sufficient conditions for inadmissibility of the unbiased estimators of loss for such a random vector. We further establish conditions for improved estimators of the loss function for a linear model when the Lasso is used as a model selection tool, and exhibit such an improved estimator. The largest eigenvalue of the Gaussian and Jacobi ensembles plays an important role in classical multivariate analysis and random matrix theory. Historically, computing the exact distribution of the largest eigenvalue has required extensive tables or specialised software. More recently, asymptotic approximations for the cumulative distribution function of the largest eigenvalue in both settings have been shown to have the Tracy-Widom limit. Our main results use a unified approach to derive the exact cumulative distribution function of the largest eigenvalue in both settings, in terms of elements of a matrix that have explicit scalar analytical forms. In the fourth chapter, the maximum of i.i.d. Tracy-Widom distributed random variables arising from the Gaussian unitary ensemble is shown to belong to the Gumbel domain of attraction. This theoretical result has potential applications wherever multiple comparisons are needed using the greatest root statistic.
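The fourth chapter's object of study — the maximum of i.i.d. Tracy-Widom (GUE) variates — can be simulated with the standard construction, not taken from the dissertation itself: draw an n × n GUE matrix, take its largest eigenvalue, and apply the usual centring and scaling, which for large n yields an approximate Tracy-Widom draw. The block sizes `m` and `n` are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def gue_largest_scaled(rng, n):
    """Approximate Tracy-Widom (GUE) draw: centred and scaled largest
    eigenvalue of an n x n GUE matrix with the standard normalisation
    (diagonal variance 1, off-diagonal complex variance 1)."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (A + A.conj().T) / 2          # Hermitian, GUE-distributed
    lam = np.linalg.eigvalsh(H)[-1]   # eigvalsh sorts ascending
    return (lam - 2 * np.sqrt(n)) * n ** (1 / 6)

# m independent approximate Tracy-Widom draws; their maximum is the
# greatest-root statistic whose limit (after affine renormalisation
# in m) the dissertation shows to be Gumbel.
m, n = 20, 40
tw = np.array([gue_largest_scaled(rng, n) for _ in range(m)])
tw_max = tw.max()
```

In a multiple-comparisons setting, a critical value for `tw_max` could then be approximated either by such Monte Carlo draws or, using the chapter's result, by the limiting Gumbel law.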

#####
**Date Issued**

2012-08-20

#####
**Subject**

Shrinkage Estimation; Loss Estimation; Distribution of Largest Eigenvalue; Domain of Attraction of Tracy-Widom

#####
**Committee Chair**

Wells, Martin Timothy

#####
**Committee Member**

Strawderman, Robert Lee; Nussbaum, Michael

#####
**Degree Discipline**

Statistics

#####
**Degree Name**

Ph.D., Statistics

#####
**Degree Level**

Doctor of Philosophy

#####
**Type**

dissertation or thesis