Topics In Penalized Estimation
Abstract
The use of regularization, or penalization, has become increasingly common in high-dimensional statistical analysis over the past several years, where a common goal is to simultaneously select important variables and estimate their effects. This goal can be achieved by minimizing some parameter-dependent "goodness of fit" function (e.g., the negative log-likelihood) subject to a penalty that promotes sparsity. Penalty functions that are nonsmooth (i.e., not differentiable) at the origin have received substantial attention, arguably beginning with the LASSO (Tibshirani, 1996). This dissertation consists of three parts, each related to penalized estimation. First, a general class of algorithms is proposed for optimizing an extensive variety of nonsmoothly penalized objective functions that satisfy certain regularity conditions. The proposed framework uses the majorization-minimization (MM) algorithm as its core optimization engine; the resulting algorithms rely on iterated soft-thresholding, implemented componentwise, allowing for fast, stable updates that avoid any high-dimensional matrix inversion. Local convergence theory is established for this class of algorithms under weaker assumptions than previously considered in the statistical literature. The second part extends the MM framework to finite mixture regression models, allowing for penalization of the regression coefficients within a potentially unknown number of components. Finally, a hierarchical structure imposed on the penalty parameter provides new motivation for the Minimax Concave Penalty (MCP) of Zhang (2010); the frequentist and Bayesian risks of the MCP thresholding estimator and several other thresholding estimators are compared and explored in detail.
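For reference, the following displays give the conventional definitions of the two thresholding rules named above; these are standard forms from the literature, not expressions quoted from the dissertation itself. The componentwise soft-thresholding operator with threshold \lambda \ge 0 is

\[
S_\lambda(z) = \operatorname{sign}(z)\,(|z| - \lambda)_+ ,
\]

and, in the orthonormal (pure thresholding) setting, the MCP estimator of Zhang (2010) with concavity parameter \gamma > 1 reduces to the firm-thresholding rule

\[
\hat\theta_{\lambda,\gamma}(z) =
\begin{cases}
\dfrac{\operatorname{sign}(z)\,(|z| - \lambda)_+}{1 - 1/\gamma}, & |z| \le \gamma\lambda, \\[1ex]
z, & |z| > \gamma\lambda,
\end{cases}
\]

which interpolates between hard thresholding (as \gamma \to 1) and soft thresholding (as \gamma \to \infty).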