Triangle Census Research Network (TCRN)


The Triangle Census Research Network (TCRN) is an interdisciplinary team of researchers from Duke University and the National Institute of Statistical Sciences dedicated to improving the way that federal statistical agencies collect, analyze, and disseminate data to the public. http://sites.duke.edu/tcrn/


Recent Submissions

  • Bayesian multiple imputation for large-scale categorical data with structural zeros
    Manrique-Vallier, D.; Reiter, J. P. (Survey Methodology, 2013-12-18)
    We propose an approach for multiple imputation of items missing at random in large-scale surveys with exclusively categorical variables that have structural zeros. Our approach is to use mixtures of multinomial distributions as imputation engines, accounting for structural zeros by conceiving of the observed data as a truncated sample from a hypothetical population without structural zeros. This approach has several appealing features: imputations are generated from coherent, Bayesian joint models that automatically capture complex dependencies and readily scale to large numbers of variables. We outline a Gibbs sampling algorithm for implementing the approach, and we illustrate its potential with a repeated sampling study using public use census microdata from the state of New York, USA.
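    (A minimal code sketch of such a Gibbs-sampling imputation engine appears after this list.)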
  • Estimating identification disclosure risk using mixed membership models
    Manrique-Vallier, Daniel; Reiter, Jerome (Journal of the American Statistical Association, 2012)
    Statistical agencies and other organizations that disseminate data are obligated to protect data subjects' confidentiality. For example, ill-intentioned individuals might link data subjects to records in other databases by matching on common characteristics (keys). Successful links are particularly problematic for data subjects with combinations of keys that are unique in the population. Hence, as part of their assessments of disclosure risks, many data stewards estimate the probabilities that sample uniques on sets of discrete keys are also population uniques on those keys. This is typically done using log-linear modeling on the keys. However, log-linear models can yield biased estimates of cell probabilities for sparse contingency tables with many zero counts, a situation that often occurs in databases with many keys. This bias can result in unreliable estimates of probabilities of uniqueness and, hence, misrepresentations of disclosure risks. We propose an alternative to log-linear models for datasets with sparse keys based on a Bayesian version of grade of membership (GoM) models. We present a Bayesian GoM model for multinomial variables and offer an MCMC algorithm for fitting the model. We evaluate the approach by treating data from a recent US Census Bureau public use microdata sample as a population, taking simple random samples from that population, and benchmarking estimated probabilities of uniqueness against population values. Compared to log-linear models, GoM models provide more accurate estimates of the total number of uniques in the samples. Additionally, they offer record-level predictions of uniqueness that dominate those based on log-linear models.
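    (A minimal code sketch of this uniqueness-probability computation appears after this list.)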
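
Illustrative code sketches

The first abstract describes an imputation engine built from mixtures of multinomial distributions and fit by Gibbs sampling. The sketch below is a minimal illustration of that general idea, not the authors' implementation: it fixes the number of latent classes K, uses symmetric Dirichlet priors, and handles structural zeros with a simple rejection step against a user-supplied forbidden predicate instead of the truncated-sample construction described in the abstract. The function gibbs_impute and all of its parameter names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def gibbs_impute(X, levels, K=10, forbidden=lambda row: False, iters=200):
        """Impute missing entries (coded -1) in categorical data X (n x p).
        levels[j] is the number of categories of variable j."""
        n, p = X.shape
        Xc = X.copy()
        for j in range(p):                       # start missing cells at random levels
            miss = X[:, j] < 0
            Xc[miss, j] = rng.integers(levels[j], size=miss.sum())
        z = rng.integers(K, size=n)              # latent class assignments
        for _ in range(iters):
            # (1) class weights pi | z, via Dirichlet-multinomial conjugacy
            pi = rng.dirichlet(1.0 + np.bincount(z, minlength=K))
            # (2) per-class, per-variable multinomial parameters theta | z, X
            theta = []
            for j in range(p):
                t = np.empty((K, levels[j]))
                for k in range(K):
                    cnt = np.bincount(Xc[z == k, j], minlength=levels[j])
                    t[k] = rng.dirichlet(1.0 + cnt)
                theta.append(t)
            # (3) class memberships z | pi, theta, X
            logp = np.tile(np.log(pi), (n, 1))
            for j in range(p):
                logp += np.log(theta[j][:, Xc[:, j]]).T
            pr = np.exp(logp - logp.max(axis=1, keepdims=True))
            pr /= pr.sum(axis=1, keepdims=True)
            z = np.array([rng.choice(K, p=row) for row in pr])
            # (4) re-impute missing cells from their class's multinomials,
            #     rejecting draws that land in the structural-zero set
            for i in range(n):
                for j in np.where(X[i] < 0)[0]:
                    for _ in range(100):         # rejection cap
                        Xc[i, j] = rng.choice(levels[j], p=theta[j][z[i]])
                        if not forbidden(Xc[i]):
                            break
        return Xc

    # Toy usage: two ternary variables where the pair (0, 0) is a structural zero
    X = np.array([[0, 1], [1, -1], [2, 0], [-1, 2]])
    done = gibbs_impute(X, levels=[3, 3], K=3,
                        forbidden=lambda r: r[0] == 0 and r[1] == 0, iters=50)

To obtain multiple imputations, one would save the completed dataset Xc at several well-separated iterations rather than only at the end.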
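The second abstract estimates the probability that a record that is unique in the sample on a set of keys is also unique in the population, using a Bayesian grade of membership (GoM) model. The sketch below illustrates only the risk computation, under stated assumptions: the GoM parameters (per-class profile probabilities lam and Dirichlet membership hyperparameters alpha) are taken as given rather than fit by MCMC, the cell probability is approximated by Monte Carlo over membership vectors, and the uniqueness probability uses a generic Poisson approximation for the unseen population count, which is not necessarily the paper's estimator. The names cell_probability and prob_population_unique are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    def cell_probability(key, lam, alpha, draws=2000):
        """Monte Carlo estimate of P(record falls in cell `key`) under a GoM model.
        key:   tuple of category indices, one per key variable
        lam:   list over variables of (K, levels_j) arrays of profile probabilities
        alpha: length-K Dirichlet hyperparameters of the membership vectors"""
        g = rng.dirichlet(alpha, size=draws)          # (draws, K) membership vectors
        p = np.ones(draws)
        for j, c in enumerate(key):
            p *= g @ lam[j][:, c]                     # mix class profiles per variable
        return p.mean()

    def prob_population_unique(key, lam, alpha, N, n):
        """P(population unique | sample unique) for a record in cell `key`,
        approximating the unseen count in the cell as Poisson((N - n) * pi)."""
        pi = cell_probability(key, lam, alpha)
        return np.exp(-(N - n) * pi)

    # Toy usage with K = 3 hypothetical profiles over three binary keys:
    lam = [np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]) for _ in range(3)]
    alpha = np.array([0.5, 0.5, 0.5])
    print(prob_population_unique((0, 1, 1), lam, alpha, N=100_000, n=5_000))

In a full Bayesian treatment, one would average prob_population_unique over posterior draws of lam and alpha rather than plugging in fixed values.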