Luck, Fairness and Bayesian Tensor Completion
Gilbert, Daniel E.
This thesis contains papers on three diverse topics. The first topic is luck in games and how to measure it. Game theory is the study of tractable games, which may be used to model more complex systems. Board games, video games and sports, however, are intractable by design, so "ludological" theories about these games as complex phenomena should be grounded in empiricism. A first "ludometric" concern is the empirical measurement of the amount of luck in various games. We argue against a narrow view of luck that includes only factors outside any player's control, and advocate instead for a holistic definition of luck as complementary to the variation in effective skill within a population of players. We introduce two metrics for luck in a game for a given population, one information-theoretic and one Bayesian, and discuss the estimation of these metrics using sparse, high-dimensional regression techniques. Finally, we apply these techniques to compare the amount of luck between various professional sports, between Chess and Go, and between two hobby board games: Race for the Galaxy and Seasons.

The second topic centers on matrix and tensor completion, frameworks for a wide range of problems including collaborative filtering, missing data, and image reconstruction. Missing entries are estimated by leveraging the assumption that the matrix or tensor is low-rank. Most existing Bayesian techniques encourage rank-sparsity by modelling factorized matrices and tensors with Normal-Gamma priors. However, the Horseshoe prior and other "global-local" formulations provide tuning-parameter-free alternatives that may better achieve simultaneous rank-sparsity and missing-value recovery. We find that these global-local priors outperform commonly used alternatives in simulations and in a collaborative filtering task predicting board game ratings.

The third topic is a review of, and a novel perspective on, fairness in algorithms.
A substantial portion of the literature on fairness in algorithms proposes, analyzes, and operationalizes simple formulaic criteria for assessing fairness. Two of these criteria, Equalized Odds and Calibration by Group, have gained significant attention for their simplicity and intuitive appeal, but also for their mutual incompatibility. This chapter provides a perspective on the meaning and consequences of these and other fairness criteria using graphical models, an analysis which reveals Equalized Odds and related criteria to be ultimately misleading. An assessment of various graphical models suggests that fairness criteria should be case-specific and sensitive to the nature of the information the algorithm processes.
luck; matrix completion; tensor completion; statistics; Bayesian; equalized odds; fairness
Wells, Martin Timothy
Booth, James; Wilson, Andrew Gordon
Doctor of Philosophy
dissertation or thesis