Title: Robust and Scalable Spectral Topic Modeling for Large Vocabularies
Author: Cho, Sungjun
Type: dissertation or thesis
Date issued: 2020-05
Date accessioned: 2020-08-10
Date available: 2020-08-10
Identifier: Cho_cornell_0058O_10866
URI: http://dissertations.umi.com/cornell:10866
URI: https://hdl.handle.net/1813/70307
DOI: https://doi.org/10.7298/eymc-6724
Extent: 56 pages
Language: en
Subjects: Natural Language Processing; Nonlinear Dimensionality Reduction; Spectral Methods; Unsupervised Learning

Abstract: Across many data domains, co-occurrence statistics about the joint appearance of objects are powerfully informative. In topic modeling, spectral methods can provably learn low-dimensional latent topics from easily collected word co-occurrence statistics, unlike likelihood-based methods, which require repeated passes through the corpus. However, spectral methods suffer from two major drawbacks: the quality of the learned topics deteriorates drastically when the empirical data do not follow the generative model, and the co-occurrence statistics themselves grow to an intractable size for large vocabularies. This thesis attempts to overcome these drawbacks by developing a scalable and robust spectral topic inference framework based on Joint Stochastic Matrix Factorization. First, we provide theoretical foundations for spectral topic inference as well as stepwise algorithmic implementations of our anchor-based approach, which can learn quality topics despite model-data mismatch. We then scale toward larger vocabularies by operating solely on compressed low-rank representations of the co-occurrence statistics, keeping the overall cost linear in the vocabulary size. Quantitative and qualitative experiments on various datasets not only demonstrate our framework's consistency and efficiency in inferring high-quality topics, but also show improvements in the interpretability of the individual topics.
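
Note: the following is not part of the record. As an illustration of the anchor-based spectral approach the abstract describes, here is a minimal sketch of anchor finding and topic recovery from a row-normalized word co-occurrence matrix, in the style of the standard separable (anchor-word) setup. The function names, the synthetic data, and the greedy farthest-point/NNLS choices are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np
from scipy.optimize import nnls

def find_anchors(Q_bar, k):
    """Greedily pick k anchor rows of the row-normalized co-occurrence
    matrix Q_bar (V x V) by repeatedly taking the row of largest norm
    and projecting all rows off its direction (farthest-point sketch)."""
    Qp = Q_bar.astype(float).copy()
    anchors = [int(np.argmax((Qp ** 2).sum(axis=1)))]
    for _ in range(k - 1):
        v = Qp[anchors[-1]].copy()
        v /= np.linalg.norm(v)
        Qp -= np.outer(Qp @ v, v)          # remove the chosen direction
        anchors.append(int(np.argmax((Qp ** 2).sum(axis=1))))
    return anchors

def recover_topics(Q_bar, anchors, p_w):
    """Express each word's co-occurrence row as a convex combination of
    the anchor rows (nonnegative least squares), giving p(topic | word),
    then use the word marginals p_w to obtain p(word | topic)."""
    V, k = Q_bar.shape[0], len(anchors)
    B = Q_bar[anchors].T                   # V x k: columns are anchor rows
    C = np.zeros((V, k))
    for w in range(V):
        coef, _ = nnls(B, Q_bar[w])
        C[w] = coef / max(coef.sum(), 1e-12)
    A = C * p_w[:, None]                   # joint p(word, topic), unnormalized
    return A / A.sum(axis=0, keepdims=True)

# Tiny synthetic example: 9 words, 3 topics, words 0-2 are true anchors.
A_true = np.array([
    [0.40, 0.00, 0.00],
    [0.00, 0.40, 0.00],
    [0.00, 0.00, 0.40],
    [0.20, 0.10, 0.10],
    [0.10, 0.20, 0.10],
    [0.10, 0.10, 0.20],
    [0.10, 0.10, 0.10],
    [0.10, 0.05, 0.05],
    [0.00, 0.05, 0.05],
])
R = np.array([[0.25, 0.05, 0.05],          # topic-topic joint distribution
              [0.05, 0.25, 0.05],
              [0.05, 0.05, 0.20]])
Q = A_true @ R @ A_true.T                  # word-word joint co-occurrence
p_w = Q.sum(axis=1)                        # word marginals
Q_bar = Q / p_w[:, None]                   # row-normalized conditionals

anchors = find_anchors(Q_bar, 3)
A_rec = recover_topics(Q_bar, anchors, p_w)
```

On exact separable data of this form, the greedy selection recovers the true anchor words and the NNLS step reconstructs the topic-word distributions up to topic permutation; the thesis's contribution is making this pipeline robust when real data violate these assumptions, and keeping the cost linear in vocabulary size via low-rank compression of Q.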