Lexical Structure, Weightedness, And Information In Sentence Processing
corpus. We derive greater surprisals (Hale, 2001) for the unergative than for the unaccusative case, which we interpret as supporting the account that reduced relative clauses with unergative verbs present the human comprehender with compounded surprise. Overall, the thesis argues that the proper explanatory role of lexical semantics in sentence processing is complexity-based, and not a special appeal to lexically sensitive ameliorative effects (-Q verbs, as argued in Ross (1967); unaccusative verbs, as argued by Stevenson and Merlo (1997)). The thesis argues that the human sentence processor is optimized for human language, and that these classical sentence processing cases are better considered as corner cases where the human sentence processor fails to leverage the lexicon. In such cases, lexical ambiguity compounds the main structural ambiguity. This thesis argues for a resource-optimizing parallel parser in which, at each point in the sentence, processing resources are allocated in proportion to the probabilistic weight of each hypothesis. Information-theoretic accounts of sentence processing (Hale, 2001, 2006; Levy, 2008) follow from an account of the human sentence processor in which resources are dynamically reallocated to more likely and less entropic hypotheses, to ensure that sentences are parsed as fast as possible with as few resources as possible. The thesis argues for a probabilistic lexical syntax, in which the observations of professional lexical semanticists can be encoded in a grammar that can be empirically weighted by corpora and computed by a parser.
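The proportional allocation scheme described above can be sketched as follows. This is an illustrative toy, not the thesis's parser: the hypothesis names and weights are invented, and allocation is simply normalization of probabilistic weights over the currently live parse hypotheses.

```python
def allocate(hypotheses, total_resources=1.0):
    """Allocate processing resources to each parse hypothesis in
    proportion to its probabilistic weight (a sketch of the
    resource-optimizing parallel parser; weights are illustrative)."""
    z = sum(hypotheses.values())
    return {h: total_resources * w / z for h, w in hypotheses.items()}

# After an ambiguous sentence prefix, two candidate analyses
# (toy weights for illustration only):
weights = {"main_clause": 0.8, "reduced_relative": 0.2}
print(allocate(weights))
```

On this sketch, reallocation at each word amounts to re-running `allocate` over the updated weights, so hypotheses that lose probability mass also lose resources.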
The thesis strives to maximize parsimony and theory-independence in both the core and linking hypotheses: our theories are formulated as lexicalized mildly context-sensitive grammars, directly suggested by independently attested lexical semantics facts, and translatable into a variety of formalisms (Minimalist Grammars, Tree Adjoining Grammars, Generalized Phrase Structure Grammars); and our results are information-theoretic complexity metrics (entropy, surprisal), which are interpretable as theory-independent measurements, respectively, of ambiguity and surprise.
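The two complexity metrics named above have standard information-theoretic definitions: the surprisal of an event with probability p is −log₂ p bits, and the entropy of a distribution over parse hypotheses is the expected surprisal. A minimal sketch, using invented toy probabilities rather than values from the thesis's corpus:

```python
import math

def surprisal(p):
    """Surprisal in bits of an event with probability p (Hale, 2001)."""
    return -math.log2(p)

def entropy(dist):
    """Shannon entropy in bits of a probability distribution,
    here read as uncertainty over parse hypotheses."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Toy distribution over parses after an ambiguous prefix
# (illustrative values only):
parses = {"main_clause": 0.8, "reduced_relative": 0.2}
print(entropy(parses))                        # ambiguity at this point
print(surprisal(parses["reduced_relative"]))  # surprise if the dispreferred parse wins
```

On this reading, entropy measures how ambiguous the current sentence prefix is, while surprisal measures how surprising the next word is given the comprehender's expectations.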
probabilistic grammar; information theory; lexical syntax
Master of Arts
dissertation or thesis