Structured State Tracking for Natural Language Understanding
Autonomous agents that collaborate with humans must understand language, track the state of the world, and make good decisions. A central challenge common to these three desiderata is uncertainty. Language is often ambiguous, admitting many possible interpretations of the same utterance. The state of the world is only partially observable, as physical constraints prevent a single agent from observing every aspect; agents must therefore reason about the unobserved aspects as new information arrives. Finally, many decisions have uncertain outcomes, and agents must anticipate how the world will respond to their actions in order to achieve the best results. In this thesis, we build agents that are uncertainty-aware in their language understanding, state tracking, and decision-making.

The thesis is divided into three sections, each exploring a different application by applying a modern twist to a classical approach. The first section studies uncertainty-aware representations in language modeling. We revisit classical structured and uncertainty-aware language models, showing that they can achieve strong performance when combined with modern deep learning techniques. The second section studies the use of language models for reasoning about uncertainty in question answering. We reason over unobserved reasoning paths, applying modern pretrained language model representations to aid high-level reasoning. The third section studies task-oriented dialogue as decision-making under uncertainty. We intertwine classical decision-making methods with the powerful language and code understanding capabilities of modern large language models, resulting in an accurate neurosymbolic dialogue system.