Secrecy and Anonymity in Interactive Systems
O'Neill, Kevin Ross
When building systems that guarantee confidentiality, system designers must first define confidentiality appropriately. Although researchers have proposed definitions of properties such as secrecy, anonymity, and privacy for a wide variety of system models, general definitions that are intuitive, widely applicable, and sufficiently formal have proven surprisingly elusive. The goal of this dissertation is to provide such a framework for systems that interact with multiple agents, emphasizing definitions of secrecy (to rule out unwanted information flows) and anonymity (to prevent observers from learning the identity of an agent who performs some action). The definitions of secrecy extend Shannon's earlier definition of secrecy and Sutherland's definition of nondeducibility. Roughly speaking, one agent maintains secrecy with respect to another if the second agent cannot rule out any possibilities for the behavior or state of the first agent. These definitions are characterized syntactically, using a modal logic of knowledge. Definitions of anonymity are given with respect to agents, actions, and observers, and are also stated in terms of a modal logic of knowledge. The general framework is shown to handle probability and nondeterminism cleanly, and to be useful for reasoning about asynchronous systems as well as synchronous systems. It also suggests generalizations of secrecy and anonymity that may be useful for dealing with issues such as resource-bounded reasoning. Finally, the dissertation leverages these definitions of secrecy to formulate new strategy-based information-flow conditions for a simple imperative programming language that includes input and output operators. A soundness theorem demonstrates the feasibility of statically enforcing the security conditions via a simple type system.
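The "cannot rule out any possibilities" intuition admits a compact epistemic-logic sketch. The following is a hedged paraphrase, not the dissertation's exact formulation: here $K_j$ is the standard knowledge operator for agent $j$, $P_j \varphi \equiv \neg K_j \neg \varphi$ is the dual possibility operator, and an "$i$-local" formula is one whose truth depends only on agent $i$'s state.

```latex
% A sketch of a knowledge-based secrecy condition (assumed notation:
% \mathcal{I} is the interpreted system, K_j / P_j are knowledge and
% possibility operators, and "i-local" formulas describe only agent i).
%
% Agent i maintains secrecy with respect to agent j if every i-local
% formula that holds somewhere in the system is always possible for j:
\[
  \mathcal{I} \models P_j\,\varphi
  \quad\text{for every satisfiable } i\text{-local formula } \varphi,
  \qquad\text{where } P_j\,\varphi \;\equiv\; \neg K_j \neg\varphi .
\]
```

Read contrapositively, the condition says $j$ never *knows* any nontrivial fact about $i$: the only $i$-local facts $j$ can come to know are those that hold at every point of the system.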
Kevin O'Neill's Ph.D. Dissertation
computer science; security; logic; privacy; anonymity; programming languages; secrecy
dissertation or thesis