Reasoning About Information Disclosure In Relational Databases
Companies and organizations collect and use vast troves of sensitive user data whose release must be carefully controlled. In practice, the access policies that govern this data are often fine-grained, complex, poorly documented, and difficult to reason about. These issues make it easy for principals to accidentally request and be granted access to data they never use. To encourage developers and administrators to use security mechanisms more effectively, we propose a novel security model in which all security decisions are formally explainable. Whether a query is accepted or denied, the system returns a concise yet formal explanation that allows the issuer to reformulate a rejected query or to adjust their security credentials.

To demonstrate the practical applicability of our approach, we implement and evaluate a disclosure control system that handles a wide variety of real SQL queries and can accommodate complex policy constraints. Our explainable security model is based on a new theoretical foundation for reasoning about information disclosure in database systems that we call disclosure labeling. Information disclosure is expressed in terms of a set of security views that are defined by a human administrator and reveal the types of information relevant to the security constraints of the system at hand. Disclosure labeling allows us to precisely characterize which subsets of the security views contain enough information to determine a query's answer; these characterizations form the basis for the explanations our system generates.
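The core idea can be illustrated with a small sketch. The schema, view, and table names below are hypothetical examples for exposition, not the dissertation's actual system: an administrator defines security views over a base table, and a query is acceptable when some subset of those views contains enough information to determine its answer.

```python
import sqlite3

# Hypothetical base table holding both innocuous and sensitive columns.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
cur.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ann", "HR", 70000), ("Bob", "Eng", 90000), ("Eve", "Eng", 95000)],
)

# Administrator-defined security view: discloses department membership
# but hides salaries.
cur.execute("CREATE VIEW v_dept AS SELECT name, dept FROM employees")

# This query reads only information disclosed by v_dept, so its answer
# is determined by that view alone; an explainable monitor could accept
# it and cite {v_dept} as the justification.
eng_names = sorted(
    row[0] for row in cur.execute("SELECT name FROM v_dept WHERE dept = 'Eng'")
)
print(eng_names)

# By contrast, a query touching employees.salary is not determined by
# any subset of the security views; it would be rejected, with that
# missing disclosure reported as the explanation.
conn.close()
```

In this toy setting, "determined by" simply means the query mentions only columns the view exposes; the dissertation's disclosure labeling gives this notion a formal treatment.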
Database Security; Access Control; Explainable Security
Gehrke, Johannes E.
Pass, Rafael N.; Kozen, Dexter Campbell
Ph.D. of Computer Science
Doctor of Philosophy
dissertation or thesis