Computer Science Department
School of Computer Science, Carnegie Mellon University
Abstractions for Model Checking System Security
Jason Douglas Franklin
Systems software such as operating systems, virtual machine monitors, and hypervisors forms the foundation of security for desktop, cloud, and mobile platforms. Despite their ubiquity, these security-critical software systems regularly suffer from serious vulnerabilities in both their design and implementation. A critically important goal is to automatically provide verifiable security guarantees for systems software.
Software model checking is a promising approach toward automated verification of programs. Yet, the size and complexity of systems software presents a major challenge to model checking their security properties. In this thesis, we develop a framework that enables automated, verifiable security guarantees for a wide range of systems software. Central to the framework are novel abstractions that improve the scalability of model checking the security of systems. The abstractions exploit structure common to systems software and the nondeterminism inherent in adversary models.
Our key insight is that much of the complexity of model checking security properties of systems software stems from two sources: 1) system data structures and the functions that operate over them with a form of well-structured data-flow, and 2) adversary-controlled data structures and the functions that operate over them. We develop a family of techniques to abstract these components of a system.
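To make the first source of complexity concrete, consider a hypothetical hypervisor routine that validates a parametric table of guest mappings: every entry is checked independently by the same guard, the kind of row-independent, well-structured data-flow that parametric techniques such as our small model analysis exploit. The table size, entry layout, bounds, and function names below are illustrative assumptions, not code from any of the systems studied.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical protected region of hypervisor memory. */
#define HYP_MEM_START 0x100000u
#define HYP_MEM_END   0x200000u

/* Guard applied uniformly to every entry: a guest mapping must not
 * point into hypervisor-protected memory. */
static bool entry_ok(uint32_t pa) {
    return pa < HYP_MEM_START || pa >= HYP_MEM_END;
}

/* Row-independent loop: the check on entry i depends only on entry i.
 * In the real verification problem n is unbounded but finite; this
 * uniform structure is what lets a parametric analysis reduce reasoning
 * about all n entries to a small, fixed number of representatives. */
bool table_ok(const uint32_t *table, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (!entry_ok(table[i]))
            return false;
    return true;
}
```

Because no entry's check depends on any other entry, a violation in a table of any size is witnessed by a violation in a small table, which is the intuition behind small model theorems.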
We introduce the problem of semantic security verification and develop three abstractions to improve the scalability of model checking secure systems: 1) Constrained-to-System-Interface (CSI) adversary abstraction, a technique to scalably integrate flexible system-specific adversary models into the verification process, 2) Small Model Analysis, a family of parametric verification techniques that scale even when a system and adversary operate over unbounded but finite data structures, and 3) Havoc abstraction, a source-level analysis technique that reduces the verification complexity associated with system components that operate on adversary-controlled data structures.
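As a rough illustration of the havoc-style abstraction, reads of an adversary-controlled structure can be replaced with unconstrained ("havocked") values, so the model checker explores every value the adversary could supply instead of analyzing the code that produces it. In CBMC, declaring a prototype such as `uint32_t nondet_uint32(void);` with no body yields such a value; the sketch below instead takes the havocked value as a parameter so it also runs natively. All names and the guard itself are hypothetical, not drawn from SecVisor or ShadowVisor.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical start of hypervisor-protected memory. */
#define HYP_MEM_START 0x100000u

/* Under the havoc abstraction, the guest's page-table entry is not read
 * from guest memory (adversary-controlled); it is an unconstrained value.
 * In CBMC this would come from an undefined nondet_uint32() function.
 * Here the havocked value arrives as a parameter so the code runs natively. */
bool shadow_update_safe(uint32_t havocked_guest_entry) {
    /* Guard the hypervisor must enforce no matter what the guest wrote. */
    if (havocked_guest_entry >= HYP_MEM_START)
        return false;   /* reject: would map hypervisor-protected memory */
    return true;        /* accept: mapping stays below the protected region */
}
```

Because the entry is unconstrained, verifying the guard for all values of its argument covers every guest behavior without modeling the guest at all; this is, in spirit, the reduction the havoc abstraction performs at the source level.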
We prove that our abstractions are sound: no attacks are missed by the abstractions. Further, we prove a completeness result that provides theoretical justification for the zero false positive rate we observe. Finally, we prove a refinement theorem that carries our results to a hypervisor implemented in a subset of the C programming language.

We perform a number of case studies focused on verifying hypervisors designed to enforce a variety of security properties in the presence of adversary-controlled guest operating systems. These empirical evaluations demonstrate the effectiveness of our abstractions on several hypervisors. We identify previously unknown vulnerabilities in the design of SecVisor and in the C code of ShadowVisor (a prototype hypervisor), and successfully model check their code using the CBMC model checker after fixing the vulnerabilities. Without the abstractions, CBMC, a state-of-the-art model checker, is unable to model check these systems: in all attempts, it either exhausts system resources or times out after an extended period. With the abstractions, CBMC model checks the systems in a few seconds.