Computer Science Department
School of Computer Science, Carnegie Mellon University
Learning Abstractions for Model Checking
Learning is a process by which a system improves its performance through experience. Inductive learning is learning from examples: the system infers a general rule from a set of sample instances.
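As an illustration of inferring a general rule from sample instances, the classic Find-S algorithm generalizes a conjunctive hypothesis from positive examples. This is a minimal sketch with hypothetical attribute data, not part of the thesis itself; `"?"` marks an attribute the rule does not constrain.

```python
# Sketch of inductive learning: Find-S generalizes a conjunctive
# rule so that it covers every positive example seen so far.
def find_s(examples):
    """examples: list of (attribute_tuple, label) pairs.
    Returns the most specific hypothesis consistent with the
    positive examples; "?" means "any value"."""
    hypothesis = None
    for attrs, label in examples:
        if not label:          # negative examples are ignored by Find-S
            continue
        if hypothesis is None:
            hypothesis = list(attrs)
        else:                  # generalize each attribute that disagrees
            hypothesis = [h if h == a else "?"
                          for h, a in zip(hypothesis, attrs)]
    return hypothesis

# Hypothetical sample instances: two positives, one negative.
samples = [
    (("sunny", "warm"), True),
    (("sunny", "cold"), True),
    (("rainy", "warm"), False),
]
print(find_s(samples))  # ['sunny', '?']
```

From the two positive instances the learner generalizes away the second attribute, yielding the rule "first attribute is sunny".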
Abstraction techniques have been successful in model checking large systems by enabling the model checker to ignore irrelevant details. The aim of abstraction is to identify a small abstract model on which the property holds. Most previous approaches for automatically generating abstract models are based on heuristics combined with an iterative abstraction-refinement loop. These techniques provide no guarantees on the size of the resulting abstract models.
We present an application of machine learning to counterexample-guided abstraction refinement, and to abstraction without refinement. Our work formulates abstraction as inductive learning: the learner searches through a space of abstract models for one on which the property holds. The machine learning techniques precisely identify the information in the design that is relevant to the property. Our approach leverages recent advances in Boolean satisfiability and integer linear programming, provides better control over the size of the abstract models, and can generate the smallest abstract model that proves the property.
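One way the search for a smallest abstract model can be posed is as a minimum hitting-set problem: each spurious counterexample yields a set of candidate variables, at least one of which must remain visible in the abstract model to rule that counterexample out. The thesis solves such optimization problems with SAT and ILP engines; the sketch below is only a hypothetical brute-force illustration on made-up data, not the thesis's implementation.

```python
from itertools import combinations

def smallest_abstraction(refuting_sets):
    """Return a minimum-size set of variables that intersects
    every refuting set (a minimum hitting set, found here by
    brute force over subsets of increasing size)."""
    universe = sorted(set().union(*refuting_sets))
    for k in range(1, len(universe) + 1):
        for subset in combinations(universe, k):
            chosen = set(subset)
            if all(chosen & s for s in refuting_sets):
                return chosen
    return set(universe)

# Hypothetical data: three spurious counterexamples, each eliminated
# by keeping at least one of the listed variables in the abstraction.
refuting_sets = [{"v1", "v2"}, {"v2", "v3"}, {"v3", "v4"}]
print(smallest_abstraction(refuting_sets))  # prints a 2-variable hitting set
```

No single variable intersects all three sets, so the smallest abstraction here keeps exactly two variables visible; on realistic designs the same optimization is delegated to SAT/ILP solvers rather than enumeration.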
Most previous work has focused on applying abstraction to model checking, and bounded model checking is used as a subroutine in many of these approaches. We also present an abstraction technique that speeds up bounded model checking.
We have implemented our techniques for the verification of safety properties on hardware circuits using localization abstraction. Our experimental evaluation shows a significant improvement over previous state-of-the-art approaches.