An inductive synthesis framework for verifiable reinforcement learning

He Zhu, Stephen Magill, Zikang Xiong, Suresh Jagannathan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

21 Scopus citations

Abstract

Despite the tremendous advances that have been made in the last decade on developing useful machine-learning applications, their wider adoption has been hindered by the lack of strong assurance guarantees that can be made about their behavior. In this paper, we consider how formal verification techniques developed for traditional software systems can be repurposed for verification of reinforcement learning-enabled ones, a particularly important class of machine learning systems. Rather than enforcing safety by examining and altering the structure of a complex neural network implementation, our technique uses blackbox methods to synthesize deterministic programs: simpler, more interpretable approximations of the network that can nonetheless guarantee desired safety properties are preserved, even when the network is deployed in unanticipated or previously unobserved environments. Our methodology frames the problem of neural network verification in terms of a counterexample- and syntax-guided inductive synthesis procedure over these programs. The synthesis procedure searches for both a deterministic program and an inductive invariant over an infinite state transition system that represents a specification of an application's control logic. Additional specifications defining environment-based constraints can also be provided to further refine the search space. Synthesized programs deployed in conjunction with a neural network implementation dynamically enforce safety conditions by monitoring and preventing potentially unsafe actions proposed by neural policies. Experimental results over a wide range of cyber-physical applications demonstrate that software-inspired formal verification techniques can be used to realize trustworthy reinforcement learning systems with low overhead.
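To make the runtime shielding idea in the abstract concrete, the following is a minimal sketch, not the paper's artifact: a synthesized deterministic program and its inductive invariant monitor a neural policy and override any action that would leave the invariant. The toy 1-D dynamics, the stub neural policy, and the linear fallback controller below are illustrative assumptions only.

```python
# Minimal sketch (assumed example, not the paper's implementation) of runtime
# shielding: keep the neural policy's action only when the synthesized
# inductive invariant still holds after one step; otherwise fall back to the
# synthesized deterministic program.

def env_step(x, u):
    """Toy dynamics: a 1-D integrator x' = x + 0.1 * u (illustrative)."""
    return x + 0.1 * u

def invariant_holds(x):
    """Stand-in for a synthesized inductive invariant: safe region |x| <= 1."""
    return abs(x) <= 1.0

def neural_policy(x):
    """Stand-in for a learned policy; occasionally proposes unsafe actions."""
    return 5.0 if x > 0.5 else 1.0

def shield_program(x):
    """Stand-in for the synthesized deterministic fallback controller."""
    return -2.0 * x

def shielded_step(x):
    """One control step: monitor the proposed action and override if unsafe."""
    proposed = neural_policy(x)
    safe = invariant_holds(env_step(x, proposed))
    action = proposed if safe else shield_program(x)
    return env_step(x, action)

if __name__ == "__main__":
    x = 0.0
    for _ in range(50):
        x = shielded_step(x)
        assert invariant_holds(x), "shield failed to preserve the invariant"
    print("final state:", x)
```

In this sketch the one-step environment model, invariant check, and fallback controller are all placeholders; in the paper these roles are filled by the synthesized program and invariant produced by the inductive synthesis procedure.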

Original language: English (US)
Title of host publication: PLDI 2019 - Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation
Editors: Kathryn S. McKinley, Kathleen Fisher
Publisher: Association for Computing Machinery
Pages: 686-701
Number of pages: 16
ISBN (Electronic): 9781450367127
DOIs
State: Published - Jun 8 2019
Externally published: Yes
Event: 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2019 - Phoenix, United States
Duration: Jun 22 2019 - Jun 26 2019

Publication series

Name: Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI)

Conference

Conference: 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2019
Country/Territory: United States
City: Phoenix
Period: 6/22/19 - 6/26/19

All Science Journal Classification (ASJC) codes

  • Software

Keywords

  • Invariant Inference
  • Program Synthesis
  • Program Verification
  • Reinforcement Learning
  • Runtime Shielding
