Verifying Global Two-Safety Properties in Neural Networks with Confidence

Anagha Athavale (Author and Speaker), Ezio Bartocci, Maria Christakis, Matteo Maffei, Dejan Nickovic, Georg Weissenbacher

Research output: Chapter in Book or Conference Proceedings › Conference Proceedings with Oral Presentation › peer-review

Abstract

We present the first automated verification technique for confidence-based 2-safety properties, such as global robustness and global fairness, in deep neural networks (DNNs). Our approach combines self-composition to leverage existing reachability analysis techniques and a novel abstraction of the softmax function, which is amenable to automated verification. We characterize and prove the soundness of our static analysis technique. Furthermore, we implement it on top of Marabou, a safety analysis tool for neural networks, conducting a performance evaluation on several publicly available benchmarks for DNN verification.
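The following is a minimal, hypothetical sketch of the self-composition idea described in the abstract: two copies of the same network are evaluated side by side, and a confidence-based 2-safety property (here, global robustness) relates their outputs via the softmax confidence. It is not the authors' implementation or the Marabou encoding; the toy network, epsilon, and kappa are made-up placeholders, and the check only falsifies the property on sampled input pairs rather than verifying it.

```python
# Hypothetical illustration of a confidence-based 2-safety property
# (global robustness) via self-composition. NOT the paper's method:
# a sound verifier would explore all input pairs with reachability
# analysis on the self-composed network and an abstraction of softmax.
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected network f : R^4 -> R^3 (placeholder weights).
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def network(x):
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return W2 @ h + b2                 # output logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def violates_global_robustness(x1, x2, eps=0.05, kappa=0.9):
    """2-safety check on one pair (both copies run the same network):
    if the inputs are eps-close and copy 1 predicts a class with
    softmax confidence >= kappa, copy 2 must predict the same class."""
    if np.linalg.norm(x1 - x2, ord=np.inf) > eps:
        return False                   # premise not met, property holds
    p1, p2 = softmax(network(x1)), softmax(network(x2))
    c1, c2 = p1.argmax(), p2.argmax()
    return p1[c1] >= kappa and c1 != c2  # confident yet inconsistent

# Sampling-based falsification over random input pairs.
for _ in range(10_000):
    x = rng.uniform(-1, 1, size=4)
    x_pert = x + rng.uniform(-0.05, 0.05, size=4)
    if violates_global_robustness(x, x_pert):
        print("counterexample pair found:", x, x_pert)
        break
else:
    print("no violation found on sampled pairs (not a proof)")
```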
Original language: English
Title of host publication: Computer Aided Verification - 36th International Conference (CAV)
Editors: Arie Gurfinkel, Vijay Ganesh
Publisher: Springer
Pages: 329-351
Volume: 14682
ISBN (Electronic): 978-3-031-65630-9
ISBN (Print): 978-3-031-65629-3
Publication status: Published - 2024
Event: CAV 2024, 36th International Conference - Montreal, Canada
Duration: 24 Jul 2024 - 27 Jul 2024

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 14682
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: CAV 2024, 36th International Conference
Country/Territory: Canada
City: Montreal
Period: 24/07/24 - 27/07/24

Research Field

  • Dependable Systems Engineering
