Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification

Day - Time: 15 October 2019, h.11:00
Place: Area della Ricerca CNR di Pisa - Room: C-29
Speakers
  • Lijun Zhang (Institute of Software, Chinese Academy of Sciences, China)
Referent
  • Vincenzo Ciancia

Abstract
Deep neural networks (DNNs) have been shown to lack robustness, owing to the vulnerability of their classifications to small perturbations on the inputs. This has led to safety concerns about applying DNNs in safety-critical domains. Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from either the scalability problem, i.e., only small DNNs can be handled, or the precision problem, i.e., the obtained bounds are loose. This paper improves on a recent proposal for analyzing DNNs through the classic abstract interpretation technique, by means of a novel symbolic propagation technique. More specifically, the values of neurons are represented symbolically and propagated forward from the input layer to the output layer, on top of abstract domains. We show that our approach can achieve significantly higher precision, and thus prove more properties, than using abstract domains alone. Moreover, we show that the bounds derived by our approach on the hidden neurons, when supplied to a state-of-the-art SMT-based verification tool, can improve its performance. We implement our approach in a software tool and validate it over several DNNs trained on benchmark datasets such as MNIST.
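To illustrate the idea behind the talk, the sketch below contrasts plain interval propagation with symbolic propagation on a toy affine network. This is not the speakers' tool: the network, the function names, and the restriction to affine layers (no ReLU case split) are illustrative assumptions. The point it demonstrates is that keeping neuron values as symbolic linear expressions over the inputs preserves correlations that the interval abstract domain loses, yielding tighter output bounds.

```python
def interval_affine(bounds, W, b):
    """Plain interval arithmetic: propagate [lo, hi] bounds through y = W x + b."""
    out = []
    for row, bi in zip(W, b):
        lo = hi = bi
        for w, (l, h) in zip(row, bounds):
            lo += w * l if w >= 0 else w * h
            hi += w * h if w >= 0 else w * l
        out.append((lo, hi))
    return out

def symbolic_affine(exprs, W, b):
    """Symbolic propagation: each neuron is a linear expression (coeffs, const)
    over the input variables; an affine layer composes such expressions exactly."""
    n_inputs = len(exprs[0][0])
    out = []
    for row, bi in zip(W, b):
        coeffs = [sum(w * e[0][k] for w, e in zip(row, exprs))
                  for k in range(n_inputs)]
        const = bi + sum(w * e[1] for w, e in zip(row, exprs))
        out.append((coeffs, const))
    return out

def concretize(expr, input_bounds):
    """Evaluate a symbolic expression to an interval over the input box."""
    coeffs, const = expr
    lo = hi = const
    for c, (l, h) in zip(coeffs, input_bounds):
        lo += c * l if c >= 0 else c * h
        hi += c * h if c >= 0 else c * l
    return (lo, hi)

# Toy network: hidden = (x, -x), output = hidden[0] + hidden[1] (identically 0).
input_bounds = [(0.0, 1.0)]
W1, b1 = [[1.0], [-1.0]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]

# Pure interval analysis loses the correlation between the two hidden neurons:
hidden_iv = interval_affine(input_bounds, W1, b1)   # [(0.0, 1.0), (-1.0, 0.0)]
out_iv = interval_affine(hidden_iv, W2, b2)         # [(-1.0, 1.0)]: loose bound

# Symbolic propagation keeps x symbolic, so the two copies cancel exactly:
inputs = [([1.0], 0.0)]                             # the expression "x"
hidden_sym = symbolic_affine(inputs, W1, b1)
out_sym = [concretize(e, input_bounds)
           for e in symbolic_affine(hidden_sym, W2, b2)]  # [(0.0, 0.0)]: exact
```

In the full approach described in the abstract, such symbolic expressions would additionally be approximated at nonlinear activations (falling back to the abstract domain when a ReLU's sign cannot be resolved), and the resulting hidden-neuron bounds can be handed to an SMT-based verifier to prune its search.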