We are proud to present our joint work with the University of Göttingen. Formal verification of neural networks is a challenging topic! In the past, verification methods focused mainly on Lp-norms to measure imperceptibility. However, Lp-norms fail to capture human similarity perception: shifting an image by a single pixel, for instance, barely changes how it looks yet can produce a large Lp distance. This is why measures such as the Wasserstein metric are increasingly used in adversarial robustness research.
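To illustrate the gap between the two notions of distance, here is a minimal sketch (not from the paper, just an illustrative toy example) comparing the L2 distance and the 1-D Wasserstein distance for a unit spike shifted by one pixel, using `scipy.stats.wasserstein_distance`:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two toy 1-D "images": the same unit spike, shifted by one pixel.
n = 100
a = np.zeros(n); a[10] = 1.0
b = np.zeros(n); b[11] = 1.0

positions = np.arange(n) / n  # normalized pixel coordinates in [0, 1)

# L2 distance treats the shift as two completely disjoint changes.
l2 = np.linalg.norm(a - b)  # sqrt(2), i.e. large

# Wasserstein distance measures how far mass must move: one pixel.
w1 = wasserstein_distance(positions, positions, u_weights=a, v_weights=b)  # 0.01

print(f"L2: {l2:.3f}, Wasserstein: {w1:.3f}")
```

The L2 distance stays the same no matter how far the spike moves, while the Wasserstein distance grows with the shift, matching the intuition that a small translation is a small perturbation.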
We have now developed a formal verification framework for the highly complex Wasserstein metric. In addition, we present a simple Wasserstein attack that generates adversarial examples 16x faster than previous benchmark attacks.
We recommend reading the paper: https://arxiv.org/pdf/2110.06816.pdf