Wasserstein Verification Paper Published

We are proud to present our joint work with the University of Göttingen. Formal verification of neural networks is a challenging topic! In the past, verification methods mainly focused on Lp-norms for measuring imperceptibility. Using Lp-norms for measuring the...
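
As an illustrative aside (not taken from the paper itself), the sketch below shows one reason Lp-norms can be a questionable proxy for imperceptibility: a signal shifted by a single grid position looks like a maximal change under L2 or L-infinity, while a Wasserstein-style distance (here the 1-D `scipy.stats.wasserstein_distance`, treating the signals as distributions) reflects that the mass barely moved. The toy signals and grid are assumptions for illustration only.

```python
# Minimal sketch (assumed toy data, not from the paper): Lp-norms vs. the
# 1-D Wasserstein distance for a signal shifted by one grid position.
import numpy as np
from scipy.stats import wasserstein_distance

positions = np.linspace(0.0, 1.0, 20)
a = np.zeros(20); a[10] = 1.0   # unit spike
b = np.zeros(20); b[11] = 1.0   # the same spike, shifted by one position

# Lp view: the shifted spike looks like a large perturbation.
print("L2 distance:  ", np.linalg.norm(a - b))    # ~1.41
print("Linf distance:", np.max(np.abs(a - b)))    # 1.0

# Wasserstein view: the mass only moves one grid step, so the distance is tiny.
print("Wasserstein:  ", wasserstein_distance(positions, positions, a, b))  # ~0.053
```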
Papercat Thursday 12/11/2020

We are proud to present the research paper contributed by colleagues at neurocat GmbH & Volkswagen AG. Risk assessment for ML models is of paramount importance for understanding failure modes and improving models before deployment. Our paper defines the standardized...
Decision Boundary

State-of-the-art neural networks are vulnerable to adversarial examples. This major problem for the safe deployment of ML models arises when minor input modifications push a data point across the decision boundary of the model. The existence of adversarial examples is...
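
To make the mechanism concrete, here is a minimal, hypothetical sketch (not from the post) using a hand-set linear classifier: an FGSM-style step nudges the input slightly along the sign of the loss gradient and pushes it across the decision boundary, flipping the prediction. The model, input point, and epsilon are all illustrative assumptions.

```python
# Minimal FGSM-style sketch (hypothetical toy model, not from the post):
# a small step along the sign of the loss gradient pushes an input across
# the decision boundary and flips the prediction.
import torch
import torch.nn.functional as F

# Toy linear classifier whose decision boundary is the line x1 = x2.
model = torch.nn.Linear(2, 2)
with torch.no_grad():
    model.weight.copy_(torch.eye(2))
    model.bias.zero_()

x = torch.tensor([[0.06, -0.02]], requires_grad=True)  # correctly classified as class 0
y = torch.tensor([0])

# Gradient of the classification loss with respect to the input.
loss = F.cross_entropy(model(x), y)
loss.backward()

# FGSM step: nudge every input coordinate by epsilon in the direction of the gradient's sign.
epsilon = 0.05
x_adv = x + epsilon * x.grad.sign()

print("clean prediction:      ", model(x).argmax(dim=1).item())      # 0
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())  # 1
```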