Wasserstein Verification Paper Published

We are proud to present our joint work with the University of Göttingen. Formal verification of neural networks is a challenging topic! In the past, verification methods have mainly focused on Lp-norms for measuring imperceptibility. Using Lp-norms for measuring the...
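As context for the Lp-norm discussion, here is a minimal sketch of how the size of an input perturbation is commonly measured with different Lp-norms. The vectors below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical inputs, invented for illustration only.
x = np.array([0.2, 0.5, 0.9, 0.1])        # original input
x_adv = np.array([0.2, 0.52, 0.88, 0.1])  # slightly perturbed input
delta = x_adv - x

l1 = np.sum(np.abs(delta))      # L1-norm: total absolute change
l2 = np.sqrt(np.sum(delta**2))  # L2-norm: Euclidean distance
linf = np.max(np.abs(delta))    # L-infinity-norm: largest single change
```

Different norms capture different notions of "small": an L-infinity bound limits every pixel change, while an L1 bound limits the total change.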
Papercat Thursday 12/11/2020

We are proud to present a research paper contributed by colleagues at neurocat GmbH and Volkswagen AG. Risk assessment for ML models is of paramount importance for understanding failure modes and improving models before deployment. Our paper defines the standardized...
Decision Boundary

State-of-the-art neural networks are vulnerable to adversarial examples: minor input modifications that push a data point across the model's decision boundary. This is a major problem for the safe deployment of ML models. The existence of adversarial examples is...
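The mechanism described above can be illustrated with a toy example (a sketch, not the post's method): a linear classifier and an FGSM-style perturbation that steps each coordinate in the direction of the weight sign, pushing a point across the decision boundary. The weights, point, and budget are invented for illustration.

```python
import numpy as np

# Hypothetical linear classifier: class 1 iff w.x + b > 0.
w = np.array([1.0, -2.0])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.5, 0.35])     # sits just on the class-0 side
eps = 0.2                     # small L-infinity perturbation budget

# FGSM-style step toward class 1: move each coordinate by eps
# in the direction that increases w.x + b.
x_adv = x + eps * np.sign(w)  # prediction flips from 0 to 1
```

Every coordinate moves by at most `eps`, yet the predicted class changes, which is exactly the "minor modification crosses the boundary" failure mode.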
DIN SPEC 92001

AI and ML experts from all over Germany came together to kick off work on DIN SPEC 92001, 'Artificial Intelligence – Quality requirements and life cycle management for AI modules'. The rapid development of and huge advancements in the field of...
LIME for GDPR

As mentioned in our last post, the GDPR is coming and companies have to get ready! A considerable number of people claim that the GDPR could harm the deployment of deep learning models in Europe. The main reason given in this context is the fact...