


Papercat Thursday 12/11/2020
We are proud to present a research paper contributed by colleagues at neurocat GmbH & Volkswagen AG. Risk assessment for ML models is of paramount importance for understanding failure modes and improving models before deployment. Our paper defines the standardized...
Decision Boundary
State-of-the-art neural networks are vulnerable to adversarial examples. This major problem for the safe deployment of ML models arises when minor input modifications push a data point across the model's decision boundary. The existence of adversarial examples is...
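To illustrate the mechanism, here is a minimal sketch of how a small perturbation can cross a decision boundary. The linear model, its weights, and the perturbation size are all hypothetical toy values, not taken from the paper; the gradient step mimics the sign-based attack style of FGSM.

```python
import numpy as np

# Toy linear classifier (hypothetical weights, for illustration only)
w = np.array([1.0, -2.0])
b = 0.1

def predict(x):
    # Class 1 if the decision score is positive, else class 0
    return 1 if w @ x + b > 0 else 0

x = np.array([0.5, 0.2])        # clean input: score = 0.5 - 0.4 + 0.1 = 0.2 > 0

# For a linear model, the gradient of the score w.r.t. the input is just w,
# so stepping against sign(w) moves the point toward the boundary.
eps = 0.15                      # small perturbation budget
x_adv = x - eps * np.sign(w)    # adversarial input: score = 0.35 - 0.7 + 0.1 = -0.25 < 0

print(predict(x), predict(x_adv))
```

A perturbation of only 0.15 per coordinate is enough to flip the prediction of this toy model, which is the essence of an adversarial example.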
Robustness Verification: Seminar Presentation at HU Berlin
Two of our cats are soon going to give a talk on the current status of duality-based adversarial robustness verification. The content will draw heavily on the recent publication "Training Verified Learners with Learned Verifiers" by K. Dvijotham et al. (DeepMind)....
DIN SPEC 92001
AI and ML experts from all over Germany came together to kick off work on the DIN SPEC 92001 'Artificial Intelligence – Quality requirements and life cycle management for AI modules'. The huge advancements in and the rapid development of the field of...