Featured News

neurocat to Participate in “News-Polygraph” Research Consortium to Counter Disinformation

We recently announced on social media that neurocat will take part in the "News-Polygraph" research consortium to counter disinformation. News-Polygraph is funded by the German Federal Ministry of Education and Research as part of its Regional Entrepreneurial Alliances for Innovation (RUBIN) program. The consortium brings together 10...

neurocat and the Rise of AI

On the 8th of June 2022, Florens Gressner (CEO of neurocat) had the opportunity to speak at the Rise of AI Conference 2022, one of the most influential platforms in the AI industry. He talked about the upcoming AI Act, which will introduce strict obligations before AI systems can be put on the market, and how neurocat delivers a solution to that problem...

Featured Articles

Condensed best practices from “Security of AI-Systems: Fundamentals – Adversarial Deep Learning”

The security and safety of AI is something we all inherently recognize as important. Yet many of us don't have a background in this topic and have limited time to learn new things or keep up with the latest developments. neurocat is here to help. We recently contributed* to a new report by the German Federal Office for Information Security (BSI), “Security of AI-Systems: Fundamentals – Adversarial Deep...

Hands-on Guide to Complying with the ECJ Ruling

In this article we propose a strategy for satisfying the legal constraints AI is facing right now. The ruling of the European Court of Justice (ECJ) on the 21st of July 2022 is a perfect example that emphasizes the need for transparency and risk evaluation methods in order to use AI to its full potential. Data is one of the most valuable assets in the age of digitization and almost...

Featured Research

Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study

Despite achieving remarkable performance on many image classification tasks, state-of-the-art machine learning (ML) classifiers remain vulnerable to small input perturbations. In particular, the existence of adversarial examples raises concerns about the deployment of ML models in safety- and security-critical environments, like autonomous driving and disease detection. Over the last few years, numerous defense methods have been published with the goal of improving adversarial as well as corruption robustness. However, the proposed measures succeeded only to a very limited extent. This limited progress is partly due to the lack of understanding of the decision boundary and decision regions of deep neural networks. Therefore, we study the minimum distance of data points to the decision boundary and how this margin evolves over the training of a deep neural network. By conducting experiments on MNIST, FASHION-MNIST, and CIFAR-10, we observe that the decision boundary moves closer to natural images over training. This phenomenon even remains intact in the late epochs of training, where the classifier already obtains low training and test error rates. On the other hand, adversarial training appears to have the potential to prevent this undesired convergence of the decision boundary.
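
The central quantity here is the margin of an input: the smallest perturbation that pushes it across the decision boundary. As a rough illustration (not the paper's exact measurement protocol), the sketch below upper-bounds this margin for a PyTorch classifier by binary-searching the step size along the loss-gradient sign direction; model, x and y are placeholder names.

```python
import torch
import torch.nn.functional as F

def margin_upper_bound(model, x, y, max_eps=1.0, steps=20):
    """Smallest eps (up to max_eps) along the gradient-sign direction
    that flips the prediction; an upper bound on the true margin."""
    model.eval()
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    direction = x.grad.sign()

    lo, hi = 0.0, max_eps
    with torch.no_grad():
        for _ in range(steps):
            eps = (lo + hi) / 2
            flipped = (model(x + eps * direction).argmax(dim=1) != y).any()
            if flipped:
                hi = eps   # boundary crossed, try smaller steps
            else:
                lo = eps   # still on the original side, try larger steps
    return hi  # equals max_eps if no flip was found along this direction
```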

The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks

Most state-of-the-art machine learning (ML) classification systems are vulnerable to adversarial perturbations. As a consequence, adversarial robustness poses a significant challenge for the deployment of ML-based systems in safety- and security-critical environments like autonomous driving, disease detection or unmanned aerial vehicles. In recent years we have seen an impressive number of publications presenting more and more new adversarial attacks. However, the attack research seems to be rather unstructured and new attacks often appear to be random selections from the unlimited set of possible adversarial attacks. With this publication, we present a structured analysis of the adversarial attack creation process. By detecting different building blocks of adversarial attacks, we outline the road to new sets of adversarial attacks. We call this the “attack generator”. In the pursuit of this objective, we summarize and extend existing adversarial perturbation taxonomies. The resulting taxonomy is then linked to the application context of computer vision systems for autonomous vehicles, i.e., semantic segmentation and object detection. Finally, in order to prove the usefulness of the attack generator, we investigate existing semantic segmentation attacks with respect to the detected defining components of adversarial attacks.
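
To make the building-block idea concrete, here is a minimal sketch (our own illustrative decomposition, not the taxonomy from the paper) in which an attack is assembled from interchangeable components: an objective, a projection encoding the threat model, and an optimization loop. Swapping any single component yields a different attack from the same generator; model, x and y are placeholders for a PyTorch classifier and its input batch.

```python
import torch
import torch.nn.functional as F

def untargeted_objective(model, x_adv, y):
    # push the prediction away from the true class
    return F.cross_entropy(model(x_adv), y)

def linf_projection(delta, eps=8 / 255):
    # L-infinity threat model: bound every pixel change by eps
    return delta.clamp(-eps, eps)

def assemble_attack(model, x, y, objective, project, step=2 / 255, iters=10):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = objective(model, x + delta, y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()   # ascent step on the objective
            delta.copy_(project(delta))         # enforce the chosen threat model
            delta.grad.zero_()
    # (clamping x + delta to the valid pixel range is omitted for brevity)
    return (x + delta).detach()

# e.g. x_adv = assemble_attack(model, x, y, untargeted_objective, linf_projection)
```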

Risk Assessment for Machine Learning Models

In this paper we propose a framework for assessing the risk associated with deploying a machine learning model in a specified environment. For that we carry over the risk definition from decision theory to machine learning. We develop and implement a method that allows us to define deployment scenarios, test the machine learning model under the conditions specified in each scenario, and estimate the damage associated with the output of the machine learning model under test. Using the likelihood of each scenario together with the estimated damage, we define key risk indicators of a machine learning model.
The definition of scenarios and weighting by their likelihood allows for standardized risk assessment in machine learning throughout multiple domains of application. In particular, in our framework, the robustness of a machine learning model to random input corruptions, distributional shifts caused by a changing environment, and adversarial perturbations can be assessed.
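
Put simply, a key risk indicator aggregates, over all defined scenarios, the likelihood of the scenario times the damage estimated for the model's behaviour in it. A toy numerical sketch, with scenario names, likelihoods and damage values invented purely for illustration:

```python
# Likelihood-weighted expected damage over deployment scenarios.
# All numbers below are made up for illustration.
scenarios = {
    # scenario: (likelihood, estimated damage of the model's errors there)
    "clear_weather":        (0.70, 0.02),
    "heavy_rain":           (0.20, 0.15),
    "adversarial_sticker":  (0.10, 0.60),
}

key_risk_indicator = sum(p * damage for p, damage in scenarios.values())
print(f"KRI = {key_risk_indicator:.3f}")   # 0.014 + 0.030 + 0.060 = 0.104
```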

Method and Generator for Generating Disturbed Input Data for a Neural Network

The invention relates to a method for generating disturbed input data for a neural network that analyzes sensor data, in particular digital images, of a driver assistance system. A first metric is defined which indicates how the magnitude of a change in sensor data is measured, and a second metric is defined which indicates where a disturbance of sensor data is directed. An optimization problem is generated from a combination of the first and second metric and solved by means of at least one solution algorithm, the solution indicating a target disturbance of the input data. Disturbed input data for the neural network is then generated from sensor data by means of the target disturbance.
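
The general shape of such a scheme can be sketched as follows (a rough PyTorch illustration of the idea, not the patented method itself): one metric measures the size of the change, a second metric describes where the disturbance should push the network, and a combined objective built from both is optimized to obtain the target disturbance. model, x and target_class are placeholders.

```python
import torch
import torch.nn.functional as F

def size_metric(delta):
    # first metric: how the magnitude of the change is measured (here: L2 norm)
    return delta.norm(p=2)

def direction_metric(model, x_adv, target_class):
    # second metric: where the disturbance is directed (here: toward a target class)
    return F.cross_entropy(model(x_adv), target_class)

def generate_disturbed_input(model, x, target_class, weight=0.1, lr=0.01, iters=100):
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(iters):
        optimizer.zero_grad()
        # combined optimization problem built from both metrics
        loss = direction_metric(model, x + delta, target_class) + weight * size_metric(delta)
        loss.backward()
        optimizer.step()
    return (x + delta).detach()   # disturbed input data for the neural network
```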

A Framework for Verification of Wasserstein Adversarial Robustness

Machine learning image classifiers are susceptible to adversarial and corruption perturbations. Adding imperceptible noise to images can lead to severe misclassifications of the machine learning model. Using Lp-norms for measuring the size of the noise fails to capture human similarity perception, which is why optimal transport based distance measures like the Wasserstein metric are increasingly being used in the field of adversarial robustness. Verifying the robustness of classifiers using the Wasserstein metric can be achieved by proving the absence of adversarial examples (certification) or proving their presence (attack). In this work we present a framework based on the work by Levine and Feizi, which allows us to transfer existing certification methods for convex polytopes or L1-balls to the Wasserstein threat model. The resulting certification can be complete or incomplete, depending on whether convex polytopes or L1-balls were chosen. Additionally, we present a new Wasserstein adversarial attack that is projected gradient descent based and which has a significantly reduced computational burden compared to existing attack approaches.
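
A toy example of the underlying distance (not the certification framework itself) shows why the Wasserstein metric is attractive here: moving image mass a short distance changes the Wasserstein distance only slightly, while an Lp norm treats the shifted image as entirely different. The snippet below uses scipy.stats.wasserstein_distance on tiny one-dimensional "images".

```python
import numpy as np
from scipy.stats import wasserstein_distance

a = np.array([0.0, 1.0, 0.0, 0.0])   # all mass on pixel 1
b = np.array([0.0, 0.0, 1.0, 0.0])   # same mass, shifted to pixel 2
c = np.array([0.0, 0.5, 0.5, 0.0])   # mass split over pixels 1 and 2

positions = np.arange(len(a))
print(np.abs(a - b).sum())                               # L1 distance: 2.0
print(wasserstein_distance(positions, positions, a, b))  # W1 distance: 1.0
print(np.abs(a - c).sum())                               # L1 distance: 1.0
print(wasserstein_distance(positions, positions, a, c))  # W1 distance: 0.5
```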

Browse all neurocat content

News and Articles

dSPACE Acquires Interests in Berlin Start-Up neurocat

dSPACE, an internationally leading provider of simulation and validation solutions for the development of networked, self-driving, and electric vehicles, has acquired interests in neurocat. Backing from a renowned industry leader will give us the necessary support in...

Wasserstein Verification Paper Published

We are proud to present our joint work with the University of Göttingen. Formal verification of neural networks is a challenging topic! In the past, verification methods were mainly focused on Lp-norms for measuring imperceptibility. Using Lp-norms for measuring the...

Neurocat accepted onto NVIDIA AI Inception program

Neurocat has been accepted into the NVIDIA AI Inception program! Being recognized by one of the giants in the AI space showcases the potential of our software offering aidkit and neurocat's work in the domain of AI robustness, explainability and functionality. Want to try...

Papercat Thursday 12/11/2020

Proud to present the research paper contributed by colleagues at neurocat GmbH & Volkswagen AG. Risk assessment for ML models is of paramount importance for understanding failure modes and improving models before deployment. Our paper defines the standardized...

1E9 Interview Tarek R. Besold

Making AI models robust against adversarial attacks, unboxing the black-box models to make their decision-making process more explainable, and also working with standardization agencies to bring quality in AI to the forefront. It's all in a day's work for us. Read the...

Most Promising German AI Startups

Neurocat was voted one of the 247 most promising German AI startups by the Initiative for Applied Artificial Intelligence. We are pleased to be part of the AI Startup Landscape 2020 and are working hard to shape the future of AI quality. The Startup Landscape and a...

Decision Boundary

State-of-the-art neural networks are vulnerable to adversarial examples. This major problem for the safe deployment of ML models arises when minor input modifications push a data point across the decision boundary of the model. The existence of adversarial examples is...

Times of Growth

Since the beginning of 2019, neurocat’s headcount has more than doubled. With all teams growing fast, we significantly expanded our office space. Luckily, we were able to move into a second floor at our company headquarters in Adlershof. This is just the beginning: If...

Techcode Pitch Won

The world's leading trade show for industrial technology and AI: Hannover Messe, Germany. With 6,500 exhibitors and over 250,000 visitors, it’s the place where digital innovations meet industry heavyweights. When Chancellor Angela Merkel opened Hannover Messe on 1...

Robustness Verification: Seminar Presentation at HU Berlin

Two of our cats are soon going to give a talk about the current status of duality-based adversarial robustness verification. The content will strongly rely on the recent publication “Training Verified Learners with Learned Verifiers” by K. Dvijotham et al. (DeepMind)...


neurocat Honoured With Vision Award 2021

Neurocat was awarded the Vision Award 2021 at MedienTage Munich. Vision Awards are given to young and established companies and entrepreneurs who set new trends with a new business idea. The jury for the award consisted of interdisciplinary experts and...
