Featured News

neurocat to Participate in “News-Polygraph” Research Consortium to Counter Disinformation

We recently announced on social media that neurocat will participate as a member of the research consortium "News-Polygraph" to counter disinformation. News-Polygraph is funded by the German Federal Ministry of Education and Research as part of its Regional Entrepreneurial Alliances for Innovation (RUBIN) program. The consortium brings together 10...

neurocat and the Rise of AI

On the 8th of June 2022, Florens Greßner (CEO of neurocat) had the opportunity to speak at the Rise of AI Conference 2022, one of the most influential platforms in the AI industry. He talked about the upcoming AI Act, which will introduce strict obligations before AI systems can be put on the market, and how neurocat delivers a fitting solution to that problem....

Featured Articles

Condensed best practices from “Security of AI-Systems: Fundamentals – Adversarial Deep Learning”

The security and safety of AI is something we all inherently recognize as important. Yet many of us don't have a background in this topic and have limited time to learn new things or keep up with the latest developments. neurocat is here to help. We recently contributed* to a new report by the German Federal Office for Information Security (BSI), “Security of AI-Systems: Fundamentals – Adversarial Deep...

Hands-on Guide to Complying with the ECJ Ruling

In this article we propose a strategy for satisfying the legal constraints AI faces right now. The ruling of the European Court of Justice (ECJ) on the 21st of July 2022 is a perfect example that emphasizes the need for transparency and risk evaluation methods in order to use AI to its full potential. Data is one of the most valuable assets in the age of digitization, and almost...

Featured Research

Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study

Despite achieving remarkable performance on many image classification tasks, state-of-the-art machine learning (ML) classifiers remain vulnerable to small input perturbations. In particular, the existence of adversarial examples raises concerns about the deployment of ML models in safety- and security-critical environments, like autonomous driving and disease detection. Over the last few years, numerous defense methods have been published with the goal of improving adversarial as well as corruption robustness. However, the proposed measures succeeded only to a very limited extent. This limited progress is partly due to the lack of understanding of the decision boundary and decision regions of deep neural networks. Therefore, we study the minimum distance of data points to the decision boundary and how this margin evolves over the training of a deep neural network. By conducting experiments on MNIST, FASHION-MNIST, and CIFAR-10, we observe that the decision boundary moves closer to natural images over training. This phenomenon even remains intact in the late epochs of training, where the classifier already obtains low training and test error rates. On the other hand, adversarial training appears to have the potential to prevent this undesired convergence of the decision boundary.
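To make the margin measurement concrete, here is a minimal sketch (not the paper's code) of one way to upper-bound a sample's distance to the decision boundary: follow an adversarial direction found via the loss gradient, then bisect to the point where the predicted class flips. The model, the single-sample batch shape, and the L2 geometry are all our own assumptions.

```python
import torch
import torch.nn.functional as F

def estimate_margin(model, x, label, eps_max=10.0, steps=30):
    """Upper-bound the L2 distance from `x` (shape [1, ...]) to the
    decision boundary along a gradient-based adversarial direction."""
    model.eval()
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), label).backward()
    direction = x.grad / x.grad.norm()   # unit-norm ascent direction

    def misclassified(eps):
        with torch.no_grad():
            return (model(x + eps * direction).argmax(1) != label).item()

    if not misclassified(eps_max):       # boundary not crossed on this ray
        return float("inf")
    lo, hi = 0.0, eps_max
    for _ in range(steps):               # bisect to the crossing point
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if misclassified(mid) else (mid, hi)
    return hi
```

Tracking such an estimate epoch by epoch would reproduce the kind of margin-versus-training curve the abstract describes.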

The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks

Most state-of-the-art machine learning (ML) classification systems are vulnerable to adversarial perturbations. As a consequence, adversarial robustness poses a significant challenge for the deployment of ML-based systems in safety- and security-critical environments like autonomous driving, disease detection or unmanned aerial vehicles. In recent years we have seen an impressive number of publications presenting more and more new adversarial attacks. However, the attack research seems to be rather unstructured, and new attacks often appear to be random selections from the unlimited set of possible adversarial attacks. With this publication, we present a structured analysis of the adversarial attack creation process. By identifying the different building blocks of adversarial attacks, we outline the road to new sets of adversarial attacks. We call this the “attack generator”. In the pursuit of this objective, we summarize and extend existing adversarial perturbation taxonomies. The resulting taxonomy is then linked to the application context of computer vision systems for autonomous vehicles, i.e. semantic segmentation and object detection. Finally, in order to prove the usefulness of the attack generator, we investigate existing semantic segmentation attacks with respect to the detected defining components of adversarial attacks.
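As a rough illustration of the "attack generator" idea, assembled entirely from our own assumptions rather than the paper's actual components: once the building blocks of an attack (objective, threat-model projection, step rule) are made explicit, new attacks are just new combinations of blocks.

```python
from dataclasses import dataclass
from typing import Callable
import torch
import torch.nn.functional as F

@dataclass
class AttackSpec:
    loss_fn: Callable      # objective to maximize, e.g. cross-entropy
    project: Callable      # projection enforcing the threat model
    step_size: float
    num_steps: int

def generate_attack(spec: AttackSpec):
    """Instantiate an iterative attack from its building blocks."""
    def attack(model, x, y):
        x_adv = x.clone().detach()
        for _ in range(spec.num_steps):
            x_adv.requires_grad_(True)
            grad, = torch.autograd.grad(spec.loss_fn(model(x_adv), y), x_adv)
            with torch.no_grad():
                x_adv = spec.project(x_adv + spec.step_size * grad.sign(), x)
        return x_adv.detach()
    return attack

# The familiar L-infinity PGD attack is one point in this combinatorial space:
linf_pgd = generate_attack(AttackSpec(
    loss_fn=F.cross_entropy,
    project=lambda xa, x: (x + (xa - x).clamp(-8/255, 8/255)).clamp(0, 1),
    step_size=2/255,
    num_steps=10,
))
```

Swapping the projection or the objective (e.g. a per-pixel segmentation loss) yields members of other attack families without touching the loop.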

Risk Assessment for Machine Learning Models

In this paper we propose a framework for assessing the risk associated with deploying a machine learning model in a specified environment. For that we carry over the risk definition from decision theory to machine learning. We develop and implement a method that allows us to define deployment scenarios, test the machine learning model under the conditions specified in each scenario, and estimate the damage associated with the output of the machine learning model under test. Using the likelihood of each scenario together with the estimated damage, we define key risk indicators of a machine learning model.
The definition of scenarios and weighting by their likelihood allows for standardized risk assessment in machine learning across multiple domains of application. In particular, in our framework, the robustness of a machine learning model to random input corruptions, distributional shifts caused by a changing environment, and adversarial perturbations can be assessed.
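Reading the definition back as pseudocode: the key risk indicator is the likelihood-weighted sum of per-scenario damage estimates. The scenario names, likelihoods, and damage values below are invented placeholders, not results from the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    name: str
    likelihood: float              # probability of encountering the scenario
    run_test: Callable[[], float]  # tests the model, returns estimated damage

def key_risk_indicator(scenarios: list[Scenario]) -> float:
    """Expected damage of the model under test across deployment scenarios."""
    return sum(s.likelihood * s.run_test() for s in scenarios)

kri = key_risk_indicator([
    Scenario("rain_corruption", likelihood=0.20, run_test=lambda: 0.7),
    Scenario("distribution_shift", likelihood=0.75, run_test=lambda: 0.1),
    Scenario("adversarial_perturbation", likelihood=0.05, run_test=lambda: 3.0),
])
print(f"KRI: {kri:.3f}")  # 0.20*0.7 + 0.75*0.1 + 0.05*3.0 = 0.365
```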

Method and Generator for Generating Disturbed Input Data for a Neural Network

The invention relates to a method for generating disturbed input data for a neural network that analyzes sensor data, in particular digital images, of a driver assistance system. A first metric is defined which indicates how the magnitude of a change in sensor data is measured; a second metric is defined which indicates where a disturbance of sensor data is directed; an optimization problem is generated from a combination of the first and second metrics; the optimization problem is solved by means of at least one solution algorithm, the solution indicating a target disturbance of the input data; and disturbed input data for the neural network is generated from sensor data by means of the target disturbance.
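In very loose terms, and with every concrete choice below (L2 norm as the magnitude metric, a targeted cross-entropy as the direction metric, Adam as the solution algorithm) being our own assumption rather than anything claimed in the patent, the described pipeline resembles the following sketch:

```python
import torch
import torch.nn.functional as F

def target_disturbance(model, x, target_class, lam=0.1, steps=100):
    """Solve an optimization problem combining a magnitude metric and a
    direction metric; the solution is the target disturbance."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=0.01)
    target = torch.tensor([target_class])
    for _ in range(steps):
        opt.zero_grad()
        magnitude = delta.norm(p=2)                            # first metric
        direction = F.cross_entropy(model(x + delta), target)  # second metric
        (direction + lam * magnitude).backward()               # combined problem
        opt.step()
    return delta.detach()  # disturbed input for the network: x + delta
```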

A Framework for Verification of Wasserstein Adversarial Robustness

Machine learning image classifiers are susceptible to adversarial and corruption perturbations. Adding imperceptible noise to images can lead to severe misclassifications by the machine learning model. Using Lp-norms to measure the size of the noise fails to capture human similarity perception, which is why optimal-transport-based distance measures like the Wasserstein metric are increasingly being used in the field of adversarial robustness. Verifying the robustness of classifiers under the Wasserstein metric can be achieved by proving the absence of adversarial examples (certification) or proving their presence (attack). In this work we present a framework based on the work by Levine and Feizi, which allows us to transfer existing certification methods for convex polytopes or L1-balls to the Wasserstein threat model. The resulting certification can be complete or incomplete, depending on whether convex polytopes or L1-balls are chosen. Additionally, we present a new Wasserstein adversarial attack that is based on projected gradient descent and has a significantly reduced computational burden compared to existing attack approaches.
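The projected-gradient-descent skeleton underlying such attacks is easy to sketch. The version below uses the exact Euclidean projection onto an L1 ball (the sorting-based algorithm of Duchi et al., 2008), since the abstract's framework transfers L1-ball methods to the Wasserstein threat model; the actual Wasserstein attack would replace this projection with a much more expensive projection onto a Wasserstein ball, which we do not attempt here. All parameter values are placeholders.

```python
import torch
import torch.nn.functional as F

def project_onto_l1_ball(delta, radius):
    """Euclidean projection of a perturbation onto the L1 ball of the
    given radius (Duchi et al., 2008)."""
    v = delta.flatten()
    if v.abs().sum() <= radius:
        return delta
    u, _ = v.abs().sort(descending=True)
    css = u.cumsum(0)
    k = torch.arange(1, len(u) + 1, device=v.device)
    rho = (u * k > css - radius).nonzero().max()
    theta = (css[rho] - radius) / (rho + 1)
    return (v.sign() * (v.abs() - theta).clamp(min=0)).view_as(delta)

def pgd_attack(model, x, y, radius=10.0, step=1.0, steps=20):
    """PGD: ascend the loss, then project back into the threat model's ball."""
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x + delta), y), delta)
        with torch.no_grad():
            delta = project_onto_l1_ball(delta + step * grad, radius)
    return (x + delta).detach()
```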

Browse all neurocat content

News and Articles

Hands-on Guide to Complying with the ECJ Ruling

In this article we propose a strategy for satisfying the legal constraints AI faces right now. The ruling of the European Court of Justice (ECJ) on the 21st of July 2022 is a perfect example that emphasizes the need for transparency and risk evaluation...

Germany takes enormous leap towards safe autonomous driving

KI Absicherung positions Germany to compete in the race for key new technologies and prepares the German automotive industry for market leadership in autonomous driving. We at neurocat were proud to be part of the "KI Absicherung" (roughly, Safeguarding AI)...

neurocat and the Rise of AI

On the 8th of June 2022, Florens Greßner (CEO of neurocat) had the opportunity to speak at the Rise of AI Conference 2022, one of the most influential platforms in the AI industry. He talked about the upcoming AI Act, which will...

neurocat wins 1st place award at Tech.AD 2022

neurocat won 1st place in the Software and Compute category at the 2022 edition of the Tech.AD Europe Award. The Tech.AD Europe Award exclusively honours extraordinary projects in the automotive industry and celebrates exceptional solutions and innovations. “The winners...

neurocat donates €3,600 to support Ukrainian refugees

On the 24th of February, Putin announced the start of his contemptible and unjustified military operation against Ukraine. We at neurocat believe this is also an attack on peace itself and on the fundamental rights that we share with our Ukrainian brothers and...

neurocat CEO Florens Greßner in IHK Berlin Magazine

Our CEO Florens Greßner was featured in IHK Berlin magazine’s February 2022 edition, talking about his inspiration for neurocat, his love for abstract mathematics, and his advice for young entrepreneurs. 📰 Check out the magazine below (#35, only in German) 👇🏻...

neurocat Honoured With Vision Award 2021

neurocat was awarded the Vision Award 2021 at MedienTage Munich. Vision Awards are given to young and established companies and entrepreneurs who set new trends with a new business idea. The jury for the award consisted of interdisciplinary experts and...
