About Us
neurocat was founded in 2017 with a unifying desire to learn, grow, and provide exemplary services. We focus on enabling companies to build the AI systems of tomorrow while keeping people safe and keeping values and ethics intact. We are joined in our mission by a multidisciplinary team of research, industry, and application experts who share a common desire to bring robust AI systems to the public domain.


Our Vision
ML enables industry to rapidly improve applications for the benefit of end users, but it also inherits the biases contained in its training data, leading to safety, security, and ethical concerns. We want to enable ML pioneers to develop safe, secure, robust, and ethical AI systems that avoid such mistakes and earn the trust needed to experience the future today.
Our Mission
We are on a mission to protect users from the biases that exist within ML systems. We consult ML pioneers on setting up a safety-focused test strategy and provide automated evaluations and improvements with our cloud platform “aidkit”.
News

Decision Boundary
State-of-the-art neural networks are vulnerable to adversarial examples. This major problem for the safe deployment of ML models arises when minor input modifications push a data point across the decision boundary of the model. The existence of adversarial examples is...
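The effect described above can be illustrated with a toy linear model. This is a minimal sketch with hypothetical weights and inputs, not taken from any real system: a perturbation of just 0.1 per coordinate is enough to push the input across the decision boundary and flip the prediction.

```python
# Toy 2-D linear classifier: predict class 1 if w·x + b > 0.
# All values here are illustrative, not from a real model.
w = (1.0, -1.0)
b = 0.0

def predict(x):
    score = w[0] * x[0] + w[1] * x[1] + b
    return int(score > 0)

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

# A clean input, classified as class 1 (score = 0.1, just inside the boundary).
x = (0.5, 0.4)

# FGSM-style step: nudge each coordinate by eps in the direction that
# increases the loss for class 1, i.e. along -sign(w).
eps = 0.1
x_adv = (x[0] - eps * sign(w[0]), x[1] - eps * sign(w[1]))

print(predict(x))      # 1: clean input, class 1
print(predict(x_adv))  # 0: crossed the decision boundary
```

The perturbation has a max-norm of only 0.1, yet the model's output changes completely; real attacks on deep networks exploit the same mechanism in much higher dimensions.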

Times of Growth
Since the beginning of 2019, neurocat’s headcount has more than doubled. With all teams growing fast, we significantly expanded our office space. Luckily, we were able to move into a second floor at our company headquarters in Adlershof. This is just the beginning: If...

Techcode Pitch Won
The world's leading trade show for industrial technology and AI: Hannover Messe, Germany. With 6,500 exhibitors and over 250,000 visitors, it’s the place where digital innovations meet industry heavyweights. When Chancellor Angela Merkel opened Hannover Messe on 1...

Robustness Verification: Seminar Presentation at HU Berlin
Two of our cats are soon going to give a talk about the current status of duality-based adversarial robustness verification. The content will strongly rely on the recent publication “Training Verified Learners with Learned Verifiers” by K. Dvijotham et al. (DeepMind)....

Interview Adlershof-Journal
What is growth? The Adlershof Journal examined beliefs about growth and interviewed us about AI security and our personal growth story. After a conversation with editor Peter Trechow, our path to growth, our successes, and our ambitions were printed in the September/October issue...

Website launched
After a year of hard and exciting work, the time has come for a first small website! Today we launched the first version of the site and plan to fill it with more and more content in the next few weeks. Our loyal fans will know what we mean when our website was...

Physical Adversarial Examples
One of the biggest obstacles to the confidence-inspiring use of deep neural networks in production (for instance, in autonomous vehicles) is adversarial attacks. An adversarial attack usually consists of a carefully crafted sensor input that distorts the prediction of...

Adversarial Examples
Deep neural networks have given rise to great hopes and expectations for new applications in safety-critical areas such as automotive, healthcare, and finance. However, for society to gain confidence in the technology, we must minimize the often unintuitive behavior...

DIN SPEC 92001
AI and ML experts from all over Germany came together to kick off work on the DIN SPEC 92001 'Artificial Intelligence - Quality requirements and life cycle management for AI modules'. The huge advancements and rapid development in the field of AI have shown...

LIME for GDPR
As mentioned in our last post, the GDPR is coming and companies have to get ready! There is a considerable number of people claiming that the GDPR could potentially harm the deployment of deep learning models in Europe. The main reason given in this context is the fact...
Most Promising German AI Startups
neurocat was voted one of the 247 most promising German AI startups by the Initiative for Applied Artificial Intelligence. We are pleased to be part of the AI Startup Landscape 2020 and are working hard to shape the future of AI quality. The Startup Landscape and a...

Ready For a Change in Your Career?
Let’s get started!
Find Us At
Germany’s most modern and advanced science and technology park!
neurocat GmbH
Rudower Chaussee 29
12489 Berlin
Germany



Our Partners




