
Adversarial Examples

Deep neural networks have raised great hopes and expectations for new applications in safety-critical areas such as automotive, healthcare and finance. However, for society to gain confidence in the technology, we must minimize the often unintuitive behavior of deep neural networks. Can you trust a machine that mistakes pedestrians for an empty road, dust for cancer, or an economic crisis for a bullish market? We can manipulate images, sound waves and even 3D-printed objects so that your well-trained model with allegedly superhuman performance fails, while people notice no difference from the original data. The future doesn’t look so bright any more, does it? Yes, it does – with neurocat. Meow!
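To make the idea concrete, here is a minimal sketch of one classic way such imperceptible manipulations can be constructed: the fast gradient sign method (FGSM) of Goodfellow et al., shown in PyTorch. The `model`, `image`, `label` and `epsilon` names are placeholders for any trained classifier, a correctly classified input and a small perturbation budget; this is an illustration of the general technique, not neurocat's own tooling.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (FGSM sketch)."""
    image = image.clone().detach().requires_grad_(True)

    # Compute the classification loss for the original, correct label.
    loss = F.cross_entropy(model(image), label)
    loss.backward()

    # Step each pixel slightly in the direction that increases the loss,
    # then clamp back to the valid pixel range [0, 1].
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

With a small `epsilon`, the perturbed image is visually indistinguishable from the original, yet it can flip the model's prediction.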

In the first step, we let your model fail once; in the second, we make sure it succeeds ever after.

Privacy Notice

There may have been unauthorized access by third parties to files uploaded to this website by applicants between 09/08/2018 and 23/08/2019.

According to the log files, no one other than the person who discovered the issue accessed the files.

The access was closed within one hour of our becoming aware of the issue.

We are issuing this notification to ensure maximum transparency and to inform you as a precaution.

If you have any questions, please do not hesitate to contact us at: jobs@neurocat.ai and +49 (0)30 34065918.