
Physical Adversarial Examples

One of the biggest obstacles to the trustworthy use of deep neural networks in production (for instance, in autonomous vehicles) is adversarial attacks. An adversarial attack typically consists of a carefully crafted sensor input designed to distort the network's prediction. Recently, researchers have succeeded in deceiving popular vision networks with physical objects: MIT researchers, for example, used a 3D printer to create a turtle that was classified as a rifle from any viewing angle.
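To make this concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest ways to craft such an adversarial input. It assumes a pretrained PyTorch classifier called model, a normalized input batch x, integer labels y, and a perturbation budget epsilon; these names are illustrative, not part of any specific tooling mentioned here.

import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Craft an adversarial example by nudging each input value in the
    # direction that increases the classification loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the loss gradient, then clamp back to the
    # valid pixel range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

Even this tiny perturbation, invisible to a human, is often enough to flip the prediction of an undefended network.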

Autonomous cars have to be robust against such attacks. Think of a troll who makes a car misread every speed limit sign, or an attacker who makes it see empty street where pedestrians are standing. The cat would not like to be anywhere near such a technology. We address these problems by first simulating attacks on our customers' networks and then building strong defenses against them; a sketch of this two-step approach follows below.
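One common way to combine both steps is adversarial training: during each training step, the attack is simulated on the current model and the model is then updated on the perturbed inputs. The sketch below illustrates this idea under the same assumptions as above (a PyTorch model, a train_loader, an optimizer and an epsilon budget, all hypothetical names); it is an illustrative recipe, not a description of any particular product.

import torch.nn.functional as F

def adversarial_training_epoch(model, train_loader, optimizer, epsilon=0.03):
    # One epoch of adversarial training: perturb each batch with FGSM,
    # then update the model on the perturbed inputs.
    model.train()
    for x, y in train_loader:
        # Step 1: simulate the attack against the current model state.
        x_adv = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

        # Step 2: build up the defense by training on the adversarial examples.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()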

https://www.labsix.org/physical-objects-that-fool-neural-nets/
https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf

Privacy Notice

Between 09/08/2018 and 23/08/2019, third parties may have had unauthorized access to files uploaded to this website by applicants.

According to the log files, no one other than the person who discovered the issue accessed the files.

The access route was closed within one hour of our becoming aware of it.

We have chosen to issue this notification to ensure maximum transparency and to inform you as a precaution.

If you have any questions, please do not hesitate to contact us at jobs@neurocat.ai or +49 (0)30 34065918.