
Physical Adversarial Examples

One of the biggest obstacles to the trustworthy use of deep neural networks in production (for instance, in autonomous vehicles) is adversarial attacks. An adversarial attack typically consists of a carefully crafted sensor input that distorts the network’s prediction. Recently, researchers have succeeded in fooling popular vision networks with physical objects. For example, MIT researchers used a 3D printer to create a turtle that is classified as a rifle from virtually any viewing angle.
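
To make the idea of a "carefully crafted sensor input" concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest ways to construct such an input. The network, image, and label below are placeholders rather than the systems discussed above; any differentiable PyTorch image classifier could be attacked the same way.

```python
# Minimal FGSM sketch: perturb an input just enough to change the prediction.
# The model here is an untrained ResNet-18 stand-in; in practice you would
# load the actual target network and its trained weights.
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a copy of `image` nudged one signed-gradient step
    in the direction that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()   # keep pixels in the valid range

if __name__ == "__main__":
    model = models.resnet18(weights=None).eval()   # placeholder network
    x = torch.rand(1, 3, 224, 224)                 # placeholder "camera frame"
    y = torch.tensor([0])                          # placeholder true label
    x_adv = fgsm_attack(model, x, y)
    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The perturbation produced this way is typically imperceptible to a human observer, which is precisely what makes such attacks dangerous for perception systems.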

Autonomous cars have to be robust against such attacks. Imagine a troll who alters speed-limit signs, or an attacker who makes the network mistake a pedestrian for empty road. Nobody would want to be anywhere near such a technology. We address these problems by first simulating attacks on our customers’ networks and then building strong defenses against them.
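
The two-step idea of simulating attacks and then hardening the network is commonly realized as adversarial training, in which each training batch is augmented with freshly generated adversarial copies. The sketch below shows one standard variant under that assumption; it is not a description of any specific production pipeline, and the model, optimizer, and data names are placeholders.

```python
# Adversarial training sketch: simulate the attack on the current model,
# then update the weights on clean and perturbed inputs together.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    # Step 1: simulate the attack (one FGSM perturbation of the batch).
    images = images.clone().detach().requires_grad_(True)
    attack_loss = F.cross_entropy(model(images), labels)
    grad, = torch.autograd.grad(attack_loss, images)
    images_adv = (images + epsilon * grad.sign()).clamp(0, 1).detach()

    # Step 2: build the defense by training on both versions of the batch.
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(images.detach()), labels)
                  + F.cross_entropy(model(images_adv), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on both the clean and the perturbed batch keeps accuracy on normal inputs while teaching the network to resist the perturbation it has just been shown.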

https://www.labsix.org/physical-objects-that-fool-neural-nets/
https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf

AI jobs in Berlin

We’re looking for people with a passion for AI who want to thrive in a great team.
Check our openings and get in touch now.
