
Decision Boundary

State-of-the-art neural networks are vulnerable to adversarial examples. This major obstacle to the safe deployment of ML models arises when minor input modifications push a data point across the model's decision boundary. The existence of adversarial examples shows that the minimal distance of natural images to the decision boundary is small. Increasing this distance would directly increase model robustness. But how? Despite a large body of adversarial robustness research, we know surprisingly little about the decision boundary of a neural network and how it evolves during training. We therefore conducted experiments on MNIST, Fashion-MNIST, and CIFAR-10 and made empirical observations that challenge common beliefs about neural network training. For example, it does not seem to be true that training moves the decision boundary away from the training data in order to facilitate generalization; at least, this does not hold for all directions in the input space. We hope these results will encourage more research on the decision boundary and contribute to new ways of increasing DNN robustness.

http://arxiv.org/abs/2002.01810
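
The abstract hinges on the distance from an input to the decision boundary, i.e. how far the input must move before the predicted class flips. The sketch below is not code from the paper; it is a minimal illustration of one way to estimate that distance along a single direction by bisecting on the point where the prediction changes. The model, input, direction, and parameter choices are placeholder assumptions.

# Minimal sketch (illustrative, not from the paper): estimate the distance
# from an input to the decision boundary along a fixed unit-norm direction
# by bisection on the point where the predicted class changes.
import torch
import torch.nn as nn

def boundary_distance(model, x, direction, max_dist=10.0, steps=30):
    """Distance along `direction` until the predicted class flips (capped at max_dist)."""
    model.eval()
    with torch.no_grad():
        base_pred = model(x).argmax(dim=1)
        lo, hi = 0.0, max_dist
        # If the class never changes within max_dist, report the cap.
        if model(x + hi * direction).argmax(dim=1).eq(base_pred).all():
            return max_dist
        for _ in range(steps):          # bisection on the crossing point
            mid = (lo + hi) / 2
            pred = model(x + mid * direction).argmax(dim=1)
            if pred.eq(base_pred).all():
                lo = mid                # still on the original side
            else:
                hi = mid                # already crossed the boundary
        return hi

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy stand-ins: a random linear classifier and a random 28x28 "image".
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)
    direction = torch.randn_like(x)
    direction /= direction.norm()       # unit-norm direction in input space
    print("distance to boundary:", boundary_distance(model, x, direction))

Repeating this measurement over many directions (e.g. random directions versus directions toward other training points) is one way to probe how the boundary sits around the data as training progresses.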
