State-of-the-art neural networks are vulnerable to adversarial examples. This major obstacle to the safe deployment of ML models arises when minor input modifications push a data point across the model's decision boundary. The existence of adversarial examples is proof that the minimal distance of natural images to the decision boundary is small. Increasing this distance would directly increase model robustness. But how? Although there is a large body of adversarial robustness research, we know surprisingly little about the decision boundary of a neural network and how it develops over the course of training. We therefore conducted experiments on MNIST, Fashion-MNIST, and CIFAR-10 and made empirical observations that challenge common beliefs about neural network training. For example, it does not seem to be true that training moves the decision boundary away from the training data in order to facilitate generalization, at least not in all directions of the input space. We hope these results will encourage more research on the decision boundary and contribute to new ways of increasing DNN robustness.
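To make the notion of "distance to the decision boundary" concrete, here is a minimal sketch of how such a distance can be estimated for a single input: walk from the input along a chosen direction until the model's predicted class flips, and report how far you had to go. The names (`model`, `x`, `direction`) and the simple line search are illustrative assumptions for this post, not the exact measurement procedure used in the experiments.

```python
# Minimal sketch: estimate the distance from an input to the decision
# boundary along a given direction, assuming a trained PyTorch classifier.
# `model`, `x`, and `direction` are placeholder names for illustration.
import torch

def boundary_distance(model, x, direction, max_dist=10.0, steps=1000):
    """Walk from x along `direction` until the predicted class changes;
    return the L2 distance travelled, or None if no flip occurs."""
    model.eval()
    with torch.no_grad():
        direction = direction / direction.norm()            # unit-length direction
        original_class = model(x.unsqueeze(0)).argmax(dim=1)
        for i in range(1, steps + 1):
            dist = max_dist * i / steps                      # current step along the ray
            perturbed = x + dist * direction
            if model(perturbed.unsqueeze(0)).argmax(dim=1) != original_class:
                return dist                                  # boundary crossed here
    return None                                              # no crossing within max_dist
```

Comparing such distances along random versus adversarial directions, and tracking them across training epochs, is one way to probe whether training actually pushes the boundary away from the data, which is exactly the kind of question the experiments above address.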
