As mentioned in our last post, the GDPR is coming and companies have to get ready! A considerable number of people claim that the GDPR could harm the deployment of deep learning models in Europe, mainly because deep neural networks do not automatically offer insight into why they produce certain outputs. However, one should acknowledge that especially in recent years there has been exciting progress in the area of explainable AI, which can already remedy some of these concerns. A notable example is LIME (Local Interpretable Model-agnostic Explanations), a method that fits local, interpretable surrogate models to explain individual predictions of deep neural networks. This fits well with the GDPR requirement that companies "should find simple ways to tell the data subject about the rationale behind, or the criteria relied on in reaching the decision without necessarily always attempting a complex explanation of the algorithms used or disclosure of the full algorithm".

Overall, we are aware that a lot of work remains to be done, but one should not throw out the baby with the bath water! Deep learning models are complex and hard to analyze in full, but already today we have various ways of generating powerful explanations. The LIME paper by Ribeiro, Singh, and Guestrin (2016), ""Why Should I Trust You?": Explaining the Predictions of Any Classifier", can be found at: https://arxiv.org/abs/1602.04938
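To make the local-surrogate idea behind LIME a bit more concrete, here is a minimal sketch in Python (not the authors' reference implementation): perturb the instance to be explained, query the black-box model on the perturbed copies, weight those copies by their proximity to the original instance, and fit a simple weighted linear model whose coefficients act as the explanation. The callable `black_box_predict_proba`, the Gaussian perturbation scheme, and the ridge surrogate are simplifying assumptions made for illustration; in practice one would rather use the open-source `lime` package released by the authors, which adds proper feature selection and support for text and image data.

```python
# Minimal sketch of the LIME idea (illustrative only, not the reference implementation):
# explain one prediction of an arbitrary "black box" classifier by fitting a
# weighted linear surrogate model on perturbed copies of the instance.
# `black_box_predict_proba` is a placeholder for your own model's predict function,
# expected to return class probabilities of shape (n_samples, n_classes).

import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(x, black_box_predict_proba, num_samples=5000,
                     kernel_width=0.75, target_class=1):
    """Return per-feature weights of a local linear surrogate around x (1-D array)."""
    rng = np.random.default_rng(0)

    # 1. Perturb the instance: sample points in the neighbourhood of x.
    noise = rng.normal(scale=x.std() + 1e-6, size=(num_samples, x.shape[0]))
    samples = x + noise

    # 2. Query the black-box model on the perturbed points.
    probs = black_box_predict_proba(samples)[:, target_class]

    # 3. Weight samples by proximity to x (exponential kernel on L2 distance).
    distances = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))

    # 4. Fit an interpretable (here: ridge-regularised linear) surrogate model.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, probs, sample_weight=weights)

    # The coefficients are the local explanation: how each feature pushes the
    # black-box prediction up or down in the vicinity of x.
    return surrogate.coef_
```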
