LIME for GDPR

As mentioned in our last post, the GDPR is coming and companies have to get ready! A considerable number of people claim that the GDPR could harm the deployment of deep learning models in Europe. The main reason given is that deep neural networks do not, by themselves, offer insight into why they produce certain outputs. However, one should acknowledge that, especially in recent years, there has been exciting progress in the area of explainable AI that can already remedy some of these concerns. A notable example is LIME (Local Interpretable Model-agnostic Explanations), a method for fitting local, interpretable models that can explain single predictions of deep neural networks. This method thus fits perfectly with the GDPR requirement that companies "should find simple ways to tell the data subject about the rationale behind, or the criteria relied on in reaching the decision without necessarily always attempting a complex explanation of the algorithms used or disclosure of the full algorithm". Overall, we are aware that a lot of work still has to be done, but one should not throw out the baby with the bathwater! Deep learning models are complex and hard to fully analyze, but we already have various ways of generating powerful explanations today. The great LIME paper by Ribeiro, Singh, and Guestrin (2016) can be found here:

LIME Paper
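
To make this more concrete, here is a minimal sketch of how explaining a single prediction with LIME can look in practice, using the authors' open-source lime package (pip install lime). The small scikit-learn network and the iris data are stand-ins chosen for brevity, not part of the original post; the same pattern applies to any black-box classifier that exposes class probabilities.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a small neural network as the black-box model to be explained.
data = load_iris()
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(data.data, data.target)

# The explainer uses the training data to learn how to perturb instances.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction: LIME perturbs the instance, queries the model,
# and fits a local linear surrogate whose weights serve as the explanation.
instance = data.data[0]
explanation = explainer.explain_instance(
    instance, model.predict_proba, num_features=4
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The printed feature weights are exactly the kind of simple, per-decision rationale the GDPR passage quoted above asks for: they say which inputs pushed this one prediction up or down, without disclosing the full model.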
