As mentioned in our last post, the GDPR is coming and companies have to get ready! A considerable number of people claim that the GDPR could harm the deployment of deep learning models in Europe, mainly because deep neural networks do not automatically offer insights into why they produce certain outputs. However, one should acknowledge that, especially in recent years, there has been exciting progress in the area of explainable AI, which can already remedy some concerns regarding the GDPR. A notable example is LIME (Local Interpretable Model-agnostic Explanations), a method that fits local, interpretable models to explain single predictions of deep neural networks. This method thus fits perfectly with the GDPR requirement that companies "should find simple ways to tell the data subject about the rationale behind, or the criteria relied on in reaching the decision without necessarily always attempting a complex explanation of the algorithms used or disclosure of the full algorithm". Overall, we are aware that a lot of work still has to be done, but one should not throw out the baby with the bath water! Deep learning is complex and hard to fully analyze, but already today we have various ways of generating powerful explanations. The great LIME paper by Ribeiro, Singh, and Guestrin (2016) can be found at: https://arxiv.org/abs/1602.04938
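To make this concrete, here is a minimal sketch of how LIME can be applied in practice, using the open-source `lime` Python package. The dataset and the random-forest classifier are only illustrative stand-ins for a production deep neural network; the explanation step works the same way for any model that exposes prediction probabilities.

```python
# Minimal sketch: explaining one individual prediction with LIME.
# The model and dataset below are illustrative placeholders; in a real
# deployment the black box would typically be a deep neural network.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# Black-box model standing in for a deep neural network.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME only needs the training data distribution and a predict function.
explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain a single decision -- the case-by-case rationale the GDPR
# expects companies to be able to provide to the data subject.
explanation = explainer.explain_instance(
    X[0], model.predict_proba, num_features=5
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output lists the handful of features that contributed most to this particular decision, which is exactly the kind of simple, per-decision rationale the GDPR quote above calls for, without disclosing the full algorithm.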
