
Hands-on Guide to Comply with the ECJ Ruling

Post Date: 25 August 2022

In this article we propose a strategy for satisfying the legal constraints AI is facing right now. The ruling of the European Court of Justice (ECJ) of 21 June 2022 is a perfect example that emphasizes the need for transparency and risk evaluation methods in order to use AI to its full potential.

Data is one of the most valuable assets in the age of digitization, and it is almost impossible to keep it to oneself. Nowadays, when you want to find a person, you don’t follow their footprints, you follow their data tracks. For example, if you board a plane, passenger and travel data are stored and evaluated. According to the Passenger Name Record (PNR) Directive, the aim of collecting this data is to detect or prevent terrorist offences and serious crime.

The ECJ has now decided to narrow the application of the PNR Directive in order to better protect the right to privacy and to avoid discrimination.

Among other things, these new rules limit the automated processing and querying of the PNR database. The Advocate General is concerned that an AI system’s ability to adapt its evaluation criteria might lead it to select human attributes that result in discrimination.

In addition, any automated decision that identifies a person as a potential threat must be individually checked for legality, as required by the provisions of the PNR Directive. However, due to the lack of transparency of the results, it is often impossible to determine why an artificial intelligence algorithm has decided to flag a person, and thus it cannot be guaranteed that such a result is free of discrimination.

Risk is often the hurdle that makes it difficult to deploy AI, because many companies are inexperienced in mitigating risk or in understanding the decision making of their AI systems. Risk does not arise only when a person can be physically harmed; as this example shows, human dignity needs protection as well. Every human being has the right to equal and fair treatment, regardless of gender, age, origin, ethnicity, religious affiliation, etc., and this right must also be protected and preserved when using AI.

The European Court of Justice is not alone with its concerns: the upcoming AI Act will also introduce strict obligations before such systems can be put on the market. High-risk AI systems will require systematic risk assessment and mitigation and must minimise risks and discriminatory outcomes. This use case demonstrates the necessity of these upcoming obligations in real life: besides an undefined quality threshold for the false positive rate, certain input variables might lead to discriminatory outcomes.

At neurocat, our goal is to make artificial intelligence more robust and thus unlock the added value of this disruptive technology. For over 5 years, we have been focusing on enabling AI pioneers to enter critical use cases. We do this by developing systematic test strategies that enable the deployment of our clients’ AI products. For this use case we propose the following automated test strategy to avoid discrimination in any AI system:

In 2017, Wachter et al. analyzed methods that provide post-hoc explanations of AI systems. At the time, their concern was automated decision making in relation to the GDPR. However, for the use case at hand we can still use their insights to automatically identify discrimination with so-called counterfactuals.

But what are counterfactuals? Let us assume an AI has concluded from the training data that people coming from Country A are more likely to pose a risk to society. The people who implemented the system are not yet aware of this. So, if Mrs. Cat, who is a citizen of Country A, books a flight, she will be wrongfully flagged by the algorithm. To inspect whether this decision is based on discriminatory rules, counterfactuals can be used. Counterfacts are alternative data points for which the same AI module would not have classified Mrs. Cat as a danger. In this “explanatory data set” the data matches the original input about Mrs. Cat, except that she is now coming from Country B. By comparing the original input to the counterfact, it is straightforward to conclude that the algorithm produces discriminatory results based solely on the fact that a person comes from a specific country.
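
To make the idea concrete, here is a minimal sketch in Python. The feature names, the passenger records, and the rule-based toy model are purely illustrative assumptions, not the actual PNR system; the point is only that comparing an input with a counterfact isolates the attribute that drove the decision.

```python
# Toy stand-in for a PNR risk classifier (illustrative only):
# it flags every passenger coming from Country "A".
def toy_model(passenger: dict) -> bool:
    return passenger["country"] == "A"

# Original record about Mrs. Cat and a counterfact that differs in one attribute.
original    = {"age": 34, "country": "A", "route": "BER-LIS", "booking_class": "economy"}
counterfact = {"age": 34, "country": "B", "route": "BER-LIS", "booking_class": "economy"}

print(toy_model(original))     # True  -> Mrs. Cat is flagged
print(toy_model(counterfact))  # False -> the identical passenger from Country B is not

# The two records differ only in "country", so the flipped decision can be
# attributed to that single attribute -- a strong indicator of discrimination.
differing = [key for key in original if original[key] != counterfact[key]]
print(differing)               # ['country']
```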

To automatically generate counterfacts, it is important to create an explanatory data set that provides a diverse set of inputs that would have changed the decision of the AI. If you think about it, counterfacts are just like adversarial attacks: both introduce small changes to the input data in order to change the classification of the algorithm. With counterfacts we use this to our advantage and produce post-hoc explanations that are able to reveal discrimination.
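
The following sketch illustrates that analogy under the same toy assumptions as above: like an adversarial attack, it searches for the smallest edits to the input that flip the classifier’s decision. The candidate value ranges and the model rule are hypothetical; a real test would run against the deployed module.

```python
# Adversarial-style counterfact search (illustrative): try single-feature edits
# and keep every one that flips the classifier's decision.
CANDIDATE_VALUES = {                       # assumed value ranges per feature
    "country": ["A", "B", "C"],
    "booking_class": ["economy", "business"],
    "paid_cash": [True, False],
}

def toy_model(p: dict) -> bool:
    # Hypothetical learned rule: flags cash-paying passengers from Country "A".
    return p["country"] == "A" and p["paid_cash"]

def single_feature_counterfacts(x: dict) -> list[dict]:
    """Return all inputs differing from x in exactly one feature that flip the decision."""
    original_label = toy_model(x)
    flips = []
    for feature, values in CANDIDATE_VALUES.items():
        for value in values:
            if value != x[feature]:
                candidate = {**x, feature: value}
                if toy_model(candidate) != original_label:
                    flips.append(candidate)
    return flips

mrs_cat = {"country": "A", "booking_class": "economy", "paid_cash": True}
for cf in single_feature_counterfacts(mrs_cat):
    changed = [k for k in mrs_cat if mrs_cat[k] != cf[k]]
    print(changed, "->", cf)
# Changing "country" alone flips the flag; that country dependence is the
# candidate for a discriminatory decision rule.
```

Real adversarial attack methods perform this search far more efficiently, for example with gradients, but the logic is the same: minimal perturbations that cross the decision boundary double as explanations.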

We now understand that we can use adversarial attacks to detect discrimination. The next challenge is to find adversarial attacks that produce a diverse set of counterfacts, so that no form of discrimination is overlooked.

One common mistake we observe is to arbitrarily select adversarial attacks and hope they provide meaningful insights about the module under test.

To avoid this, neurocat invented the Attack Generator, which, in a nutshell, is a systematic approach to generating diverse adversarial attacks, and thus counterfacts, for any given AI module. This is a post-hoc explanation method that can be used to determine whether discrimination was involved in automated decision making, and it is particularly relevant for the new PNR data rules.
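
The Attack Generator itself is beyond the scope of this post, but the following toy sketch illustrates the underlying idea of systematic coverage: instead of relying on one arbitrarily chosen attack, it sweeps over all small feature subsets and searches each for counterfacts, so that a dependence on any attribute, or combination of attributes, is not overlooked. All feature names and rules are again illustrative assumptions, not neurocat’s actual implementation.

```python
# Systematic (toy) sweep over attack search spaces: every feature subset up to
# size 2 defines one attack; each attack looks for counterfacts inside its subset.
from itertools import combinations, product

CANDIDATE_VALUES = {
    "country": ["A", "B"],
    "booking_class": ["economy", "business"],
    "paid_cash": [True, False],
}

def toy_model(p: dict) -> bool:
    # Hypothetical learned rule: flags cash-paying passengers from Country "A".
    return p["country"] == "A" and p["paid_cash"]

def counterfacts_in_subset(x: dict, features: tuple) -> list[dict]:
    """Counterfacts that only modify the given feature subset."""
    label, flips = toy_model(x), []
    for values in product(*(CANDIDATE_VALUES[f] for f in features)):
        candidate = {**x, **dict(zip(features, values))}
        if candidate != x and toy_model(candidate) != label:
            flips.append(candidate)
    return flips

mrs_cat = {"country": "A", "booking_class": "economy", "paid_cash": True}
for size in (1, 2):
    for subset in combinations(CANDIDATE_VALUES, size):
        for cf in counterfacts_in_subset(mrs_cat, subset):
            print(subset, "->", cf)
# An attack chosen arbitrarily, say one that only perturbs "booking_class",
# would report nothing; sweeping over the feature subsets makes sure the
# country dependence behind the flag cannot be missed.
```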

AI is changing the world, and we want to unleash its full potential even in critical applications such as the one the ECJ is dealing with right now. Our focus is on delivering innovation in the quality control of AI systems. We want to enable companies to evaluate, develop and deploy safe and secure AI systems, and help people realize the full capacity of AI by applying best practices in testing standards and, especially, risk mitigation.


References

Judgment of 21 June 2022, Ligue des droits humains, C‑817/19, ECLI:EU:C:2022:491

ASSION, F., SCHLICHT, P., GRESSNER, F., GÜNTHER, W., HÜGER, F., SCHMIDT, N., AND RASHEED, U. The attack generator: A systematic approach towards constructing adversarial attacks. arXiv (2019). pages 1-12.

DORAN, D., SCHULZ, S., AND BESOLD, T. R. What does explainable AI really mean? A new conceptualization of perspectives. In Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML (CEX), Bari, Italy, November 16th and 17th, 2017. (2018), vol. 2071 of CEUR Workshop Proceedings, CEUR-WS.org.

MCGRATH, R., COSTABELLO, L., VAN, C. L., SWEENEY, P., KAMIAB, F., SHEN, Z., AND LÉCUÉ, F. Interpretable credit application predictions with counterfactual explanations. NIPS (2018). pages 1-9.

PAPANGELOU, K., SECHIDIS, K., WEATHERALL, J., AND BROWN, G. Toward an understanding of adversarial examples in clinical trials. Machine Learning and Knowledge Discovery in Databases, ECML PKDD (2019). Vol 11051, pages 1-16.

SELBST, A. D., AND POWLES, J. Meaningful information and the right to explanation. International Data Privacy Law (2017). Vol 7, pages 233-242.

SOKOL, K., AND FLACH, P. Counterfactual explanations of machine learning predictions: Opportunities and challenges for AI safety. SafeAI@AAAI (2019). pages 1-4.

WACHTER, S., MITTELSTADT, B., AND RUSSELL, C. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology (2018). Vol 31, pages 841-887.

WOODWARD, J. Interventionism and causal exclusion. Philosophy and Phenomenological Research (2015). Vol. XCI, pages 303-347.
