In December 2023, the Court of Justice of the European Union (CJEU) issued Judgment C-634/21 in the Schufa case. This landmark ruling is set to shape how future AI-based businesses approach GDPR compliance. At a pivotal moment, with AI taking center stage on the European institutions' agenda as they work towards the adoption of the European Regulation on Artificial Intelligence (the AI Act), the CJEU's reasoning provides significant GDPR interpretations that will affect all AI-based personal data processing activities.
Here’s why.
The case
The Administrative Court of Wiesbaden (Germany), following numerous complaints received from citizens about Schufa Holding's activities, referred the case to the CJEU to clarify whether the company's credit scoring activities comply with the GDPR. One of these complaints, in particular, involved a citizen who was denied credit by a credit institution on the basis of the negative score attributed to them by Schufa's algorithm.
Schufa Holding is an influential company in the German market that provides credit scoring services, using a complex algorithm to supply banks and financial institutions with "scores" predicting an individual's ability to repay debts.
To understand the reasons behind Schufa's negative rating (and, consequently, the denial of credit), the individual asked Schufa for access to the personal data it held about them and used to calculate the score (a right guaranteed by Article 15 of the GDPR).
In response, Schufa declined to provide any meaningful information on how the score was determined, hiding behind the "trade secret" curtain. Hence the escalation of the dispute, all the way to the CJEU.
The link between "solely automated processing" and AI
An algorithm that assigns a score to citizens, based on various parameters, affecting their ability to access a loan or credit: this underpins the business built by Schufa. Such activity falls squarely within the definition of "automated individual decision-making" under Article 22 of the GDPR, which applies when the processing of personal data produces "a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her".
What does this algorithm have to do with AI? Not all algorithms are AI, but all AI systems are built on algorithms. An algorithm that analyzes data and makes decisions based on it typically relies on machine learning techniques to process information and generate evaluations or decisions automatically, without direct human intervention.
In this description we find two factors fundamental to the definition of AI: the ability to "learn" and "think" autonomously, and the autonomous generation of specific outputs, in our case the scores assigned to citizens.
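To make the idea concrete, here is a minimal, purely illustrative Python sketch of a fully automated scoring decision. Everything in it (the features, weights, threshold, and function names such as credit_score) is invented for illustration and bears no relation to Schufa's proprietary algorithm; it only shows how an output can be generated and turned into a decision with no human in the loop.

```python
# Hypothetical illustration only: a toy credit-scoring function.
# The features, weights, and threshold are invented; Schufa's real
# algorithm is proprietary and far more complex.
import math

# Invented model parameters, "learned" offline from historical data
WEIGHTS = {"income": 0.00003, "open_debts": -0.4, "late_payments": -0.8}
BIAS = 1.0

def credit_score(applicant: dict) -> float:
    """Map applicant attributes to a probability-like score in [0, 1]."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing

def automated_decision(applicant: dict, threshold: float = 0.5) -> str:
    """The 'decision' in the Article 22 sense: no human intervenes."""
    return "approve" if credit_score(applicant) >= threshold else "deny"

applicant = {"income": 32000, "open_debts": 2, "late_payments": 1}
print(credit_score(applicant), automated_decision(applicant))
```

However simple, a pipeline of this kind already exhibits the two factors above: the parameters are derived from data, and the output is produced autonomously.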
This concept also aligns with the new official definition of an AI system included in the latest publicly approved text of the AI Act (Brussels, 26 January 2024):
“’AI system’ is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”
Key findings of the judgment
The judgment is extremely interesting in many respects and offers several useful GDPR interpretations. The concept of "decision" outlined here proves particularly important: according to the Court, Article 22 GDPR undoubtedly applies to these algorithms, extending the concept of an automated "decision" to include the generation of outputs through the process described above.
Although the bank makes the final decision on whether to grant the loan, the Court recognizes Schufa's scoring activity as a decision in its own right. In practice, the score attributed to an individual weighs so heavily on the final outcome that the scoring effectively carries the importance of the decision itself.
Implications for EU Businesses
Businesses using algorithms or automated decision-making processes should be mindful of the potential implications of this judgment and take proactive steps towards GDPR compliance (and, in the very near future, AI Act compliance) whenever such processes play a determining role in the final choice that affects data subjects.
GDPR compliance in the implementation of AI solutions is therefore crucial for several reasons.
1. Data Accuracy
Negative scores could be assigned on the basis of data that is incomplete or incorrect. In that scenario, the score attributed by the algorithm would rely on inaccurate information, producing erroneous outputs that, in a case like Schufa's, have a significant impact on citizens' personal lives. Businesses must mitigate the risk of data inaccuracies, including through thorough review, updating, and correction of training data, as sketched below.
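As a purely hypothetical sketch of what such review might involve in practice, the following snippet runs basic plausibility and freshness checks on a training record; the field names and validity rules are invented for illustration.

```python
# Hypothetical data-accuracy checks on training records.
# Field names and validity rules are invented for illustration.
from datetime import date

def validate_record(record: dict) -> list[str]:
    """Return a list of accuracy problems found in one training record."""
    problems = []
    if record.get("income") is None or record["income"] < 0:
        problems.append("income missing or negative")
    if not 0 <= record.get("late_payments", -1) <= 500:
        problems.append("late_payments missing or out of plausible range")
    # Stale data is a GDPR accuracy risk too (Art. 5(1)(d)):
    if (date.today() - record["last_updated"]).days > 365:
        problems.append("record not updated in over a year")
    return problems

record = {"income": 28000, "late_payments": 3, "last_updated": date(2021, 5, 1)}
print(validate_record(record))  # -> ['record not updated in over a year']
```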
2. Bias detection and correction
Not only errors, but also biases. It is well known that AI systems inherently carry a risk of producing decisions affected by bias (a distinct problem from the "hallucinations" of generative AI, with which it is sometimes confused). While it may be impossible to achieve 100% accuracy in complex systems like AI, and it is therefore sensible to think in terms of statistical rather than absolute accuracy, it is equally important to work as much as possible to mitigate the risk of biases occurring.
The consequences of these "erroneous" decisions can be severe, extending beyond GDPR violations.
Therefore, companies must establish an internal process for verifying and correcting training data. This is a fundamental aspect of the new AI Act (particularly its data-governance rules in Article 10). The importance of this point is also apparent from Article 10(5) of the AI Act, which states: "To the extent that it is strictly necessary for the purposes of ensuring bias detection and correction in relation to the high-risk AI systems (…) providers of such systems may exceptionally process special categories of personal data" as defined in Article 9 of the GDPR. Naturally, this is subject to numerous conditions, limitations, and appropriate security measures. A sketch of what such a bias check might look like follows.
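The following is a minimal, hypothetical sketch of one such check: comparing approval rates across groups defined by a protected attribute (a simple form of demographic-parity analysis). The attribute, the data, and the idea of flagging a large gap are invented for illustration; real bias audits rely on richer fairness metrics and legal analysis.

```python
# Hypothetical bias check: compare approval rates across a protected
# group attribute. The attribute and data are invented; real audits
# use richer fairness metrics than this single disparity figure.
from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """Share of 'approve' outcomes per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += d["decision"] == "approve"
    return {g: approved[g] / totals[g] for g in totals}

decisions = [
    {"group": "A", "decision": "approve"},
    {"group": "A", "decision": "approve"},
    {"group": "B", "decision": "deny"},
    {"group": "B", "decision": "approve"},
]
rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"disparity: {gap:.2f}")  # a large gap warrants human review
```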
3. Data Subject Access Request (DSAR) management
As clarified by the Court, there must be an adequate level of transparency regarding the functioning of the algorithm or AI system, so that the data subject can understand the logic of the processing. Merely responding to an access request with generic, uninformative statements, as in the Schufa case, does not give the data subject a clear understanding of the data processing logic applied by the AI system; the sketch below illustrates one possible alternative.
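As a hedged illustration of what more meaningful DSAR output could look like, this sketch reuses the invented linear model from earlier and reports each feature's contribution to the score. For simple models of this kind, per-feature contributions are one commonly used way of conveying "meaningful information about the logic involved" (Article 15(1)(h) GDPR); all names and figures here are hypothetical.

```python
# Hypothetical DSAR response helper for the toy linear score above:
# report each feature's contribution so the data subject can see
# which factors drove the result.
WEIGHTS = {"income": 0.00003, "open_debts": -0.4, "late_payments": -0.8}

def explain_score(applicant: dict) -> dict[str, float]:
    """Per-feature contribution to the raw score, most influential first."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    return dict(sorted(contributions.items(),
                       key=lambda kv: abs(kv[1]), reverse=True))

print(explain_score({"income": 32000, "open_debts": 2, "late_payments": 1}))
# -> {'income': 0.96, 'open_debts': -0.8, 'late_payments': -0.8}
```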
4. Transparency, data security and GDPR compliance
Compliance with the other GDPR principles also remains fundamental: ensuring an adequate level of transparency in data processing (and thus, as discussed, allowing data subjects to understand activities and processes), implementing appropriate technical and organizational security measures, limiting the retention period of personal data, and carefully identifying the correct legal basis for the processing. And, of course, complying with the requirements of Article 22 GDPR.
Relying on your Data Protection Officer (DPO) or specialized privacy consultants is crucial in such circumstances, in particular to reconcile compliance with GDPR requirements with strategies and processes that do not limit or hinder the development of such innovative services.