The Belgian DPA Issues Guidelines on AI

When it comes to Artificial Intelligence (AI) systems, there are two EU regulations with a significant impact on their use: the well-known General Data Protection Regulation (GDPR) and the AI Act (AIA), which only came into force on August 1, 2024.

Since then, in an effort to simplify compliance, the Belgian Data Protection Authority (DPA) has released guidance about the interplay between these two Regulations.

First, it is important to understand what AI is. The Belgian DPA offers a simplified version of the definition of an AI system contained in Article 3(1) of the AIA. In its view, an AI system is a computer system specifically designed to analyze data, identify patterns, and use this knowledge to make informed decisions or predictions, and that can, in some cases, learn from data and adapt over time.

The remainder of this blog post will explain how the Belgian DPA exemplified the complementary nature between the GDPR and AIA.

Principles of the GDPR & the AIA

Lawfulness, Transparency & Fairness

While the GDPR requires lawful data processing (that is, processing based on one of the legal bases in Article 6, without which processing cannot commence), the AIA prohibits certain AI systems outright, such as social scoring systems. In addition, the GDPR's principle of fairness is reinforced by the AIA's rules on bias and discrimination. Finally, both the GDPR and the AIA impose transparency obligations. The AIA goes a step further: it requires a baseline level of transparency for all AI systems (for instance, disclosing that a person is interacting with an AI system) and a higher level of transparency for high-risk AI systems through additional information obligations.

Purpose limitation and data minimization

These principles ensure that AI systems do not gather excessive data or use it for purposes other than those for which it was collected. The AIA reinforces the GDPR's purpose limitation principle for high-risk AI systems by requiring a clearly stated and documented intended purpose.

Accuracy

Building on the GDPR's accuracy principle, the AIA requires high-risk AI systems to employ high-quality, unbiased data in order to avert discriminatory outcomes.

Storage limitation

Although the AIA does not extend this principle, it still matters for AI: under the GDPR, personal data may only be stored for as long as necessary to fulfill the purposes for which it was collected.

Accountability

The GDPR requires organizations to demonstrate accountability for their personal data processing through various measures. Although the AI Act does not explicitly address demonstrating accountability, it aligns with this GDPR principle by incorporating elements such as a risk management approach and documentation requirements.

Automated decision-making (ADM)

Regarding ADM, both regulations emphasize the value of human involvement: the GDPR gives individuals the right not to be subject to decisions based solely on automated processing, and the AI Act mandates proactive human oversight for high-risk AI systems in order to prevent bias and ensure the responsible development and use of such systems.

Security of processing

The GDPR requires technical and organizational measures (TOMs) to ensure secure processing and compliance, while the AIA complements this by requiring robust security measures for high-risk systems. Although the AIA's obligations are similar to TOMs, they go further by addressing vulnerabilities specific to AI systems.

Data Subject Rights

The AIA reinforces the data subject rights contained in the GDPR by emphasizing the importance of clear explanations of how data is used in AI systems.

Next Steps

This blog post has summarized the guidance provided by the Belgian Data Protection Authority on the interaction between the GDPR and the AI Act.

If you own or operate an AI system, it is best to consult both the GDPR and the AIA to ensure compliance. There is, however, some lead time: the AIA's provisions apply in stages, with the prohibitions and the rules for general-purpose AI (GPAI) models applying in 2025, most requirements for high-risk systems in 2026, and the remaining high-risk requirements in 2027.