The European Union has recently introduced the AI Act, poised to become the cornerstone of AI governance across the EU. This groundbreaking regulation is designed to address the risks AI systems pose to health, safety, and fundamental rights, complementing the protections already established by the General Data Protection Regulation (GDPR). Together, these frameworks create a robust regulatory structure aimed at fostering innovation while safeguarding fundamental rights. But how do these two frameworks intersect, and where might they clash?

This blog article explores the nuanced relationship between the GDPR and the AI Act, emphasizing the synergies, conflicts, and compliance strategies organizations must adopt to navigate this evolving landscape.

Different Goals, Shared Principles

The GDPR focuses on protecting individuals’ rights by regulating the processing of personal data. It emphasizes principles such as lawfulness, transparency, and accountability, ensuring data is processed fairly and securely.

In contrast, the AI Act is a product safety regulation, addressing the technical risks associated with AI systems. It establishes rules to ensure these systems are trustworthy, reliable, and aligned with ethical principles. Despite these distinct objectives, both frameworks share a commitment to the principles of transparency and accountability, particularly in high-risk applications. For example:

  • Under GDPR, organizations must inform individuals about data processing (Articles 13 and 15) and demonstrate compliance (Article 5 para. 2).
  • Similarly, the AI Act requires providers of high-risk AI systems to provide clear, transparent instructions (Article 13) and maintain detailed technical documentation (Article 11).

The GDPR and the AI Act are complementary, not hierarchical, meaning neither supersedes the other. Article 2 para. 7 AI Act ensures that the AI Act is without prejudice to existing Union legislation on data protection, explicitly referencing the GDPR. However, their application depends on the specific context:

The GDPR remains the overarching law for the processing of personal data in the EU. Any AI system that processes personal data must comply with the GDPR’s requirements. This means that principles like transparency, accountability, data minimization, and lawful processing must always be observed.

The AI Act is a specialized framework adding specific rules for AI systems, including those that process personal data. It does not replace the GDPR but builds upon its principles, focusing on AI-specific risks like transparency, human oversight, and robustness.

A Shared Territorial Scope: Extending Beyond EU Borders

Both the GDPR and the AI Act adopt an extraterritorial approach. Non-EU entities must comply with these regulations if their services or systems are offered within the EU. The AI Act governs AI systems placed on the market, put into service, or used in the EU, regardless of where the provider is established. As a result, any organization whose products or services reach the EU market must assess its obligations under both frameworks.

Roles and Responsibilities: Providers, Deployers, Controllers, and Processors

A notable distinction between the two frameworks is the terminology used to define roles. The GDPR classifies entities as controllers (those who determine the purpose and means of processing) and processors (those who process data on behalf of controllers). The AI Act introduces providers (entities developing AI systems) and deployers (entities integrating these systems into operations).

These roles often overlap. For instance, a company deploying an AI-powered recruitment tool might act as a deployer under the AI Act and a controller under the GDPR, requiring compliance with both frameworks simultaneously. Organizations involved in processing personal data while developing or utilizing an AI system must evaluate their responsibilities under both the GDPR and the EU AI Act.
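As a purely illustrative sketch, the hypothetical register below shows how such a dual-role evaluation might be recorded in code. All class names, role labels, and duty descriptions are invented for illustration; they are not taken from the text of either regulation.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in a hypothetical compliance register."""
    name: str
    gdpr_role: str    # "controller", "processor", or "not_applicable"
    ai_act_role: str  # "provider", "deployer", or "not_applicable"

    def obligations(self) -> list[str]:
        """Rough sketch of which duty sets each role triggers."""
        duties = []
        if self.gdpr_role == "controller":
            duties.append("GDPR: lawfulness, transparency, DPIA if high risk")
        if self.gdpr_role == "processor":
            duties.append("GDPR: process only on documented instructions")
        if self.ai_act_role == "provider":
            duties.append("AI Act: technical documentation, conformity assessment")
        if self.ai_act_role == "deployer":
            duties.append("AI Act: human oversight, use per instructions")
        return duties

# The recruitment-tool example above: deployer under the AI Act
# and controller under the GDPR at the same time.
recruiting = AIUseCase("AI-powered recruitment tool",
                       gdpr_role="controller", ai_act_role="deployer")
print(recruiting.obligations())
```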

Potential Conflicts Between the GDPR and the AI Act

Several areas of potential conflict between the two regulations are likely to arise:

Automated Decision-Making and Human Oversight

While GDPR’s Article 22 focuses on protecting individuals from solely automated decisions that produce legal or similarly significant effects, the AI Act provides broader protections by requiring human oversight for all high-risk AI systems, regardless of whether decisions are fully automated. This ensures safeguards are embedded throughout the design, deployment, and use of high-risk AI systems, creating a more comprehensive framework. The AI Act also mandates accountability mechanisms for providers and deployers, addressing risks at every stage of an AI system’s lifecycle. In this respect the overlap is a positive one: the AI Act extends and reinforces GDPR protections. However, there is some ambiguity about what level of human involvement takes a decision outside the scope of “solely automated” under the GDPR, potentially creating uncertainty about how to harmonize compliance with both frameworks.

Sensitive Data Processing

The AI Act allows the processing of special categories of personal data to detect and correct biases (Article 10 AI Act), a provision that conflicts with GDPR’s stringent restrictions on such processing (Article 9 GDPR). Under the AI Act, organizations may only process special category data for debiasing as far as is “strictly necessary.” This means organizations must continually reassess whether they still need the data for debiasing.
Article 10 AI Act further states that the exception only applies when the purpose cannot be achieved with other data, such as anonymous or synthetic data. The Act follows the GDPR’s definition of personal data: anonymous data are not personal data, so the prohibition under Article 9 GDPR does not apply to them in the first place. Synthetic data are artificially generated records that reproduce the same, or a similar, distribution as the original data but can no longer be linked to real individuals.

This requires organizations to demonstrate that no alternative data, such as anonymous or synthetic data, can achieve the same purpose. However, this necessity test is open to interpretation, potentially leading to uncertainty in its application alongside the GDPR’s exceptions.
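As a rough illustration of the synthetic-data idea, the minimal sketch below fits a simple distribution to one numeric attribute and samples new, unlinkable values from it. This is a toy example only: real synthetic-data pipelines model joint distributions across many attributes and must be tested for re-identification risk before they could replace special category data.

```python
import random
import statistics

# Hypothetical attribute values from real records (e.g., ages in a training set).
real_ages = [23, 35, 29, 41, 52, 38, 27, 33, 46, 31]

# Fit a simple parametric model to the observed distribution.
mu = statistics.mean(real_ages)
sigma = statistics.stdev(real_ages)

# Sample synthetic values: they mimic the overall distribution but
# correspond to no actual individual, so they cannot be linked back.
synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(10)]

print("real mean:", mu, "synthetic mean:", statistics.mean(synthetic_ages))
```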

High-Risk Classifications

Under Article 35 GDPR, data controllers must conduct a Data Protection Impact Assessment (DPIA) when processing personal data poses a high risk to individuals’ rights and freedoms. The AI Act, in turn, requires providers to assess whether their AI systems are high-risk. The conflict arises because:

  • A provider may determine an AI system is not high-risk under the AI Act.
  • A deployer might still need to conduct a DPIA under GDPR if the system processes personal data in a way that risks individuals’ rights.

This means the same AI system could face different classifications and risk management requirements under the two laws, depending on its specific use.
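To make the divergence concrete, the deliberately simplified sketch below runs the two checks independently; the trigger conditions are placeholders, not the statutory tests of Article 6 AI Act or Article 35 GDPR.

```python
def ai_act_high_risk(use_case: dict) -> bool:
    # Placeholder for the provider's high-risk analysis under the AI Act.
    return use_case.get("annex_iii_area", False)

def gdpr_dpia_required(use_case: dict) -> bool:
    # Placeholder for the deployer's DPIA-trigger analysis under the GDPR.
    return (use_case.get("processes_personal_data", False)
            and use_case.get("high_risk_to_individuals", False))

# The same system can come out differently under the two tests.
support_bot = {
    "annex_iii_area": False,           # provider: not high-risk under the AI Act
    "processes_personal_data": True,
    "high_risk_to_individuals": True,  # deployer: DPIA still required
}
print("AI Act high-risk:", ai_act_high_risk(support_bot))    # False
print("GDPR DPIA needed:", gdpr_dpia_required(support_bot))  # True
```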

Synergies Between DPIAs and FRIAs

Both frameworks require assessments to identify and mitigate risks:

  • The GDPR mandates Data Protection Impact Assessments (DPIAs) for high-risk data processing.
  • The AI Act requires certain deployers of high-risk AI systems to carry out Fundamental Rights Impact Assessments (FRIAs) (Article 27), and providers to undergo conformity assessments to ensure compliance with technical and ethical standards.

By aligning these assessments, organizations can create a cohesive process, minimizing duplication while ensuring compliance with both frameworks.
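One way to operationalize this alignment is a single assessment record that answers the overlapping questions once and keeps framework-specific sections separate. The template below is illustrative only; neither law prescribes this structure or these field names.

```python
# Shared questions feed both assessments; framework-specific
# sections are filled in only where the respective law requires them.
combined_assessment = {
    "shared": {
        "purpose_of_processing": "...",
        "persons_and_data_affected": "...",
        "risks_to_rights_and_freedoms": "...",
        "mitigation_measures": "...",
    },
    "dpia_specific": {  # GDPR Article 35
        "necessity_and_proportionality": "...",
        "dpo_consulted": False,
    },
    "fria_specific": {  # AI Act Article 27
        "deployment_context_and_frequency": "...",
        "human_oversight_measures": "...",
    },
}
```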

Preparing for the Future

The AI Act will roll out in stages. Following its publication on 12 July 2024 and its entry into force on 1 August 2024, the next key milestones include:

2 February 2025: Prohibitions on unacceptable-risk AI systems take effect. In addition, Article 4 requires providers and deployers of AI systems to take steps to build AI literacy among their staff and other persons operating or using the systems on their behalf.

2 August 2025: Rules for general-purpose AI models apply, and Member States must designate National Competent Authorities (NCAs).

2 August 2026: Full application of rules for high-risk AI systems. Member States must also implement at least one AI regulatory sandbox.

To navigate this complex regulatory environment, organizations should:

  • Map Roles and Responsibilities: Identify how your organization’s activities align with GDPR and AI Act roles, such as controller, processor, provider, or deployer.
  • Develop Unified Compliance Processes: Integrate GDPR and AI Act requirements, particularly for impact assessments and compliance documentation.
  • Train Employees: Instruct staff in the compliant use of AI tools to mitigate risks.
  • Engage Regulators Early: Open dialogue with Data Protection Authorities (DPAs) and NCAs can clarify ambiguities and facilitate smoother compliance.
  • Monitor Emerging Standards and Advisory Bodies: Engage with harmonized standards, the AI Board, and the AI Office to stay ahead of compliance requirements.