As artificial intelligence (AI) increasingly integrates into daily life, its influence on privacy continues to grow. Developing AI models often involves processing vast amounts of personal data, and such models now underpin a wide range of processing activities. This trend has raised concerns about privacy, transparency, and fairness.

In response to these challenges, the European Data Protection Board (EDPB) has issued an opinion (the “Opinion”) following a request from the Irish Supervisory Authority (SA). Drawing on input from various stakeholders, including the EU AI Office, the EDPB provides this guidance to ensure that AI development respects individuals’ rights and freedoms.

Key Takeaways from the Opinion

The Opinion addressed three primary questions:

  1. When can AI models be considered anonymous?
  2. Can legitimate interest serve as a legal basis for data processing during an AI model’s development and deployment?
  3. What are the implications of unlawful data processing by AI models?

Anonymity in AI Models

The EDPB clarified that AI models trained on personal data cannot automatically be considered anonymous. For a model to qualify as anonymous, the likelihood of personal data extraction or reidentification must be negligible. Competent supervisory authorities (SAs) must evaluate anonymity claims on a case-by-case basis using documentation provided by data controllers. Effective methods to ensure anonymity include limiting data collection, reducing identifiability, and enhancing resistance to data extraction.

Legitimate Interest as a Legal Basis

The EDPB proposed a three-step test to evaluate legitimate interest as a basis for processing personal data during AI development and deployment:

  1. Identification of a lawful, clear, and present interest;
  2. Assessment of necessity by demonstrating that the processing is essential and minimally intrusive; and
  3. Balancing interests by weighing the stated interest against the impact on the rights and freedoms of data subjects.

The nature of the AI model’s processing and data subjects’ reasonable expectations about how their personal data will be used are also crucial factors in this balancing. Mitigation measures may reduce the impact on data subjects, but SAs must likewise assess these on a case-by-case basis.
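Purely as an illustrative sketch (the class and field names below are hypothetical, not drawn from the Opinion), the cumulative structure of the three-step test can be made explicit as a simple checklist: failing any one step means legitimate interest cannot serve as the legal basis.

```python
from dataclasses import dataclass

@dataclass
class LegitimateInterestAssessment:
    """Hypothetical checklist mirroring the EDPB's three-step test."""
    # Step 1: identification of a lawful, clear, and present interest
    interest_is_lawful: bool
    interest_is_clear: bool
    interest_is_present: bool
    # Step 2: necessity (essential and minimally intrusive processing)
    processing_is_essential: bool
    processing_is_minimally_intrusive: bool
    # Step 3: balancing, after accounting for any mitigation measures
    balance_favors_controller: bool

    def passes(self) -> bool:
        # The test is cumulative: all three steps must be satisfied.
        step1 = (self.interest_is_lawful
                 and self.interest_is_clear
                 and self.interest_is_present)
        step2 = (self.processing_is_essential
                 and self.processing_is_minimally_intrusive)
        step3 = self.balance_favors_controller
        return step1 and step2 and step3
```

In practice each step is a qualitative, case-by-case judgment made by controllers and reviewed by SAs; the boolean flags here serve only to show that the steps are conjunctive, not to suggest the assessment can be mechanized.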

Unlawful Data Processing by AI Models

The Opinion also outlined three scenarios for assessing the consequences of unlawful data processing:

  • If the same controller processes personal data during both the development and deployment of an AI model, SAs should assess case by case whether development and deployment constitute distinct purposes and how the lack of a legal basis for the initial processing affects the lawfulness of subsequent processing.
  • If a different controller processes personal data during deployment, SAs should consider whether the controller fulfilled accountability obligations under the General Data Protection Regulation (GDPR) by appropriately assessing whether the AI model was developed through lawful personal data processing.
  • If personal data is processed unlawfully during development but the model is anonymized before deployment, subsequent operations using the anonymized model fall outside the GDPR’s scope. However, any further personal data processed during deployment must comply with the GDPR, and the initial unlawful processing does not affect the lawfulness of subsequent processing of different personal data.

Conclusion

The Opinion highlights the complexities of AI technologies and the critical need to align innovation with data protection laws. It stresses the importance of rigorous, case-by-case assessments, accountability, and transparency to safeguard data subjects’ rights while facilitating the lawful use of AI models. These measures aim to balance technological progress with the fundamental principles of GDPR compliance.