Germany's Federal Commissioner for Data Protection and Freedom of Information, known as the BfDI (Bundesbeauftragte für den Datenschutz und die Informationsfreiheit), recently published an opinion on Generative Artificial Intelligence (AI). In a previous article, we discussed Generative AI, that is, artificial intelligence applications capable of generating new content rather than simply recognizing patterns. Examples include the well-known chatbot ChatGPT and the image generator MidJourney, among others. While fascinating from a technological standpoint, this technology poses significant regulatory challenges, particularly in the field of personal data protection.

In this article, we will discuss some key points raised by the Commissioner that we believe are most relevant from a data protection perspective.

Use of Generative AI in the workplace

Generative AI is gradually finding its way into everyday work applications such as word processors, code copilots, and search engines. This integration lowers the barrier to using Generative AI, making it increasingly commonplace. In this context, employers have a responsibility to:

  • Always adhere to the General Data Protection Regulation (GDPR).
  • Avoid indiscriminate use of personal data when utilizing Generative AI, even if it simplifies work processes.
  • Only process personal data through Generative AI applications when there is a legal basis for such processing.

The Commissioner also emphasizes that employers have a duty to ensure that employees are trained and educated on these matters.

Children’s data

Children are particularly vulnerable with regard to the processing of their personal data, since they often lack awareness of the risks and consequences involved and may struggle to exercise their data subject rights effectively. As a general rule, the personal data of minors should not be included in the training data of generative AI systems. It is the responsibility of developers to implement measures to ensure this, for example by filtering training data to exclude minors' personal data.
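As a rough illustration of such a filtering measure, a developer might drop records flagged as relating to minors before they reach the training pipeline. The record structure and the `is_minor` flag below are purely hypothetical; in practice, reliably detecting minors' data is far harder than checking a pre-existing flag:

```python
# Hypothetical sketch: exclude records flagged as relating to minors
# from a training corpus. Field names are illustrative only; real
# systems would need robust age inference, not a ready-made flag.

def exclude_minors(records):
    """Keep only records not flagged as relating to a minor."""
    return [r for r in records if not r.get("is_minor", False)]

corpus = [
    {"text": "public blog post", "is_minor": False},
    {"text": "school forum comment", "is_minor": True},
    {"text": "news article", "is_minor": False},
]

training_data = exclude_minors(corpus)
print(len(training_data))  # prints 2
```

The sketch only shows where such a filter would sit in a pipeline; the hard part, which the Commissioner's duty implies, is identifying minors' data in the first place.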

Regarding the type of content children can access via generative AI applications, it could be beneficial to establish appropriate limits within the AI system, similar to how inappropriate content is handled by features such as SafeSearch, the Google Search feature that filters out potentially pornographic and offensive content. However, the BfDI notes that overly strict limits, such as mandatory identification requirements for use, should be avoided, since they would eliminate users' ability to remain anonymous.

As the use of generative AI becomes the norm through its integration into everyday applications, the priority should be educating minors and raising their awareness of the risks of the technology, as well as its opportunities and potential. Parents, guardians, and teachers must actively participate in this process by teaching minors the critical and responsible use of generative AI systems.

Image generators

Generative AI applications specialized in image generation pose a great risk from a data protection point of view, since they can be used to create deepfakes and spread false information about natural persons. Furthermore, AI-generated images can be very difficult to distinguish from real ones, making misinformation harder to correct. Even if the false images are later exposed as such, the correction rarely reaches as wide an audience as the original falsehood, making the damage almost impossible to undo.

While marking and recognition tools that flag an image as an AI product may help address these social problems, they are not a comprehensive solution, particularly where generative AI is deliberately used for propaganda or reputation damage and the creators have no intention of disclosing their methods. Watermarks and digital signatures are also not an ideal solution, since they can raise privacy concerns of their own: for a whistleblower who wishes to remain anonymous, for example, a digital signature could be a hindrance.

The primary focus should therefore be on educating the population and raising awareness of these issues, thereby promoting the responsible use of AI-generated media and mitigating its negative effects. Individuals should be encouraged to use fact-checking mechanisms and to verify the sources of information. Organizational measures should also be considered, such as labeling AI-generated media, similar to how copyright attribution is practiced.
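To illustrate what such organizational labeling could look like in practice, a generation pipeline might attach a provenance label to the metadata of every image it produces. The field names below are hypothetical; real provenance standards such as C2PA define far richer, cryptographically signed manifests:

```python
# Hypothetical sketch: attach an "AI-generated" provenance label to
# image metadata. Field names are illustrative; standards such as
# C2PA define real, signed provenance formats.

def label_ai_generated(metadata, generator_name):
    """Return a copy of the metadata with an AI-provenance label."""
    labeled = dict(metadata)
    labeled["ai_generated"] = True
    labeled["generator"] = generator_name
    return labeled

meta = {"title": "Sunset over Berlin", "width": 1024, "height": 1024}
labeled = label_ai_generated(meta, "example-image-model")
print(labeled["ai_generated"])  # prints True
```

As the opinion notes, such labeling only works when creators cooperate; it is an organizational measure, not a technical guarantee.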

Conclusion

The rapid advancement of generative AI technology has prompted public authorities worldwide to take notice and address its implications. The recently published opinion by the BfDI sheds light on the critical data protection considerations surrounding generative AI. As this technology becomes increasingly integrated into everyday applications, it is crucial to strike a balance between its potential benefits and the need to protect personal data. As we navigate the evolving landscape of generative AI, public awareness and comprehensive educational initiatives must remain at the forefront.

To access the full opinion of the BfDI (in German), you can go to this link.