Following a data breach reported on March 20th by OpenAI, the US company that develops and operates the ChatGPT platform, the Italian Garante decided on March 31st to temporarily limit the processing of Italian users’ personal data by the platform. As a result of the Garante’s decision, the company blocked ChatGPT in Italy at short notice.
The data breach affected users’ conversations and information regarding payments and triggered the investigation initiated by the Italian Data Protection Authority (Garante per la Protezione dei Dati Personali).
Preliminary findings of the Garante
- Non-compliance with transparency obligations: no appropriate information is provided to the data subjects whose data are collected by OpenAI;
- Non-compliance with the principle of “Lawfulness of Processing”: there appears to be no legal basis for the processing of users’ personal data for the secondary purpose of training the algorithm on which the platform relies;
- Non-compliance with Art. 8 GDPR, namely, with the conditions applicable to child’s consent in relation to information society services: since the platform lacks an age-verification mechanism, it can potentially expose children to responses that may be inappropriate for their age and developmental stage.
All three findings would constitute serious breaches of the GDPR. The use by OpenAI of the personal data provided by users for the purpose of “training” the algorithm on which ChatGPT relies constitutes further processing which, from my perspective, is not compatible with the original purpose within the meaning of Art. 6(4) GDPR.
The consent of the data subject or another legal basis must be sought in order to process such personal data for the secondary purpose of training the tool. No such legal basis was identified by OpenAI, nor are data subjects informed that their personal data – provided to the tool in order to obtain replies on all sorts of matters – are being further used for OpenAI’s own purposes.
Lack of a legal basis
A very interesting question that OpenAI will have to answer relates to the appropriate legal basis for the processing and retention of users’ personal data. One may immediately think of the legal basis provided by Art. 6(1)(f) GDPR, namely, the legitimate interest of the Data Controller. I wonder, however, whether the above-mentioned processing of personal data – the primary or the secondary – including retention periods and further processing, would “pass” a legitimate interest test.
Furthermore, if sensitive personal data are provided or gathered in order to use the service, the legitimate interest of the Data Controller would no longer be an appropriate legal basis, since Art. 9 GDPR is far more restrictive than Art. 6 GDPR with regard to the legal bases applicable to the processing of “special categories” of personal data.
Non-compliance with Article 8 GDPR
Non-compliance with Art. 8 GDPR is another very serious matter that will require the attention of OpenAI’s legal team. How will ChatGPT appropriately assess the age of the child in order to provide appropriate – and attractive – answers to its younger audience? What interests a 6-year-old is irrelevant for an 8-year-old. How will OpenAI layer consent requirements so as to remain compliant with the law, on the one hand, and continue to provide a quality and interesting service to children and young people, on the other?
These are the open questions to which the Italian Garante expects sufficient and convincing answers within 20 days. OpenAI may otherwise face a fine of up to 20 million Euros or 4% of its annual turnover, as stated in the press release of March 31st 2023 on the Garante’s website.
It remains to be seen how other EU supervisory authorities will react to the Garante’s decision. If the Garante convincingly demonstrates a breach of the GDPR, other European authorities would also have to take action. The German Data Protection Authority has already contacted the Garante for more information on the results of the investigation.