OpenAI, the company behind ChatGPT, has recently been in the spotlight over privacy concerns, particularly in the European Union. Italy’s data protection authority, known as the Garante, imposed a temporary ban on the platform on 31 March, following reports of a data breach that exposed ChatGPT users’ conversations and payment information. As a result of the ban, OpenAI engaged in discussions with the authority to address its compliance concerns and made a number of changes to the service.

On 28 April, the Garante announced that it had lifted the ban because OpenAI had cooperated in addressing its concerns, and ChatGPT became available in Italy again. But what exactly has the company done to merit the withdrawal of the ban?

New settings for ChatGPT improve data privacy

Until recently, all conversations between private users and the bot could be used to train the underlying model, which meant that user inputs might resurface in responses to other users. This was one of the biggest data protection risks of the software: personal data fed into the system could potentially be revealed to another user. It was also a red flag for proprietary information, especially corporate secrets and code, which could inadvertently be made public. OpenAI was aware of this problem and required users not to share “any sensitive information” in conversations, but many still did, causing controversy within companies like Amazon and Samsung.

On 25 April, OpenAI announced changes to address this. Users can now adjust their privacy settings and turn off the “Chat History and Training” option. When this setting is off, conversations between the user and the bot are not used to train the model and do not appear in the chat history. This reduces the risk of chat contents being accidentally leaked to third parties.

OpenAI also published a new privacy policy, made it accessible from the registration page for new users, and introduced a “welcome back” page in Italy that includes links to the new privacy policy and information notices on the processing of personal data for the purpose of training algorithms.

In response to concerns about the accuracy of the information the bot provides about natural persons, the company created a mechanism that allows data subjects to request the correction of false or misleading statements it makes about them. Where correction is not technically feasible, data subjects can instead request that their data be removed from ChatGPT’s output.

Finally, OpenAI has implemented an Age Gate that requires users to self-certify either that they are 18 years of age or older, or that they are between 13 and 17 and have obtained parental consent to use the service. This measure is designed to prevent minors from accessing inappropriate content through the bot.
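To make the rule concrete, the self-certification logic described above could look something like the Python sketch below. It is purely illustrative: the function and parameter names are assumptions of ours, and OpenAI’s actual implementation is not public.

    # Illustrative sketch of the self-certification rule described above.
    # Names and thresholds follow the blog text, not OpenAI's code.
    def passes_age_gate(declared_age: int, has_parental_consent: bool = False) -> bool:
        """Return True if a self-certified user may access the service."""
        if declared_age >= 18:
            return True
        if 13 <= declared_age <= 17:
            # 13- to 17-year-olds must also self-certify parental consent.
            return has_parental_consent
        # Users under 13 are always denied.
        return False

    assert passes_age_gate(20)
    assert passes_age_gate(15, has_parental_consent=True)
    assert not passes_age_gate(15)
    assert not passes_age_gate(12, has_parental_consent=True)

Note that everything here rests on the user’s own declaration: it is self-certification, not verification, a distinction that matters for the Garante’s follow-up demands discussed below.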

Despite the changes, concerns remain

While the changes are a welcome step towards data protection compliance, the situation is still far from perfect. Even if a user has disabled the “Chat History and Training” option so that their conversations are not used to train the model, those conversations can still be accessed by OpenAI’s employees for moderation purposes. It therefore remains crucial to be mindful of what you type into the system and to avoid writing anything that should not be read by third parties, including personal data, sensitive information, and trade secrets.
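As a practical precaution, one can scrub obvious identifiers from a prompt before sending it. The Python sketch below assumes a simple regex-based scrubber; the patterns are illustrative, catch only the simplest cases (e-mail addresses and phone-like numbers), and are no substitute for a real data-loss-prevention tool or for simply not entering the data at all.

    # Illustrative sketch: strip obvious personal identifiers from a prompt
    # before it leaves your machine. Patterns are rough assumptions.
    import re

    REDACTION_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\+?\d[\d\s()/-]{7,}\d"),
    }

    def redact(prompt: str) -> str:
        """Replace every match of each pattern with a labelled placeholder."""
        for label, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
        return prompt

    print(redact("Contact Jane at jane.doe@example.com or +39 02 1234 5678."))
    # Prints: Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].

Note that the name “Jane” still slips through, underlining that such filters reduce, but do not eliminate, the risk.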

Furthermore, all of OpenAI’s servers are located in the United States, a country whose privacy protections do not reach the standard set by the GDPR. Every time personal data is fed into ChatGPT, an international transfer to a country without an adequate level of protection takes place, potentially putting the data at risk.

Finally, the Garante has asked OpenAI to implement an age verification system and to conduct an information campaign informing Italians about what has happened and about their right to opt out of the processing of their personal data for algorithm training.

Lessons learned: data protection concerns influence companies’ behavior

This case serves as a real-life example of how data protection rules and timely checks by authorities can influence the behavior of companies and improve the protection of users’ fundamental rights and freedoms in real time. OpenAI seems to be striving for a more compliant approach with its products, but there is still much work to be done to ensure that AI technologies are used in a responsible and ethical manner. We look forward to seeing what new changes are implemented in the coming weeks.

As we continue to address the challenges posed by new technologies, it is important for users to remain vigilant and take steps to protect their personal information when interacting with AI systems. Opting out of having conversations used to train chatbots, as OpenAI now allows, is one way to do this, as is avoiding sharing sensitive information and using pseudonyms. By working together, regulators, companies, and users can help ensure that the benefits of AI technologies are realized while minimizing the risks to individuals’ privacy and security.