The Italian Data Protection Authority (Garante) has taken urgent action against Clothoff, an AI-powered app capable of generating hyper-realistic “deep nude” images from pictures of real people. On 3 October 2025 the regulator issued an immediate order blocking the app – developed by a company based in the British Virgin Islands – from processing the personal data of Italian users.

The Clothoff Case

Clothoff is a deepfake app that allows users, including minors, to upload a photo of a person and instantly create a nude or sexually explicit deepfake of that person. These image manipulations are generated by artificial intelligence (AI), the core technology underlying the app.

The Garante’s urgent assessment revealed a number of serious shortcomings, notably the failure to implement age-related restrictions, the lack of any system for verifying consent, and the absence of any indication that the output images are AI-generated.

In the Garante’s view, these shortcomings pose a significant threat to personal dignity and privacy – particularly where minors are involved. Recent media reports in Italy have highlighted a surge in misuse of these tools, contributing to growing public concern.

The Garante’s Procedure Against the Deepfake App

On 3 October 2025, the Garante announced an emergency order restricting AI/Robotics Venture Strategy 3 Ltd., the company behind Clothoff, from handling Italian users’ data. The action follows an official investigation launched in August 2025.

The decision was issued under Article 58 para. 2 lit. f GDPR, which empowers a supervisory authority to impose a temporary or definitive limitation on processing, including a ban, where processing poses a high risk to individuals’ rights and freedoms.

The decision is an interim protective measure applied while the broader investigation continues, not a final sanction. That said, the final outcome is unlikely to differ significantly from the interim decision unless the critical infringements identified by the authority are substantially remedied.

The Infringements Identified

Lack of consent and transparency: people whose images were transformed into explicit deepfakes had not provided consent, nor were they informed about how their images would be processed or transformed, in violation of Art. 6 para. 1 lit. a GDPR and other underlying data protection principles.

Ineffective security measures: because the AI-processed images were neither effectively anonymised nor watermarked, there was a high risk of further misuse, including unauthorised disclosure to third parties outside any legitimate context.

Failure to cooperate: the controller allegedly failed to provide adequate documentation to the Garante, undermining its accountability obligations.

These findings led the Garante to conclude that Clothoff violated key GDPR principles, enshrined in Articles 5 para. 1 lit. a, 5 para. 2, and 25, which resulted in the order to immediately suspend all processing of Italians’ personal data by the app until further review.

AI and Ethical Implications

The message from the Italian authority is clear: regulators are increasingly prepared to intervene decisively when AI technologies threaten fundamental rights. Compliance with data protection law is therefore a critical pillar of any AI development project, and companies should not underestimate this requirement.

EU data protection authorities have increasingly issued strong restrictive orders against applications or services that pose risks to individuals’ rights and freedoms. Well-known examples include Italy’s temporary bans of the Replika chatbot and ChatGPT in 2023 over concerns about minors and transparency, multiple EU-wide prohibitions against Clearview AI’s facial-recognition database, and emergency restrictions on TikTok for failing to enforce age verification. Regulators have also halted Facebook’s launch of its Dating feature in Ireland in 2020 due to inadequate documentation, and have intervened in other cases where apps mishandled sensitive or biometric data. The Hamburg DPA prevented Google from continuing Street View data collection until the violations were remedied (2010–2011). Collectively, these actions show a growing willingness by DPAs to suspend or block services when fundamental data protection principles are at stake.

Companies developing AI systems that process personal images or biometric data should take these requirements very seriously, as they form an integral part of fundamental safeguards. Organisations need to ensure that individuals genuinely understand and consent to how their personal data, including and especially images, will be used. Controllers also have an obligation to plan and build effective technical safeguards, from the moment the app is installed (for example, age verification) through its ongoing use (including anonymisation or pseudonymisation). Transparency, of course, remains a critical obligation and a pillar of compliant personal data processing.

AI developers should embed GDPR and personal data protection criteria in the planning phase of app development. This is the only way to design a compliant tool that ensures lawful processing and a respectful use of images and personal data throughout the app’s operation.

Conclusion

The Clothoff case is another example of how important it is for European DPAs to intervene where new products, innovative applications or highly sophisticated technologies put the rights, integrity and dignity of EU citizens at risk. These rights should always take precedence over technological development.

The rapid development of AI appears to go hand in hand with regulatory intervention – not only through the proliferation of new legislation, but also through the regulators’ role as watchdogs and bodies responsible for compliance enforcement.