The Spanish Data Protection Authority (AEPD) recently imposed a €950,000 fine on a company offering digital identity and age verification services based on facial analysis technology. The decision is particularly relevant for organisations deploying such tools, including AI-based age estimation and identity verification systems that generate biometric templates, because it illustrates how regulators assess these technologies under existing GDPR rules.

The decision itself is complex and addresses a wide range of legal and technical issues, including biometric processing, consent mechanisms, system architecture, and data retention practices. For the purposes of this article, we focus on a few practical and widely applicable compliance lessons for organisations deploying similar technologies.

The Service

The company provides an application that allows users to prove their identity or age online. To do this, a user downloads the app and scans their face with their phone camera. The app performs a liveness check to confirm that the image comes from a real person rather than a photograph or video. The user is then asked to scan an identity document such as a passport or ID card.

The system compares the facial image from the camera with the image on the identity document. If the match is successful, the app creates a digital identity profile for the user. As part of this process, the system generates a biometric template, a mathematical representation of the user’s facial features.

This biometric template is stored in the user’s digital ID account within the app. In the future, when the user needs to prove their age or identity on another website or service (for example, an age-restricted platform), the app can confirm that it is the same person by performing another facial scan and comparing it to the stored template.
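
To make the flow concrete, the sketch below walks through enrollment and later re-verification. It is a minimal, hypothetical illustration: the function names (extract_template, liveness_check, similarity), the toy template representation, and the 0.8 matching threshold are our assumptions, not the vendor’s actual pipeline, which is not public.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of the enrollment and re-verification flow described
# above. Function names, the toy template, and the 0.8 threshold are
# illustrative assumptions; the vendor's actual pipeline is not public.

Template = list[float]

def extract_template(image: bytes) -> Template:
    # Stand-in for a face-embedding model: the biometric template is a
    # mathematical representation (feature vector) of the facial image.
    return [b / 255 for b in image[:8]]

def similarity(a: Template, b: Template) -> float:
    # Cosine similarity between two templates.
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms if norms else 0.0

def liveness_check(image: bytes) -> bool:
    return True  # stand-in for presentation-attack (photo/video) detection

@dataclass
class DigitalIdentity:
    user_id: str
    template: Template  # persisted in the user's digital ID account

def enroll(user_id: str, selfie: bytes, document_photo: bytes) -> DigitalIdentity:
    """Enrollment: liveness check, face-to-document match, template storage."""
    if not liveness_check(selfie):
        raise ValueError("liveness check failed")
    selfie_template = extract_template(selfie)
    if similarity(selfie_template, extract_template(document_photo)) < 0.8:
        raise ValueError("face does not match the identity document")
    return DigitalIdentity(user_id, selfie_template)

def verify(identity: DigitalIdentity, new_selfie: bytes) -> bool:
    """Later age/identity assertion: a fresh scan is matched against the
    stored template to confirm the same person is using the app."""
    return (liveness_check(new_selfie)
            and similarity(extract_template(new_selfie), identity.template) >= 0.8)
```

Note that the stored template, not the age or identity result shown to the relying website, is what enables the system to recognise the same person again. That design choice is central to the AEPD’s analysis below.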

Following an investigation, the AEPD concluded that the company had infringed several key provisions of the GDPR.

Biometric Data Rules Apply Even When the Output Is Not Identification

The company argued that its facial age estimation tool did not constitute biometric identification because the AI system only estimates a person’s age rather than identifying who they are.

However, the AEPD examined the purpose and architecture of the service, not only the technical output of the AI model. The system created a facial biometric template that was stored in the user’s digital identity account and later used to verify that the same person was using the application.

The company argued that it was not processing special category data because the system was not intended to identify or authenticate individuals. The AEPD rejected this argument, noting that the biometric template was expressly created and used to confirm that the person interacting with the app was the same user who created the account.

As a result, even if one component of the system performs age estimation, the overall service architecture relies on facial data to generate and store biometric templates that are later used to authenticate users. The processing therefore falls within Article 9 GDPR, which covers biometric data processed for the purpose of uniquely identifying a natural person, including authentication of this kind.
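
The distinction can be illustrated with a hedged sketch, reusing extract_template and DigitalIdentity from the code above (again, none of this is the vendor’s actual code): a purely transient age estimator produces a number and retains nothing, whereas the architecture at issue also persisted a template for later matching, and it is that persistence and reuse that triggers Article 9.

```python
# Hypothetical contrast, reusing extract_template and DigitalIdentity from
# the earlier sketch. In the AEPD's analysis, the Article 9 trigger was not
# the age output itself but the stored template reused to recognise the
# same person later.

def estimate_age_only(image: bytes) -> int:
    # Transient processing: an age value is derived and the facial data
    # are discarded. (Whether this alone avoids Article 9 is fact-specific.)
    features = extract_template(image)
    return round(18 + 10 * sum(features))  # stand-in for an age model

def estimate_age_and_enroll(user_id: str, image: bytes) -> tuple[int, DigitalIdentity]:
    # The architecture at issue: the same scan also yields a stored template
    # that is later used to authenticate the user, so the overall service
    # processes biometric data to uniquely identify a natural person.
    return estimate_age_only(image), DigitalIdentity(user_id, extract_template(image))
```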

Obtaining Valid Consent

The decision also highlights regulatory scrutiny of how personal data are reused to improve AI systems. In this case, certain processing activities, including the use of data for research, analytics, and internal development purposes, were enabled through settings that were activated by default. This means that the processing would occur unless the user actively changed the settings to opt out.

Under the GDPR, consent must involve a clear affirmative action by the user. It cannot be inferred from pre-ticked boxes or default settings, particularly where sensitive data such as biometric information are involved and where minors may be affected.
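
In practical terms, this favours granular consent flags that default to off and change only through an explicit user action. The sketch below illustrates the pattern; the field and function names are hypothetical, not drawn from the decision.

```python
from dataclasses import dataclass

# Hypothetical consent-capture pattern. Field names are illustrative.
# The key property: every optional purpose defaults to False (opt-in)
# and can become True only through an explicit user action.

@dataclass
class ConsentRecord:
    # Granular, separately consented purposes; none pre-ticked.
    biometric_verification: bool = False
    research_and_analytics: bool = False
    ai_model_improvement: bool = False

def record_user_choice(record: ConsentRecord, purpose: str, granted: bool) -> None:
    # Called only in response to a clear affirmative action, such as the
    # user ticking an unticked box; never invoked to set defaults.
    if not hasattr(record, purpose):
        raise ValueError(f"unknown purpose: {purpose}")
    setattr(record, purpose, granted)

consent = ConsentRecord()
assert not consent.ai_model_improvement  # reuse for AI development is off by default
record_user_choice(consent, "ai_model_improvement", True)  # explicit opt-in
```

Recording when each flag was set, and the wording shown to the user at that moment, also helps a controller demonstrate valid consent under the GDPR’s accountability principle.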

Why This Case Matters

This enforcement action highlights the compliance risks organisations face when deploying AI-enabled systems that process sensitive data. Where biometric information is used to uniquely identify a natural person, or is reused to develop and improve AI systems, privacy by design and GDPR compliance are critical from the outset.

Given the complexity and risks associated with biometric and AI-driven processing, organisations should involve their DPO or privacy counsel early when designing such systems. Early legal assessment can help identify compliance risks and prevent technical design choices that may later result in costly regulatory consequences.