On 1 June 2025, the Cyberspace Administration of China (CAC) brought into force the Measures for the Security Management of Face Recognition Technology Applications (the “Measures”). This landmark regulation is the first piece of dedicated legislation in China governing the use of biometric facial data, and it carries significant implications for any organization processing face recognition data within Chinese territory.

Why Now?

China already had a general legal framework in place. The Personal Information Protection Law (PIPL), the Data Security Law, the Cybersecurity Law, and the Network Data Security Regulations all touch on how personal and sensitive data should be handled. The Measures build directly on these, filling a gap with rules specifically designed for face recognition.

Face recognition technology has quietly become part of everyday life in China. Airports and train stations use it for identity checks. Residential compounds and office buildings rely on it for access control. Banks and payment platforms deploy it to authenticate transactions. Public security agencies use it to track and identify suspects.

All of this has brought real convenience. But it has also raised serious concerns. Unlike a password, facial data cannot be reset once it is leaked. The growing risk of identity fraud, unauthorized profiling, and covert surveillance eventually pushed regulators to act.

Who Does This Apply To?

According to Article 2 of the Measures, the regulation covers any entity or individual applying face recognition technology to process facial information within mainland China. There is one notable exception: organizations using facial data solely for research and development or algorithm training are not covered by the Measures, though they must still comply with the PIPL and other applicable laws. Once the technology moves from the lab into a real-world application, however, full compliance obligations apply.

The Core Compliance Requirements

  • Mandatory Disclosure (Article 5): Before collecting facial data, organizations must clearly inform individuals of: the identity and contact details of the data handler; the purpose and method of processing; the retention period; the necessity of the processing and its potential impact on the individual; and how individuals can exercise their rights.
  • Separate Consent (Articles 6 and 7): Where consent is the legal basis for processing, it must be freely given, informed, and obtained specifically for the facial data in question. It cannot be bundled into a general set of terms and conditions. This standalone consent requirement is consistent with PIPL’s higher standard for sensitive personal information. For children under 14, a parent or guardian must give consent. Individuals also have the right to withdraw consent at any time, and the controller must make this straightforward. Withdrawal does not affect anything that was lawfully processed beforehand.
  • Store Data on the Device, Not in the Cloud (Article 8): Unless the law says otherwise or the individual separately consents, facial data must stay on the face recognition device itself and must not be transmitted externally over the internet. The default assumption shifts from cloud-first to device-first. Organizations that want to store or transmit facial data in the cloud will need to obtain separate, explicit consent on top of everything else.
  • Personal Information Protection Impact Assessment (Article 9): Before deploying any face recognition system, organizations must carry out a Personal Information Protection Impact Assessment (PIPIA) and keep a record of their findings for at least three years. The assessment needs to cover whether the processing is lawful and necessary, what risks exist for data breach or misuse, and whether the protective measures in place are adequate. Compared to earlier drafts, the final version of the Measures adds a specific requirement to assess the risk of illegal sale or use of facial data.
  • Face Recognition Cannot Be the Only Option (Article 10): This is one of the more distinctive rules in the Measures. Where an equivalent non-biometric verification method can achieve the same purpose, organizations cannot make face recognition the only choice. This is a direct response to the widespread practice of forcing users into facial scanning as a condition of access to services.
  • Technical Security Measures (Article 14): Face recognition systems shall use data encryption, access controls, authorization management, security auditing, and intrusion detection. Raw biometric samples should generally not be stored. Where possible, data should be de-identified and stored separately from identity information.
  • Registration Requirement (Article 15): Organizations that have stored facial data for 100,000 or more individuals must register with the provincial-level (or above) CAC authority within 30 working days of crossing that threshold. The filing needs to include basic information about the controller, the purpose and method of processing, the volume of data stored, security measures in place, processing methods and procedures, and the PIPIA report. If material changes occur, or if the organization stops using face recognition altogether, updated or deregistration filings are due within 30 working days.

What This Means in Practice

For businesses operating in China, the Measures send a clear message: face recognition is a high-risk activity that needs proactive governance, not just a policy update. Concretely, organizations should:

  • Audit all current face recognition use cases against necessity and proportionality standards.
  • Redesign consent flows to ensure separate and easily withdrawable consent.
  • Make alternative verification methods available wherever face recognition is currently the only option.
  • Review data architecture to align with the device-first storage default.
  • Conduct PIPIAs before any new deployment.
  • Track cumulative storage volumes so the registration filing is triggered at the right time.