When organizations prepare for the EU Artificial Intelligence Regulation (the AI Act), the conversation usually gravitates toward high-level themes: ethical frameworks, bias mitigation, and human oversight. However, one of the most concrete obligations is remarkably easy to overlook: the requirement for systematic record-keeping through logs.
While logs are usually treated as a technical byproduct of software use, the AI Act elevates them into an explicit compliance mechanism for high-risk AI systems. Without robust logs, demonstrating compliance may become an impossible task.
What Are “Logs” and Why Do They Matter for AI Compliance?
In practical terms, logs are the automated record of what an AI system actually does. They are chronological records of events, actions, and relevant data inputs generated during the operation of the system. Under Recital 71 of the AI Act, high-risk AI systems must be designed so that they technically allow for the automatic recording of events throughout their lifetime.
Logs provide an evidence trail of how an AI interacted with the world. The importance of logs goes beyond simply complying with the law; for the purpose of AI risk mitigation, logs provide traceability. You cannot fix what you cannot trace.
Article 12 para. 2 AI Act clarifies why logging capabilities are required. They must enable the recording of events relevant for three core purposes:
- Identifying Risks: Logging must allow the identification of situations that may result in a risk within the meaning of the AI Act. Consider an AI system used to filter job applications. If the logs reveal that the system has begun systematically rejecting candidates from a particular geographic area, this may signal a potential risk to fundamental rights. Without logs, such patterns may go unnoticed.
- Post-Market Monitoring: Logs support the ongoing assessment of how the system performs once deployed. For example, if a hospital uses AI to prioritize emergency room patients, reviewing logs may reveal that performance degrades during high-volume night shifts. That information enables corrective action, such as retraining or recalibration.
- Monitoring Operations: Logging must support monitoring of how the system is used in practice. Take a bank relying on AI for credit scoring. Logs can show whether human reviewers meaningfully assess the AI’s recommendations or simply approve them automatically. If logs record who reviewed a decision and when, they help to ensure that human oversight is real rather than formalistic rubber-stamping.
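What such an event record might look like in practice can be sketched in a few lines. The field names below are purely illustrative, not mandated by the AI Act; the point is that each record ties a timestamp, the relevant inputs and outputs, and the responsible human reviewer together in an append-only trail that supports all three purposes above.

```python
import json
from datetime import datetime, timezone

def log_event(event_type, details, reviewer=None, log_path="ai_events.log"):
    """Append one structured event record to an append-only log file.

    All field names here are illustrative, not prescribed by the AI Act.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,    # e.g. "decision", "override", "error"
        "details": details,          # relevant inputs/outputs for traceability
        "human_reviewer": reviewer,  # evidences that oversight actually happened
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a credit-scoring recommendation reviewed by a named officer
log_event(
    "decision",
    {"applicant_id": "A-1042", "score": 0.82, "outcome": "approved"},
    reviewer="loan_officer_17",
)
```

Because every record carries a reviewer field, a later audit can distinguish genuine human oversight from automatic rubber-stamping simply by querying the log.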
Essentially, logs turn an opaque "black box" into a transparent process, allowing humans to oversee the system effectively and intervene when things go off the rails.
Provider Obligations: Designing, Documenting, and Enabling Logging
For a provider of a high-risk AI system (the entity developing the AI or placing it on the market under its own name), compliance obligations begin at the design phase.
- Technical Integration: The system must be engineered to automatically record events throughout its lifetime.
- Specific Requirements for Biometrics: When a system is used for "remote biometric identification", the logging standards are significantly more rigorous. Providers must ensure the logs capture the period of each use, the reference databases involved, the specific input data that triggered a match, and the identification of the natural persons responsible for verifying the results.
- Retention of Logs: Providers must keep the automatically generated logs for a period appropriate to the intended purpose of the system and, in any event, at least six months, unless applicable law provides otherwise.
- Transparency to Deployers: Recording the data is only half the battle; the provider must also "hand over the keys": the provider is required to supply the deployer (an entity using the AI in a professional capacity) with a clear description of the mechanisms that allow for the proper collection, storage, and interpretation of these logs.
- Cooperation with Authorities: In the event of a regulatory inquiry, the provider (or its authorized representative) is legally bound to grant competent authorities access to these automatically generated logs to demonstrate that the system operates in conformity with the law.
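For the stricter biometric case, the mandated content of a log record can be made tangible with a small sketch. The function and field names are my own assumptions; what Article 12 actually requires is the substance each field holds: the period of each use, the reference database checked, the input data that led to a match, and the natural persons who verified the result.

```python
from datetime import datetime, timezone

def rbi_use_record(start, end, reference_db, matched_input_ref, verifiers):
    """One illustrative record for a single use of a remote biometric
    identification system. Field names are hypothetical; the captured
    substance follows the four elements the AI Act requires."""
    return {
        "use_period": {"start": start.isoformat(), "end": end.isoformat()},
        "reference_database": reference_db,
        # A pointer to the matching input data, not the raw biometrics itself
        "matched_input": matched_input_ref,
        # The natural persons responsible for verifying the match
        "verified_by": verifiers,
    }

record = rbi_use_record(
    start=datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc),
    end=datetime(2025, 3, 1, 9, 45, tzinfo=timezone.utc),
    reference_db="watchlist_v12",
    matched_input_ref="frame_000123.jpg",
    verifiers=["officer_a.b", "officer_c.d"],
)
```

Storing a reference to the matching input rather than the biometric data itself is a design choice worth considering, since the log file otherwise becomes a sensitive-data store in its own right.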
Deployer Obligations: Keeping and Governing Automatically Generated Logs
While the provider must build the high-risk AI system so that it automatically records events, deployers are responsible for keeping and managing those automatically generated logs in practice.
Once the system is operational, log retention becomes a shared compliance obligation: providers and deployers must each keep automatically generated logs to the extent those logs are under their control.
- Operational Custodianship: Deployers are required to keep the logs automatically generated by high-risk AI systems to the extent that such logs remain under their control. This ensures that a record of how the system was actually used in the field remains available for review.
- Retention Mandate: Deployers must keep automatically generated logs under their control for a period appropriate to the intended purpose of the system and, in any event, at least six months, unless applicable law provides otherwise.
- Sector-Specific Integration: For financial institutions subject to EU financial services law, the logs must be maintained as part of the broader documentation and internal governance arrangements already required by existing financial regulations.
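The retention logic that both providers and deployers face can be reduced to a simple rule of thumb: keep each log for the longer of the purpose-appropriate period and the six-month floor. The sketch below is a simplification under that assumption; as the Act itself notes, other EU or national law (notably data protection law) can displace this period, which the sketch does not model.

```python
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=183)  # rough stand-in for the Act's six-month floor

def earliest_deletion_date(log_created, purpose_period):
    """Earliest date a log may be deleted under the default rule:
    the longer of the purpose-appropriate period and the six-month floor.
    Other applicable law may require a different outcome (not modeled here)."""
    return log_created + max(SIX_MONTHS, purpose_period)

# A log whose purpose only needs three months must still be kept for
# at least the six-month floor.
d = earliest_deletion_date(date(2025, 1, 1), timedelta(days=90))
```

A retention schedule built this way gives data protection teams a defensible, documented basis for each deletion date rather than an ad hoc judgment per log store.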
Log Retention and GDPR Interaction
One of the most concrete requirements in the AI Act is the retention period for logs. The explicit mandate for a six-month minimum provides a "legal baseline" for data retention schedules.
Under the General Data Protection Regulation (GDPR), personal data must not be kept longer than necessary, yet "necessity" is often difficult to quantify. Because many AI logs contain personal data (such as the names of operators, biometric match data, or specific user inputs), they fall under data protection rules.
By establishing a statutory requirement in Articles 19 and 26, the AI Act provides organizations with a clear justification for their retention schedules. It allows privacy professionals to point to a statutory retention floor under the AI Act, which can support the necessity analysis for keeping logs for at least six months where the logs are under the organization’s control. However, this must always be balanced against the specificities of the data; as the AI Act notes, this period applies unless other EU or national laws, particularly those regarding personal data protection, provide otherwise.
Conclusion
Whether your company is a provider or a deployer, logs should not be treated as a secondary IT concern. They are a primary compliance asset that bridges the gap between substantive system safety and auditable reality.