This blog post explores two topics currently attracting significant attention in practice: ChatGPT and the AI Act. Using a practical example, we will show how ChatGPT can be classified under the Act.
Our scenario: A company wants to provide its employees with ChatGPT and has chosen the paid ChatGPT Team licence. In this post, we will walk through the classification process step by step and highlight how the assessment can be carried out and documented in our management tool DSN port (AI Module). DSN port is our software solution designed to document all key aspects of data protection (GDPR) and artificial intelligence (AI Act). The guiding questions from our example can support you in the classification process—even if you choose not to use DSN port.
Preliminary Considerations
What is ChatGPT?
The AI Act distinguishes between two relevant categories: AI models and AI systems. Without going into unnecessary detail here, ChatGPT qualifies as an AI system; the underlying models include GPT-4, GPT-4o mini, and o1-pro.
What role does the example company play?
The AI Act defines four possible roles in relation to AI systems: providers, deployers, distributors, and importers. In our scenario, the company is a deployer – it uses ChatGPT but does not develop, sell, or import it.
This interim finding is important, as the role directly shapes the obligations under the AI Act.
What can ChatGPT do?
At first glance, the answer seems obvious: ChatGPT is a chatbot that generates text. However, it can also generate images. This detail is relevant for classification purposes.
Documentation in DSN port
With these preliminary considerations in mind, we create a new entry in the AI module of DSN port, labelled “ChatGPT (Team Licence)”.
We describe the AI system in the designated form.
Then we select the role “deployer”.
This step influences the subsequent questions asked during classification.
Now We Can Dive Into the Classification Process
Step 1: Excluding Prohibited AI Practices
The AI Act (Article 5) prohibits certain AI practices. While the Act itself is worded in a complex way, the test can be simplified: if all key questions can be answered with “no,” the practice is not prohibited.
In our case, the company quickly concludes that no prohibited practices apply.
It is worth noting that the company had already issued an AI policy. This is particularly relevant since ChatGPT is a general-purpose AI system. While prohibited uses cannot be fully ruled out in theory, the existence of an AI policy provides guidance and safeguards. DSN port also offers a draft AI policy under the Documents tile.
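Schematically, the screening described in Step 1 boils down to a simple checklist: if every question is answered “no,” no prohibited practice applies. The sketch below is our own illustration; the question texts are loose paraphrases, not the statutory wording of Article 5, and the function is not part of DSN port.

```python
# Illustrative sketch of the Article 5 screening logic.
# The questions are simplified paraphrases, not the legal text.
ARTICLE_5_SCREENING = [
    "Does the system use subliminal or manipulative techniques?",
    "Does it exploit vulnerabilities of specific groups of people?",
    "Is it used for social scoring?",
    "Does it perform real-time remote biometric identification in public spaces?",
]

def is_prohibited(answers: dict[str, bool]) -> bool:
    """A practice is ruled out only if every screening question is answered 'no'."""
    return any(answers.get(question, False) for question in ARTICLE_5_SCREENING)

# Our example company answers "no" to every question:
answers = {question: False for question in ARTICLE_5_SCREENING}
print(is_prohibited(answers))  # False: no prohibited practice applies
```

In practice the checklist lives in DSN port rather than in code, but the logic is the same: a single “yes” is enough to require a closer look.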
Step 2: AI Systems for Specific Purposes
Article 50 of the AI Act sets out transparency obligations for certain AI systems. Since these obligations also depend on the role documented earlier, only three of the five initial questions apply in this case.
For our example, the solution is straightforward.
Step 3: High-Risk or Not?
A central question in classification is whether the system qualifies as a high-risk AI system under Article 6.
Again, the AI Act provides several screening questions. If all are answered “no,” the system is not high-risk; a “yes” would trigger additional checks.
In our example, ChatGPT is not classified as high-risk. The company’s existing AI policy and employee training measures also support this conclusion, which we documented under Explanatory note.
Had the system been classified as high-risk, DSN port would have required selecting the relevant catalogues of obligations and documenting compliance. In this case, however, no such obligations apply.
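The high-risk check in Step 3 follows the same pattern, with one difference: a “yes” does not settle the matter but triggers a more detailed examination. A minimal sketch of this two-stage logic, assuming placeholder answers rather than the actual Article 6 criteria:

```python
# Illustrative two-stage sketch of the Article 6 high-risk screening.
# The inputs are placeholder yes/no answers, not the statutory criteria.
def screen_high_risk(screening_answers: list[bool]) -> str:
    """All 'no' -> not high-risk; any 'yes' -> detailed checks are needed."""
    if any(screening_answers):
        return "further checks required"
    return "not high-risk"

# Our example: every screening question is answered "no".
print(screen_high_risk([False, False, False]))  # prints "not high-risk"
```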
Step 4: General Obligations
Beyond prohibited practices, transparency requirements, and classification, other aspects must also be considered:
- Data protection: If personal data is processed, GDPR compliance becomes essential.
- Confidentiality: Companies must evaluate whether sensitive information might be entered into the AI system.
- Contracts: The provider’s contractual terms (in this case, OpenAI’s) also play a role.
These considerations go beyond the AI Act but can still be documented in DSN port – for example, under the Directory of Processing Activities.
In our case, the company also complies with Article 4 of the AI Act, which requires measures to ensure adequate AI literacy. The company meets this requirement by providing employees with DSN train’s eLearning course “Basic Training AI Competence”.
Training records and certificates are seamlessly integrated with DSN port via our DSN traincenter learning platform – everything from a single source, but usable independently if required.
Result
Our review revealed no obstacles to the intended use of ChatGPT – all lights are green.
As a forward-looking step, and since the company plans to allow the input of personal data in the future, we added a corresponding measure, which is incorporated into DSN port’s action planning and monitoring. The cockpit view then provides a clear overview of compliance and ongoing obligations.
Conclusion
In this example, the company successfully classified ChatGPT under the AI Act, documented the process in DSN port, and ensured accountability at every step.
If you’d like to see this in practice, we invite you to try a free trial of DSN port. This includes access to our draft AI policy and, if you mention this blog post, we will gladly create the sample entries shown here for you – or provide a guided introduction to the system and its full range of functions.