On July 24, 2025, the German Federal Office for Information Security (BSI) published an interesting white paper, “Bias in Artificial Intelligence” (currently only available in German), authored by Dr. Jonas Ditz and Elmar Lichtmeß, that provides developers, providers, and operators of AI systems with an introduction to the issue of bias.
What does the term “bias” mean?
The term “bias” describes unequal and consequently discriminatory treatment of users or companies in the context of AI systems, BSI, p. 5.
Why is this problematic?
AI systems are used, among other things, as standalone systems (e.g., ChatGPT, Google Gemini, Microsoft Copilot) or embedded as supporting components in a wide range of applications. For AI applications to deliver accurate results, those results should be as unbiased and unadulterated as possible. In its white paper, the BSI states that bias in AI systems can lead to discriminatory and/or erroneous decisions, which can result in substantial claims for damages, disruptions to business operations, or even jeopardize IT security in general, BSI, pp. 5, 29.
One of many examples cited is that biometric access controls may not recognize people of a certain ethnicity and thus may unjustifiably deny or grant access, BSI, p. 6.
There may also be consequences for a company’s cybersecurity. On page 27 of the white paper, the BSI describes the impact of existing biases on cybersecurity in relation to the three central protection goals of the CIA triad: confidentiality, integrity, and availability. The authors describe how attackers can exploit existing biases as a gateway, for example to “extract” sensitive data from AI models, or use them as so-called “attack vectors,” BSI, pp. 27-28.
Possible types of bias
In its white paper, the BSI provides an extensive, though non-exhaustive, list of possible types of bias. Bias can occur at every stage of an AI system’s life cycle, from data collection, through model development and model training, to the deployment of the AI system, BSI, p. 5.
For example, a “historical bias” (BSI, p. 7) can lead to the use of outdated data, which in turn produces results that are no longer relevant or acceptable today. One example cited is an applicant selection system trained on data from a time when significantly more men than women were hired. Future candidate decisions could thus be influenced by outdated data, leading to women being underrepresented, see BSI, p. 8.
Similar results can also arise when certain groups are not represented at all in the AI data, known as representation bias, see BSI, p. 8, but also from an over- or under-emphasis of certain data values, see BSI, loc. cit. It is not possible to summarize the multitude of possible biases in AI systems here without repeating the contents of the BSI white paper; reading it for yourself is therefore highly recommended.
Joy Buolamwini, the computer scientist and founder of the Algorithmic Justice League, who is also listed in the extensive references at the end of the white paper, describes in her book “Unmasking AI” how her interest in bias in IT applications and AI was sparked by facial recognition software that failed to recognize the faces of Black people and thus excluded them entirely from use, a phenomenon she calls the “coded gaze,” Buolamwini, “Unmasking AI”, pp. xii-xiii.
The German Federal Anti-Discrimination Agency cites online retail and finance as areas where AI is most likely to cause discrimination. Existing biases can lead to discriminatory decisions being made about who is allowed to make purchases on credit, who is granted a loan, or what price is charged for insurance products, for example.
What needs to be done?
Ultimately, the BSI calls on all developers, providers, and operators of AI systems to take action: learning about bias issues, clarifying responsibilities, combating bias starting with the data, minimizing undesirable bias in AI models, and following further developments in this area, BSI, loc. cit., p. 3.
To identify existing bias (bias detection), the BSI also provides extensive information and helpful questions that an organization should ask itself, see BSI, from p. 13. This also includes information on bias mitigation, i.e., the reduction of existing bias, BSI, from p. 19. In this context, the white paper also contains a non-exhaustive list of open-source fairness toolboxes from the machine learning context, BSI, p. 19, footnote 11.
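To give a flavor of what such fairness toolboxes measure, the following is a minimal, purely illustrative sketch (not taken from the BSI white paper) of one common bias-detection metric: the difference in selection rates between groups, sometimes called demographic parity difference. The field names and data are hypothetical.

```python
# Illustrative sketch of a simple bias-detection metric; the data,
# group labels, and field names below are invented for this example
# and do not come from the BSI white paper.

def selection_rates(records):
    """Share of positive decisions (e.g., 'hired') per group."""
    totals, positives = {}, {}
    for group, hired in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if hired else 0)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(records):
    """Largest gap between group selection rates; 0.0 means parity."""
    rates = selection_rates(records).values()
    return max(rates) - min(rates)

# Hypothetical historical hiring data in which men were hired far
# more often than women -- the "historical bias" scenario above.
data = ([("m", True)] * 70 + [("m", False)] * 30 +
        [("f", True)] * 30 + [("f", False)] * 70)

print(selection_rates(data))                         # {'m': 0.7, 'f': 0.3}
print(round(demographic_parity_difference(data), 2)) # 0.4 -> strong disparity
```

A model trained on such data would be rewarded for reproducing the 0.4 gap; metrics of this kind are what the toolboxes referenced by the BSI compute, typically alongside many other fairness measures and mitigation techniques.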
Conclusion
Even though the BSI’s recommendations and guidelines are not generally binding for companies, they set relevant good-practice standards with regard to existing documentation requirements and can therefore be relevant for compliance with legal IT, data protection, and/or cybersecurity requirements. Ultimately, it is in all of our interests to minimize, or better yet eliminate, bias-based results when using AI. One can only agree with the BSI’s conclusion that such systems should operate securely and reliably.
As always, we will continue to monitor developments in this area and keep you informed.