
AI Competency

Fabian Artmann | 8 min read

On February 2, 2025, provisions of the AI Regulation apply for the first time, after the Regulation entered into force on August 1, 2024. This gap between entry into force and applicability should already be familiar to most from the General Data Protection Regulation, where companies were likewise granted a two-year implementation period.

As of February 2, 2025, Chapter I and Chapter II of the AI Regulation become applicable. These are the general provisions and the prohibition of certain uses of AI (the so-called “Prohibited Practices”). The general provisions also include the requirement for AI competency; in practice, the person entrusted with this competency is sometimes called an “AI Officer,” sometimes an “AI Manager.”

The AI Regulation’s Provisions on AI Competency

What does the AI Regulation regulate regarding AI competency?

Providers and operators of AI systems must now ensure “AI competency.” Art. 4 of the AI Regulation states:

“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”

According to Art. 3 No. 56, AI competency is legally defined as

“skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems and to gain awareness about the opportunities and risks of AI and possible harm it can cause.”

What not everyone realizes: the term “deployer” is somewhat misleading. It does not require hosting an AI system or AI model yourself; simply using common AI systems such as ChatGPT, Microsoft Copilot, or GitHub Copilot is enough to be considered a deployer. Every company that uses AI is therefore already a deployer within the meaning of the AI Regulation. Purely private use, however, does not make you a deployer: as in data protection law, the household exception applies.

A company becomes a provider when it develops an AI system or model (for example, OpenAI as provider of ChatGPT, Anthropic of Claude, Mistral of its models, Microsoft of Copilot), or has one developed for it by a service company, and places it on the market under its own name or trademark, or when it puts an AI system into service under its own name or trademark (for example, by branding an existing AI system as its own “Company GPT”).

Significance for Companies

Art. 4 of the AI Regulation is thus relatively far-reaching, even though the provision seems very unassuming:

As soon as a company uses AI, it must ensure from February 2, 2025 that AI competency exists within the company. The aim is that the people who use AI in the company know what the AI does, thereby ensuring human oversight.

What does this mean concretely for companies starting February 2?

  • Who is affected? Most companies should now be using AI, given its immense significance since the release of ChatGPT in November 2022. Even if a company provides no AI tools of its own (which should be increasingly rare), it becomes a deployer when employees use their own AI tools as shadow IT (so-called “Bring Your Own AI”). According to surveys, this is increasingly the case, with all the associated risks for companies regarding data protection and trade secret protection.

  • Who must have the competency? The competency must exist among staff or among persons who use AI on behalf of the company (i.e., service providers who are not employees). It must be ensured continuously while AI is in use, so it is particularly important that competency is maintained even during vacation or sick leave. Depending on their size, companies should therefore have several people who meet the AI competency requirement. This is also a difference from the Data Protection Officer, of whom you typically appoint only one, without a substitute. The AI Regulation itself does not speak of an “AI Officer” or “AI Manager,” but of “AI competency”; in practice, however, these terms are used interchangeably for people who have AI competency. They often appear in training programs, including the AI Officer Training from innFactory AI Consulting GmbH. This expresses that very specific people are trained for this competency, which firstly makes proof easier and secondly allows specific people in the company to be tasked with the business development of AI, which has strategic advantages for the company.

  • How high must the AI competency be, and how must it be proven? This depends particularly on the context in which AI is used and which people are affected. The greater the danger of the AI use, the more competency must be demonstrated. The requirements are accordingly higher for high-risk AI systems, where the demands on “human oversight,” as Art. 14 of the AI Regulation calls it in its heading, are also higher. The AI Regulation thus does not specify precisely how high the AI competency must be, but leaves this to the individual case. A company that uses AI in healthcare or in a nuclear power plant, i.e., in areas where the danger is greater, must demonstrate more AI competency than one that only uses ChatGPT to draft texts that a human then reviews.

  • Isn’t this just another bureaucratic obligation? One might think this is just another bureaucratic duty burdening companies, as is often claimed in current election campaigns. In fact, AI competency is not intended as another bureaucratic hurdle; the Regulation was also enacted so that the greatest possible benefit can be derived from AI systems (Recital 20 of the AI Regulation). AI competency is therefore also meant to benefit the company itself, for example by securing its future viability through the use of AI.

  • How is the obligation enforced, and what are the consequences of a violation? The Data Protection Officer must be reported by the company to the data protection supervisory authority, and a violation of the obligation to appoint and report is punishable with a fine of up to 10 million euros or up to 2% of total worldwide annual turnover (Art. 83 para. 4 lit. a GDPR). No such obligation to report a person with AI competency to the authorities exists, nor does the AI Regulation provide for fines where AI competency is lacking. The fines under the AI Regulation are quite high (depending on the violation, up to between 7.5 and 35 million euros or 1% to 7% of worldwide annual turnover), but AI competency is not mentioned in the fine catalog. However, for a deployer of a high-risk AI system, a violation is subject to fines (Art. 99 para. 4 lit. e, Art. 26 para. 2 AI Regulation): up to 15 million euros or up to 3% of total worldwide annual turnover.

  • Is it therefore an obligation without consequences for most companies? No. The AI competency obligation is not subject to fines except in the case of high-risk AI. At the same time, however, it is a legal obligation; Art. 4 AI Regulation is not optional but mandatory: providers and deployers “shall take” measures to ensure AI competency, although “to their best extent” softens this somewhat. The literature therefore disputes what practical significance the obligation has when it cannot be enforced with a fine.

However, it is still not a “toothless tiger”:

  • Anyone can file a complaint with the market surveillance authority under Art. 85 para. 1 of the AI Regulation if provisions of the AI Regulation are violated, including Art. 4 AI Regulation. Personal affectedness is not even required; complaints based on suspected or observed objective violations are sufficient.
  • If another obligation of the AI Regulation is violated that is subject to fines, AI competency can play a role in the context of fault.
  • In civil law, a violation of the provision can have consequences. A violation can constitute a civil law breach of duty that leads to a claim for damages if damage occurs.
  • Under criminal law, a violation of Art. 4 can support a subjective finding of negligence where a protected legal interest is harmed.
  • Under competition law – if Art. 4 AI Regulation is to be considered a market conduct norm within the meaning of § 3a UWG, which is likely the case – a competition violation may exist; however, the latter is unlikely to lead to cease-and-desist waves due to difficulty of proof.

Let’s Get Started on AI Competency

innFactory AI Consulting GmbH offers a two-day training course for AI Officers. The training conveys the technical and legal knowledge within the meaning of Art. 4 AI Regulation and also shows how an AI Officer can implement AI sensibly in the company. Beyond meeting the legal requirements, you can start fully leveraging AI in your company and avoid missing out on this crucial competency for the future.

Written by

Fabian Artmann, Co-CEO, M.Eng.

Fabian Artmann, M.Eng., is co-founder and Co-CEO of innFactory AI Consulting GmbH. As an industrial engineer, he combines technical know-how with business understanding and process-oriented thinking. As an AI consultant, he specializes in identifying inefficiencies in existing workflows on the basis of the innFactory AI Innovation Cycle, structuring change processes, involving employees, and ensuring that new AI technologies can be integrated seamlessly into the optimized business processes. Fabian Artmann has broad expertise at the interfaces of technology, project management, and business processes. In the course of his career, he has implemented digital projects for the BMW Group, IWC Schaffhausen, and MTU Aero Engines.

LinkedIn