
AI Act Now Applicable in More Parts: Regulation of ChatGPT & Co. Now in Effect

Tobias Jonas | 4 min read

Since Saturday, August 2, 2025, the second tranche of AI Act provisions has been applicable. This article provides an overview of these rules.

As of this date, the following applies:

  • the rules on the authorities responsible for monitoring and evaluating high-risk AI systems apply (Chapter III, Section 4);
  • general-purpose AI models are regulated (Chapter V);
  • the rules on AI governance apply (Chapter VII);
  • sanctions come into force (Chapter XII); and
  • the rules on confidentiality apply (Art. 78 AI Act).

Since February 2, 2025, Chapter I (Subject matter, scope, definitions, AI literacy) and Chapter II (Prohibited practices) of the AI Act have already been in effect.

Authorities for Monitoring and Evaluation of High-Risk AI Systems

From August 2, 2026, one year from now, certain AI systems listed in Annex III will be regulated as high-risk AI systems. These systems must undergo a conformity assessment procedure, which in turn requires an authority infrastructure. The provisions for this are set out in Chapter III, Section 4 and are now applicable: national authority structures for monitoring and evaluating high-risk AI systems, called "notifying authorities," are being established. These authorities designate and monitor the conformity assessment bodies, i.e., the bodies that assess whether a high-risk AI system complies with the AI Act.

Member States were required to designate the competent authorities by August 2. In Germany, however, the implementing act, the KIMÜG (Act on Market Surveillance and Ensuring Conformity of Artificial Intelligence Systems, or AI Market Surveillance Act), is still delayed. The draft designates the Federal Network Agency (Bundesnetzagentur) as the market surveillance authority; the notifying authority will likely be the German Accreditation Body (DAkkS).

Regulation of General-Purpose AI Models

Originally, the AI Act was not meant to cover ChatGPT and similar systems at all; it was to regulate only prohibited practices and high-risk AI systems. After the general availability of ChatGPT in November 2022 made clear how powerful, and therefore how risky, such models are, the EU saw a need for regulation here as well and added them to the AI Act in Chapter V.

General-Purpose AI Models

A general-purpose AI model is defined in Art. 3 No. 63 AI Act as:

an AI model – including where such an AI model is trained with a large amount of data using self-supervision at scale – that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development activities or prototyping purposes before they are placed on the market;

This covers ChatGPT, Claude, Copilot, Mistral, and similar systems. The obligations differ depending on whether a so-called "systemic risk" exists, and they apply only to providers of such models. As long as you merely use them ("deployer") and do not develop them or place them on the market under your own name ("provider"), you have no obligations under Chapter V. Whether a systemic risk exists depends on the capability level of the AI model; if the model presents such a systemic risk according to the specified criteria, the obligations are greater.

The regulations for classification and obligations are now applicable.
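The capability-based classification can be illustrated with a small sketch. Under Art. 51(2) AI Act, a general-purpose AI model is presumed to have high-impact capabilities, and thus systemic risk, when the cumulative compute used for its training exceeds 10^25 floating-point operations. The function below is a simplified illustration of that presumption only, not legal advice; the Commission can also designate models with systemic risk on other criteria.

```python
# Simplified sketch of the systemic-risk presumption for general-purpose
# AI models (Art. 51(2) AI Act): a model is presumed to have high-impact
# capabilities when its cumulative training compute exceeds 10^25 FLOPs.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_compute_flops: float) -> bool:
    """Return True if the model falls under the compute-based presumption."""
    return training_compute_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: a model trained with ~5 * 10^25 FLOPs meets the presumption,
# while one trained with 10^24 FLOPs does not.
print(presumed_systemic_risk(5e25))  # True
print(presumed_systemic_risk(1e24))  # False
```

Note that this threshold is only a rebuttable presumption; classification in practice also depends on designation decisions and the criteria in Annex XIII.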

AI Governance

Several bodies are being created to strengthen the EU's AI expertise and apply the regulation:

  • the AI Office (goal: building EU expertise and capabilities in the field of AI);
  • the European Artificial Intelligence Board ("AI Board", consisting of representatives of the Member States, advising and supporting the EU and the Member States in applying the AI Act);
  • the Advisory Forum (advising the AI Board and the EU Commission with technical expertise); and
  • the Scientific Panel (independent experts supporting the AI Office, in particular on general-purpose AI models).

Sanctions

The catalog of fines for violations of the AI Act comes into force. The highest sanction applies to those who operate a prohibited AI system: fines of up to 35 million euros or up to 7% of total worldwide annual turnover, whichever is higher. For violations of other obligations, fines of up to 15 million euros or up to 3% of annual turnover may be imposed; for supplying false information to authorities, up to 7.5 million euros or up to 1% of annual turnover.
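As a quick arithmetic sketch of how the ceilings work for an undertaking under the "whichever is higher" rule of Art. 99 AI Act, the applicable upper limit is the greater of the fixed amount and the turnover-based amount:

```python
# Sketch of the fine ceilings in Art. 99 AI Act ("whichever is higher").
# Tiers: (fixed cap in EUR, share of total worldwide annual turnover).
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),  # prohibited AI practices
    "other_obligation":    (15_000_000, 0.03),  # other AI Act obligations
    "false_information":   (7_500_000,  0.01),  # false info to authorities
}

def fine_ceiling(tier: str, annual_turnover_eur: float) -> float:
    """Upper limit of the fine for an undertaking: the higher of the two caps."""
    fixed_cap, turnover_share = TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# A company with EUR 1 billion turnover: 7% = EUR 70 million > EUR 35 million.
print(fine_ceiling("prohibited_practice", 1_000_000_000))  # 70000000.0
```

These are ceilings, not fixed amounts; the actual fine is set by the competent authority within these limits.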

However, fines for providers of general-purpose AI models are explicitly still excluded and will only be applicable from August 2, 2026 (Art. 113 para. 3 lit. b AI Act).

Regulations on Confidentiality

With Art. 78 AI Act now applicable, the EU Commission, market surveillance authorities, and notified bodies, as well as all other persons involved in applying the AI Act, are bound by confidentiality. This is intended in particular to protect the trade secrets of AI system providers, who must disclose such secrets in the course of implementing the regulation.

Outlook

In one year, on August 2, 2026, the AI Act will become generally applicable.

Excluded are only those high-risk AI systems classified as such under certain product legislation (Annex I); for these, the rules apply only from August 2, 2027. This accounts for the complexity of conformity assessment for such products.

Written by

Tobias Jonas

Co-CEO, M.Sc.

Tobias Jonas, M.Sc., is co-founder and Co-CEO of innFactory AI Consulting GmbH. He is a leading innovator in artificial intelligence and cloud computing. As co-founder of innFactory GmbH, he has successfully led hundreds of AI and cloud projects and established the company as a key player in the German IT sector. Tobias keeps his finger on the pulse: he recognized the potential of AI agents early on and hosted one of the first meetups on the topic in Germany. He also pointed his followers to the MCP protocol within the first month of its release and reported on the Agentic AI Foundation on the day it was founded. Alongside his executive roles, Tobias Jonas is active in several professional and business associations, including the KI Bundesverband and the digital committee of the IHK München und Oberbayern, and leads practice-oriented AI and cloud projects at the Technische Hochschule Rosenheim. As a keynote speaker, he shares his expertise on AI and makes complex technological concepts accessible.
