EU AI Act Enforcement Model Splits Duties, Tightens Rules for Agencies and White-Label Providers
The EU AI Act splits enforcement between national authorities and the European Commission, with GPAI models like GPT-5 and Gemini 3 facing exclusive Commission oversight.

The European Parliament's Members' Research Service published a detailed explainer on March 18 clarifying how the EU Artificial Intelligence Act distributes enforcement authority, a question with direct consequences for agencies and white-label AI providers operating across the bloc.
The AI Act, adopted in 2024, establishes rules for AI systems and general-purpose AI (GPAI) models placed on the EU's internal market. Enforcement of those rules is not unified under a single authority. Instead, the Act creates a hybrid model, part centralised and part decentralised, splitting responsibilities between EU member states and the European Commission depending on the type of AI involved.
For conventional AI systems, the risk-based approach is enforced at national level, with member-state authorities receiving support and advice from centralised bodies, including the Commission. General-purpose AI follows a different path entirely: GPAI rules are supervised and enforced exclusively by the Commission, not by national authorities. That distinction matters for providers whose products fall within the GPAI definition.
The explainer specifies that GPAI models are those capable of performing a wide range of tasks and of being integrated into a variety of systems or applications. The definition explicitly covers generative AI models, naming OpenAI's GPT-5, Google's Gemini 3, and Mistral Large 3 as examples. Agencies and white-label providers that build products on top of these models, or that offer similar foundation-model capabilities under their own branding, sit squarely within the scope of Commission-level scrutiny.

The stakes rise further for models designated as posing systemic risk. The Act treats a GPAI model as carrying systemic risk if it reaches a defined capability threshold or is formally designated as such by the Commission. Potential systemic risks cited in the Act include negative effects on democratic processes and on public and economic security. Models that cross that threshold face additional obligations, notably model evaluation and risk assessment.
Three governance bodies assist with enforcement across both tracks: the European AI Board, a scientific panel of independent experts, and an AI advisory forum. Their role is advisory and supportive, with no independent enforcement powers, but their input will shape how the Commission and national authorities interpret and apply the rules.
Researchers behind the Members' Research Service analysis flagged one structural concern: the decentralised pattern remains dominant in the Act, which could result in uneven enforcement across the EU. For white-label providers and agencies with clients in multiple member states, that unevenness could translate into inconsistent compliance expectations depending on where products are deployed and which national authority holds jurisdiction over AI systems in a given market.