TL;DR
The EU AI Act entered into force on 1 August 2024. Prohibited AI practices have been banned since 2 February 2025. General-purpose AI model obligations applied from 2 August 2025. High-risk system requirements — the most demanding tier — apply from 2 August 2026. Most SaaS products using AI for customer service or analytics are limited-risk or minimal-risk and face transparency requirements rather than full conformity assessments.
What Just Changed in the EU AI Act: A Plain-English Summary for SaaS Founders
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024. It applies to providers and deployers of AI systems anywhere in the world when those systems are used in the European Union. If your SaaS product uses AI and has EU users, the Act applies to you in some form. The question is which tier applies and when its requirements kick in.
This post summarises the current state as of mid-2026: what is already in force, what is coming, and what each tier means in practice for a typical SaaS company. It is not a comprehensive legal analysis. For obligations specific to your product architecture and user base, consult a qualified attorney.
The four-tier structure
The Act categorises AI systems into four risk tiers. Each tier carries different obligations and different compliance timelines.
Unacceptable risk (prohibited practices). Certain AI applications are banned outright under Article 5. These include social scoring (by public or private actors), real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), AI that deploys subliminal or purposefully manipulative techniques that distort behaviour, and AI that exploits vulnerabilities related to age, disability, or social or economic situation. Prohibited practices have been banned since 2 February 2025. If your product does any of these, it was already non-compliant before this article was written.
High-risk systems (Article 6 and Annex III). These are AI systems used in eight specific domains: biometric identification and categorisation, critical infrastructure management, education and vocational training, employment and workers' management, access to essential private services and public benefits, law enforcement, migration and border control management, and administration of justice and democratic processes. If your product makes or materially influences decisions in these areas, you likely fall here. High-risk obligations for Annex III systems apply from 2 August 2026; for AI embedded as a safety component in products covered by Annex I product legislation, they apply from 2 August 2027.
General-purpose AI (GPAI) models (Article 51 and following). These rules apply to providers of large foundation models, not to companies that use them via API. If you build on top of GPT-4, Claude, or Gemini, you are a deployer or downstream provider of an AI system, not a GPAI model provider. GPAI obligations applied from 2 August 2025.
Limited-risk and minimal-risk systems. Most SaaS AI features (chatbots, recommendation engines, content generation tools, analytics dashboards with AI summaries) fall here. The primary obligation is transparency: users must be told they are interacting with an AI system, and synthetic content must be labelled (Article 50). No conformity assessment is required, and minimal-risk systems carry no mandatory obligations at all.
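As a concrete illustration, here is what an Article 50 disclosure might look like in a chat backend. The Act does not prescribe field names or wording, so the `ChatResponse` shape and the notice text below are hypothetical; this is a minimal sketch, not a compliance guarantee.

```typescript
// Hypothetical response shape that carries an Article 50 disclosure
// alongside the generated text. The Act requires informing users they
// are interacting with an AI system; it does not mandate this structure.
interface ChatResponse {
  message: string;      // the model's output
  aiGenerated: true;    // machine-readable flag for downstream UIs
  disclosure: string;   // user-facing notice, shown at first interaction
}

function buildResponse(modelOutput: string): ChatResponse {
  return {
    message: modelOutput,
    aiGenerated: true,
    disclosure:
      "You are chatting with an AI assistant. Responses are generated automatically.",
  };
}
```

The point of attaching the disclosure to the response itself is that every client surface (web widget, mobile app, email digest) can render it, rather than relying on one UI remembering to add it.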
What is already in force as of mid-2026
As of the date of this post, two phases are already active and a third is imminent:
- Prohibited practices (since 2 February 2025): The Article 5 bans on social scoring, covert manipulation, real-time biometric identification, and exploitation of vulnerabilities are fully in force.
- GPAI model rules (since 2 August 2025): Providers of general-purpose AI models must publish technical documentation, comply with EU copyright law, and publish a summary of the content used for training. Models with systemic risk (training compute above 10^25 FLOPs) face additional obligations, including adversarial testing and incident reporting; a back-of-the-envelope compute check follows this list.
- Transparency obligations for limited-risk systems (formally from 2 August 2026): If you operate a chatbot, disclose it. If you generate synthetic media, label it. The Article 50 deadline has not yet passed as of this writing, but disclosure is already widely expected in practice.
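For orientation, the 10^25 FLOP threshold can be sanity-checked with the widely cited heuristic that training compute is roughly 6 × parameters × training tokens. The heuristic and the example numbers below are illustrative assumptions, not part of the Act, and actual classification turns on the Commission's assessment rather than arithmetic alone.

```typescript
// Back-of-the-envelope training-compute estimate using the common
// approximation FLOPs ≈ 6 * N (parameters) * D (training tokens).
// The 1e25 threshold comes from the Act; the heuristic and the example
// model below are illustrative, not a legal test.
const SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25;

function estimateTrainingFlops(params: number, tokens: number): number {
  return 6 * params * tokens;
}

// Example: a hypothetical 70B-parameter model trained on 15T tokens.
const flops = estimateTrainingFlops(70e9, 15e12); // ≈ 6.3e24
console.log(
  flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
    ? "Presumed to have systemic risk under Article 51"
    : "Below the systemic-risk presumption threshold",
);
```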
What is coming in August 2026
The high-risk system obligations under Chapter III of the Act apply from 2 August 2026. These are the most demanding requirements in the Act and include:
- Risk management system: A documented, ongoing process for identifying and mitigating risks throughout the AI system lifecycle (Article 9).
- Data governance: Training, validation, and testing datasets must be subject to governance practices addressing biases, gaps, and representativeness (Article 10).
- Technical documentation: Before placing a high-risk system on the market, providers must prepare and maintain technical documentation demonstrating compliance (Article 11).
- Conformity assessment: Either self-assessment or third-party audit depending on the category. Self-assessment is available for most high-risk systems other than biometric identification (Article 43).
- EU registration: High-risk systems must be registered in the EU AI database (Article 71).
- Human oversight: High-risk systems must be designed so that human operators can understand, oversee, and intervene in their outputs (Article 14). One implementation sketch follows this list.
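Article 14 does not prescribe an implementation, but one common pattern is to gate consequential outputs behind human review, so the system never acts on a recommendation unreviewed. The types and function below are a hypothetical sketch of that pattern, not language from the Act.

```typescript
// One possible human-oversight pattern: route each model recommendation
// through a human reviewer before it takes effect, and record overrides
// for the audit trail. All names here are illustrative.
interface ModelRecommendation {
  subjectId: string;
  decision: "approve" | "reject";
  confidence: number; // 0..1, as reported by the model
  rationale: string;  // shown to the reviewer to support meaningful oversight
}

interface ReviewedDecision extends ModelRecommendation {
  reviewedBy: string;
  overridden: boolean;
}

async function applyWithOversight(
  rec: ModelRecommendation,
  humanReview: (r: ModelRecommendation) => Promise<{
    reviewer: string;
    finalDecision: "approve" | "reject";
  }>,
): Promise<ReviewedDecision> {
  // The recommendation is never applied directly: a human confirms or
  // overrides it, and any divergence from the model is recorded.
  const review = await humanReview(rec);
  return {
    ...rec,
    decision: review.finalDecision,
    reviewedBy: review.reviewer,
    overridden: review.finalDecision !== rec.decision,
  };
}
```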
Interaction with GDPR
The EU AI Act does not replace GDPR. The two regulations overlap significantly where AI systems process personal data. Solely automated decision-making that produces legal or similarly significant effects on individuals is already governed by GDPR Article 22, which permits it only on specific legal bases and requires safeguards such as the right to obtain human intervention and the right to contest the decision. The AI Act adds further requirements on top.
If your high-risk AI system processes personal data, both frameworks apply simultaneously. Data protection impact assessments (DPIAs) under GDPR and the risk management documentation required by the AI Act overlap but are not identical. See the GDPR checklist for the data subject rights requirements that apply regardless of your AI Act tier.
How to determine if your AI system is high-risk: a brief decision flow
Work through these questions in order to determine whether your product falls into the high-risk category; a code sketch of the same flow follows the list.
- Is the system used in the EU? The Act applies where a system is placed on the EU market, used in the EU, or produces output that is used in the EU. If none of these apply, the Act does not apply in its current form. If yes, continue.
- Is the system used as a safety component in products covered by EU product legislation? Machinery, medical devices, aviation equipment, vehicles, and similar categories have separate product regulations that the AI Act interfaces with. If yes, high-risk rules apply.
- Is the system listed in Annex III? Check whether your use case falls in one of the eight domains: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice. If yes, high-risk rules apply.
- Does it make or materially influence individual decisions? A system used purely for aggregated analytics with no individual-level decision output is less likely to be classified as high-risk even within a listed domain.
- If not high-risk, is there a transparency obligation? Chatbots and synthetic media generation generally require disclosure under Article 50, subject to limited exceptions (for example, where the AI interaction is obvious from the context of use).
If your system is not high-risk and is not a GPAI model, your current obligations are: disclose AI interactions to users, do not engage in prohibited practices, and comply with GDPR if you process personal data. That is a much shorter list than the Act's overall complexity might suggest.
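For readers who think in code, here is the same decision flow as a small function. It is a triage aid under simplifying assumptions (the flags and the mapping compress Article 6, Annex III, and Article 50 considerably), not legal advice.

```typescript
// The decision flow above, compressed into a classifier. Each flag is a
// simplification of a legal test; borderline answers need a lawyer.
type RiskTier = "out-of-scope" | "high-risk" | "limited-risk" | "minimal-risk";

interface SystemProfile {
  usedInEU: boolean;                      // placed on the EU market, used in the EU, or output used in the EU
  safetyComponent: boolean;               // safety component under Annex I product legislation
  annexIIIDomain: boolean;                // biometrics, employment, education, essential services, ...
  influencesIndividualDecisions: boolean; // individual-level decision output, not aggregate analytics
  chatbotOrSyntheticMedia: boolean;       // user-facing interaction or generated media
}

function classify(p: SystemProfile): RiskTier {
  if (!p.usedInEU) return "out-of-scope";
  if (p.safetyComponent) return "high-risk";
  if (p.annexIIIDomain && p.influencesIndividualDecisions) return "high-risk";
  if (p.chatbotOrSyntheticMedia) return "limited-risk";
  return "minimal-risk";
}

// Example: a customer-support chatbot with EU users.
classify({
  usedInEU: true,
  safetyComponent: false,
  annexIIIDomain: false,
  influencesIndividualDecisions: false,
  chatbotOrSyntheticMedia: true,
}); // -> "limited-risk"
```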
What to do this quarter
For most SaaS founders not building high-risk systems, the practical steps for mid-2026 are:
- Confirm your product is not engaging in any prohibited practice under Article 5. This requires a deliberate review, not an assumption.
- Ensure user-facing AI interactions are clearly disclosed. If you run a chatbot, label it. If you generate AI summaries of user data, say so.
- If you use a GPAI model via API (OpenAI, Anthropic, Google, Meta Llama, etc.), check the provider's AI Act compliance documentation. Deployers can rely on provider documentation for upstream obligations, but you retain responsibility for your system's use.
- If August 2026 is approaching and you think you might fall in a high-risk category, get legal advice now. The conformity assessment and documentation process for high-risk systems is not a short-term project.
- Monitor the European AI Office (established within the European Commission) for implementing acts, delegated acts, and guidance documents.
FAQ
When does the EU AI Act apply to my SaaS product?
It depends on your product's risk classification. Prohibited practices have been banned since February 2025. GPAI model obligations applied from August 2025. High-risk system obligations apply from August 2026. Limited-risk systems (most chatbots and recommendation engines) face transparency obligations but lighter requirements.
Is my AI chatbot a high-risk system under the EU AI Act?
Probably not, unless it is used for decisions about employment, credit, education, essential services, law enforcement, or critical infrastructure. General-purpose chatbots for customer service or content generation are typically limited-risk or minimal-risk systems.
Does the EU AI Act apply to AI systems built on third-party models like GPT-4 or Claude?
Yes. If you deploy an AI system to EU users, you have obligations under the Act as a deployer (or as a provider, if you offer the system under your own name or trademark), regardless of which underlying model you use; how heavy those obligations are depends on the system's risk tier. The model provider has separate obligations as a GPAI provider.
What is a GPAI model under the EU AI Act?
A General-Purpose AI (GPAI) model is a model trained on large amounts of data using self-supervision and capable of serving a wide range of purposes. GPT-4, Claude, Gemini, and Llama models all qualify. Companies that provide GPAI models to others must publish technical documentation, comply with copyright law, and publish training data summaries.