
AI Act 2026 — What Your Company Must Do: A Practical Guide

Regulation (EU) 2024/1689 — known as the AI Act — is the world's first comprehensive legislation regulating artificial intelligence. If your company uses ChatGPT, AI-powered recruitment tools, credit scoring, or any other AI system, this article explains what you need to do before August 2, 2026 — the date when most obligations come into force.

Note: The obligation to train employees on AI literacy has been in force since February 2, 2025. If your company uses AI and has not conducted training — you are already in breach.

AI Act Implementation Timeline

The AI Act is being phased in gradually. Some obligations are already in force, others will apply from August 2026 or later. Here are the key dates:

  • August 1, 2024 — AI Act enters into force
  • February 2, 2025 — ban on prohibited AI systems (Art. 5) + AI literacy obligation (Art. 4)
  • August 2, 2025 — obligations for GPAI model providers (Arts. 53–55), governance rules, Member States designate competent authorities
  • August 2, 2026 — high-risk AI systems listed in Annex III, transparency obligations (Art. 50), full enforcement powers (including EU-level fines for GPAI providers under Art. 101)
  • August 2, 2027 — high-risk AI systems that are safety components of products regulated under EU sectoral legislation (Annex I), e.g. medical devices, civil aviation, machinery. Also the final compliance deadline for GPAI models already on the market before August 2025

For most companies, August 2, 2026 is the single most important deadline. From that date, high-risk AI systems (recruitment, credit scoring, medical diagnostics, and others listed in Annex III) must meet full regulatory requirements. However, if your AI system is embedded into a regulated product (e.g., a medical device or aviation system), your deadline is August 2, 2027.

Who Does the AI Act Apply To?

The AI Act has extraterritorial reach — similar to the GDPR. It applies to any company that:

  • Provides AI systems placed on the EU market (provider)
  • Deploys and uses AI systems within the EU (deployer)
  • Imports or distributes AI systems in the EU
  • Manufactures products that integrate AI systems
  • Is outside the EU but whose AI system's output is used within the EU

In practice: if your company uses ChatGPT, Copilot, marketing automation tools, or any other AI-based system — the AI Act applies to you. The question is: to what extent.

Provider vs. Deployer — The Key Distinction

Provider — develops, creates, or makes an AI system available. E.g., OpenAI, a company building an AI-powered recruitment tool.

Deployer — uses an AI system in its operations. E.g., an HR firm using a CV screening tool, a bank using AI credit scoring.

Product manufacturer — integrates an AI system into a hardware product and places it on the market under their own name.

Most companies are deployers — they use off-the-shelf AI tools. But if you fine-tune a model or build your own AI solution for clients, you may also be a provider. And if you are a provider based outside the EU, you must appoint an authorised representative established in the EU to carry out your obligations (Art. 22).

Risk Classification — The Foundation of the AI Act

The AI Act classifies AI systems into four risk categories. The category determines the scope of your obligations.

  • Prohibited — social scoring, subliminal manipulation, real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions), facial recognition database scraping, emotion recognition at work/school (except for medical or safety purposes). Consequence: total ban; fines up to EUR 35M or 7% of turnover.
  • High risk — AI in recruitment/HR, credit scoring, medical diagnostics, education/grading, critical infrastructure, access to public services. Consequence: full compliance regime, including technical documentation, risk management, human oversight, and EU database registration.
  • Limited risk — chatbots, content generation (text, images), deepfakes. Consequence: transparency obligations, i.e. inform users they are interacting with AI and label AI-generated content.
  • Minimal risk — spam filters, product recommendation systems, games. Consequence: no additional obligations (voluntary codes of conduct).

Important nuances on prohibited systems:

Real-time remote biometric identification in publicly accessible spaces is prohibited for law enforcement, but Art. 5(1)(h) provides narrow exceptions — e.g., targeted search for specific missing persons, prevention of a specific and imminent threat to life, or locating suspects of serious crimes. Use outside law enforcement or in non-public spaces may still be classified as high-risk rather than prohibited.

Emotion recognition at the workplace and in educational settings is prohibited under Art. 5(1)(f), but with an explicit exception for systems used for medical or safety purposes (e.g., monitoring driver fatigue or patient wellbeing).

High-Risk AI Systems — Annex III

This is the most critical category from a compliance perspective. Annex III of the AI Act lists 8 domains where AI systems are automatically classified as high-risk:

  1. Biometric identification — facial recognition, fingerprint, voice recognition systems (beyond one-time verification)
  2. Critical infrastructure — traffic management, water/gas/energy supply
  3. Education and training — grading systems, student sorting, plagiarism detection
  4. Employment and HR — CV screening, candidate ranking, employee monitoring, promotion/termination decisions
  5. Essential public and private services — credit scoring, insurance risk assessment and pricing, benefit eligibility
  6. Law enforcement — profiling, recidivism risk assessment, evidence analysis
  7. Migration and border control — visa application analysis, risk detection
  8. Justice and democratic processes — systems supporting judicial interpretation and application of the law, systems intended to influence elections or referenda

If your company uses AI in recruitment, financial scoring, medical diagnostics, or education — there is a high probability you are dealing with a high-risk system.

Note: A system falling within an Annex III domain is not automatically high-risk if it performs only a narrow procedural or preparatory task (Art. 6(3)). But the burden of proof lies with you — and the classification must be documented and made available to regulators upon request.

Obligations for High-Risk AI Systems

A company providing or deploying a high-risk AI system must ensure (the AI Act allocates these obligations between provider and deployer):

  • Human oversight (Art. 14) — designate competent individuals to supervise the system, capable of intervention and shutdown
  • Risk management system (Art. 9) — continuous identification, analysis, and mitigation of AI-related risks
  • Technical documentation (Art. 11) — maintain documentation covering the system's purpose, training data, performance metrics
  • Record-keeping (Art. 12) — automatic logging of system operations enabling audit trails (see the sketch below)
  • Transparency (Art. 13) — ensure the system is sufficiently understandable for users
  • Fundamental rights impact assessment (Art. 27) — required before deployment for public-sector deployers and certain private deployers (e.g., in banking and insurance)
  • EU database registration (Art. 49) — high-risk systems must be registered before being placed on the market or put into use

Providers have additional obligations: conformity assessment, CE marking, quality management system, post-market monitoring, and serious incident reporting.
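
To make the Art. 12 record-keeping duty concrete, here is a minimal sketch of what per-decision audit logging could look like. The AI Act does not prescribe a format; the `log_ai_decision` helper and its field names are illustrative assumptions, not regulatory requirements.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit log: Art. 12 expects records that let you
# reconstruct how the system was used and what it produced.
logger = logging.getLogger("ai_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_audit.log"))

def log_ai_decision(system_id: str, operator: str, input_ref: str,
                    output_summary: str, human_override: bool) -> None:
    """Write one auditable record per AI-assisted decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # links back to your AI register
        "operator": operator,            # who supervised the run (Art. 14)
        "input_ref": input_ref,          # a reference, not raw personal data
        "output_summary": output_summary,
        "human_override": human_override,
    }
    logger.info(json.dumps(record))

# Example: a recruiter reviews and overrides a CV-screening score
log_ai_decision("cv-screener-01", "hr.lead", "application-4711",
                "ranked 12 of 40, below threshold", human_override=True)
```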

AI Literacy — An Obligation Already in Force

Article 4 of the AI Act requires companies to ensure an appropriate level of AI literacy among their staff. This obligation has been in force since February 2, 2025 — it applies to all companies using AI, regardless of risk category.

In practice, this means your company should:

  1. Identify which employees use AI systems
  2. Conduct training tailored to each role and context of use
  3. Document the training provided (a minimal record-keeping sketch follows below)
  4. Update knowledge as new AI tools are adopted
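
For step 3, a simple structured record is enough. The `AILiteracyRecord` dataclass below is a hypothetical sketch of our own, since Art. 4 does not mandate any particular format.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative training record. Art. 4 prescribes no format, but you
# should be able to show who was trained, on what, and when.
@dataclass
class AILiteracyRecord:
    employee: str
    role: str                  # training must be tailored to the role
    tools_covered: list[str]   # e.g. ["ChatGPT", "CV screening tool"]
    topics: list[str]          # e.g. ["tool limitations", "AI policy"]
    completed_on: date
    refresh_due: date          # literacy is ongoing, schedule the refresh

training_log = [
    AILiteracyRecord(
        employee="A. Nowak", role="HR specialist",
        tools_covered=["CV screening tool"],
        topics=["how the ranking works", "when to override the model"],
        completed_on=date(2025, 3, 10), refresh_due=date(2026, 3, 10),
    ),
]
```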

AI literacy is not a one-off box-ticking exercise — it is an ongoing process. A company that has not trained its staff is already in breach of Art. 4, and that non-compliance can be held against it by regulators and courts.

Penalties for AI Act Violations

The AI Act provides three tiers of penalties, depending on the severity of the breach:

  • Prohibited AI practices (Art. 5) — up to EUR 35M or 7% of worldwide annual turnover, whichever is higher (SMEs/startups: whichever is lower)
  • High-risk systems, other operator obligations, transparency — up to EUR 15M or 3% of turnover, whichever is higher (SMEs/startups: whichever is lower)
  • Supplying incorrect information to authorities — up to EUR 7.5M or 1% of turnover, whichever is higher (SMEs/startups: whichever is lower)

SME treatment explained: Under the standard rules, the fine is the higher of the fixed amount or the percentage of turnover. For SMEs and startups, this is inverted — the fine is capped at the lower of the two. This is a significant concession, not merely a "proportional reduction."
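
The higher/lower mechanics are easy to get wrong, so here is a small worked sketch. The `max_fine` helper is our own illustration, not an official formula.

```python
def max_fine(fixed_eur: float, pct: float, turnover_eur: float,
             is_sme: bool) -> float:
    """Fine cap under the AI Act's tiered penalty rules.

    Standard operators: the HIGHER of the fixed amount or the
    percentage of worldwide annual turnover. SMEs/startups: the LOWER.
    """
    pct_amount = pct * turnover_eur
    return min(fixed_eur, pct_amount) if is_sme else max(fixed_eur, pct_amount)

# Prohibited-practice tier (EUR 35M / 7%) for EUR 20M annual turnover:
print(max_fine(35_000_000, 0.07, 20_000_000, is_sme=False))  # 35000000 (EUR 35M)
print(max_fine(35_000_000, 0.07, 20_000_000, is_sme=True))   # 1400000.0 (EUR 1.4M)
```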

Penalties are imposed by national supervisory authorities. Note that EU-level fines for GPAI model providers, imposed by the Commission under Art. 101 (up to EUR 15M or 3% of turnover, whichever is higher), apply from August 2, 2026.

GPAI Models — Additional Obligations Since August 2025

If your company provides a General-Purpose AI (GPAI) model, you have been subject to additional obligations under Articles 53–55 of the AI Act since August 2, 2025:

  • Maintain technical documentation of the model
  • Ensure transparency toward downstream providers
  • Implement a policy for compliance with EU copyright law
  • Publish a training data summary (using the Commission's template)

Models posing systemic risk (above 10²⁵ FLOP, as defined in Art. 51) face even stricter obligations — adversarial testing, incident reporting, and cybersecurity measures (Art. 55).
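
For orientation, the Art. 51 threshold can be sanity-checked with the common estimate of roughly 6 x parameters x training tokens of compute; this approximation is an industry rule of thumb, not a method defined in the regulation.

```python
# Art. 51 presumes systemic risk above 10**25 FLOPs of cumulative
# training compute. The 6 * params * tokens estimate below is a
# common rule of thumb, NOT part of the AI Act.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# e.g. a 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)   # ~6.3e24
print(flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)   # False: below the threshold
```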

Transition period: GPAI models already on the market before August 2, 2025 benefit from a transition period — full compliance is required by August 2, 2027. However, providers must demonstrate they are actively taking steps toward conformity during this period.

Practical Implementation Plan — 7 Steps

You do not need to implement everything at once. Here is a prioritized action plan:

Step 1: AI Systems Inventory (now)

Create a register of all AI systems used in your company. For each system, record: the provider, purpose, input/output data, and who uses it. Without this, you cannot assess risk.
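
If you have nothing in place yet, even a structured entry like the sketch below will do. The `AISystemEntry` fields simply mirror the data points listed above; they are our suggestion, not an official template.

```python
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    name: str          # e.g. "ChatGPT Enterprise"
    provider: str      # who supplies the system
    purpose: str       # what it is used for in your business
    input_data: str    # what goes in (does it include personal data?)
    output_data: str   # what comes out (text, scores, rankings)
    used_by: str       # team or role operating the system

register = [
    AISystemEntry(
        name="CV screening tool", provider="ExampleVendor",
        purpose="pre-ranking of job applications",
        input_data="CVs, application forms",
        output_data="candidate ranking", used_by="HR team",
    ),
]
```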

Step 2: Risk Classification (now)

For each system in the register, determine the risk category (prohibited / high / limited / minimal). Cross-reference with the Annex III list. Remember that an Annex III use case may not qualify as high-risk if the system performs only a narrow procedural task — but document your reasoning. If in doubt — consult a lawyer.
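
A first pass over the register can be automated along these lines. This is a rough screening aid under our own simplifying assumptions, not a legal determination; prohibited practices (Art. 5) must be screened out separately.

```python
from enum import Enum

class Risk(Enum):
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Annex III domains, abbreviated; check the full annex for each system.
ANNEX_III_DOMAINS = {
    "biometrics", "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration", "justice",
}

def triage(domain: str, narrow_procedural_task: bool,
           user_facing_generation: bool) -> Risk:
    """Rough first pass only; document the reasoning either way (Art. 6(3))."""
    if domain in ANNEX_III_DOMAINS and not narrow_procedural_task:
        return Risk.HIGH
    if user_facing_generation:   # chatbots, generated content -> Art. 50
        return Risk.LIMITED
    return Risk.MINIMAL

print(triage("employment", narrow_procedural_task=False,
             user_facing_generation=False))   # Risk.HIGH
```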

Step 3: AI Literacy (immediately — obligation already in force)

Train employees who use AI. Training should cover: how AI works, tool limitations, AI Act obligations, and your company's AI policy. The European Commission has published Q&A guidance on what AI literacy training should cover.

Step 4: Human Oversight and Procedures (Q2 2026)

Designate individuals responsible for overseeing high-risk AI systems. Develop intervention and shutdown procedures.

Step 5: Documentation and Risk Management (Q2 2026)

Prepare technical documentation, risk assessments, and testing procedures. For deployers — ensure your provider supplies the required documentation.

Step 6: Review AI Vendor Contracts (Q2 2026)

Verify that contracts with AI vendors (OpenAI, Microsoft, Google, HR/finance tool providers) address AI Act requirements — particularly access to documentation, information obligations, and liability allocation.

Step 7: Registration and Monitoring (by Aug 2, 2026)

Register high-risk systems in the EU database. Implement ongoing compliance monitoring.

Check Your Company's AI Act Readiness

Free AI Act Quick Scan — answer a few questions and receive a risk classification report, readiness checklist, and AI systems registry template. 10 minutes, no strings attached.

Check AI Act Readiness →
or book a call with a lawyer

Digital Omnibus on AI — Will the AI Act Be Softened?

On 19 November 2025, the European Commission published the Digital Omnibus on AI — a targeted amending regulation that proposes simplifications to the AI Act's implementation. Key proposals include:

  • Extended deadlines for high-risk systems: The application of Annex III high-risk obligations would become conditional on the readiness of harmonised standards, with a long-stop date of 2 December 2027 (instead of August 2026). For AI embedded in regulated products: 2 August 2028 (instead of August 2027).
  • Expanded SME relief: Regulatory simplifications currently available to SMEs would be extended to small mid-cap companies (SMCs).
  • Simplified documentation and governance: Streamlined quality management system requirements, eased database registration obligations for systems performing narrow tasks, and a more flexible post-market monitoring regime.
  • Centralised enforcement: The EU AI Office would gain exclusive supervisory authority over certain AI systems, including those based on GPAI models and systems embedded in very large online platforms.
  • Broader use of regulatory sandboxes and real-world testing.

However, waiting for the Digital Omnibus before implementing is not advisable. The Omnibus is a Commission proposal that must go through full legislative negotiation with the European Parliament and the Council. As of early 2026, the Cyprus Presidency of the Council has already proposed substantive changes in its compromise texts, and Members of the European Parliament have signaled further debate. Even if adopted, the Omnibus would not eliminate core obligations — it adjusts timelines and procedures while keeping the AI Act's fundamental architecture intact.

Current obligations — AI literacy, prohibited system bans, GPAI model requirements — are already in force and unaffected by the Omnibus. Start implementing now.

Summary

The AI Act is a regulation that cannot be ignored — especially if your company uses AI in recruitment, finance, medical diagnostics, or education. Key takeaways:

  • AI literacy is already mandatory — employee training is priority number one
  • August 2, 2026 — the deadline for high-risk system compliance (Annex III); August 2, 2027 — for AI in regulated products (Annex I)
  • Inventory and classify — do it now to understand which obligations apply to you
  • Penalties are real — up to EUR 35M for prohibited systems, up to EUR 15M for high-risk violations (SMEs pay the lower of the fixed amount or percentage)
  • Don't wait for the Digital Omnibus — the fundamentals of the AI Act will not change, and current obligations are already enforceable

Need Legal Help With the AI Act?

Take the free AI Act Quick Scan or book a consultation with a technology law attorney.

AI Act Quick Scan — free →
