A program sponsored by DVWHA

EU AI Act Compliance: What Your Company Must Do Before August 2026

CAIBS Institute Editorial Team · April 19, 2026

The clock is ticking. On August 2, 2026, the EU AI Act's provisions for high-risk AI systems become fully enforceable. Companies that deploy, distribute, or import AI systems into EU markets must comply or face penalties of up to 35 million euros or 7% of annual global turnover — whichever is higher.

This is not a distant regulatory threat. It is four months away.

What the EU AI Act Actually Requires

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive AI law. It uses a risk-based approach to classify AI systems and impose proportional requirements.

The Risk Categories

Unacceptable Risk (Banned)

These AI practices are prohibited entirely as of February 2, 2025:

  • Social scoring systems by governments
  • Real-time remote biometric identification in public spaces (with limited exceptions)
  • AI that exploits vulnerabilities of specific groups
  • AI that uses subliminal techniques to distort behavior

High Risk (Heavy Regulation — Deadline: August 2, 2026)

AI systems in these areas must meet strict requirements:

  • Biometric identification and categorization
  • Critical infrastructure management
  • Education and vocational training (access, assessment)
  • Employment (recruitment, performance evaluation, promotion)
  • Essential services (credit scoring, insurance, social benefits)
  • Law enforcement
  • Migration and border control
  • Administration of justice

Limited Risk (Transparency Obligations)

AI systems that interact with humans must disclose that they are AI:

  • Chatbots must identify themselves as AI
  • AI-generated content must be labeled
  • Emotion recognition systems must inform users

Minimal Risk (No Specific Obligations)

Most AI systems fall here — spam filters, AI-powered games, inventory management.
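Taken together, the four tiers lend themselves to a first-pass inventory of your AI systems. A minimal Python sketch; the keyword sets and function below are illustrative assumptions for triage, not a legal mapping of Annex III:

```python
# Hypothetical first-pass classifier for an AI-system inventory.
# The use-case keywords are illustrative; real classification under
# Regulation 2024/1689 requires legal review of the Act's annexes.

HIGH_RISK_AREAS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border_control", "administration_of_justice",
}
PROHIBITED_PRACTICES = {
    "social_scoring", "realtime_public_biometric_id",
    "exploits_vulnerabilities", "subliminal_techniques",
}

def classify(use_case: str, interacts_with_humans: bool = False) -> str:
    """Return a provisional EU AI Act risk tier for a use case."""
    if use_case in PROHIBITED_PRACTICES:
        return "unacceptable"   # banned since February 2, 2025
    if use_case in HIGH_RISK_AREAS:
        return "high"           # full obligations by August 2, 2026
    if interacts_with_humans:
        return "limited"        # transparency obligations
    return "minimal"            # no specific obligations

print(classify("employment"))    # high
print(classify("spam_filter"))   # minimal
```

A triage helper like this only flags systems for legal review; the output should never be treated as the final classification.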

The August 2026 Compliance Checklist

If your AI system is classified as high-risk, here is what must be in place by August 2, 2026:

1. Risk Management System (Article 9)

You must establish and maintain a continuous risk management system that:

  • Identifies and analyzes known and foreseeable risks
  • Estimates and evaluates risks that may emerge during use
  • Adopts risk mitigation measures
  • Tests the system against the identified risks
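The Article 9 loop of identifying, scoring, mitigating, and re-testing risks can be kept in a simple risk register. Everything in this sketch (class names, the 1 to 5 scales, the score threshold) is a hypothetical illustration, not a prescribed format:

```python
# Hypothetical risk-register sketch of the Article 9 loop: identify,
# estimate, mitigate, and re-check risks over the system's lifecycle.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) to 5 (frequent) -- illustrative scale
    severity: int     # 1 (minor) to 5 (critical) -- illustrative scale
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def identify(self, risk: Risk) -> None:
        self.risks.append(risk)

    def mitigate(self, description: str, measure: str) -> None:
        for r in self.risks:
            if r.description == description:
                r.mitigations.append(measure)

    def open_items(self) -> list:
        # High-scoring risks with no recorded mitigation still need
        # attention before release; threshold 9 is an arbitrary example.
        return [r for r in self.risks if r.score >= 9 and not r.mitigations]

register = RiskRegister()
register.identify(Risk("biased screening output", likelihood=3, severity=4))
register.mitigate("biased screening output",
                  "stratified re-sampling of training data")
print(len(register.open_items()))  # 0
```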

2. Data Governance (Article 10)

Training, validation, and testing data sets must be:

  • Relevant, representative, and free of errors
  • Appropriate for the intended purpose
  • Examined for possible biases
  • Subject to appropriate data governance practices
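One concrete bias-examination step is comparing each group's share of the training data against a reference population. A hedged sketch; the group names, shares, and 5% tolerance are invented for illustration:

```python
# Hypothetical representativeness check: flag groups whose share of the
# training data deviates from a reference population by more than a
# tolerance. Group names and the tolerance are illustrative.

def representation_gaps(train_counts: dict, population_share: dict,
                        tolerance: float = 0.05) -> dict:
    """Return groups whose training-data share differs from the
    reference population share by more than `tolerance`."""
    total = sum(train_counts.values())
    gaps = {}
    for group, expected in population_share.items():
        actual = train_counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = round(actual - expected, 3)
    return gaps

train = {"18-34": 700, "35-54": 250, "55+": 50}
population = {"18-34": 0.35, "35-54": 0.40, "55+": 0.25}
print(representation_gaps(train, population))
# positive gap = over-represented, negative = under-represented
```

A check like this covers only one narrow aspect of Article 10; error rates, label quality, and outcome bias need their own tests.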

3. Technical Documentation (Article 11)

Before placing a high-risk AI system on the market, you must prepare technical documentation that includes:

  • General description of the system
  • Detailed description of development process
  • Information about monitoring, functioning, and control
  • Description of the risk management system
  • Description of changes made during the system's lifecycle
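A documentation package along these lines can be gate-checked mechanically before submission. The section keys below are illustrative shorthand for the Article 11 items, not official field names:

```python
# Hypothetical completeness check for the Article 11 documentation
# package: verify each required section is present and non-empty.

REQUIRED_SECTIONS = [
    "general_description",
    "development_process",
    "monitoring_functioning_control",
    "risk_management_system",
    "lifecycle_changes",
]

def missing_sections(doc: dict) -> list:
    """Return required sections that are absent or empty."""
    return [s for s in REQUIRED_SECTIONS if not doc.get(s)]

draft = {"general_description": "HR screening model v2",
         "development_process": "see model card",
         "risk_management_system": "register RM-7"}
print(missing_sections(draft))
# ['monitoring_functioning_control', 'lifecycle_changes']
```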

4. Record-Keeping (Article 12)

High-risk AI systems must be designed to automatically record events (logs) during operation, including:

  • Period of use
  • Reference database used
  • Input data that led to a match
  • Identification of natural persons involved in verification
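One way to satisfy automatic logging is to emit a structured record per operation covering the fields above. A sketch with hypothetical field names; the Act does not prescribe a log schema:

```python
# Hypothetical structured event log covering the Article 12 fields:
# period of use, reference database, matching input, and the person
# who verified the result. Field names are illustrative.
import json
import datetime

def log_event(system_id: str, reference_db: str, input_ref: str,
              verifier_id: str, started: datetime.datetime,
              ended: datetime.datetime) -> str:
    """Serialize one operation of a high-risk system as a JSON log line."""
    record = {
        "system_id": system_id,
        "period_of_use": {"start": started.isoformat(),
                          "end": ended.isoformat()},
        "reference_database": reference_db,
        "input_data_ref": input_ref,   # pointer, not raw personal data
        "verified_by": verifier_id,
        "logged_at": datetime.datetime.now(
            datetime.timezone.utc).isoformat(),
    }
    return json.dumps(record)

line = log_event("hr-screening-v2", "applicants-2026q2", "sha256:ab12",
                 "officer-417",
                 datetime.datetime(2026, 4, 19, 9, 0),
                 datetime.datetime(2026, 4, 19, 9, 5))
print(line)
```

Storing a reference to the input rather than the input itself keeps the log usable as evidence without duplicating personal data.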

5. Transparency and Information (Article 13)

High-risk AI systems must be designed to be sufficiently transparent for users to:

  • Interpret the system's output
  • Use it appropriately
  • Understand its capabilities and limitations

6. Human Oversight (Article 14)

Systems must be designed to allow effective human oversight, including:

  • Ability to fully understand the system's capacities and limitations
  • Ability to correctly interpret outputs
  • Ability to decide not to use the system or override its output
  • Ability to intervene or stop the system
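These abilities can be wired into a thin human-in-the-loop wrapper: the system proposes, a reviewer accepts or overrides, and an operator can stop it outright. All names and the confidence threshold are illustrative assumptions, not an Article 14 reference design:

```python
# Hypothetical human-oversight wrapper: the model proposes, a human
# reviewer can accept, override, or stop the system entirely.
from dataclasses import dataclass

@dataclass
class Decision:
    proposal: str
    confidence: float
    final: str = ""          # empty until decided
    overridden: bool = False

class OverseenSystem:
    def __init__(self, review_threshold: float = 0.8):
        self.review_threshold = review_threshold  # illustrative value
        self.stopped = False

    def propose(self, proposal: str, confidence: float) -> Decision:
        if self.stopped:
            raise RuntimeError("system stopped by human operator")
        d = Decision(proposal, confidence)
        if confidence >= self.review_threshold:
            d.final = proposal   # auto-accept above threshold
        return d                 # otherwise awaits human review

    def human_review(self, d: Decision, accept: bool,
                     alternative: str = "") -> Decision:
        d.final = d.proposal if accept else alternative
        d.overridden = not accept
        return d

    def stop(self) -> None:
        self.stopped = True

system = OverseenSystem()
d = system.propose("reject application", confidence=0.55)
d = system.human_review(d, accept=False,
                        alternative="escalate to committee")
print(d.final, d.overridden)  # escalate to committee True
```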

7. Accuracy, Robustness, and Cybersecurity (Article 15)

High-risk AI systems must achieve appropriate levels of:

  • Accuracy for their intended purpose
  • Robustness against errors and inconsistencies
  • Cybersecurity against unauthorized access or manipulation

8. Conformity Assessment (Article 43)

Before placing a high-risk AI system on the market, you must undergo a conformity assessment. For most high-risk systems, this can be done through internal assessment. For biometric identification systems, a third-party assessment by a notified body is required.

9. EU Database Registration (Article 71)

High-risk AI systems must be registered in the EU database before being placed on the market. This registration is public and includes information about the system, its provider, and its conformity assessment.

How CAIBS Certification Helps

The CAIBS B.I.T. Framework aligns with the EU AI Act's requirements in several critical ways:

EU AI Act Requirement → CAIBS B.I.T. Dimension

  • Risk Management (Art. 9) → Decision Impact + Accountability
  • Transparency (Art. 13) → Accountability + Actionability
  • Human Oversight (Art. 14) → Accountability
  • Accuracy & Robustness (Art. 15) → Real-World Results
  • Behavioral Impact → Behavior Change (unique to CAIBS)

While CAIBS certification does not replace the formal EU conformity assessment, it provides a structured behavioral impact evaluation covering dimensions the EU AI Act requires but does not prescribe how to measure, particularly behavioral impact and real-world outcomes.

Companies that complete CAIBS certification have a documented, third-party-verified assessment that can serve as supporting evidence in their EU AI Act conformity assessment.

Timeline: What to Do When

  • Now (April 2026): Classify all your AI systems by EU risk category
  • May 2026: Complete risk management documentation for high-risk systems
  • June 2026: Conduct the conformity assessment (internal or third-party)
  • July 2026: Register high-risk systems in the EU database
  • August 2, 2026: Full compliance required

Do Not Wait

Companies that begin compliance work now have four months to prepare. Companies that wait until July will be scrambling — and the conformity assessment process alone can take weeks.

Start by rating your AI systems at caibsinstitute.org to understand their behavioral impact profile, then use that assessment as a foundation for your EU AI Act compliance documentation.


Sources: EU AI Act (Regulation 2024/1689), European Commission AI Act Implementation Timeline, Modulos AI Compliance Guide (2026), Eversheds Sutherland Global AI Regulatory Update (April 2026)

© 2026 Weerussh LLC d/b/a CAIBS Institute. All rights reserved.

A program sponsored by DVWHA (Disabled Veterans Whole Health Association), a 501(c)(4) membership association.
