Guide

AI Act: Understanding the Law in 10 Minutes

No time to read 400 pages? Here's the essential guide to the EU AI regulation: risk classification, obligations, and timeline.

By Marie Dupont
January 10, 2025 · 10 min read
The AI Act explained simply

The European AI Regulation (AI Act) came into force on August 1, 2024. With its 113 articles and numerous annexes, it may seem intimidating. Here is a summary to understand the essentials in 10 minutes.

1. What is the AI Act?

The AI Act (Regulation EU 2024/1689) is the world's first comprehensive AI legislation. It applies to any AI system marketed or used in the European Union, regardless of the provider's origin.

Its goal: to ensure that AI is safe, transparent, and respectful of fundamental rights, while promoting innovation. It adopts a risk-based approach: the higher the risk, the stricter the obligations.

Who is affected?

Any company that develops, deploys, or uses AI systems in the EU, including non-European companies targeting the European market.

2. The Risk Pyramid

The AI Act classifies AI systems into 4 risk levels. This classification determines your obligations:

Unacceptable Risk - PROHIBITED
High Risk - Strict obligations
Limited Risk - Transparency required
Minimal Risk - No obligation

The majority of AI systems (chatbots, recommendations, translations) fall into the "minimal" or "limited" categories. Only systems affecting sensitive areas (HR, health, credit) are classified as "high risk."
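As an illustration only, the four-tier pyramid can be sketched as a simple lookup. The tier names follow the list above; the wording of each treatment is our own paraphrase, not legal terminology:

```python
# Illustrative sketch of the AI Act's four risk tiers and their treatment.
# Descriptions paraphrase the article; this is not a legal classification tool.
RISK_TIERS = {
    "unacceptable": "Prohibited (Article 5)",
    "high": "Strict obligations + conformity assessment",
    "limited": "Transparency required",
    "minimal": "No obligation",
}

def treatment(tier: str) -> str:
    """Return the regulatory treatment for a given risk tier."""
    return RISK_TIERS[tier.lower()]

print(treatment("limited"))  # Transparency required
```

The point of the structure: your compliance effort is driven entirely by which tier your system falls into, so tier identification is the first step of any AI Act audit.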

3. What is Prohibited (Article 5)

Certain uses of AI are completely prohibited in the EU:

Subliminal Manipulation

Techniques that exploit psychological vulnerabilities to materially distort a person's behavior.

Biometric Surveillance

Real-time facial recognition in public spaces (subject to narrow law-enforcement exceptions).

Social Scoring

Rating citizens based on their social behavior, as in China's social credit system.

Predictive Policing

Crime prediction based solely on profiling.

4. High-Risk Systems (Annex III)

A system is classified as "high risk" if it operates in one of these areas:

  • Recruitment and HR management (CV screening, performance evaluation)
  • Access to education (grading, academic guidance)
  • Credit assessment and financial scoring
  • Essential public services (benefits, welfare)
  • Law enforcement (identification, risk assessment)
  • Medical devices (diagnosis, treatment)

These systems must undergo a conformity assessment before market placement and comply with strict requirements for documentation, transparency, and human oversight.
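A first-pass screening of a use case against the Annex III areas listed above can be sketched as a set lookup. The domain keywords here are our own simplification of the list, not the regulation's exact wording, so a real assessment still requires reading Annex III itself:

```python
# Hypothetical first-pass check against the Annex III high-risk areas
# summarised above. Keywords are a simplification, not legal categories.
HIGH_RISK_DOMAINS = {
    "recruitment", "hr", "education", "credit",
    "public services", "law enforcement", "medical",
}

def is_high_risk(domain: str) -> bool:
    """True if the declared use-case domain matches an Annex III area."""
    return domain.strip().lower() in HIGH_RISK_DOMAINS

print(is_high_risk("Recruitment"))   # True
print(is_high_risk("translation"))   # False
```

A chatbot or translation tool falls outside every Annex III area, which is why most systems end up in the "minimal" or "limited" tiers.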

5. Your Main Obligations

Based on your risk level, here are your obligations:

Obligation               | Article     | Applies to
Risk management          | Art. 9      | High risk
Data governance          | Art. 10     | High risk
Technical documentation  | Art. 11-12  | High risk
Human oversight          | Art. 14     | High risk
User transparency        | Art. 50     | All (if user interaction)

6. Timeline and Penalties

The AI Act's obligations apply in stages:

2024

August 2024: Entry into force

2025

February 2025: Prohibitions applicable

2026

August 2026: Most remaining obligations applicable (some deadlines for high-risk systems embedded in regulated products extend to August 2027)

Penalties
  • Prohibited practices: up to €35M or 7% of global turnover
  • High-risk non-compliance: up to €15M or 3% of turnover
  • False information: up to €7.5M or 1% of turnover
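The fine structure above follows one rule: the ceiling is the *higher* of the fixed amount and the percentage of worldwide annual turnover. A minimal sketch of that arithmetic (the turnover figure in the example is invented):

```python
def penalty_ceiling(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    """AI Act fine ceiling: the higher of a fixed amount or a
    percentage of worldwide annual turnover."""
    return max(fixed_eur, pct * global_turnover_eur)

# Hypothetical company with EUR 1bn global turnover engaging in a
# prohibited practice: 7% of turnover (EUR 70M) exceeds the EUR 35M floor.
print(penalty_ceiling(35e6, 0.07, 1e9))  # 70000000.0
```

For smaller companies the fixed amount dominates: at €100M turnover, 7% is only €7M, so the ceiling stays at €35M.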


The AI Act marks a major turning point for the AI industry in Europe. Even though most companies will escape the heaviest obligations, it is worth verifying your situation now to prepare for the 2026 deadlines.
