Regulation (EU) 2024/1689, commonly known as the "AI Act", is the world's first comprehensive legislation governing artificial intelligence. Adopted on June 13, 2024, and in force since August 1, 2024, with obligations applying in stages through 2026 and 2027, it establishes a harmonized legal framework for the development, marketing and use of AI systems in the European Union. This guide explains everything you need to know to prepare.
What is the AI Act?
The AI Act is a European regulation directly applicable in all Member States. Unlike a directive, it does not need to be transposed into national law. Its main objective is to ensure that AI systems placed on the European market are safe and respect the fundamental rights of citizens.
The regulation adopts a risk-based approach: the more risks an AI system poses to health, safety or fundamental rights, the stricter the obligations.
The AI Act applies to any organization that develops, markets or uses AI systems in the EU, regardless of where they are established.
Risk-based classification
The AI Act defines four risk levels, each with specific obligations:
Unacceptable risk
Completely prohibited practices: subliminal manipulation, exploitation of vulnerabilities, social scoring by public authorities, and real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions).
High risk
Systems subject to strict requirements: conformity assessment, technical documentation, human oversight and transparency. Covers areas such as recruitment, education and healthcare.
Limited risk
Transparency obligations: users must be informed they are interacting with an AI system (chatbots, deepfakes, generated content).
Minimal risk
No specific obligations, but providers are encouraged to voluntarily adopt codes of conduct (spam filters, video games, etc.).
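For teams building an internal compliance tracker, the four tiers above map naturally onto an enumeration. A minimal sketch in Python; the tier names and one-line obligation summaries are our illustrative shorthand, not legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict conformity requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct

# Illustrative one-line summary per tier (shorthand, not legal text).
HEADLINE_OBLIGATION = {
    RiskTier.UNACCEPTABLE: "prohibited - may not be placed on the market",
    RiskTier.HIGH: "conformity assessment, documentation, human oversight",
    RiskTier.LIMITED: "inform users they are interacting with AI",
    RiskTier.MINIMAL: "no specific obligations; voluntary codes of conduct",
}

def headline_obligation(tier: RiskTier) -> str:
    """Look up the shorthand obligation for a given risk tier."""
    return HEADLINE_OBLIGATION[tier]
```

Modeling the tiers as an enum keeps the classification exhaustive: any system in your inventory must be assigned exactly one of the four values.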
High-risk AI systems
Annex III of the regulation lists the areas where AI systems are considered high risk (a list the European Commission may amend over time):
- Biometric identification and categorization of persons
- Management and operation of critical infrastructure
- Education and vocational training (assessment, orientation)
- Employment and worker management (recruitment, evaluation)
- Access to essential services (credit, insurance, public services)
Providers of these systems must implement a risk management system, ensure the quality of training data, maintain detailed technical documentation, and undergo conformity assessment.
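As a first-pass screening aid, the Annex III areas above can be checked against a use-case description. The sketch below uses naive keyword matching; the area names and keywords are our own illustrative choices, and a match is only a prompt for legal review, never a legal determination:

```python
# Hypothetical screening helper: the keywords are illustrative examples,
# not taken from the regulation.
ANNEX_III_AREAS = {
    "biometric identification": ["biometric", "face recognition"],
    "critical infrastructure": ["electricity grid", "water supply", "traffic"],
    "education": ["exam scoring", "student assessment", "admission"],
    "employment": ["recruitment", "cv screening", "worker evaluation"],
    "essential services": ["credit scoring", "insurance pricing", "benefits"],
}

def possible_high_risk_areas(description: str) -> list[str]:
    """Return Annex III areas whose keywords appear in the description."""
    text = description.lower()
    return [area for area, keywords in ANNEX_III_AREAS.items()
            if any(kw in text for kw in keywords)]
```

For example, a "CV screening tool for recruitment" would be flagged under employment, while a weather-forecasting model would return no match.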
Obligations by your role
The AI Act distinguishes several actors, each with their own responsibilities:
| Role | Main obligations |
|---|---|
| Provider | Compliance before market placement, CE marking, technical documentation, quality system, post-market surveillance |
| Deployer | Use according to instructions, human oversight, log retention, information to affected persons |
| Importer | Verification of provider compliance, adequate storage, cooperation with authorities |
Penalties
Non-compliance with the AI Act can result in significant administrative fines. For companies, each cap is the higher of the fixed amount and the percentage of worldwide annual turnover:
- Prohibited practices: up to EUR 35 million or 7% of worldwide annual turnover
- Breaches of high-risk obligations: up to EUR 15 million or 3% of worldwide annual turnover
- Supplying incorrect, incomplete or misleading information to authorities: up to EUR 7.5 million or 1% of worldwide annual turnover
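The "whichever is higher" rule means the applicable cap depends on turnover. The arithmetic can be sketched as follows (the function name is ours; the figures come from the tiers above):

```python
def fine_cap(fixed_eur: float, pct: float, annual_turnover_eur: float) -> float:
    """Maximum administrative fine for a company: the higher of the
    fixed ceiling and pct% of worldwide annual turnover."""
    return max(fixed_eur, annual_turnover_eur * pct / 100)

# A company with EUR 2 billion turnover engaging in a prohibited practice:
# max(EUR 35 million, 7% of EUR 2 billion) = EUR 140 million.
cap = fine_cap(35_000_000, 7, 2_000_000_000)  # → 140_000_000.0
```

For a smaller company with EUR 100 million turnover, 7% is only EUR 7 million, so the fixed EUR 35 million ceiling applies instead.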
How to prepare?
Here are the recommended steps to prepare before the regulation's obligations take effect:
- Inventory all your current and planned AI systems
- Classify each system according to risk levels
- Identify compliance gaps
- Implement a prioritized action plan
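The four steps above amount to building an inventory, classifying each entry, and diffing its controls against the expected set. A minimal sketch, assuming a hypothetical control checklist for high-risk systems (the field names and control labels are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """One entry in the AI-system inventory (illustrative fields)."""
    name: str
    risk_tier: str  # "unacceptable" | "high" | "limited" | "minimal"
    controls_in_place: set[str] = field(default_factory=set)

# Illustrative control set for high-risk systems, echoing the provider
# obligations above; the authoritative list is the regulation itself.
HIGH_RISK_CONTROLS = {"risk management", "technical documentation",
                      "human oversight", "conformity assessment"}

def compliance_gaps(system: AISystem) -> set[str]:
    """Controls still missing for a high-risk system; empty set otherwise."""
    if system.risk_tier != "high":
        return set()
    return HIGH_RISK_CONTROLS - system.controls_in_place

def prioritized_plan(systems: list[AISystem]) -> list[tuple[str, set[str]]]:
    """Order systems for remediation, most gaps first."""
    gaps = [(s.name, compliance_gaps(s)) for s in systems]
    return sorted((g for g in gaps if g[1]), key=lambda g: len(g[1]),
                  reverse=True)
```

Running `prioritized_plan` over the full inventory surfaces the systems with the largest compliance gaps, which is exactly the prioritized action plan the last step calls for.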
Our automated audit tool allows you to complete this analysis in minutes and get a personalized action plan.
Ready to assess your compliance?
Our intelligent questionnaire analyzes your AI systems and provides a detailed report.
Start free audit

The AI Act represents a major change for the artificial intelligence ecosystem in Europe. By anticipating your compliance now, you transform this regulatory constraint into a competitive advantage and strengthen the trust of your customers and partners.