EU AI Act
European Union Artificial Intelligence Act (Regulation 2024/1689)
European Union | In force since August 1, 2024 — phased obligations through 2027
The world's first comprehensive legal framework for AI applies a risk-based approach to regulate AI systems across the EU. RAIL Score checks content against 9 key EU AI Act requirements covering risk management, data governance, transparency, human oversight, accuracy, fundamental rights, content labelling, prohibited practices, and GPAI obligations.
Official Resources
- eur-lex.europa.eu — Full regulation text
- European AI Office — EC Digital Strategy
- artificialintelligenceact.eu — Implementation timeline
- AI Act Service Desk — European Commission support
Penalty Structure
| Violation | Maximum Fine (whichever is higher) |
|---|---|
| Prohibited AI practices | EUR 35M or 7% global annual turnover |
| Other obligations (high-risk, transparency, GPAI) | EUR 15M or 3% global annual turnover |
| Supplying incorrect or misleading information to authorities | EUR 7.5M or 1% global annual turnover |
Penalty regime active since August 2, 2025. GPAI-specific penalties activate August 2, 2026.
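To make the ceilings concrete, the tiers above can be expressed as a small helper. The amounts and percentages are taken from the table; the rule that the higher of the fixed cap and the turnover share applies to non-SME undertakings follows Article 99, and the function itself is an illustrative sketch, not legal advice.

```python
# Illustrative only: maximum administrative fine for a non-SME undertaking,
# using the tiers from the table above. Art. 99 applies the higher of the
# fixed cap and the turnover percentage (SMEs are capped at the lower).
FINE_TIERS_EUR = {
    "prohibited_practices": (35_000_000, 0.07),   # EUR 35M or 7% turnover
    "other_obligations": (15_000_000, 0.03),      # EUR 15M or 3% turnover
    "incorrect_information": (7_500_000, 0.01),   # EUR 7.5M or 1% turnover
}

def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Theoretical maximum fine for a non-SME undertaking."""
    fixed_cap, turnover_share = FINE_TIERS_EUR[violation]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# A provider with EUR 2B global turnover committing a prohibited practice:
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0 (7% of 2B)
```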
Implementation Timeline
| Date | What Applies | Status |
|---|---|---|
| Aug 1, 2024 | AI Act enters into force | Done |
| Feb 2, 2025 | Prohibited AI practices banned; AI literacy obligations | Active |
| Aug 2, 2025 | GPAI model obligations; governance; penalty regime active | Active |
| Aug 2, 2026 | High-risk AI systems (Annex III) full compliance; transparency rules (Art. 50) | Upcoming |
| Aug 2, 2027 | High-risk AI in regulated products (Annex I); GPAI legacy systems | Future |
| Dec 31, 2030 | Large-scale IT systems in Annex X | Future |
The European Commission missed its Feb 2, 2026 deadline to publish Article 6 guidance on high-risk classification. The Digital Omnibus package proposes delaying high-risk enforcement by up to 16 months if harmonised standards are not ready.
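Teams tracking the phase-in often encode these dates as data so internal tooling can flag which obligations already apply. The milestones below restate the timeline table; the helper function is a minimal sketch and does not model the proposed Digital Omnibus delay.

```python
from datetime import date

# Application dates restated from the timeline table above (illustrative only;
# the Digital Omnibus proposal could shift the 2026/2027 high-risk dates).
MILESTONES = [
    (date(2024, 8, 1), "AI Act enters into force"),
    (date(2025, 2, 2), "Prohibited practices banned; AI literacy obligations"),
    (date(2025, 8, 2), "GPAI model obligations; governance; penalty regime"),
    (date(2026, 8, 2), "High-risk (Annex III) compliance; Art. 50 transparency"),
    (date(2027, 8, 2), "High-risk in regulated products (Annex I); GPAI legacy models"),
    (date(2030, 12, 31), "Large-scale IT systems in Annex X"),
]

def obligations_in_force(on: date) -> list[str]:
    """Milestones that already apply on the given date."""
    return [label for when, label in MILESTONES if when <= on]

print(obligations_in_force(date(2026, 3, 1)))  # first three milestones apply
```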
Risk Categories
Prohibited (Banned since Feb 2, 2025)
The following AI practices are banned outright under Article 5 (a screening sketch using the RAIL SDK follows this list):
- Real-time remote biometric identification in public spaces (with narrow law enforcement exceptions)
- Social scoring systems (by public or private actors)
- Subliminal manipulation causing harm
- Exploitation of vulnerable groups
- Predictive policing based solely on profiling
- Workplace and educational emotion recognition systems
- Untargeted scraping of facial images to build recognition databases
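As noted above the list, a system description can be screened for Article 5 issues with the same compliance_check call used in the API example at the end of this page. The content string below and the keyword filter over issue descriptions are illustrative assumptions, not a documented response contract.

```python
from rail_score_sdk import RailScoreClient

client = RailScoreClient(api_key="YOUR_RAIL_API_KEY")

# Screen a system description against the Art. 5 prohibitions (EUAI-008).
# Workplace emotion recognition is one of the banned practices listed above.
result = client.compliance_check(
    content="Our HR tool infers employees' emotional state from webcam feeds during shifts.",
    framework="eu_ai_act",
)

# Surface findings that look like prohibited-practice issues (keyword filter
# is a simplification; inspect the full issue list in practice).
for issue in result.issues:
    if "prohibit" in issue.description.lower():
        print(f"[{issue.severity.upper()}] {issue.description}")
```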
High-Risk (Full compliance required Aug 2, 2026)
AI systems in Annex III categories face the most demanding obligations:
- Biometric identification and categorisation of natural persons
- Safety components of critical infrastructure (water, electricity, traffic)
- Educational and vocational training (access, assessment)
- Employment, worker management, self-employment access
- Essential services (credit scoring, social benefits, emergency dispatch)
- Law enforcement (risk assessment, evidence evaluation)
- Migration, asylum, border control
- Administration of justice and democratic processes
High-risk providers must implement the following (a checklist sketch follows this list):
- Risk management system (ongoing) — Art. 9
- Data governance requirements — Art. 10
- Technical documentation — Art. 11
- Record keeping and logging — Art. 12
- Transparency and user instructions — Art. 13
- Human oversight measures — Art. 14
- Accuracy, robustness, cybersecurity — Art. 15
- Conformity assessment, CE marking, EU database registration
- Post-market monitoring
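One way to keep the Article 9–15 obligations visible internally is a simple checklist structure, as sketched below. The article-to-requirement mapping mirrors the list above; the status values and workflow are purely illustrative and are no substitute for a conformity assessment.

```python
# Obligations restated from the list above; the tracking fields are illustrative.
HIGH_RISK_OBLIGATIONS = {
    "Art. 9": "Risk management system (ongoing)",
    "Art. 10": "Data governance",
    "Art. 11": "Technical documentation",
    "Art. 12": "Record keeping and logging",
    "Art. 13": "Transparency and user instructions",
    "Art. 14": "Human oversight measures",
    "Art. 15": "Accuracy, robustness, cybersecurity",
}

checklist = {article: {"requirement": text, "status": "not_started"}
             for article, text in HIGH_RISK_OBLIGATIONS.items()}
checklist["Art. 11"]["status"] = "in_progress"

open_items = [a for a, item in checklist.items() if item["status"] != "done"]
print(f"{len(open_items)} of {len(checklist)} obligations still open")
```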
Limited Risk (Transparency obligations from Aug 2, 2026)
- AI chatbots must disclose they are AI
- Deepfakes must be labelled
- AI-generated content must carry a machine-readable marking (a labelling sketch follows this list)
- Emotion recognition systems must inform users
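The exact machine-readable marking format is still being settled (the Commission's draft Code of Practice on Marking and Labelling is noted in the enforcement snapshot below), so the labelling sketch referenced in the list is only an illustration of the idea: pair a human-readable disclosure with machine-readable metadata.

```python
import json

# Illustrative only: the marking format will ultimately follow the Commission's
# Code of Practice on Marking and Labelling, not this ad-hoc structure.
def label_generated_text(text: str, generator: str) -> dict:
    """Attach a human-readable disclosure and machine-readable metadata."""
    return {
        "content": text,
        "disclosure": "This content was generated by an AI system.",
        "metadata": {"ai_generated": True, "generator": generator},
    }

labelled = label_generated_text("Quarterly summary ...", generator="example-model")
print(json.dumps(labelled["metadata"]))
# {"ai_generated": true, "generator": "example-model"}
```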
Minimal Risk
Most AI systems (spam filters, games, etc.) — no mandatory requirements.
GPAI Model Obligations
Active since August 2, 2025
Providers of General-Purpose AI models must:
- Provide technical documentation
- Comply with EU copyright law (training data)
- Publish sufficiently detailed training data summaries
- Models with systemic risk (training compute > 10^25 FLOPs): adversarial testing, incident reporting, cybersecurity measures (see the compute sketch below)
26 major AI providers — including Microsoft, Google, Amazon, OpenAI, and Anthropic — signed the GPAI Code of Practice in August 2025.
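For a rough sense of where the systemic-risk threshold sits, the sketch below compares an estimated training compute figure against 10^25 FLOPs. The 6 × parameters × training-tokens estimate is a common rule of thumb for dense transformer training and is not part of the Act; the actual systemic-risk designation rests with the AI Office.

```python
# Rule-of-thumb check against the GPAI systemic-risk presumption (Art. 51):
# training compute above 1e25 FLOPs. The 6 * params * tokens estimate is a
# common approximation for dense transformers, not an official methodology.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6.0 * parameters * training_tokens

flops = estimated_training_flops(parameters=70e9, training_tokens=15e12)
verdict = "above" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
print(f"{flops:.2e} FLOPs is {verdict} the 1e25 threshold")  # 6.30e+24 ... below
```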
Requirements Checked by RAIL Score
| ID | Article | Requirement |
|---|---|---|
| EUAI-001 | Art. 9 | Risk management system |
| EUAI-002 | Art. 10 | Data governance |
| EUAI-003 | Art. 13 | Transparency to users |
| EUAI-004 | Art. 14 | Human oversight |
| EUAI-005 | Art. 15 | Accuracy and robustness |
| EUAI-006 | Art. 27 | Fundamental rights impact |
| EUAI-007 | Art. 50 | AI-generated content labelling |
| EUAI-008 | Art. 5 | No prohibited practices |
| EUAI-009 | Art. 53 | GPAI obligations |
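For cross-referencing RAIL Score findings with the Act, the table can double as a lookup; the dictionary below simply restates the IDs and articles above.

```python
# Requirement IDs and articles restated from the table above.
EU_AI_ACT_REQUIREMENTS = {
    "EUAI-001": ("Art. 9", "Risk management system"),
    "EUAI-002": ("Art. 10", "Data governance"),
    "EUAI-003": ("Art. 13", "Transparency to users"),
    "EUAI-004": ("Art. 14", "Human oversight"),
    "EUAI-005": ("Art. 15", "Accuracy and robustness"),
    "EUAI-006": ("Art. 27", "Fundamental rights impact"),
    "EUAI-007": ("Art. 50", "AI-generated content labelling"),
    "EUAI-008": ("Art. 5", "No prohibited practices"),
    "EUAI-009": ("Art. 53", "GPAI obligations"),
}

article, requirement = EU_AI_ACT_REQUIREMENTS["EUAI-004"]
print(f"{article}: {requirement}")  # Art. 14: Human oversight
```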
2025–2026 Enforcement Snapshot
- Feb 2025: Prohibited practices enforcement began; multiple investigations into workplace emotion recognition and social scoring systems are underway, with no public penalties as of March 2026
- Oct 2025 (Italy): Law 132/2025 entered into force — a national AI law with fines up to EUR 774,685 and criminal penalties (1–5 years) for unlawful dissemination of AI-generated content
- Jan 2026: Finland became the first EU member state with fully operational AI Act enforcement powers at the national level
- Feb 2026: The European Commission missed its Feb 2 deadline to publish Article 6 guidance on high-risk classification
- Mar 5, 2026: The Commission published the second draft of the Code of Practice on the Marking and Labelling of AI-generated content
RAIL Dimension Mapping
| RAIL Dimension | EU AI Act Articles | Focus |
|---|---|---|
| Transparency | Art. 13, 50 | User disclosure, AI labelling |
| Fairness | Art. 10, 5 | Data governance, no prohibited discrimination |
| Safety | Art. 9, 15 | Risk management, robustness |
| Accountability | Art. 11, 12, 17 | Documentation, logging, corrective action |
| Reliability | Art. 14, 15 | Human oversight, accuracy |
| User Impact | Art. 14, 27 | Fundamental rights, oversight mechanisms |
API Example
See the Compliance API reference for full endpoint documentation, parameters, and response schema.
```python
from rail_score_sdk import RailScoreClient

client = RailScoreClient(api_key="YOUR_RAIL_API_KEY")

result = client.compliance_check(
    content="""
    Our AI hiring tool automatically scores candidate CVs and ranks applicants
    for shortlisting. The model was trained on historical hiring decisions.
    Recruiters receive a ranked list with no ability to view individual scores.
    """,
    framework="eu_ai_act",
    context={
        "domain": "general",
        "decision_type": "automated",
        "high_risk_indicators": ["automated_decisions", "employment_screening"]
    },
    strict_mode=True
)

print(f"EU AI Act Score: {result.compliance_score.score}/10")
# Likely high-risk (Annex III: employment/worker management)
# Expect FAIL on: human oversight (Art. 14), transparency (Art. 13)
for issue in result.issues:
    print(f"[{issue.severity.upper()}] {issue.description}")
```

Sources: EU AI Act · EC Digital Strategy · artificialintelligenceact.eu