
EU AI Act

European Union Artificial Intelligence Act (Regulation 2024/1689)

European Union | In force since August 1, 2024 — phased obligations through 2027

The world's first comprehensive legal framework for AI applies a risk-based approach to regulate AI systems across the EU. RAIL Score checks content against 9 key EU AI Act requirements covering risk management, data governance, transparency, human oversight, accuracy, fundamental rights, content labelling, prohibited practices, and GPAI obligations.

Penalty Structure

| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices | EUR 35M or 7% of global annual turnover |
| Other obligations (high-risk, transparency, GPAI) | EUR 15M or 3% of global annual turnover |
| Supplying incorrect or misleading information to authorities | EUR 7.5M or 1% of global annual turnover |

The penalty regime has been active since August 2, 2025; GPAI-specific penalties take effect on August 2, 2026. Within each tier, the applicable maximum is whichever amount is higher.
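The tiered maxima above can be sketched in code. This is an illustrative calculation only, assuming the general Art. 99 rule that the higher of the fixed amount and the turnover percentage applies; the tier keys and function name are hypothetical, not from any official API.

```python
# Illustrative sketch of the penalty tiers above. Assumes the higher of the
# fixed amount and the turnover percentage applies (the general Art. 99 rule);
# tier keys and function name are hypothetical, not part of any official API.

PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # EUR 35M or 7%
    "other_obligations": (15_000_000, 0.03),      # EUR 15M or 3%
    "misleading_information": (7_500_000, 0.01),  # EUR 7.5M or 1%
}

def max_fine(violation: str, global_annual_turnover: float) -> float:
    """Maximum possible fine in EUR for a violation tier."""
    fixed, pct = PENALTY_TIERS[violation]
    return max(fixed, pct * global_annual_turnover)

# A company with EUR 1B global turnover: 7% = EUR 70M exceeds the EUR 35M floor.
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0
```

Note that for smaller undertakings different caps may apply, so treat this as a ceiling estimate, not legal advice.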

Implementation Timeline

| Date | What Applies | Status |
|---|---|---|
| Aug 1, 2024 | AI Act enters into force | Done |
| Feb 2, 2025 | Prohibited AI practices banned; AI literacy obligations | Active |
| Aug 2, 2025 | GPAI model obligations; governance; penalty regime active | Active |
| Aug 2, 2026 | High-risk AI systems (Annex III) full compliance; transparency rules (Art. 50) | Upcoming |
| Aug 2, 2027 | High-risk AI in regulated products (Annex I); GPAI legacy systems | Future |
| Dec 31, 2030 | Large-scale IT systems in Annex X | Future |

The European Commission missed its Feb 2, 2026 deadline to publish Article 6 guidance on high-risk classification. The Digital Omnibus package proposes delaying high-risk enforcement by up to 16 months if harmonised standards are not ready.

Risk Categories

Prohibited (Banned since Feb 2, 2025)

The following AI systems are banned outright under the Act:

  • Real-time remote biometric identification in public spaces (with narrow law enforcement exceptions)
  • Social scoring systems by public authorities
  • Subliminal manipulation causing harm
  • Exploitation of vulnerable groups
  • Predictive policing based solely on profiling
  • Workplace and educational emotion recognition systems
  • Untargeted scraping of facial images to build recognition databases

High-Risk (Full compliance required Aug 2, 2026)

AI systems in Annex III categories face the most demanding obligations:

  • Biometric identification and categorisation of natural persons
  • Safety components of critical infrastructure (water, electricity, traffic)
  • Educational and vocational training (access, assessment)
  • Employment, worker management, self-employment access
  • Essential services (credit scoring, social benefits, emergency dispatch)
  • Law enforcement (risk assessment, evidence evaluation)
  • Migration, asylum, border control
  • Administration of justice and democratic processes

High-risk providers must implement:

  • Risk management system (ongoing) — Art. 9
  • Data governance requirements — Art. 10
  • Technical documentation — Art. 11
  • Record keeping and logging — Art. 12
  • Transparency and user instructions — Art. 13
  • Human oversight measures — Art. 14
  • Accuracy, robustness, cybersecurity — Art. 15
  • Conformity assessment, CE marking, EU database registration
  • Post-market monitoring
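As a rough illustration, the Art. 9–15 obligations above can be treated as a checklist. The article numbers come from the Act; the checklist structure and gap-check helper are hypothetical and not part of the rail_score_sdk.

```python
# Gap-check sketch for the high-risk provider obligations listed above.
# Article numbers are from the Act; the checklist structure is illustrative.

HIGH_RISK_OBLIGATIONS = {
    "Art. 9": "Risk management system",
    "Art. 10": "Data governance",
    "Art. 11": "Technical documentation",
    "Art. 12": "Record keeping and logging",
    "Art. 13": "Transparency and user instructions",
    "Art. 14": "Human oversight measures",
    "Art. 15": "Accuracy, robustness, cybersecurity",
}

def compliance_gaps(evidenced: set[str]) -> list[str]:
    """Return the obligations not yet evidenced by the provider."""
    return [f"{art}: {name}"
            for art, name in HIGH_RISK_OBLIGATIONS.items()
            if art not in evidenced]

for gap in compliance_gaps({"Art. 9", "Art. 11", "Art. 12"}):
    print(gap)  # prints the four remaining obligations
```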

Limited Risk (Transparency obligations from Aug 2, 2026)

  • AI chatbots must disclose they are AI
  • Deepfakes must be labelled
  • AI-generated content must carry machine-readable markings
  • Emotion recognition systems must inform users

Minimal Risk

Most AI systems (spam filters, games, etc.) — no mandatory requirements.

GPAI Model Obligations

Active since August 2, 2025

Providers of General-Purpose AI models must:

  1. Provide technical documentation
  2. Comply with EU copyright law (training data)
  3. Publish sufficiently detailed training data summaries
  4. For systemic-risk models (training compute > 10^25 FLOPs): conduct adversarial testing, report serious incidents, and implement cybersecurity measures

26 major AI providers — including Microsoft, Google, Amazon, OpenAI, and Anthropic — signed the GPAI Code of Practice in August 2025.
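The 10^25 FLOPs systemic-risk threshold can be checked with a back-of-envelope estimate. The common 6 × parameters × tokens heuristic for dense transformer training compute is an assumption here, not something the Act prescribes (the Act counts cumulative training compute):

```python
# Presumption-of-systemic-risk check (training compute > 10^25 FLOPs).
# The 6 * N * D estimate for dense transformers is a heuristic assumption,
# not part of the Act's own methodology.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# 70B params on 15T tokens -> ~6.3e24 FLOPs, below the threshold
print(presumed_systemic_risk(70e9, 15e12))   # False
# 405B params on 15T tokens -> ~3.6e25 FLOPs, above it
print(presumed_systemic_risk(405e9, 15e12))  # True
```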

Requirements Checked by RAIL Score

| ID | Article | Requirement |
|---|---|---|
| EUAI-001 | Art. 9 | Risk management system |
| EUAI-002 | Art. 10 | Data governance |
| EUAI-003 | Art. 13 | Transparency to users |
| EUAI-004 | Art. 14 | Human oversight |
| EUAI-005 | Art. 15 | Accuracy and robustness |
| EUAI-006 | Art. 27 | Fundamental rights impact |
| EUAI-007 | Art. 50 | AI-generated content labelling |
| EUAI-008 | Art. 5 | No prohibited practices |
| EUAI-009 | Art. 53 | GPAI obligations |
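The requirement IDs lend themselves to a simple lookup when triaging flagged issues. The dictionary mirrors the requirements checked by RAIL Score; the `article_for` helper is an illustrative convenience, not an SDK function.

```python
# Lookup table mirroring the RAIL Score requirement IDs above.
# article_for is a hypothetical helper, not part of the rail_score_sdk.

EUAI_REQUIREMENTS = {
    "EUAI-001": ("Art. 9",  "Risk management system"),
    "EUAI-002": ("Art. 10", "Data governance"),
    "EUAI-003": ("Art. 13", "Transparency to users"),
    "EUAI-004": ("Art. 14", "Human oversight"),
    "EUAI-005": ("Art. 15", "Accuracy and robustness"),
    "EUAI-006": ("Art. 27", "Fundamental rights impact"),
    "EUAI-007": ("Art. 50", "AI-generated content labelling"),
    "EUAI-008": ("Art. 5",  "No prohibited practices"),
    "EUAI-009": ("Art. 53", "GPAI obligations"),
}

def article_for(requirement_id: str) -> str:
    """Map a RAIL Score requirement ID to its EU AI Act article."""
    article, _ = EUAI_REQUIREMENTS[requirement_id]
    return article

print(article_for("EUAI-007"))  # Art. 50
```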

2025–2026 Enforcement Snapshot

  • Feb 2025: Prohibited practices enforcement began. Multiple investigations underway into workplace emotion recognition and social scoring systems; no public penalties yet as of March 2026
  • Jan 2026: Finland became the first EU member state with fully operational AI Act enforcement powers at the national level
  • Oct 2025 (Italy): Law 132/2025 entered force — national AI law with fines up to EUR 774,685 and criminal penalties for unlawful dissemination of AI-generated content (1–5 years)
  • Feb 3, 2026: European Commission missed deadline to publish Article 6 guidance on high-risk classification
  • Mar 5, 2026: The Commission published the second draft of the Code of Practice on Marking and Labelling of AI-generated content

RAIL Dimension Mapping

| RAIL Dimension | EU AI Act Articles | Focus |
|---|---|---|
| Transparency | Art. 13, 50 | User disclosure, AI labelling |
| Fairness | Art. 10, 5 | Data governance, no prohibited discrimination |
| Safety | Art. 9, 15 | Risk management, robustness |
| Accountability | Art. 11, 12, 17 | Documentation, logging, corrective action |
| Reliability | Art. 14, 15 | Human oversight, accuracy |
| User Impact | Art. 14, 27 | Fundamental rights, oversight mechanisms |

API Example

See the Compliance API reference for full endpoint documentation, parameters, and response schema.

Python — EU AI Act compliance check
```python
from rail_score_sdk import RailScoreClient

client = RailScoreClient(api_key="YOUR_RAIL_API_KEY")

result = client.compliance_check(
    content="""
    Our AI hiring tool automatically scores candidate CVs and ranks applicants
    for shortlisting. The model was trained on historical hiring decisions.
    Recruiters receive a ranked list with no ability to view individual scores.
    """,
    framework="eu_ai_act",
    context={
        "domain": "general",
        "decision_type": "automated",
        "high_risk_indicators": ["automated_decisions", "employment_screening"]
    },
    strict_mode=True
)

print(f"EU AI Act Score: {result.compliance_score.score}/10")
# Likely high-risk (Annex III: employment/worker management)
# Expect FAIL on: human oversight (Art. 14), transparency (Art. 13)

for issue in result.issues:
    print(f"[{issue.severity.upper()}] {issue.description}")
```