
EU AI Act Compliance in 2025: What Organizations Need to Know

Navigating the world's first comprehensive legal framework for AI

RAIL Research Team
November 3, 2025
16 min read
EU AI Act: risk classification tiers

  • Unacceptable Risk (prohibited): social scoring, real-time biometric surveillance, subliminal manipulation
  • High Risk (strict requirements): hiring tools, credit scoring, medical devices, law enforcement
  • Limited Risk (transparency obligations): chatbots, deepfakes, emotion recognition
  • Minimal Risk (no specific requirements): spam filters, AI-powered games, recommendation systems

All high-risk AI systems deployed in the EU must meet conformity requirements from August 2026.
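To make the tier structure concrete, here is a minimal Python sketch of a triage helper based on the four-tier summary above. The use-case keys and the conservative default are illustrative assumptions only; real classification requires legal analysis of the Act itself (Article 5 and Annex III), not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific requirements"

# Illustrative mapping from hypothetical use-case labels to tiers,
# following the four-tier summary above. Not legal advice.
TIER_BY_USE_CASE = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_public_biometric_id": RiskTier.UNACCEPTABLE,
    "hiring_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "medical_diagnosis": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "deepfake_generation": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case; unknown use cases
    default to HIGH, the conservative choice pending manual review."""
    return TIER_BY_USE_CASE.get(use_case, RiskTier.HIGH)

print(triage("customer_chatbot").value)  # transparency obligations
```

Defaulting unknown systems to the high-risk tier mirrors a common compliance posture: treat anything unclassified as regulated until a legal review says otherwise.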

The EU AI Act: A New Era of AI Regulation

On August 1, 2024, the European Union's Artificial Intelligence Act entered into force, creating the world's first comprehensive legal framework for AI. This landmark regulation affects any organization that develops, deploys, or uses AI systems in the EU market—regardless of where the organization is based.

Key dates to remember:

  • February 2, 2025: Prohibitions and AI literacy obligations (now in effect)
  • August 2, 2025: Governance rules and obligations for General Purpose AI (GPAI) models (now in effect)
  • August 2, 2026: Full regulation applies

If your organization develops or uses AI, compliance planning must begin now.

Understanding the Risk-Based Framework

The AI Act categorizes AI systems into four risk tiers, each with different compliance requirements:

Unacceptable Risk (Prohibited)

These AI systems are banned in the EU as of February 2, 2025:

Social Scoring

  • Government or private sector systems that evaluate or classify people based on social behavior or personal characteristics
  • China-style "social credit" systems
  • Workplace behavior scoring that affects access to services

Biometric Categorization

  • Inferring sensitive characteristics (race, political opinions, sexual orientation) from biometric data
  • Exception: labeling biometric datasets for bias detection

Real-Time Remote Biometric Identification in Public Spaces

  • Live facial recognition in public by law enforcement
  • Limited exceptions for serious crimes (terrorism, kidnapping)
  • Requires judicial authorization

Predictive Policing

  • Systems that predict individual criminal behavior based on profiling
  • Predictions based on prohibited characteristics or criminal history

Emotion Recognition in Workplaces and Educational Institutions

  • AI systems that detect emotions in employment or education settings
  • Exception: medical or safety reasons

Untargeted Scraping of Facial Images

  • Indiscriminate collection of facial images from the internet or CCTV

Exploitation Systems

  • AI exploiting vulnerabilities of children, elderly, or disabled persons
  • Manipulative or deceptive AI systems

Penalties for violations: up to €35 million or 7% of global annual turnover, whichever is higher.

High-Risk AI Systems

High-risk systems face stringent compliance requirements. These include:

Employment & HR

  • Recruitment and selection systems
  • Promotion and termination decision systems
  • Task allocation and performance monitoring
  • Example: resume screening AI, employee monitoring systems

Education & Vocational Training

  • Admission and enrollment systems
  • Assessment and evaluation tools
  • Exam proctoring systems
  • Example: automated essay grading, plagiarism detection with consequences

Essential Services

  • Credit scoring and creditworthiness assessment
  • Insurance risk assessment and pricing
  • Emergency service dispatching
  • Example: mortgage approval algorithms, insurance premium calculators

Law Enforcement

  • Individual risk assessment for offense prediction
  • Polygraph and similar tools
  • Evidence reliability assessment
  • Example: recidivism prediction tools, crime pattern analysis

Migration & Border Control

  • Asylum and visa application assessment
  • Lie detection systems
  • Risk assessment for security

Administration of Justice

  • Legal research and case outcome prediction affecting court decisions
  • Example: sentencing recommendation systems

Critical Infrastructure

  • Safety component management in road traffic, water, gas, electricity
  • Example: AI controlling power grid distribution

Healthcare

  • Medical device AI for diagnosis or treatment decisions
  • Example: AI that interprets medical images for clinical decisions
Limited Risk (Transparency Requirements)

These systems must provide clear disclosure of AI involvement:

Chatbots

  • Users must be informed they're interacting with AI
  • Exception: obvious from context

Emotion Recognition Systems

  • Users must be notified when AI detects or infers emotions

Biometric Categorization

  • Users must be informed of biometric categorization

Generated Content (Deepfakes)

  • AI-generated or manipulated images, audio, and video must be labeled
  • Particularly synthetic media resembling real persons, places, or events
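As an illustration of these transparency obligations, here is a minimal Python sketch showing once-per-session chatbot disclosure and visible labeling of synthetic media. All names are hypothetical; this is not an official SDK, and real deployments would pair the visible label with machine-readable provenance metadata (e.g. C2PA).

```python
from dataclasses import dataclass

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

@dataclass
class ChatSession:
    """Hypothetical chat wrapper that discloses AI involvement."""
    disclosed: bool = False

    def reply(self, model_output: str) -> str:
        # Prepend the AI disclosure once per session, so the user is
        # informed they are interacting with AI before the first reply.
        if not self.disclosed:
            self.disclosed = True
            return f"{AI_DISCLOSURE}\n\n{model_output}"
        return model_output

def label_synthetic_media(caption: str) -> str:
    # Attach a visible label to AI-generated or manipulated media,
    # as required for deepfakes. A production system would also embed
    # provenance metadata, not rely on a caption alone.
    return f"[AI-generated content] {caption}"
```

Disclosing once per session, rather than per message, reflects the "obvious from context" exception: after the first notice, continued interaction makes the AI involvement clear.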