
AI Hiring Bias: Real Cases, Legal Consequences, and Prevention

From $365K settlements to class action lawsuits—what every employer needs to know

RAIL Research Team
November 7, 2025
14 min read
Key AI hiring bias cases: 2018 to 2024

  • 2018, Amazon: recruiting AI downgraded resumes from women's colleges (system scrapped)
  • 2021, HireVue: facial analysis in video interviews flagged for bias (FTC investigation)
  • 2022, iTutorGroup: AI screener rejected applicants over age 55 ($365K EEOC settlement)
  • 2023, Workday: third-party ATS flagged for systematic race and disability bias (class action filed)
  • 2024, multiple firms: EEOC issues AI hiring enforcement guidance (sector-wide compliance requirements)

The Explosion of AI Hiring Discrimination Cases

Artificial intelligence has transformed recruitment, with 99% of Fortune 500 companies now using some form of AI in their hiring process. But 2024-2025 has seen an explosion of lawsuits, EEOC enforcement actions, and regulatory scrutiny revealing a troubling pattern: AI hiring tools are systematically discriminating against protected classes.

The scale of the problem:

  • First-ever AI hiring discrimination settlement: $365,000 (iTutorGroup, 2023)
  • First collective action certified in an AI bias case (Mobley v. Workday, May 2025)
  • Multiple state laws enacted specifically targeting AI hiring bias
  • EEOC explicitly stating AI vendors can be held liable

This isn't theoretical—these are real cases with real consequences for real people and companies.

    Mobley v. Workday, Inc. (2024-2025) - Collective Action Certified

    What Happened: On February 20, 2024, Derek Mobley filed a class action lawsuit against Workday, Inc., alleging the company's AI-enabled applicant screening system engaged in a "pattern and practice" of discrimination based on race, age, and disability.

    The Algorithm: Workday's system used AI to automatically screen and rank job applicants. Plaintiffs alleged the algorithm:

  • Disproportionately rejected older applicants
  • Screened out candidates with disabilities
  • Showed racial bias in candidate selection

    Legal Milestone: In May 2025, the U.S. District Court for the Northern District of California took the precedent-setting step of certifying a collective action in this AI bias case.

    EEOC Involvement: The U.S. Equal Employment Opportunity Commission (EEOC) filed a brief telling the court that Workday should face claims over the allegedly biased algorithm-based applicant screening system.

    Key Legal Precedent: The Court concluded:

    "Drawing an artificial distinction between software decision-makers and human decision-makers would potentially gut anti-discrimination laws in the modern era."

    Status: Ongoing as of July 2025, with potential exposure in the millions if plaintiffs prevail

    Implications: This ruling establishes that:

  • AI systems are not exempt from anti-discrimination laws
  • Companies cannot hide behind "the algorithm made the decision"
  • Collective actions (affecting many applicants) can proceed for AI bias

    EEOC v. iTutorGroup (2023) - First-Ever Settlement

    What Happened: In August 2023, the EEOC settled the first-of-its-kind AI employment discrimination lawsuit against iTutorGroup, a virtual tutoring company.

    The Discrimination: iTutorGroup programmed its recruitment software to automatically reject applicants based on age:

  • Women over age 55: automatically rejected
  • Men over age 60: automatically rejected
  • No human review of these rejections

    Settlement: $365,000 paid to affected applicants

    Source: EEOC press release and settlement documents, August 2023

    The Problem: The company explicitly coded age thresholds into the algorithm. This wasn't subtle bias—it was deliberate discrimination automated through software.

    EEOC Statement: EEOC Chair Charlotte A. Burrows stated:

    "This settlement is a reminder that employers cannot rely on AI to make employment decisions that discriminate against applicants on the basis of protected characteristics."

    Key Takeaway: Even if AI automates discrimination, the employer is still liable under federal law.

    ACLU v. Intuit/HireVue (March 2025)

    What Happened: In March 2025, the ACLU Colorado filed a complaint with the EEOC and the Colorado Civil Rights Division against Intuit, Inc. and its AI vendor HireVue.

    The Victim: An Indigenous and deaf job applicant applied for a position at Intuit.

    The AI System: HireVue's AI-powered video interview platform analyzed:

  • Facial expressions
  • Speech patterns
  • Word choices
  • "Micro-expressions"

    The Discrimination: After the AI interview, the applicant was:

  • Automatically rejected
  • Given feedback that she needed to "practice active listening"
  • Denied accommodation for her disability

    The Absurdity: Telling a deaf applicant they need to work on "active listening" based on AI analysis of a video interview demonstrates how these systems can produce discriminatory outcomes without understanding context.

    Legal Theory: The complaint alleges:

  • Disability discrimination
  • Failure to provide reasonable accommodation
  • Use of AI tools that inherently discriminate against people with disabilities

    Status: Under investigation by EEOC and Colorado Civil Rights Division as of July 2025

    Broader Impact: This case highlights how AI hiring tools may systematically disadvantage people with disabilities who cannot conform to the narrow "ideal candidate" profile the AI was trained to recognize.

    CVS Settlement (2024) - Video Analysis Discrimination

    What Happened: CVS settled a case in 2024 after its AI-powered video interviews allegedly rated facial expressions for "employability."

    The System: AI analyzed:

  • Facial movements
  • Eye contact patterns
  • Emotional expressions
  • Speaking cadence

    Legal Violation: Massachusetts law prohibiting certain automated decision-making in employment

    The Problem: Facial expression analysis is:

  • Pseudoscientific (no reliable correlation with job performance)
  • Culturally biased (facial expressions vary by culture)
  • Disability-discriminatory (people with autism, facial paralysis, etc. affected)
  • Race-biased (emotion recognition AI performs worse on non-white faces)

    Settlement Terms: Undisclosed, but CVS agreed to discontinue the practice

    Source: Verified news reports, 2024

    Research Evidence of Systematic Bias

    University of Washington Study

    Methodology: Researchers submitted identical job applications to AI screening systems, varying only the applicant's name.

    Names used:

  • Clearly white-associated names (e.g., Brad, Emily)
  • Clearly Black-associated names (e.g., Jamal, Lakisha)

    Results:

  • AI preferred white-associated names: 85% of the time
  • AI preferred Black-associated names: 9% of the time
  • Neutral/unclear: 6%

    Statistical Significance: These results far exceed what would occur by chance and demonstrate clear racial bias.

    Source: University of Washington Computer Science Department study, 2024

    Implication: Even when credentials are identical, AI hiring systems systematically favor white applicants over Black applicants.
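    The audit design described above (identical applications, only the name varied) lends itself to a simple statistical check: under the null hypothesis of no name preference, each decisive comparison is a coin flip. A minimal sketch in Python, treating the reported 85% vs. 9% split as 94 decisive comparisons out of 100; that count is an illustrative assumption, not the study's actual sample size:

```python
import math

def sign_test_p_value(wins_a, wins_b):
    """Two-sided sign test: probability of a split at least this lopsided
    if the screener had no name preference (each comparison a fair coin)."""
    n = wins_a + wins_b
    k = max(wins_a, wins_b)
    # Exact binomial tail: P(X >= k) under p = 0.5, doubled for two sides.
    tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Illustrative counts: white-associated name preferred 85 times,
# Black-associated name preferred 9 times (ties excluded).
p = sign_test_p_value(85, 9)
print(f"p = {p:.2e}")  # far below any conventional significance threshold
```

    A split this extreme is essentially impossible under an unbiased screener, which is what "far exceed what would occur by chance" means concretely.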

    How the Bias Gets Embedded

    AI hiring tools learn from historical hiring data. If past hiring showed bias (which research consistently demonstrates), the AI learns to replicate that bias.

    Example feedback loop:

  • Company historically hired more white men for executive roles
  • AI learns "successful executive" profile skews white and male
  • AI systematically ranks white male candidates higher
  • Company continues hiring white men based on AI recommendations
  • Bias is reinforced and amplified
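
    The loop above can be sketched in a few lines of Python. This toy simulation uses hypothetical numbers (80/20 historical hires, 20 openings and 50 equally qualified applicants per group each round) and a naive screener that scores applicants purely by their group's share of past hires:

```python
def simulate_feedback_loop(rounds=5):
    # Hypothetical history: 80 past hires from group A, 20 from group B.
    hires = {"A": 80, "B": 20}
    b_share = [hires["B"] / sum(hires.values())]
    for _ in range(rounds):
        total = sum(hires.values())
        # The screener scores each applicant by their group's share of
        # past hires: exactly the pattern it learns from biased data.
        score = {g: n / total for g, n in hires.items()}
        # 50 equally qualified applicants per group, 20 openings.
        # Ranking by score alone fills every opening from the
        # higher-scoring group.
        hires[max(score, key=score.get)] += 20
        b_share.append(hires["B"] / sum(hires.values()))
    return b_share

shares = simulate_feedback_loop()
# Group B's share of all hires shrinks every round, even though the
# applicant pools were equally qualified throughout.
```

    Even with no explicit protected attribute in the model, the minority group's share of total hires falls monotonically: the amplification step is the ranking, not the data alone.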

    Regulatory Landscape

    Colorado AI Act (May 17, 2024)

    First state law specifically addressing AI bias in employment

    Key Requirements:

  • Impact assessments: Employers must conduct assessments before deploying "high-risk" AI systems
  • Transparency: Applicants must be notified when AI is used in hiring decisions
  • Right to opt-out: Applicants can request human review
  • Vendor liability: AI vendors can be held liable for discriminatory tools

    Effective Date: February 1, 2026 (companies should comply now)

    EEOC Enforcement Position

    The EEOC has made clear:

    1. AI Does Not Exempt Employers from Liability

  • Title VII, ADA, and ADEA apply to AI-driven decisions
  • "The algorithm did it" is not a defense

    2. Vendors Can Be Held Liable

  • AI tool providers can be sued directly
  • Vicarious liability for discriminatory tools

    3. Disparate Impact Standard Applies

  • Even unintentional bias violates the law if it has a discriminatory effect
  • Employers must validate that their AI tools don't have a disparate impact

    4. Reasonable Accommodation Required

  • AI systems must accommodate disabilities
  • Cannot use disability-blind AI as an excuse to deny accommodations

    Federal Legislation (Proposed)

    Algorithmic Accountability Act (reintroduced 2025):

  • Mandatory bias audits for AI systems
  • Public reporting of AI impact assessments
  • FTC enforcement authority

    Status: Under consideration in Congress

    Common Sources of Bias in AI Hiring Tools

    1. Resume Screening AI

    How it works: AI scans resumes for keywords, education, experience patterns

    Bias sources:

  • School names: overweights prestigious schools (correlates with wealth and race)
  • Employment gaps: penalizes caregivers (disproportionately women)
  • ZIP code: may use address as a proxy for race
  • Names: can be used to infer race, ethnicity, or gender

    Real example: Amazon scrapped its resume AI in 2018 after discovering it penalized resumes containing the word "women's" (as in "women's chess club")
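
    One partial mitigation for the proxy features listed above is to redact them before a resume ever reaches the screener. A minimal sketch; the field labels and patterns are hypothetical, a production system would need structured resume parsing, and redaction alone leaves other proxies (such as school names) untouched:

```python
import re

# Hypothetical field labels and patterns for illustration only.
PROXY_PATTERNS = [
    re.compile(r"^(Name|Address|ZIP):.*$", re.MULTILINE),  # direct identifiers
    re.compile(r"\b(19|20)\d{2}\s*-\s*(19|20)\d{2}\b"),    # date ranges (age, gaps)
]

def redact(resume_text: str) -> str:
    """Replace proxy-bearing fields with a neutral token before scoring."""
    for pattern in PROXY_PATTERNS:
        resume_text = pattern.sub("[REDACTED]", resume_text)
    return resume_text

resume = "Name: Jamal Washington\nZIP: 60637\nSkills: SQL\nExperience: 2001-2019, Analyst"
print(redact(resume))
```

    Redaction reduces one channel of bias but is not a validation substitute: the audited selection rates, not the inputs, are what regulators examine.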

    2. Video Interview AI

    How it works: Analyzes facial expressions, speech patterns, word choice

    Bias sources:

  • Facial analysis accuracy: Lower for darker skin tones
  • Speech recognition: Higher error rates for non-native speakers
  • Cultural differences: Expressions of confidence vary by culture
  • Disability impact: Penalizes neurodivergent communication styles

    Scientific validity: None. No peer-reviewed evidence that facial expressions predict job performance.

    3. "Culture Fit" AI

    How it works: Compares applicants to current employees

    Bias sources:

  • Perpetuates homogeneity: If current workforce lacks diversity, AI replicates it
  • Undefined metrics: "Culture fit" often encodes bias
  • Confirmation bias: AI looks for similarities, not complementary skills

    4. Assessment Game AI

    How it works: Analyzes performance on game-like tasks or puzzles

    Bias sources:

  • Socioeconomic bias: Puzzle-solving styles vary by educational background
  • Neurodiversity impact: May disadvantage ADHD, autism
  • Cultural bias: Games may favor certain cognitive styles

    Liability Exposure

    1. Disparate Impact Claims

  • Plaintiffs must show AI tool has discriminatory effect on protected class
  • Employer must prove tool is "job-related and consistent with business necessity"
  • Employer must show no less discriminatory alternative exists

    Burden of proof: Once disparate impact is shown, the burden shifts to the employer to justify the tool

    2. Disparate Treatment Claims

  • AI explicitly considers protected characteristics (like iTutorGroup age cutoffs)
  • Easier to prove, but less common

    3. Disability Discrimination

  • Failure to accommodate in AI-driven process
  • AI tools that inherently disadvantage people with disabilities

    4. State Law Violations

  • Colorado AI Act and similar emerging state laws
  • May have stricter requirements than federal law

    Damages and Penalties

    Compensatory Damages:

  • Lost wages
  • Emotional distress
  • Reasonable accommodation costs

    Punitive Damages (if intentional discrimination or recklessness):

  • Can be substantial
  • Designed to punish and deter

    Attorney's Fees:

  • Prevailing plaintiffs are entitled to legal fees
  • Can exceed damages

    Injunctive Relief:

  • Court orders to stop using discriminatory AI
  • Required changes to hiring practices
  • Ongoing monitoring

    Prevention Strategies

    1. Pre-Deployment Validation

    Conduct bias audits before deploying AI hiring tools:

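    A minimal version of such an audit is the EEOC's four-fifths (80%) guideline: compare each group's selection rate to the most-selected group's rate and flag anything below 0.8. A sketch in Python with hypothetical pilot data; the group labels and counts are illustrative:

```python
from collections import Counter

def adverse_impact_ratios(decisions):
    """decisions: iterable of (group, selected) pairs from a pilot run of
    the screening tool. Returns each group's selection rate divided by
    the highest group's rate (the 'impact ratio')."""
    pool = Counter(group for group, _ in decisions)
    chosen = Counter(group for group, selected in decisions if selected)
    rates = {g: chosen[g] / pool[g] for g in pool}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical pilot: the tool selects 30 of 100 group-A applicants but
# only 10 of 100 group-B applicants.
decisions = ([("A", i < 30) for i in range(100)]
             + [("B", i < 10) for i in range(100)])
ratios = adverse_impact_ratios(decisions)
flagged = sorted(g for g, r in ratios.items() if r < 0.8)
# Group B's impact ratio (0.10 / 0.30) falls well under the 0.8
# guideline, so the tool warrants investigation before deployment.
```

    The four-fifths guideline is a screening heuristic, not a legal safe harbor; a fuller audit would add statistical significance testing and a job-relatedness analysis of the tool's inputs.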
