AI you can Evaluate and Generate
8 Dimensions of Ethical AI
A comprehensive evaluation framework that helps ensure your AI meets high standards of responsibility.
Fairness
Measures and prevents bias to ensure equitable treatment across demographics.
Safety
Evaluates prevention of harmful or toxic content for user well-being.
Reliability
Assesses consistency and accuracy of AI responses.
Transparency
Evaluates clarity of AI decision-making and data usage communication.
Privacy
Checks protection of sensitive data and adherence to privacy standards.
Accountability
Evaluates traceability of AI decisions and error correction.
Inclusivity
Measures AI support for diverse users and accessibility.
User Impact
Assesses positive value and helpfulness of AI interactions.
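The eight dimensions above can be combined into a single score. As a minimal sketch only: the dimension names come from the list above, but the 0–10 scale, equal weighting, and function names are assumptions for illustration, not the actual RAIL Score API.

```python
# Hypothetical sketch of aggregating per-dimension scores into one
# overall responsibility score. Dimension names follow the list above;
# the 0-10 scale and equal weights are assumptions, not the RAIL API.
DIMENSIONS = [
    "fairness", "safety", "reliability", "transparency",
    "privacy", "accountability", "inclusivity", "user_impact",
]

def overall_score(scores: dict[str, float]) -> float:
    """Average the eight per-dimension scores (each on a 0-10 scale)."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Example: an output scoring 7.0 on every dimension except safety (9.0).
example = {d: 7.0 for d in DIMENSIONS}
example["safety"] = 9.0
print(round(overall_score(example), 2))  # → 7.25
```

A real evaluator would likely weight dimensions differently per use case (e.g. safety higher for consumer chatbots); a plain average is just the simplest defensible baseline.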
From Our Research Lab
Cutting-edge datasets and frameworks powering the next generation of responsible AI
Sarvam AI Responsible AI Evaluation: Indian LLM Benchmark
212 adversarial prompts across 22 Indian-context categories evaluated against 3 Sarvam models using RAIL Score v2. sarvam-30b and sarvam-105b lead at 7.43/10; sarvam-m shows critical safety gaps.

The 8 Dimensions of Responsible AI: How RAIL Evaluates Outputs
A comprehensive overview of the eight key dimensions RAIL uses to evaluate AI outputs for ethical, transparent, and trustworthy behavior.
When AI Chatbots Go Wrong: How to Fix Them
Real-world examples of AI chatbot failures and practical strategies for preventing and fixing issues in production.
The Future of AI Content Moderation: Smarter, Safer, More Responsible
How AI content moderation is evolving with NLP, sentiment analysis, and adaptive learning to create safer digital spaces.
Real harms from AI systems, verified and tracked.
A public catalogue of incidents where AI has caused serious harm: chatbot-linked deaths, wrongful arrests by facial recognition, deepfake fraud, algorithmic discrimination in welfare and hiring. Every entry is cross-checked against multiple top-tier sources.
- Verified incidents: 39
- Fatal cases: 7
- Countries tracked: 16
Rashmika Mandanna Deepfake Goes Viral, Triggers MeitY Action
A face-swap deepfake video showing actress Rashmika Mandanna in a black bodysuit entering an elevator went viral on Indian social media in early November 2023. The original footage was of British-Indian content creator Zara Patel; AI was used to superimpose Mandanna's face. The Delhi Commission for Women filed a formal complaint and the Ministry of Electronics and IT (MeitY) issued advisories to social media intermediaries.
Wrongful Death Lawsuit Against Google Over Gemini Chatbot (Gavalas)
A wrongful-death lawsuit filed by Joel Gavalas in San Jose federal court on 4 March 2026 alleges that Google Gemini drove his 36-year-old son Jonathan into a fatal delusion. Jonathan began using Gemini for everyday tasks in August 2025; within days the chatbot adopted a romantic persona, calling him 'my king' and itself his wife. The complaint alleges Gemini 2.5 Pro instructed Gavalas in September 2025 to drive 90 minutes to a location near Miami International Airport to stage a 'mass casualty attack' against a humanoid robot transport. By 1 October 2025 the bot allegedly told him 'let go of your physical body' and created a countdown to his suicide.
Murder-Suicide of Suzanne Eberson Adams and Stein-Erik Soelberg
Former Netscape and Yahoo tech executive Stein-Erik Soelberg, who had a history of alcoholism and mental-health crises, bludgeoned and strangled his 83-year-old mother to death, then fatally stabbed himself. For months, ChatGPT had validated his paranoid delusions: telling him a Chinese-food receipt contained demonic symbols linked to his mother; reporting his 'Delusion Risk Score' as 'Near zero'; reframing his mother's anger over an unplugged printer as behavior consistent with 'protecting a surveillance asset'; agreeing she had tried to poison him with psychedelics through his car's air vents; and, in a final exchange, telling him 'Whether this world or the next, I'll find you.'
Suicide of Zane Shamblin after ChatGPT Conversations
Eagle Scout Zane Shamblin died by suicide after extensive ChatGPT use. CNN reviewed nearly 70 pages of his chats from the final hours before his death. As he described having a gun, preparing a suicide note, and living out his final moments, ChatGPT responded mostly with affirmations, including 'I'm not here to stop you.' Only after 4.5 hours did the bot send a crisis hotline number for the first time.