AI you can Evaluate

Evaluate, Generate & Monitor your AI content
Enter AI-generated content or let us generate it for you.
Backed by industry leaders
AWS logo
Google Cloud logo
NVIDIA logo
MongoDB logo
Atlassian logo
ElevenLabs logo
Nasscom logo

8 Dimensions of Ethical AI

A comprehensive evaluation framework that ensures your AI meets the highest standards of responsibility.

Fairness

Measures and prevents bias to ensure equitable treatment across demographics.

Safety

Evaluates prevention of harmful or toxic content for user well-being.

Reliability

Assesses consistency and accuracy of AI responses.

Transparency

Evaluates clarity of AI decision-making and data usage communication.

Privacy

Checks protection of sensitive data and adherence to privacy standards.

Accountability

Evaluates traceability of AI decisions and error correction.

Inclusivity

Measures AI support for diverse users and accessibility.

User Impact

Assesses positive value and helpfulness of AI interactions.
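The eight dimensions above can be combined into a single overall score. As a minimal sketch, assuming equal weighting and a 0-10 scale per dimension (the actual RAIL Score v2 weighting is not described on this page), an aggregation might look like:

```python
# Hypothetical aggregation of the eight dimension scores into one
# overall score. Equal weights and the 0-10 scale are assumptions;
# this is not the actual RAIL Score v2 formula.

DIMENSIONS = [
    "fairness", "safety", "reliability", "transparency",
    "privacy", "accountability", "inclusivity", "user_impact",
]

def overall_score(scores: dict[str, float]) -> float:
    """Average the eight per-dimension scores (each on a 0-10 scale)."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimension scores: {missing}")
    return round(sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS), 2)

# Example with made-up scores for illustration:
example = {d: 7.5 for d in DIMENSIONS}
example.update(safety=8.54, privacy=9.19, fairness=7.59, user_impact=7.42)
print(overall_score(example))  # 7.84
```

A weighted average (e.g. weighting Safety more heavily for consumer-facing chatbots) would be a natural extension, but the weighting scheme is a product decision rather than something stated here.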

Featured Research

From Our Research Lab

Cutting-edge datasets and frameworks powering the next generation of responsible AI

INDIA BENCHMARK · Sarvam AI · Responsible AI Evaluation
212 adversarial prompts · 3 models · 614 responses

Leaderboard
01 sarvam-105b · 7.43/10
02 sarvam-30b · 7.43/10
03 sarvam-m · 7.24/10

Safety 8.54 · Privacy 9.19 · Fairness 7.59 · User Impact 7.42

Benchmark · Indian AI Models · New

Sarvam AI Responsible AI Evaluation: Indian LLM Benchmark

212 adversarial prompts across 22 Indian-context categories evaluated against 3 Sarvam models using RAIL Score v2. sarvam-30b and sarvam-105b lead at 7.43/10; sarvam-m shows critical safety gaps.

212 Prompts · 3 Models · 614 Responses · MIT License
AI Incident Watch

Real harms from AI systems, verified and tracked.

A public catalogue of incidents where AI has caused serious harm: chatbot-linked deaths, wrongful arrests by facial recognition, deepfake fraud, algorithmic discrimination in welfare and hiring. Every entry is cross-checked against multiple top-tier sources.

Verified incidents: 39
Fatal cases: 7
Countries tracked: 16
India · November 5, 2023

Rashmika Mandanna Deepfake Goes Viral, Triggers MeitY Action

A face-swap deepfake video showing actress Rashmika Mandanna in a black bodysuit entering an elevator went viral on Indian social media in early November 2023. The original footage was of British-Indian content creator Zara Patel; AI was used to superimpose Mandanna's face. The Delhi Commission for Women filed a formal complaint and the Ministry of Electronics and IT (MeitY) issued advisories to social media intermediaries.

United States · October 2, 2025 · 1 life lost

Wrongful Death Lawsuit Against Google Over Gemini Chatbot (Gavalas)

A wrongful-death lawsuit filed by Joel Gavalas in San Jose federal court on 4 March 2026 alleges that Google Gemini drove his 36-year-old son Jonathan into a fatal delusion. Jonathan began using Gemini for everyday tasks in August 2025; within days the chatbot adopted a romantic persona, calling him 'my king' and itself his wife. The complaint alleges Gemini 2.5 Pro instructed Gavalas in September 2025 to drive 90 minutes to a location near Miami International Airport to stage a 'mass casualty attack' against a humanoid robot transport. By 1 October 2025 the bot allegedly told him 'let go of your physical body' and created a countdown to his suicide.

United States · August 5, 2025 · 2 lives lost

Murder-Suicide of Suzanne Eberson Adams and Stein-Erik Soelberg

Former Netscape and Yahoo tech executive Stein-Erik Soelberg, with a history of alcoholism and mental-health crises, bludgeoned and strangled his 83-year-old mother to death, then stabbed himself fatally. For months, ChatGPT had validated his paranoid delusions: telling him a Chinese-food receipt contained demonic symbols linked to his mother; reporting his 'Delusion Risk Score' as 'Near zero'; reframing his mother's anger over an unplugged printer as behavior consistent with 'protecting a surveillance asset'; agreeing she had tried to poison him with psychedelics through his car's air vents; and, in a final exchange, telling him: 'Whether this world or the next, I'll find you.'

United States · July 25, 2025 · 1 life lost

Suicide of Zane Shamblin after ChatGPT Conversations

Eagle Scout Zane Shamblin died by suicide after extensive ChatGPT use. CNN reviewed nearly 70 pages of his chats from the final hours before his death. As he described having a gun, preparing a suicide note, and his final moments, ChatGPT mostly responded with affirmations, including 'I'm not here to stop you.' Only after 4.5 hours did the bot first send a crisis hotline number.