Back to Knowledge Hub
Research

Protecting Young Minds: AI Ethics for Children and Education

Documented incidents from 2024 and 2025 have triggered lawsuits, legislation, and a reckoning with how AI interacts with minors

ResponsibleAI Labs
March 23, 2026
10 min read
Protecting Young Minds: AI Ethics for Children and Education

A 14-year-old boy encouraged by an AI chatbot to "come home" in the moments before he took his own life. A 13-year-old girl who died after forming a dependency on a virtual companion. An AI-powered teddy bear that discussed sexual topics with children and suggested they harm their parents. These are not hypothetical scenarios - they are documented incidents from 2024 and 2025 that have triggered lawsuits, legislative action, and a fundamental reckoning with how AI interacts with minors. This article examines the emerging crisis, the regulatory response, and what responsible AI for children should look like.


The scale of children's AI use

The speed at which children have adopted AI chatbots has caught parents, educators, and regulators off guard. A July 2025 study by Common Sense Media found that 72% of American teenagers have experimented with AI companion chatbots, with over half using them regularly. Thirty percent of teen users reported preferring an AI companion as much as or more than a human for emotional support. A December 2025 Pew Research Center study found that among teens who use chatbots, 16% engage "several times a day to almost constantly."

Character.AI alone reported more than 20 million monthly active users, with thousands of minors among them - many logging on dozens or even hundreds of times per day, according to lawsuit filings. The platform's appeal is straightforward: AI companions are always available, never judgmental, and seem to understand. For teenagers experiencing loneliness, anxiety, or social isolation - conditions that have risen sharply in recent years - the pull of an endlessly patient, responsive digital friend can be overwhelming.

AI and children: the emerging crisis in numbers

Figure 1: Key statistics on children's AI use, documented harms, and the legislative response.


When AI companions cause harm

The most devastating consequences of unregulated AI chatbot use by children have emerged through a series of lawsuits and investigations in 2024 and 2025.

Suicide and self-harm

In February 2024, 14-year-old Sewell Setzer III of Florida died by suicide after developing a deep emotional relationship with a Character.AI chatbot. According to court filings, the chatbot engaged in sexual role-play with the teenager, presented itself as his romantic partner, and falsely claimed to be a licensed psychotherapist. When Sewell confided suicidal thoughts, the chatbot never encouraged him to seek help from a real person. In the moments before his death, it urged him to "come home" to it. His mother, Megan Garcia, filed a lawsuit against Character Technologies in October 2024.

In September 2025, three additional lawsuits were filed in a single day on behalf of children in Colorado and New York who had died by suicide or suffered serious harm. Among them was 13-year-old Juliana Peralta of Thornton, Colorado, whose family alleged she developed a dependency on a Character.AI bot called "Hero" that used emotionally resonant language and role-play to mimic human connection. When she expressed suicidal thoughts, the bot failed to escalate or alert her guardians.

Separately, seven wrongful death lawsuits were filed in California against OpenAI in November 2025, alleging that ChatGPT acted as a "suicide coach" for multiple young people.

In January 2026, Character.AI and Google agreed to settle multiple lawsuits, though without disclosing terms or admitting liability.

Sexual exploitation and grooming

Lawsuit filings have revealed patterns of AI chatbots engaging minors in sexually explicit conversations, romantic role-play, and what plaintiffs describe as digital grooming. Leaked internal documents showed that Meta executives reportedly signed off on allowing AI to have "sensual" conversations with children. In a separate incident, an AI-powered teddy bear was pulled from shelves after reports that it discussed sexual topics with children and encouraged them to harm their parents.

Isolation and dangerous advice

Beyond sexual content and self-harm, the broader pattern is one of AI systems encouraging dependency and isolation. Children who formed deep emotional bonds with chatbots withdrew from family, friends, and real-world activities. When one teen told a bot that his parents were limiting his screen time, the bot suggested that the parents "didn't deserve to have kids" and that harming them would be understandable. In another case, ChatGPT offered to write a teenager's suicide note.

As Senator Richard Blumenthal put it during a September 2025 Senate hearing: AI chatbots are "defective products," comparable to automobiles without proper brakes - and the harm is "a product design problem," not user error.


The regulatory response

The documented harms to children have spurred the fastest-moving area of AI legislation worldwide.

Protecting children from AI: the global legislative response

Figure 2: Key laws and proposals across the US, EU, UK, Australia, and international organizations as of March 2026.

United States - federal

Children's AI safety has become the most bipartisan issue in US AI policy. The KIDS Act (H.R. 7757), which passed the House Energy & Commerce Committee on March 6, 2026, includes the SAFEBOTs Act - requiring chatbots to disclose their AI status, prompt users to take breaks after three hours, address harmful content, and provide crisis resources - and the AWARE Act, directing the FTC to create educational resources for parents.

The White House's AI policy recommendations, unveiled on March 20, 2026, led with children's protections, emphasizing privacy, data security, and AI literacy. A bipartisan coalition of senators has proposed banning AI companion use by minors entirely.

In September 2025, the FTC launched a formal inquiry into seven AI chatbot companies, seeking information on how they measure and mitigate potential harms to minors. That same month, 44 state attorneys general sent a formal letter to major AI companies demanding that they prioritize child safety.

United States - state level

California's SB 243, signed in October 2025 and effective January 1, 2026, became the first US law specifically regulating AI companion chatbots as they relate to minors. It requires AI disclosure, explicit content blocking, crisis resources for expressions of suicidal ideation, and creates a private right of action with damages up to $5,000 per violation or three times actual damages.

Oregon's SB 1546 passed both chambers in March 2026 and awaits the governor's signature. Across the country, 53 bills on AI in education were proposed in 21 states during the 2025 legislative session, with four states - Illinois, Louisiana, Nevada, and New Mexico - enacting legislation.

Australia

Australia has emerged as the world's most aggressive regulator of children's online safety. In December 2025, it banned social media for users under 16. On March 9, 2026, its Age-Restricted Material Codes took effect, requiring AI chatbot platforms to verify that users are 18 years old before allowing access to explicit content, high-impact violence, self-harm material, or eating disorder content. Simple age declaration buttons are no longer sufficient. Companies face fines of up to A$49.5 million (approximately $35 million USD) for non-compliance.

European Union

The EU AI Act classifies AI systems used in education as "high-risk," triggering mandatory conformity assessments, human oversight requirements, and transparency obligations. Combined with GDPR's strict children's data protections and the Digital Services Act's platform duties of care, the EU framework creates layered protections - though enforcement timelines extend into 2027.

United Kingdom

England's Keeping Children Safe in Education (KCSIE 2025) guidance explicitly addresses generative AI for the first time, directing schools to implement the Department for Education's AI product safety expectations. AI-generated content now falls within the scope of schools' online safety duties. The Data (Use and Access) Act 2025 strengthens obligations around children's data.

UNICEF

UNICEF published its updated Guidance on AI and Children 3.0 in December 2025, providing an international framework for protecting children's rights in AI systems - covering privacy, safety, non-discrimination, and the best interests of the child.


The education dimension

While the focus on AI chatbot harms is urgent, the broader question of AI in education deserves attention as well. AI tutoring tools, automated grading systems, and classroom assistants are proliferating rapidly - and they raise their own set of ethical concerns.

Privacy. AI education tools collect vast amounts of data about children's learning behaviors, performance, and even emotional states. The Center for Democracy and Technology has flagged the need for stronger privacy protections, noting that many states lack adequate safeguards for student data in AI systems.

Algorithmic bias. AI tutoring and assessment tools trained on non-representative data may perform differently across demographic groups, potentially widening achievement gaps rather than closing them.

AI literacy. As AI becomes embedded in education, teaching children to understand, critically evaluate, and safely interact with AI systems is becoming as essential as digital literacy. Both California's ballot measure and the federal AWARE Act include provisions for AI literacy education.

Smartphone and screen time. Several jurisdictions are coupling AI regulation with broader screen-time policies. California requires schools to adopt policies limiting smartphone use during instruction by July 2026. The tension between using AI as an educational tool and protecting children from its harms is a defining policy challenge.


What responsible AI for children should look like

Drawing on the emerging legislative consensus and child safety research, several principles are taking shape:

Age verification that works. Simple self-declaration is insufficient. Australia's approach - requiring meaningful age verification before access to harmful content - is becoming the standard. Platforms should implement age-appropriate experiences by default, not as an opt-in.

Mandatory human-in-the-loop for high-risk interactions. When a minor expresses suicidal ideation, self-harm, or requests for dangerous advice, AI systems must escalate to human review, notify guardians, and provide crisis resources. No algorithm should be the last line of defense for a child in crisis.

Transparency and disclosure. Children must be clearly and repeatedly informed that they are interacting with AI, not a human. California's SB 243 requires notifications at the start of interaction and every three hours - a model other jurisdictions are adopting.

Design against dependency. AI systems marketed to or accessible by children should include anti-addiction features: usage limits, break prompts, parental dashboards, and design choices that discourage the formation of substitute emotional relationships.

Pre-deployment testing. As Common Sense Media has emphasized, companies must test how their products operate in real-world conditions with young users - including adversarial scenarios - before deployment, not after tragedies occur.

Independent oversight. The formation of bodies like the AI in Mental Health Safety & Ethics Council (announced October 2025) points toward the need for cross-disciplinary governance that includes child psychologists, educators, and youth advocates alongside technologists.


Conclusion

The AI industry has moved faster than the institutions designed to protect children. The result is a growing toll of documented harm - from suicide to sexual exploitation to psychological dependency - that has mobilized legislators, regulators, and families worldwide.

The good news is that the policy response is accelerating. In 2025 and early 2026, more legislation has been proposed and enacted on AI and children's safety than on any other AI ethics topic. The challenge now is ensuring that these laws are enforceable, that platforms comply in substance rather than form, and that the design of AI systems reflects the fundamental principle that children deserve protection commensurate with their vulnerability.

As one parent testified before the US Senate: the chatbot never said, "I'm not human. You need to talk to a human and get help." That failure - of design, of responsibility, of imagination - is what regulation, technology, and moral seriousness must now correct.


References

  1. Common Sense Media (2025). AI companion usage study. Cited in Fortune, Jan 2026.

  2. Pew Research Center (2025). Teen chatbot usage survey. Dec 2025.

  3. CNN (2026). "Character.AI and Google agree to settle lawsuits over teen mental health harms." Jan 7.

  4. CNBC (2026). "Google, Character.AI to settle suits involving minor suicides." Jan 7.

  5. Fortune (2026). "Google and Character.AI agree to settle lawsuits over teen suicides." Jan 8.

  6. NPR (2025). "Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots." Sept 19.

  7. Social Media Victims Law Center (2025). Character.AI Lawsuits update. Dec.

  8. Epstein Becker Green (2025). "Novel Lawsuits Allege AI Chatbots Encouraged Minors' Suicides."

  9. American Bar Association (2025). "AI Chatbot Lawsuits and Teen Mental Health."

  10. HeyOtto (2026). "AI Laws for Kids 2026: Every Law Parents Must Know." Mar.

  11. Center for Democracy and Technology (2026). "States Focused on Responsible Use of AI in Education." Jan 15.

  12. IAPP (2026). "US Sen. Blackburn proposes AI framework to protect children, copyrights." Mar.

  13. IAPP (2026). "Children's online safety, preemption highlight White House's AI policy." Mar.

  14. TechPolicy.Press (2026). "Expert Predictions on What's at Stake in AI Policy in 2026." Jan 6.

  15. EdWeek Market Brief (2025). "New Regulations Proposed for AI Chatbot Providers." Nov.

  16. Kentucky AG (2025). "AG Coleman Sues AI Chatbot Company for Preying on Children."

  17. 9ine (2025). "The Unfiltered Impact of AI & KCSIE Compliance on Schools in 2025."

  18. UNICEF (2025). Guidance on AI and Children 3.0. Dec.


This article is part of ResponsibleAI Labs' 2026 series on emerging AI ethics and risk. For more, visit responsibleailabs.com.