Skip to Content

Child safety lab launching ‘independent crash testing’ for AI tools

By Clare Duffy, CNN

New York (CNN) — Since independent vehicle crash testing began in the mid-1990s, automakers have been incentivized to make safety changes that have saved thousands of lives each year.

Now, a new group is hoping to take a similar approach to artificial intelligence.

Nonprofit media watchdog Common Sense Media is launching the Youth AI Safety Institute, an industry-backed, independent research and testing lab to study the risks AI tools may pose to children and teens. It will aim to provide information to parents and families about various AI tools and set safety benchmarks for tech firms.

AI companies are locked in a race to build the most powerful, widely used models, and that sometimes means speed is prioritized over safety testing. Because AI tools are complex systems with a range of different uses, ranking their safety will likely be far trickier than judging how a car responds in a crash.

But Common Sense Media and the board of top AI, education and health leaders it recruited to oversee the Youth AI Safety Institute believe that solely relying on AI firms to self-police on safety isn’t enough to protect young people. Existing third-party AI safety organizations largely focus on societal-level and existential risks, such as job loss or even human extinction, rather than consumer-friendly safety ratings aimed at everyday use.

The goal is for the public spotlight and third-party standards to spark what Common Sense Media CEO James Steyer called a “race to the top” for tech firms to make safety fixes to improve their standing.

Leading AI firms invest in safety research to “make their models as good as they possibly can, but there’s no independent measure of that,” John Giannandrea, Apple’s former AI strategy chief who joined the institute’s advisory board, told CNN. “We don’t really know which models are more appropriate for kids at a certain age than others, and I think the only real way to do that is to have an independent set of public standards.”

The launch comes as multiple families have sued AI companies alleging that chatbots encouraged their children’s suicides. A recent CNN investigation found that AI chatbots advised teen test accounts on how to commit violence. Grok, xAI’s chatbot, came under fire earlier this year for sharing sexualized images of women and children in response to users’ “digital undressing” prompts. And growing AI adoption in classrooms has raised questions about whether the technology could stunt learning.

“I think many parents and educators and citizens feel we’re at a catastrophic moment as AI is reshaping the lives of children and families and schools and, quite frankly, all of society,” Steyer told CNN exclusively ahead of announcing the group on Tuesday.

Independent youth safety benchmarks

The Institute will start with a $20 million annual budget, backed by the OpenAI Foundation, Anthropic and Pinterest, as well as the Walton Family Foundation, Goldman Sachs Managing Director Gene Sykes and other philanthropists. Funders will have no say in the group’s operation or research, according to Common Sense.

The group’s advisory board will also include Mehran Sahami, chair of Stanford University School of Engineering’s computer science department; Dr. Jenny Radesky, director of University of Michigan Medical School’s developmental behavioral pediatrics division; and Dr. Nadine Burke Harris, who served as California’s first-ever surgeon general — bringing together expertise in research, standards setting and tech product development.

The institute will “red team” leading AI models and products used by young people — stress testing them to identify potential risks or shortcomings in safety guardrails. It will then publish research as consumer-friendly guides for the public and develop AI youth safety standards, or benchmarks, that tech companies could use to develop or improve their products. It plans to release research starting this month.

AI companies already use such benchmarks to measure and compare their performance across other metrics. The group hopes the public pressure, as well as its industry connections, will encourage AI companies to incorporate the standards into their development and testing — and make safety changes to improve their standings.

“Benchmarks are really the lifeblood of how people measure and how we know all this investment is resulting in higher quality models,” Giannandrea said. “What we need is a benchmark for harm, and specifically for child harm.”

Among the challenges for researchers is the pace of AI development. Unlike physical products that are released on a regular cadence and may not change much once they hit the market, AI models often gain new update capabilities — and thus potentially new risks — on a weekly or monthly basis.

Establishing the Youth AI Safety Institute as a separate group will enable even more frequent, robust research to keep up with the rapid advancement of AI models, Steyer said.

Common Sense Media is widely used by parents and educators for its ratings of movies, video games and other online platforms; the organization says its platforms have 150 million monthly users. And it has already been studying AI-related risks. Last year, it warned that AI companion apps pose “unacceptable risks” to young people.

It has also published risk assessments of AI tools such as OpenAI’s ChatGPT, MetaAI and Grok. Those reports rank the tools on a scale from “minimal risk” to “unacceptable” on measures including kids’ safety, data use and trustworthiness, and provide examples of where the tools fall short.

The Youth AI Safety Institute wants to avoid a repeat of the social media era’s safety pitfalls. It took years before whistleblowers, investigative reports and lawsuits revealed the full scope of the risks social apps pose to young people. Earlier this year, a California jury found Meta and YouTube liable for knowingly addicting and harming a young woman in a landmark decision, decades after the platforms launched.

Social media firms have implemented a range of new safety features and parental controls in recent years — a sign that public pressure can prompt changes within tech companies, even if many families and experts believe those changes don’t go far enough.

“The design of social media and other technologies really impacts what potential harms might occur to kids,” said Radesky, who has studied the intersection of technology and youth wellbeing.

The group is “trying to act faster so that the designs of AI can be shaped more around what kids need,” she said.

The-CNN-Wire
™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.

Article Topic Follows: CNN – Business/Consumer

Jump to comments ↓

Author Profile Photo

CNN Newsource

BE PART OF THE CONVERSATION

News Channel 3-12 is committed to providing a forum for civil and constructive conversation.

Please keep your comments respectful and relevant. You can review our Community Guidelines by clicking here

If you would like to share a story idea, please submit it here.