An Agenda for Ensuring Child Safety in the AI Era: FAS policy brief outlines enforcement-backed framework
Regulatory Updates | January 12, 2025

By CAIROS AI Research Team

In January 2025, the Federation of American Scientists (FAS) released a comprehensive policy brief as part of their Day One Project, calling on the next US administration to make responsible AI policy for children—especially in K-12 education—a top priority. Authored by Amina Fazlullah and Ariel Fox Johnson, the brief proposes the creation of an AI and Kids Initiative led by the administration.

But what makes this proposal stand out is its unwavering emphasis on one critical element that previous frameworks have lacked: enforcement with teeth.

As the authors state emphatically: “The framework cannot be voluntary, enabling companies to pick and choose whether or not to follow recommendations. We’ve seen what happens when we do not put in place guardrails for tech, such as increased risk of child addiction, depression and self-harm—and it should not happen again.”

The Stakes: AI Is Already Embedded in Children’s Lives

The brief opens with sobering statistics that underscore the urgency:

Widespread But Largely Unsupervised Use

  • Seven in ten teens have used generative AI, with homework help being the most common use
  • Only one-third of parents whose children use generative AI are even aware of such use
  • Most teens and parents report that their schools either have no AI policy or have not communicated one
  • Machine learning systems are embedded in just about every application kids use at school and at home

Established Vulnerabilities

The brief builds on well-established research showing that children and teenagers are vulnerable to manipulation by technology:

  • Almost one-third of teens say they are on social media “almost constantly” (US Surgeon General report)
  • Almost half of youth say social media reduces their attention span and takes time away from other activities they care about
  • Most children cannot distinguish ads from content until they are at least eight years old
  • Most children do not realize ads can be customized to target them specifically

AI Amplifies Existing Harms

The authors note that AI technology is particularly concerning given:

  • Its novelty and rapid evolution
  • The speed and autonomy at which the technology can operate
  • Its frequent opacity, even to the developers of AI systems, about how inputs and outputs may be used or exposed

Specific problematic uses already identified include:

  • Emotion detection systems
  • Biometric data collection
  • Facial recognition (built from scraping online images that include children)
  • Companion AI
  • Automated education decisions
  • Social scoring

As the brief warns: “This list will continue to grow as AI is further adopted.”

Why Existing Frameworks Fall Short

The FAS brief acknowledges that numerous useful frameworks and toolkits already exist from organizations like EdSafe, TeachAI, NIST, NTIA, and the Department of Education. However, these efforts suffer from critical gaps:

Three Key Failures:

  1. Lack of clear rules regarding AI products used with children
  2. No specific risk management frameworks addressing use of AI in education and by children more broadly
  3. No requirement to comply and no enforcement of prohibitions

The result? Individual actors, school districts, and states are left to bear the burden of assessing AI tools for children—a task that demands specialized expertise, resources, and coordination that most lack.

Moreover: “We cannot say that this is merely a nascent technology and that we can delay the development of protections. We already know AI will critically impact our lives. We’ve watched tech critically impact lives and AI-enabled tech is both faster and potentially more extreme.”

The Three-Pronged Approach

The FAS brief proposes a comprehensive strategy with three main components, each with detailed implementation recommendations:

Recommendation 1: Build a Coordinated Framework – An AI Safety and Kids Initiative at NIST

The brief calls for the National Institute of Standards and Technology (NIST) to lead federal efforts on AI safety for children, serving as the central organizing authority and clearinghouse.

Designate Education and Child Use as High Risk

The administration should clearly categorize educational uses and AI systems likely to be accessed by children within a risk-level framework, similar to the EU AI Act’s approach. This provides a strong signal that these uses require protections (audits, transparency, enforcement) to prevent or address potential harm.

The brief notes: “Educational uses of AI are recognized to pose higher risk, according to the EU Artificial Intelligence Act and other international frameworks. The EU recognized that risk management requires special consideration when an AI system is likely to be accessed by children.”

NIST Should Develop Risk Management Profiles

NIST, in partnership with others, should develop risk management profiles for:

  • Platform developers building AI products for use in education
  • Products likely to be accessed by children

Emphasis should be on safety and efficacy before technology products come to market, with audits throughout development.

Specific actions for NIST:

1. Develop a Committee with the Department of Education (ED), the Federal Trade Commission (FTC), and the Consumer Product Safety Commission (CPSC)

  • Periodically update Risk Management Framework (RMF) profiles
  • Include benchmarking standards related to safety

2. Refine Risk Levels and RMFs Relevant to Education

  • Work in partnership with the National Telecommunications and Information Administration (NTIA) and the Department of Education
  • Through an open call to stakeholders

3. Work in Partnership to Refine Risk Levels for AI Systems Likely to Be Accessed by Children

  • Collaborate with NTIA, FTC, CPSC, and the Department of Health and Human Services (HHS)

NIST’s AI Safety Institute Should Provide Clarity

The administration should task NIST’s AI Safety Institute to provide clarity on how safety should be considered for the use of AI in education and for AI systems likely to be accessed by children through:

Developer Guidance: Promulgate safety guidance for developers of AI systems likely to be accessed by children or used in education.

Procurement Guidance: Collaborate with the Department of Education to provide guidance on safety, efficacy, and privacy to support educational procurement of AI systems.

Information Clearinghouse: Support state bodies and other entities developing guidance on use of AI systems by:

  • Serving as a clearinghouse for information on the state of AI systems
  • Tracking developments in efficacy and safety
  • Highlighting through periodic reporting the concerns and needs of users

This clearinghouse function is critical—it prevents individual states, school districts, or actors from having to independently assess AI tools for children, a burden they are ill-equipped to bear.

Recommendation 2: Champion Legislation to Support Youth Privacy and Online Safety in AI

The brief calls for comprehensive updates to federal privacy and safety laws, recognizing that current laws were written before the AI era.

Update Children’s Privacy Laws

Consumer Protections (Update the Children’s Online Privacy Protection Act, COPPA):

  • Consider requirements generally prohibiting the use of children’s personal information for training AI models, unless the data is deidentified or aggregated and consent is obtained (similar to California AB 2877)
  • Address use of technology in educational settings by children

Education Protections (Update the Family Educational Rights and Privacy Act, FERPA): The Department of Education has acknowledged that educational uses of AI models may not be aligned with FERPA or state student privacy laws. The brief calls for FERPA updates to:

  • Explicitly cover personal information collected by and shared with large language models (LLMs), so that covered education records include this data
  • Limit sharing of directory information for all purposes including AI
  • Address when ed tech vendors operate as “school officials”
  • Generally prohibit training AI models on student personal information

It is currently unclear when information about students shared with AI systems is subject to FERPA, creating a dangerous protection gap.

Pass AI-Specific Legislation

The brief calls for Congress to pass AI-specific legislation addressing the development and deployment of AI systems for use by children:

Address High-Risk Uses: Support legislation to prohibit the use of AI systems in high-risk educational contexts, or when likely to be accessed by children, unless committee-identified benchmarks are met.

Use of AI in educational contexts and when accessed by children should be default deemed high risk unless it can be demonstrated otherwise.

Specific examples of high-risk uses in education include:

  • AI for threat detection and disciplinary uses
  • Exam proctoring
  • Automated grading and admissions
  • Generative and companion AI use by minor students

Require Third-Party Audits: Support legislation to require third-party audits at the application, model, and governance level, considering:

  • Functionality and performance
  • Robustness
  • Security and privacy
  • Safety
  • Educational efficacy (as appropriate)
  • Accessibility
  • Risks and mitigation strategies

Require Transparency: Support legislation to require transparency reporting by AI developers.

Address Harmful Design Features: Support Congressional passage of online safety laws that address harmful design features in technology—specifically, features that can lead to:

  • Medically recognized mental health disorders (anxiety, depression, eating disorders, substance use, and suicide)
  • Patterns of use indicating addiction-like behavior

This aligns with Title I of the Senate-passed Kids Online Safety and Privacy Act.

Recommendation 3: Ensure Every Child Can Benefit from the Promise of AI

The brief emphasizes that policy should not only protect children from AI harms but also ensure equitable access to AI benefits, preventing a deepening of the digital divide.

Support Comprehensive Digital Literacy

Highlight Meaningful Use: Provide periodically updated guidance for schools, teachers, students, and caregivers on the best available uses of AI technology for education.

Support Professional Development: The Department of Education and the National Science Foundation (NSF) should collaborate on:

  • Professional development guidelines
  • Flagging new areas for teacher training
  • Administering funding to support educator professional development

Comprehensive Digital Literacy Programs: NTIA and the Department of Education should collaborate to administer funds for digital literacy efforts that support both students and caregivers. Digital literacy guidance should:

  • Support both use and understanding
  • Dynamically address concerns around current risks or safety issues as they arise

Clearinghouse for AI Developments: Experts in government at NIST, NTIA, the FTC, the FCC, and the Department of Education can work collaboratively to:

  • Periodically alert and inform consumers and digital literacy organizations about developments with AI systems
  • Serve as a resource to alert stakeholders downstream on both positive and negative developments

For example, the FCC Consumer Advisory Committee was tasked with developing recommendations, including a consumer education outreach plan, regarding AI-generated robocalls.

The Critical Element: Enforcement With Teeth

What distinguishes this FAS brief from many other AI policy proposals is its unwavering insistence on real enforcement mechanisms. The authors place significant emphasis on this point:

“Critically, standards and requirements need teeth. Frameworks should require that companies comply with legal requirements or face effective enforcement (such as by a well-funded expert regulator, or private lawsuits), with tools such as fines and injunctions.”

Why Voluntary Frameworks Have Failed

The brief points to social media as a cautionary tale:

“We have seen with past technological developments that voluntary frameworks and suggestions will not adequately protect children. Social media for example has failed to voluntarily protect children and poses risks to their mental health and well being.”

Social media harms include:

  • Exacerbating body image issues
  • Amplifying peer pressure and social comparison
  • Encouraging compulsive device use
  • Reducing attention spans
  • Connecting youth to extremism, illegal products, and deadly challenges

The financial incentives do not appear to exist for technology companies to appropriately safeguard children on their own.

Enforcement Should Include:

The brief calls for enforcement tailored to each specific law, including, as appropriate:

  • Private rights of action
  • Well-funded federal enforcers
  • State and local enforcement

Companies should be incentivized to act—not just encouraged, but required.

The next administration can support enforcement by funding the government positions responsible for enforcing such laws.

Learning from State Innovation

The brief encourages the federal government to take note of innovative policy ideas emerging at the state level, citing:

  • Legislation and proposals in Colorado, California, and Texas
  • Detailed guidance in over 20 states, including Ohio, Alabama, and Oregon

This state-level activity demonstrates both the urgent need for federal action and the feasibility of implementing child-centered AI protections.

International Context: The EU AI Act

The FAS brief repeatedly references the EU Artificial Intelligence Act as a model for risk-based approaches to AI regulation:

“The EU recognized that risk management requires special consideration when an AI system is likely to be accessed by children.”

The EU AI Act establishes tiered risk levels, and the brief suggests the US should similarly categorize education and use by children within a risk-level framework. This would provide:

  • A strong signal to policymakers at the state and federal levels
  • A clear indication that these uses require protections
  • A framework for audits, transparency, and enforcement

Why This Matters Now

The brief makes clear that delay is not an option:

“We cannot say that this is merely a nascent technology and that we can delay the development of protections. We already know AI will critically impact our lives. We’ve watched tech critically impact lives and AI-enabled tech is both faster and potentially more extreme.”

The Window Is Closing

AI is already embedded in children’s lives:

  • Seven in ten teens using generative AI
  • Machine learning embedded in nearly every application kids use
  • Most parents unaware of their children’s AI use
  • Most schools with no AI policy or poor communication

The Stakes Are High

From the brief: “We’ve seen what happens when we do not put in place guardrails for tech, such as increased risk of child addiction, depression and self-harm—and it should not happen again.”

The authors emphasize that policymakers, industry, and educators “owe it to the next generation to set in place a responsible policy that embraces this new technology while at the same time ensuring all children’s well-being, privacy, and safety is respected.”

How This Connects to CAIROS AI’s Mission

The FAS brief reinforces several core principles that drive our work at CAIROS AI:

Specialized Expertise Is Essential

The brief’s call for NIST to serve as a clearinghouse aims to ensure that “individual actors and states do not bear that responsibility” of assessing AI tools for children. This expertise gap is precisely what specialized child-safety red-teaming addresses.

Proactive Assessment Before Deployment

The emphasis on “safety and efficacy before technology products come to market, with audits throughout development” aligns with our proactive testing methodology—identifying vulnerabilities before they impact children.

High-Risk Designation Demands Higher Standards

The brief’s call for educational AI and systems accessed by children to be “default deemed high risk unless it can be demonstrated otherwise” reflects the elevated scrutiny these systems deserve.

Third-Party Audits Are Necessary

The recommendation for “third-party audits at the application, model, and governance level” recognizes that self-assessment is insufficient when children’s safety is at stake.

Enforcement Requires Documentation

The call for transparency reporting and well-funded enforcement mechanisms highlights the need for documented, rigorous evaluation—creating the accountability trail that responsible development demands.

A Policy Roadmap With Real Teeth

The FAS brief provides a clear, actionable roadmap for the next US administration to ensure child safety in the AI era. Its three-pronged approach—coordinated framework, legislative action, and equitable access—addresses both protection from harms and access to benefits.

But what makes this proposal particularly powerful is its recognition that voluntary frameworks have failed and enforcement with real consequences is essential.

As the authors conclude: “From exacerbating body image issues to amplifying peer pressure and social comparison, from encouraging compulsive device use to reducing attention spans, from connecting youth to extremism, illegal products, and deadly challenges, the financial incentives do not appear to exist for technology companies to appropriately safeguard children on their own.”

The question is whether the administration will act on these recommendations before AI becomes even more deeply embedded in children’s lives—and the harms become even more entrenched.

Read the full policy brief: An Agenda for Ensuring Child Safety in the AI Era – Federation of American Scientists, Day One Project


CAIROS AI provides the specialized third-party auditing and assessment capabilities that policy frameworks like the FAS brief call for. Our expert-led child-safety red-teaming helps organizations meet the elevated standards required for high-risk AI systems accessed by children, providing the documentation and accountability mechanisms necessary for effective enforcement.
