UN launches AI for Safer Children: Global initiative empowers law enforcement to combat child exploitation with AI
Industry News September 15, 2025

By CAIROS AI Research Team

In 2020, to help address skyrocketing reports of child sexual exploitation and abuse materials, the United Nations Interregional Crime and Justice Research Institute (UNICRI) Centre for AI and Robotics and the Ministry of Interior of the United Arab Emirates (UAE) launched the AI for Safer Children initiative. This initiative aims to build the capacities of law enforcement worldwide to leverage the positive potential of artificial intelligence and related technology to combat child sexual exploitation and abuse.

The Issue: An Exponential Increase in Child Sexual Abuse Materials

The exponential increase in reports of child sexual abuse materials is overwhelming law enforcement resources and capacities worldwide. The scale of the crisis is staggering:

  • 2010: Approximately 100,000 reports according to the National Center for Missing and Exploited Children (NCMEC)
  • 2023: A staggering 36.2 million reports

This represents a 362-fold increase in 13 years. Moreover, each case can involve hundreds of thousands of images and videos that human investigators must review to identify potential victims. The workload has become unmanageable through traditional investigative methods alone.
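The fold increase follows directly from the two NCMEC report counts cited above; a quick sanity check of the arithmetic:

```python
# Sanity check: growth factor implied by the NCMEC report counts cited above.
reports_2010 = 100_000      # approximate reports in 2010
reports_2023 = 36_200_000   # reports in 2023

fold_increase = reports_2023 / reports_2010
print(f"{fold_increase:.0f}-fold increase")  # prints "362-fold increase"
```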

The Solution: AI for Safer Children Global Hub

To help address this overwhelming workload, the AI for Safer Children Global Hub was developed and launched in July 2022: a unique online platform where investigators working on crimes against children can access information on more than 80 cutting-edge AI tools, along with guidance on implementing AI responsibly to expedite their workflows.

Key Features of the Global Hub

The platform provides comprehensive resources available in all six UN official languages—Arabic, Chinese, English, French, Russian and Spanish—making it accessible to any agency regardless of technical skill or prior experience with AI tools.

AI Tools Catalogue: Provides law enforcement users with information on the range of AI tools that currently exist and filters to help identify potential tools that meet their specific needs.

Learning Centre: Enables law enforcement agents to learn more about leveraging AI to rescue children faster, investigation techniques to improve their workflow and how to safeguard their mental wellbeing.

Meet Other Officers: Strengthens communication and networking throughout the law enforcement community on using AI to combat child sexual exploitation and abuse.

Hundreds of investigators from law enforcement agencies in over half the world’s countries have joined, fostering international collaboration and the exchange of practical guidance on using AI to prevent, detect and prosecute child sexual exploitation and abuse.

Real-World Impact

After integrating AI tools into their workflows, law enforcement officers have commended the technologies on the Global Hub for significantly reducing analysis time, cutting forensic backlogs, and minimizing the emotional toll on investigators dealing with sensitive materials.

“After adopting a tool from the Global Hub’s catalogue, the time we spend on analyzing child abuse images and videos, which used to take 1 to 2 weeks, can now be done in 1 day,” said an Argentinian investigator.

Since no single AI tool can be the whole answer, the range of tools displayed on the Global Hub supports the successful conclusion of investigations in many different ways. From object detection, voice recognition, geolocation and chat analysis to case management and evidence authentication, each AI tool brings distinct capabilities.

Extending the Global Hub: Specialized Trainings

In May 2023, the AI for Safer Children initiative extended its capacity-building activities by launching specialized training programs adapted to local or regional law enforcement. These trainings are available free of charge to any Member State and are tailored to the specific needs and contexts of participating law enforcement agencies.

Key aspects of the training program:

  • Can be delivered either online or in person
  • Cover a broad range of AI tools and techniques across the investigative workflow
  • Include live demonstrations from technology providers
  • Feature guest presentations from relevant stakeholders
  • Are adapted to local contexts and specific agency needs

Since their launch, AI for Safer Children trainings have benefited over 1,000 investigators from countries across the globe, with many more agencies lined up for future sessions.

“The benefits we got out of the training - both in relation to the knowledge and skills we acquired, passed on to other units, and AI tools we have obtained - has already been massively impactive in currently ongoing investigations, which we expect to result in several arrests in coming month,” said Mike F. of the UK Online CSEA Covert Intelligence Team.

Ethical Framework and Principles

The development of the AI for Safer Children initiative follows an ethical and legal process and is guided by a set of seven core principles. The initiative ultimately seeks to contribute to realizing Target 2 of Goal 16 of the 2030 Agenda for Sustainable Development, which envisages an end to abuse, exploitation, trafficking and all forms of violence and torture against children.

The Broader Implications for AI Safety

This initiative represents a crucial recognition: AI systems designed to protect children require specialized expertise, robust testing methodologies, and careful ethical oversight. The success of the AI for Safer Children Global Hub demonstrates several key principles that apply across the child-safety AI ecosystem:

1. Specialization Matters

Generic AI tools are insufficient for child-safety applications. Purpose-built solutions developed with domain expertise deliver measurably better outcomes. The 362-fold increase in CSAM reports over 13 years shows that traditional approaches cannot scale to meet the challenge.

2. Proactive Testing is Essential

Before AI tools are deployed in child-safety contexts, they must undergo rigorous evaluation to ensure they work as intended and don’t introduce new risks. The Global Hub’s vetting process for tools reflects this principle.

3. Ethical Frameworks are Non-Negotiable

The initiative’s seven core principles demonstrate that effective child protection through AI requires more than technical capability—it demands ethical grounding, legal compliance, and human rights alignment.

4. Continuous Learning and Adaptation

The expansion from a tools catalogue to include comprehensive training programs shows that technology alone is insufficient. Users need education, support, and ongoing capacity building.

How This Connects to CAIROS AI’s Mission

While the AI for Safer Children initiative focuses on law enforcement investigation tools, CAIROS AI addresses a complementary challenge: ensuring that AI systems themselves don’t become vectors for child exploitation.

Our work sits at the prevention end of the spectrum. Through specialized red-teaming and safety evaluation, we help AI companies:

  • Identify vulnerabilities before they can be exploited to generate CSAM or facilitate grooming
  • Close safety gaps in generative models that could produce harmful content
  • Establish documentation of proactive testing and mitigation efforts
  • Build defensible compliance as regulatory frameworks evolve

The UNICRI initiative demonstrates what happens when prevention fails—investigators drowning in 36.2 million reports annually. Our mission is to reduce that number by ensuring AI systems are hardened against child-safety risks before deployment.

A Global Movement Taking Shape

The AI for Safer Children initiative has been commended in a 2023 Forbes article as “a pivotal force in the global fight against online child sexual exploitation and abuse… a testament to what can be achieved when innovation converges with humanity’s noblest aspirations.”

With law enforcement agencies in over half the world’s countries now participating, and over 1,000 investigators trained, the initiative represents a growing global consensus: AI can be a powerful force for protecting children, but only when deployed thoughtfully, ethically, and with specialized expertise.

Conclusion

The AI for Safer Children initiative demonstrates both the scale of the child exploitation crisis and the potential for AI to help address it. The 362-fold increase in CSAM reports over 13 years demands innovative solutions.

As AI systems become more capable and widespread, we need parallel efforts across the entire ecosystem:

  • Investigation tools that help law enforcement respond to abuse (AI for Safer Children)
  • Prevention systems that ensure AI itself doesn’t become an exploitation vector (CAIROS AI)
  • Regulatory frameworks that establish clear standards and accountability
  • Industry standards that prioritize child safety in AI development

Together, these efforts create a comprehensive approach to child protection in the age of artificial intelligence.

Learn more about the initiative: UNICRI AI for Safer Children

Access the Global Hub: aiforsaferchildren.org


CAIROS AI provides specialized child-safety red-teaming for organizations building or deploying generative AI systems. Our expert-led evaluations help identify vulnerabilities, strengthen defenses, and establish compliance-ready documentation.
