UNICEF report: Generative AI presents unprecedented risks and opportunities for children
By CAIROS AI Research Team
Generative AI, now front and center in our digital experiences, is seeing unprecedented uptake. ChatGPT has been one of the fastest-growing digital services of all time. Some children use it daily, for example when doing their homework or choosing what to wear. Its capabilities and adoption set off a storm of reactions around the impact and future of AI broadly—from open letters demanding a pause on AI development to calls to protect children against its harmful content.
In a comprehensive analysis, UNICEF’s Innocenti research center examines how we can empower and protect children in the face of artificial intelligence, acknowledging that while they don’t presume to have all the answers, “given the pace of AI development and adoption, there is a pressing need for research, analysis and foresight to begin to understand the impacts of generative AI on children.”
Rapid Adoption Among Children
Data on how children or youth around the world are using generative AI tools are limited, but initial studies indicate usage is much more than adults:
- A small online poll in the US indicated only 30% of parents say they have used ChatGPT compared to 58% of children aged 12-18
- Many children reported hiding their usage from parents or teachers
- Young adults (aged 18-29) who had heard of ChatGPT reported using it more than older counterparts, doing so for entertainment, learning, and work tasks
Embedding AI into Children’s Digital Lives
Generative AI is rapidly becoming embedded into software and apps already used by children:
Snapchat, used by almost 60% of 13-17-year-olds in the US and almost half of 3-17-year-olds in the UK, introduced an ‘AI friend’ chatbot to its 375 million daily active users.
Meta has announced plans to introduce AI agents into every product used by over 3 billion people every day, including Instagram and WhatsApp.
Educational platforms like Khan Academy have introduced personal tutors fine-tuned specifically for learning conversations.
This integration means generative AI will take an even greater hold of children’s digital experiences, and at a rapid pace.
Key Opportunities for Children
UNICEF identifies several potential benefits that generative AI can offer children and young people:
Personalized Learning
Systems that can explain difficult concepts and help children develop skills, tailored to a child’s specific needs. The systems can adapt according to learning style and speed to maximize the child’s learning experience.
Creative Tools
New ways to support children’s play and creativity, like generating stories, artwork, music or software with no or low coding skills.
Enhanced Accessibility
For children and young people with disabilities who would benefit from new ways to interface and co-create with digital systems.
Healthcare Advances
Early detection of health and developmental issues as children use the systems directly. Indirectly, generative AI systems can provide insights into medical data to support advances in healthcare.
Public Sector Innovation
Governments could augment citizen engagement channels to allow for additional languages and a mix of text, images or audio exchanges for people with low levels of literacy.
Critical Risks and Harms
At the same time, UNICEF identifies clear risks that the technology could be used by bad actors or inadvertently cause harm:
Persuasive Disinformation at Scale
“Generative AI can instantly create text-based disinformation that is indistinguishable from, and more persuasive in swaying people’s opinion than, human-generated content,” the report states.
In a test by the Center for Countering Digital Hate, Google’s Bard generated misinformation without disclaimers when prompted on 78 out of 100 false and potentially harmful narratives, including climate, vaccines, LGBTQ+ hate and sexism.
Looking ahead, synthetic content can potentially be personalized to individual users and be harder to combat. Future chatbots could be programmed to impersonate humans and adapt in real-time conversations to attempt to persuade people about issues or urge them to act.
The deleterious effect of an internet awash with misleading content is an erosion in trust as the authenticity of all content is questioned. With their cognitive capacities still in development, children are particularly vulnerable to the risks of mis/disinformation.
Photo-Realistic Child Sexual Abuse Material
“Generative AI is being used to create photo-realistic child sexual abuse material (CSAM),” the report warns. “The AI image generation models, which are open-source and can thus be operated with no protective guardrails, are trained on existing CSAM and photos of children from public social media accounts.”
While the volume of such content is still relatively small, researchers predict the number of cases will only grow. This would:
- Flood law enforcement with new ‘fake’ CSAM
- Detract from their ability to handle CSAM depicting real children
- Complicate victim-identification and rescue operations
A recent FBI alert noted an uptick in reporting of sextortion, including of minors, using AI-generated images. Bad actors use existing photos from social media accounts to generate explicit, sexually themed “deepfakes” with which to harass and blackmail victims.
Developmental and Privacy Concerns
Given the human-like tone of chatbots, where the line between animate and inanimate blurs, what are the impacts on children’s development and privacy when they interact with these systems?
Research indicates they may influence:
- Children’s perceptions and attributions of intelligence
- Their cognitive development
- Social behavior, especially during different developmental stages
When generative AI confidently makes up false information (known as “hallucinations”), what impact does this have on children’s understanding and education? Examples of dangerous outputs:
- Amazon Alexa advised a child to stick a coin in an electrical socket
- Snapchat’s AI friend gave inappropriate advice to reporters posing as children
As children interact with generative AI systems and share their personal data in conversation, the implications for children’s privacy remain deeply concerning.
Behavioral Modification and Manipulation
AI systems already power much of the digital experience, largely in service of business or government interests. Microtargeting used to influence user behavior can limit and heavily influence a child’s worldview, online experience and level of knowledge.
As UNICEF notes, “children are highly susceptible to these techniques which, if used for harmful goals, are unethical and undermine children’s freedom of expression, freedom of thought and right to privacy.”
Historian and philosopher Yuval Noah Harari warns that the online battleground could “shift from attention to intimacy.” Bad actors could use AI bots to manipulate real users in hidden and persuasive ways—first gaining children’s trust and, over time, influencing them in subtle ways for commercial or political gains.
Even without bad intent, research shows that biases in generative AI models nudge users towards or against certain views. Students might be unconsciously pushed to take on certain views when using AI systems to help them learn.
Aggravation of Existing Inequalities
“Since emerging technologies are not evenly distributed within and between countries, they have the potential to aggravate existing inequalities,” UNICEF warns.
Equitable, inclusive and responsible generative AI needs to:
- Cater to different contexts and developmental stages of all children
- Especially serve those from marginalized communities
- Be available to every child when beneficial
- Respect children’s data rights regardless of where data come from
UN Secretary General António Guterres believes that guardrails, grounded in human rights, transparency, and accountability, are needed to ensure that AI development “benefits all.”
Future of Work and Education
AI combined with robotics first impacted blue-collar work through automation. Now it’s predicted that the daily work of teachers, telemarketers, and professionals in legal services and securities will see high exposure to generative AI tools.
This potential disruption has great relevance for:
- Children’s future work life
- How and what education is provided to them today
Additionally, work practices in today’s AI supply chain are questionable. Gig workers, often based in developing countries, perform low-paying, stressful tasks like assessing video clips for sexual content and rating chatbot responses—bringing into question whether they constitute ethical and decent work.
The Urgent Need for Action
UNICEF’s analysis illustrates that the impacts of generative AI will be wide-ranging and demand a proactive response from those who regulate and develop AI systems to ensure these empower and protect children.
The report emphasizes several critical points:
Children’s heightened vulnerability: With cognitive capacities still in development, children are particularly vulnerable to AI risks. Children will experience an increasingly high level of exposure to AI systems over the course of their lives, with impacts in childhood having long-term effects.
Intergenerational responsibility: As noted by Secretary-General Guterres, “present generations have a responsibility to halt and prevent developments that could threaten the survival of future generations, including new technologies.”
Need for transparency and accountability: There needs to be greater transparency, responsible development from generative AI providers and advocacy for children’s rights.
Multi-stakeholder engagement: The report highlights the critical need for strong advocacy and education investment to ensure that communities, children, parents, teachers, policymakers and regulators can engage in meaningful conversation with tech companies towards child-centered AI.
Existing Frameworks and Next Steps
As a starting point, existing AI resources provide direction for responsible AI today:
- UNICEF’s Policy Guidance on AI for Children has nine requirements to uphold children’s rights in AI policies and practices
- World Economic Forum’s AI for Children toolkit provides advice to tech companies and parents
But advances in generative AI mean existing policies must be interpreted in novel contexts, and new guidance and regulations may need to be developed.
Policymakers, tech companies and others working to protect children need to act urgently. They should:
- Support research on the impacts of generative AI
- Engage in foresight—including with children—for better anticipatory governance responses
- Ensure greater transparency in AI development
- Practice responsible development
- Advocate for children’s rights in AI systems
How This Connects to CAIROS AI’s Mission
UNICEF’s comprehensive analysis reinforces the critical importance of proactive child-safety testing for AI systems. The risks they identify—from deepfake CSAM generation to subtle behavioral manipulation—are precisely the vulnerabilities that specialized red-teaming can detect and help mitigate before deployment.
Several key themes align directly with our work:
Prevention over reaction: UNICEF notes that children are already hiding their AI usage and being exposed to harmful outputs from systems that are “live and accessible to children while continuing to produce misleading and harmful content.” Robust pre-deployment testing is essential.
Specialized expertise required: The developmental vulnerabilities of children, the unique risks of synthetic CSAM, and the subtle nature of persuasive manipulation all demand specialized knowledge that generic AI safety testing cannot provide.
Transparency and accountability: UNICEF calls for greater transparency from AI providers. Our red-teaming assessments create documented evidence of proactive testing and identified vulnerabilities—essential for both responsible development and regulatory compliance.
Multi-stakeholder collaboration: Just as UNICEF emphasizes the need for communities, parents, teachers, and policymakers to engage with tech companies, our work facilitates that engagement by providing clear, actionable insights into AI system safety.
Conclusion
UNICEF’s research makes clear that we stand at a critical juncture. Generative AI offers genuine opportunities to enhance children’s learning, creativity, and accessibility. But it also introduces unprecedented risks—from industrialized production of synthetic CSAM to subtle psychological manipulation at scale.
The 58% of children aged 12-18 already using ChatGPT, often hidden from their parents, are navigating an AI landscape that even experts struggle to fully understand. The embedding of AI chatbots into platforms used by billions of children worldwide means the stakes have never been higher.
As UNICEF emphasizes, children are not just users of today’s AI systems—they are the generation that will live with the consequences of how we shape AI today. Present generations have a responsibility to ensure that AI development empowers and protects children rather than exploiting their vulnerabilities.
That responsibility demands urgent action: robust research, anticipatory governance, transparent development practices, and specialized safety evaluation that centers children’s rights and developmental needs.
Read the full UNICEF report: Generative AI: Risks and opportunities for children
CAIROS AI provides specialized child-safety red-teaming for organizations building or deploying generative AI systems. Our expert-led evaluations help identify vulnerabilities, strengthen defenses, and establish compliance-ready documentation that upholds children’s rights.