UNICEF releases comprehensive policy guidance for child-centred artificial intelligence
By CAIROS AI Research Team
In November 2021, UNICEF released version 2.0 of its groundbreaking Policy Guidance on AI for Children—a comprehensive framework designed to ensure that artificial intelligence systems uphold the rights of every child. Developed in partnership with the Government of Finland and informed by consultations with over 200 experts and 250 children from around the world, this guidance provides governments and businesses with concrete requirements and recommendations for child-centred AI.
As UNICEF states in the guidance: “Children interact with or are impacted by AI systems that are not designed for them, and current policies do not address this.”
Why This Guidance Is Critical
AI systems are fundamentally changing children’s lives. Children are already interacting with AI technologies embedded in toys, virtual assistants, video games, chatbots, and adaptive learning software. Algorithms determine what videos they watch, what news they read, what music they hear, and even who they’re suggested to befriend.
Beyond these direct interactions, children’s lives are indirectly impacted by automated decision-making systems that determine welfare subsidies, quality of health care and education access, and their families’ housing applications.
The Policy Gap
Despite this widespread impact:
- Over 60 countries have released AI policy initiatives, but these focus largely on economic growth and national competitiveness
- Over 160 sets of AI principles exist, but even though they mention human rights, very few seriously consider how AI systems actually impact children’s rights
- In UNICEF’s review of 20 national AI strategies, engagement on children’s issues was “immature” with little acknowledgement of how AI affects children
Simply put: Children represent at least one-third of online users, yet their unique needs and rights are largely absent from AI policy and development efforts.
Why Children Need Special Consideration
Children are not just small adults. They have unique characteristics that require special attention:
Developmental Vulnerability
- Children have unique physical and psychological attributes at different developmental stages (early childhood, mid childhood, younger adolescence, older adolescence)
- They are less able to fully understand the implications of AI technology
- Their cognitive capacities are still developing, making them particularly vulnerable to misinformation and manipulation
Lack of Agency and Advocacy
- Children often lack opportunities or avenues to communicate their opinions
- They frequently lack advocates to support them
- They lack resources to respond to instances of bias or rectify inaccuracies in their data
Long-term Impact
- Children will have an increasingly high level of exposure to AI systems over their lives
- Impacts in childhood have long-term effects
- Their development and education will be increasingly mediated and filtered by AI
The Foundation: Three Lenses on Children’s Rights
The guidance builds on the Convention on the Rights of the Child (CRC), viewing AI policy and systems through three complementary lenses:
Protection = { do no harm }
Children need to be protected from any harmful and discriminatory impacts of AI systems and interact with them safely. AI systems should also be leveraged to actively protect children from harm and exploitation.
Provision = { do good }
The opportunities that AI systems bring to children of all ages and backgrounds—such as supporting education, health care, and the right to play—need to be fully leveraged when it is appropriate to use AI systems.
Participation = { include all children }
Ensuring participation means that children are given agency and opportunity to shape AI systems and make educated decisions on their use of AI and the impact it can have on their lives. All children should be empowered by AI and play a leading role in designing a responsible digital future.
Nine Requirements for Child-Centred AI
Building on this foundation, UNICEF provides nine requirements that governments and businesses should meet when developing AI policies and systems:
1. Support Children’s Development and Well-being
“Let AI help me develop to my full potential.”
Key recommendations:
- Prioritize how AI systems can benefit children in AI policies and strategies
- Develop and apply a design-for-child-rights approach
- Leverage AI to support environmental sustainability (children’s futures depend on a healthy planet)
- Integrate well-being metrics and frameworks into AI system design
2. Ensure Inclusion Of and For Children
“Include me and those around me.”
Key recommendations:
- Design for the widest possible range of users regardless of age, gender identities, abilities, or characteristics
- Ensure diversity amongst those who design, develop, collect data, implement, research, regulate, and oversee AI systems
- Adopt an inclusive design approach for AI products
- Support meaningful child participation in design and development processes
3. Prioritize Fairness and Non-Discrimination
“AI must be for all children.”
Key recommendations:
- Actively support the most marginalized children so they may benefit from AI systems
- Develop datasets that include a diversity of children’s data
- Test algorithms continuously for fairness and adjust as needed
- Recognize there is no single optimal technical definition of fairness
The guidance notes: “If the data used to train AI systems does not sufficiently reflect children’s varied characteristics, then the results may be biased against them. Such exclusion can have long-lasting effects for children, impacting a range of key decisions throughout their lifetime.”
4. Protect Children’s Data and Privacy
“Ensure my privacy in an AI world.”
Key recommendations:
- Follow a responsible data approach for handling data for and about children
- Promote children’s data agency (with support from parents/caregivers)
- Adopt a privacy-by-design approach
- Consider protections at the group level, not just individual level
Critical principle: “Children merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned.”
5. Ensure Safety for Children
“I need to be safe in the AI world.”
Key recommendations:
- Call for mechanisms to assess and continually monitor AI’s impact on children
- Continuously assess throughout the entire AI development life cycle
- Require testing for safety, security, and robustness
- Leverage AI to actively promote children’s safety (e.g., detecting child sexual abuse material)
The guidance emphasizes a risk-based, safety-by-design approach backed by top-level commitment to halt harmful AI practices.
6. Provide Transparency, Explainability, and Accountability
“I need to know how AI impacts me. You need to be accountable for that.”
Key recommendations:
- Explicitly address children when promoting explainability and transparency
- Use age-appropriate language to describe AI
- Make AI systems transparent so children and caregivers can understand the interaction
- Notify children when they’re interacting with AI (not a human)
- Establish oversight bodies and mechanisms for redress
Important principle: “AI should not be used as the only input to determine key life decisions that impact children, for example medical diagnoses, welfare decisions or processing school applications, without a human-in-the-loop to make the final decision.”
7. Empower Governments and Businesses with Knowledge
“You must know what my rights are and uphold them.”
Key recommendations:
- Ensure capacity-building on AI and child rights for policymakers, management, and developers
- Capitalize on customer demand for trusted and transparent AI solutions for children
- Commit to child-centred AI with mechanisms to realize this in practice
The guidance emphasizes: “Knowledge of the opportunities and risks around AI and children must be translated into action.”
8. Prepare Children for Present and Future Developments
“If I am well prepared now, I can contribute to responsible AI for the future.”
Key recommendations:
- Develop education programs that include technical and soft skills needed to flourish in an AI world
- Provide AI literacy education (including basic AI concepts, data literacy, ethics of AI)
- Assess and develop teachers’ AI awareness and skills
- Facilitate collaboration between businesses and educational institutions
- Develop awareness campaigns for parents, caregivers, and society
AI literacy should help children become “conscious users of AI-based systems” while developing critical thinking and emotional intelligence—skills that current AI systems cannot replicate.
9. Create an Enabling Environment
“Make it possible for all to contribute to child-centred AI.”
Key recommendations:
- Support infrastructure development to address the digital divide
- Provide funding and incentives for child-centred AI policies
- Support research on AI for and with children across the system’s life cycle
- Engage in digital cooperation and creation of digital public goods
The guidance warns of an emerging “AI divide” where children with more digital opportunities stand to benefit more from AI systems, while all children share the risks.
Key Opportunities AI Presents for Children
The guidance identifies several important opportunities when AI is developed responsibly:
Educational Benefits
- Personalized learning tailored to each child’s specific needs, learning style, and speed
- Support for children with learning difficulties through one-on-one intelligent tutoring
- Development of critical thinking and problem-solving skills
Health Improvements
- Early detection of health and developmental issues
- Support for children with disabilities (e.g., isolating voices from ambient noise for hearing-impaired children)
- Emotional support detection (under careful ethical oversight)
Creative Expression
- New ways to support children’s play and creativity
- Generating stories, artwork, music, or software with low or no coding skills
Enhanced Accessibility
- New ways for children with disabilities to interface and co-create with digital systems
- More accessible and inclusive services
Critical Risks That Must Be Addressed
The guidance also identifies serious risks that demand urgent attention:
Systemic Discrimination and Bias
“Algorithmic bias is the systemic under- or over-prediction of probabilities for a specific population, such as children.”
Causes include:
- Unrepresentative, flawed, or biased training data
- Context blindness
- Uninformed use of outcomes without human control
Limitation of Opportunities Through Profiling
AI-based profiling can “lock individuals into a user profile” or “confine them to a filtering bubble, which would restrict and confine their possibilities for personal development.”
Microtargeting techniques can:
- Limit a child’s worldview and online experience
- Heavily influence the level of knowledge available to them
- Undermine freedom of expression and opinion
- Reinforce stereotypes and limit possibilities
Privacy Infringement
AI systems challenge traditional notions of consent, purpose limitation, and transparency. Young children may not grasp the concept of privacy and may disclose too much information to AI systems they interact with.
Dangerous or Harmful Outputs
Real examples cited in the guidance:
- Amazon Alexa advised a child to stick a coin in an electrical socket
- Snapchat’s AI friend gave inappropriate advice to reporters posing as children
- Mental health chatbots tested by the BBC failed to properly handle children’s reports of sexual abuse
Behavioral Manipulation
Historian Yuval Noah Harari warns that the online battleground could “shift from attention to intimacy”—bad actors could use AI bots to first gain children’s trust and then influence them over time in subtle ways for commercial or political gains.
Aggravation of Inequalities
“Since emerging technologies are not evenly distributed within and between countries, they have the potential to aggravate existing inequalities.”
The digital divide results in:
- Differential access to AI-enabled services
- Prevention of children from reaching their full potential
- Inability to participate in global governance dialogue
- Difficulty competing with more established market competitors
What Children Think About AI
As part of developing the guidance, UNICEF consulted almost 250 children through nine workshops in Brazil, Chile, South Africa, Sweden, and the United States. Key themes emerged:
Excitement tempered with caution: While there is much about AI that excites children, they don’t want AI to completely replace engagement with humans—especially on sensitive issues.
Parents as key stakeholders: Children felt parents should educate them about AI risks and be more involved in their digital lives. Yet they acknowledged most parents don’t have sufficient knowledge.
High expectations of industry: Children called for greater transparency from companies and voiced the need for companies to educate people about their products. They feel companies should recognize children may use products even if not the intended users.
Privacy concerns: Children are worried that AI systems collect too much data. Some accept a level of privacy loss as a trade-off for using AI-based systems.
Local context matters: For example, children in Johannesburg were particularly worried about AI-based automation causing job losses, reflecting South Africa’s very high youth unemployment rate.
Implementation Tools and Resources
To support practical implementation, UNICEF developed four complementary tools:
- Operationalization roadmap for policymakers - Strategic guidance for policy development
- AI development canvas for software teams - Practical framework for developers
- AI guide for parents - Helping caregivers navigate AI with children
- AI guide for teens - Empowering young people themselves
Pilot Case Studies: Policy in Practice
Following the draft release in 2020, UNICEF worked with organizations worldwide to pilot the guidance. Examples include:
SomeBuddy (Finland/Sweden): CrimeDetector system helps children who experience online harassment, using NLP with human-in-the-loop review to prevent false positives.
Allegheny County (USA): Hello Baby prevention initiative uses a predictive risk model with strong safeguards—families can opt out, and algorithm scores aren’t kept on file.
Helsinki University Hospital (Finland): Milli chatbot for adolescent mental health, co-designed with students who recommended making the avatar unmistakably virtual to increase trust.
AI Sweden: Collaborated with three Swedish municipalities to evaluate the guidance and define components for a national framework supporting child-centred AI.
Honda Research Institute/European Commission: Haru robot consulted children in Japan and Uganda about fairness and explainability concepts, which varied widely by culture.
Scotland Leads the Way
In March 2021, the Government of Scotland launched its national AI strategy and announced its formal adoption of UNICEF’s policy guidance—the first country to do so. This signals the growing recognition and validity of the guidance.
The Urgent Need for Action
UNICEF’s guidance makes clear that we are at a critical juncture. As the document states:
“Children will experience an increasingly high level of exposure to AI systems over the course of their lives, with impacts in childhood having long-term effects. Addressing their needs is critical.”
Moreover: “The way AI is shaped today will have significant bearings on future generations. Present generations have a responsibility to halt and prevent developments that could threaten the survival of future generations, including new technologies.”
The guidance emphasizes several critical imperatives:
Multi-stakeholder approach: Since AI impacts many aspects of society, coordinated efforts crossing organizational and departmental boundaries are essential—including children and child rights advocates as stakeholders.
Adaptation to local context: While the requirements are universal, governments and companies must adapt them according to local contexts, including alignment with national development plans.
Balancing indivisible rights: All child rights are indivisible, but upholding them equally and simultaneously requires striking a delicate balance (e.g., protecting privacy while collecting sufficient health data).
Moving beyond principles to practice: While there is growing consensus about what ethical AI principles require, far less is known about how to effectively apply them—especially for children.
How This Connects to CAIROS AI’s Mission
UNICEF’s comprehensive policy guidance reinforces the critical importance of specialized child-safety evaluation for AI systems. Every requirement they outline—from data protection and bias prevention to transparency and safety testing—demands expertise that generic AI development cannot provide.
Several themes align directly with our work:
Specialized knowledge required: The guidance emphasizes that “children are less able to fully understand the implications of AI technology” and have unique developmental needs. This demands specialized red-teaming that understands child development, psychology, and rights.
Human-in-the-loop essential: UNICEF’s requirement that “AI should not be used as the only input to determine key life decisions that impact children” aligns with our emphasis on expert human evaluation, not just automated testing.
Proactive assessment: The call for “mechanisms for assessing and continually monitoring the impact of AI systems on children” throughout the development life cycle is precisely what specialized red-teaming provides.
Safety-by-design: The guidance’s emphasis on a risk-based, safety-by-design approach backed by commitment to halt harmful AI practices reflects our proactive testing methodology.
Transparency and accountability: UNICEF’s requirement for oversight bodies and mechanisms for redress highlights the need for documented, rigorous evaluation—creating the accountability trail that responsible development demands.
Conclusion: From Policy to Practice
UNICEF’s Policy Guidance on AI for Children represents the most comprehensive framework to date for ensuring AI systems respect, protect, and fulfill children’s rights. With nine clear requirements, dozens of actionable recommendations, and practical implementation tools, it provides governments and businesses with a roadmap for child-centred AI.
But as UNICEF acknowledges: “This document should be seen as an early contribution to child-centred AI.” The guidance needs to be applied consistently, with experiences from the field shared openly to validate and enrich it over time.
The stakes could not be higher. Today’s children are the first generation that will never remember a time before smartphones. They are the first generation whose health care and education are increasingly mediated by AI-powered applications. They will be the generation living with the consequences of how we shape AI today.
As UNICEF emphasizes: “Our collective actions on AI today are critical for shaping a future that children deserve.”
Read the full guidance: Policy guidance on AI for children 2.0
Learn more about the project: UNICEF AI for Children initiative
CAIROS AI provides specialized child-safety red-teaming that operationalizes UNICEF’s requirements for child-centred AI. Our expert-led evaluations help organizations implement these principles in practice—identifying vulnerabilities, strengthening safeguards, and establishing the documentation needed for accountability and compliance.