Classroom AI apps expose children to porn site trackers and give UK students wrong US helplines, new report reveals
By CAIROS AI Research Team
Children using well-known AI-powered apps in classrooms, such as Grammarly, Character.AI and others, are being tracked by adult website advertisers, given dangerous misinformation about self-harm and taught false facts, according to new research carried out by LSE and 5Rights Foundation’s Digital Futures for Children centre.
The alarming findings emerge as the House of Lords prepares to debate crucial amendments to the Children’s Wellbeing and Schools Bill, which would give the Government new powers to regulate the use of tech in the classroom, including AI.
Major Failures Across Popular Classroom Apps
Researchers carried out a child-rights audit of five AI tools widely used in education settings—Character.AI, Grammarly, MagicSchool AI, Microsoft Copilot and Mind’s Eye—uncovering systematic rights-based concerns related to privacy, safety and accuracy.
Children’s Data Exposed to Commercial Tracking from Adult Websites
Despite claims about its safety and privacy, AI-powered personalisation app MagicSchool AI enables tracking cookies by default for users as young as 12. This exposes children to commercial tracking from adult website advertisers, including erotic and friend-finder websites.
Similarly, Grammarly allows marketing platforms of companies like Facebook, Microsoft and Google to use education account data for commercial purposes.
AI Chatbots Teaching False Facts and Creating Dangerous Dependencies
AI chatbots used for classroom learning have been found to sometimes confuse fictional characters with real figures and provide students with inconsistent information, which is inconducive to a learning environment.
The platform’s design can also trigger unhealthy emotional dependency, with some child users reporting severe mental health struggles.
Vulnerable Children Abandoned When Seeking Help
Researchers found that children in the UK reporting bullying or suicidal thoughts on the MagicSchool AI chatbot can be provided with US emergency helpline numbers instead of UK resources.
When researchers tested the tool, it also refused to engage with children seeking help until they explicitly mentioned suicide multiple times.
Plagiarism Detectors Falsely Accusing Students
AI plagiarism detection tools, available in apps such as Grammarly, have well-known limitations, including false accusations. However, this does not prevent the company from advertising its effectiveness.
Confusingly, the app tells teachers to continue using their own professional judgement, meaning the burden remains on the workforce.
Data of Children with Disabilities Shared Without Consent
Mind’s Eye (Smartbox)—designed for adults and children with disabilities—shares children’s data with its group companies in the US and EU without explaining why or offering any option to refuse, while biased outputs risk making these children feel excluded rather than supported.
The Unregulated EdTech Landscape
EdTech remains unregulated in England beyond basic data protection laws. There is no central list of AI-facilitated EdTech products in schools, and no public list of approved products meeting expected safety standards.
The proposed amendments being discussed would require that:
- EdTech must be effective, safe and do what it claims to do
- Where AI is used, it must be clearly labelled
- Children’s personal data should not be stored outside of the school by third parties
With GenAI being used across most school subjects, according to the Department of Education, Parliament must act now before these unregulated tools become even more entrenched in children’s education.
Expert Perspectives
Dr. Ayca Atabey, lead author of the study, said: “Across all GenAI tools we studied, children’s perspectives were largely excluded from their design, governance and evaluation, and all tools undermine children’s rights to privacy and protection from commercial exploitation.”
Colette Collins-Walsh, Head of UK Affairs at 5Rights Foundation, said: “The pandemic saw a rapid digitalisation of education, but in the five years since, no one has stopped to think if this is benefiting children. This is having serious consequences: children are being tracked by erotic websites and chatbots are providing wrong emergency helplines, risking lives and creating dependencies that can damage mental health.
As the Government presses ahead with spreading AI far and wide, we must have rules in place to protect children and their education. In the Children’s Wellbeing and Schools Bill, parliament has a chance to ensure this happens.”
Key Research Findings
The study found that:
- GenAI is being rapidly integrated into educational settings, in advance of any independent evidence of its learning benefits
- While each GenAI tool audited offered the potential to facilitate learning, expression and accessibility, each comes with notable risks
- None of the tools were found to be compliant with UK data protection regulation, including the Age Appropriate Design Code
- Product walkthroughs and data tracking methods revealed a host of risks to children’s privacy, safety, education, expression, inclusion and freedom from commercial exploitation
- Improved policy and guidance are urgently needed to protect, remedy and fulfil children’s rights at school as GenAI is more widely used
The Broader Context
While EdTech can be helpful to schools and teachers for lesson planning and other administrative tasks, there have been widely reported issues:
- EdTech used in schools is highly invasive of children’s privacy and relies on extensive collection of children’s data, allowing it to be used for commercial and profiling purposes, as well as datasets for training AI models
- There is a shortage of independent evidence on whether these technologies deliver better educational outcomes for children
- EdTech services are exempt from key legislation, including the Online Safety Act (2023), bypassing significant regulation that aims to keep children safe
- Growing concern amongst parents about screen time, with children’s screen exposure increasing by 50 minutes as a result of both education and leisure after the pandemic
- The vast majority of EdTech used in schools are commercial, for-profit services and products, making the public education system vulnerable to commercial and market forces
- Total reliance on technology in schools makes education vulnerable to cyberattacks, which are a growing problem
What This Means for AI Safety
This research underscores a critical gap in the AI safety ecosystem: the urgent need for comprehensive, rights-based evaluation of AI systems used by children. The findings demonstrate that:
- Testing must go beyond functionality to include privacy, safety, accuracy, and developmental appropriateness
- Child-specific risks require specialized expertise in child protection, data rights, and age-appropriate design
- Proactive evaluation is essential before AI tools are deployed in educational settings
- Regulatory frameworks must evolve to match the pace of AI adoption in schools
At CAIROS AI, we recognize that child safety in AI extends beyond content moderation to encompass the entire ecosystem of how children interact with AI systems—including in educational contexts where the power dynamics and trust relationships create unique vulnerabilities.
The Path Forward
The research from the LSE and 5Rights A Better EdTech Future for Children project will continue to develop best practices, rights-based recommendations and stimulate public debate over the best ways to achieve more inclusive, transparent and accountable digital learning environments.
For AI developers and EdTech companies, this research serves as a critical reminder that child safety must be embedded from the design phase, not retrofitted after deployment. The stakes are too high, and the risks too real, to continue with inadequate testing and oversight.
Read the full report: Digital Futures for Children - AI in EdTech Case Study
CAIROS AI provides specialized child-safety red-teaming for organizations building or deploying AI systems that interact with children. Our expert-led evaluations help identify vulnerabilities across privacy, safety, and developmental appropriateness to ensure AI systems protect children’s rights and wellbeing.