
By Dan Furness, CIS Head of Safeguarding & Well‑being
Artificial intelligence is rapidly becoming part of everyday life for young people. From chatbots and search tools to voice assistants and image‑based apps, AI is shaping communication, learning and social interaction in ways that even a year ago felt unimaginable.
During our recent Smart Tech, Safe Choices webinar, I was joined by Will Gardner, CEO of Childnet, to explore the opportunities and risks emerging from AI, and how schools and families can work together to help young people navigate this changing landscape safely. These are also the key themes of Safer Internet Day 2026 on 10 February.
What challenges are young people facing?
Young people are enthusiastic about AI’s potential, but they’re also encountering new risks, often before the adults in their lives fully understand them.
One significant challenge is the invisibility of AI. Many young people say they don’t always know when they are using AI, whether in games, messaging platforms or everyday online interactions.
As AI becomes more embedded in digital life, distinguishing between real and artificial content grows increasingly difficult.
Reliability of information is another concern. AI-generated content looks polished and convincing, but isn’t always accurate.
With no universal rules to identify whether content is AI‑produced, young people can struggle to assess what is trustworthy.
There are also emerging concerns around over‑reliance on AI, both for learning and for emotional support.
For some, especially socially isolated or neurodiverse young people, AI companion tools feel like safe, non‑judgmental spaces. But their unconditional agreement and anthropomorphic style can create confusion and unhealthy dependency.
Perhaps most worrying is the rapid rise in AI‑enabled image abuse, including bullying, sexual harassment and extortion. The “nudification” of images, once the work of specialist tools, can now be done instantly by apps that should never have made it into app stores.
Young people report deep fear when manipulated images appear so realistic that even their parents might not know they’re fake.

The school community under strain
Schools are navigating a complex mix of expectations, risks and trust issues. Students worry that their peers are using AI to cheat without detection, and they fear they’ll be wrongly accused of using AI themselves.
One student’s personal statement, written entirely by hand, was flagged by an AI checker as “60% AI‑generated,” forcing the student to rewrite their own work so it would read as less “artificial.”
These tensions can strain relationships between students, educators and families. Without clear guidance, young people don’t always know what’s acceptable or what schools expect.

The positive potential of AI
Despite these challenges, AI also brings opportunities, especially in education.
Young people with additional learning needs report that AI tools help them understand complex topics, access information faster and receive personalised explanations. Many find that AI empowers them to work more independently and confidently.
The immediacy of AI can also support well-being when used appropriately, helping young people find information quickly or rehearse conversations they may feel anxious about.
And importantly, increased awareness of AI encourages young people to develop critical thinking skills, a powerful protective factor in the digital world.
How schools & families can support safe, responsible use
A recurring message during our webinar was clear: young people’s voices must be at the centre of conversations about AI. Schools are encouraged to:
- Create student online safety committees to shape policy and share peer‑to‑peer advice
- Facilitate open, non‑judgmental discussions about how students are using AI, what helps them, and what worries them
- Provide clear guidance on acceptable use, boundaries, and what to do if something goes wrong
- Prepare staff through regular training so they can confidently address rapidly evolving risks
Parents and carers play a critical role. Schools can support families by inviting them into the conversation, sharing resources, and encouraging structured discussions at home.
When parents understand the risks and terminology of AI and adopt a non-judgmental approach, young people feel more comfortable coming forward with concerns.

Opportunities to help you in February & March
Take advantage of these two opportunities in February and March, both offering resources and guidance to support you:
- 10 February: Safer Internet Day. This year’s theme is Smart Tech, Safe Choices. Schools worldwide use the day to open conversations, access free resources from Childnet, and empower young people to help shape a safer digital environment for everyone.
- 10 March: Workshop on Online Harms & Technology. This in-depth training focuses on developing a whole-school approach to online safety and responding to complex online harm. Experts from our team, along with Childnet International, Europol, SWGfL, and the UK Safer Internet Centre, will lead discussions on preventative strategies, and participants will help design and examine school responses through several complex case studies. We invite heads of school, principals, social-emotional counsellors, teachers, university guidance counsellors, safeguarding leads, child protection officers, school board members, and members of school accreditation teams to attend.
AI brings both promise and complexity, but with education, dialogue, and collaboration, we can ensure that young people feel informed, confident, and supported as they navigate this new world.
Key questions this blog post answers:
- How can schools help young people use AI tools safely and responsibly?
- What are the key risks and benefits of AI for children and students?
- What guidance should educators give to students about using AI ethically?


