Generative AI: A whole school approach to safeguarding children


By Leila Holmyard, CIS Affiliated Consultant and Frankfurt International School Assistant Director of Safeguarding, Well-being & Belonging, and John Mikton, Technology for Learning Coordinator, Ecole Internationale de Genève, Primary Campus La Châtaigneraie

This blog draws from the CIS Model: Whole-School Approach to Safeguarding* and shares how schools can take a comprehensive approach to protecting children from risks related to the use of generative AI.

Leading safeguarding

  • Research from UNICEF in 2021 found that one in five girls and one in thirteen boys globally have been sexually exploited or abused by the age of 18, with technology featuring in almost all cases.
  • Recent years have seen a rapid increase in the number of deepfake images on the web, growing from roughly 14,000 in 2019 to 145,000 in 2021.
  • According to a Sentinel report, the vast majority of deepfakes (96%) are non-consensual pornographic images, around 90% of which target women.

What are deepfakes?

A deepfake is an image, video, audio file, or GIF that has been manipulated by a computer to use someone’s face, body or voice artificially. This could be done with or without the subject’s consent.

School boards and leaders can work together to understand emerging safeguarding risks related to AI in order to take action to protect the students in their care. Schools need to be prepared to respond to incidents when they occur, with student-generated images becoming increasingly common in cases of peer-on-peer harm. This TIE article describes multiple incidents around the world where fake image/video generation by students has caused distress and harm to others, including teachers. 

The risk from deepfakes can also come from outside the school community. In February 2023, the FBI, in partnership with law enforcement agencies in Australia, Canada, New Zealand and the UK, issued a joint warning about the global sextortion crisis. They warned that children, primarily boys, are being coerced into sending explicit images online and extorted for money. Criminals can also use deepfakes to create incriminating, embarrassing, or suggestive material about students from their existing online identities, such as profile pictures.

These online schemes have a significant real-world impact on the mental health and well-being of young people and have, tragically, been linked to suicides. Given that these are money-driven criminal activities, the wealthy demographic of international schools may put our students at heightened risk of being targeted.

School boards and leadership teams can consider these and other risks related to emerging technologies through their risk register—a risk management tool used to identify potential risks and document the actions taken by the school to mitigate these risks. The school’s safeguarding lead can also engage in training to ensure that school responses reflect recommended practice, especially in relation to peer-on-peer abuse involving technology.

Learn more about Generative AI & Education

Our members can follow the series of webinars in the CIS Community portal > KnowledgeBase > Webinars covering topics like:

Generative AI and education: emerging educational practice and regulations, the potential to support student transitions to higher education, and the potential for learning communities.

Policy development

Like many organisations globally, international schools have been grappling with how the emergence of generative AI impacts our ways of working, and with adapting or creating policies to address its use. It is a challenging position to be in, given that even governments and major corporations struggle to keep pace with changing technology, often introducing new laws and regulations only after serious unanticipated concerns arise.

Yet, with its vast diversity and richness, the international school community is uniquely positioned to lead safeguarding policy development concerning AI in education.

As a community, we already benefit from numerous regional and global international school education organisations offering structures and networks for collaboration. Leveraging these can facilitate the sharing of strong practices, the development of common standards, and the advocacy for policies that ensure the safe, ethical, and effective use of AI within our community and beyond.

While developing a whole school AI policy, like this template, is important, schools should also interweave AI into existing safeguarding policies and procedures.

For example, your school’s safeguarding handbook can include definitions of AI and AI-related terminology, such as deep fakes and student-generated images. Creating a category of Generative AI (or similar) in your safeguarding record-keeping system will allow you to begin collecting data and revising documentation to reflect your school’s responses to these new and complex forms of harm.

Identification & response

The CIS Safeguarding Team has seen an increase in schools seeking support with incidents related to generative AI.

This undoubtedly reflects the experience of schools globally as AI technologies become more sophisticated and prevalent.

CIS provides comprehensive guidance for members in responding to peer-on-peer harm, and many of the same principles can be applied to cases where students use generative AI in hurtful or harmful ways.

Schools also need to prepare specifically for recognising and responding to incidents related to generative AI. This might include:

  • Adding Generative AI as a topic in your school’s annual safeguarding training to raise teachers’ awareness of this emerging risk
  • Clarifying that the school responds to all forms of harm and abuse between students, irrespective of where the harm takes place (even online), when it affects the safety and well-being of the students in school
  • Using case studies to anticipate future challenges and discuss with teachers how your school could manage incidents
  • Focusing on upstander approaches during teacher training and in student education, such as what students can do when they receive an inappropriate image or video of someone else
  • Sharing pathways for support in removing inappropriate or harmful content from the online space, such as Without My Consent and, for abusive, explicit or illegal online content, IWF-ICMEC and CEOP; these organisations can also be contacted for further support, especially when images have been shared on the internet
  • Exploring data privacy laws in terms of how they may impact the school’s ability to respond to harmful AI-generated images held on student devices
  • Advising teachers on how to keep their social media accounts private to protect themselves

While this article focuses predominantly on risks related to generative AI, it is worth noting the future potential that AI may have for supporting schools in identifying safeguarding concerns.

In the UK, trials of predictive analytics are being conducted to better identify children and families needing support from social services. Where children are already receiving help, social workers are using AI to analyse data from social care reports and crime data to determine which kinds of interventions are most likely to succeed.

The jury is still out on whether these approaches offer value for money, as AI is expensive to implement. Concerns also exist around ethics and efficacy, particularly whether bias within these systems could create blind spots that leave vulnerable children overlooked.

However, these pilot projects offer insights into how international schools might use AI in future to support and protect the children in their care.

Student education & empowerment

Recent research into safeguarding in international schools (Rigg, 2023; Holmyard, 2023) found that students felt social-emotional learning often focused on topics they didn’t consider relevant to them.

The gap seems particularly large in relation to technology, where students and adults often live in parallel worlds, with students engaging in media, games and platforms that are unknown or not well-understood by their parents and teachers.

Facilitating student voice activities is one way that schools can better understand students’ online lives and inform curriculum development. The International Taskforce on Child Protection has developed comprehensive guidance for student engagement in safeguarding, with safety and ethical considerations. The protocol for student focus groups can be adapted to explore student technology use and/or generative AI more specifically.

Here are some questions that schools could use to explore student perspectives and experiences of AI (adapted from Want to talk about it? Making space for conversations about life online):

  • What is exciting to you about generative AI?
  • Which AI tools do you already use in your daily life?
  • What do you think are the risks of generative AI for young people today?
  • How do you think the school should respond if a student uses generative AI inappropriately and causes harm to someone else?
  • How can the school help students to use generative AI safely and ethically?
  • Do you think AI should be part of the school curriculum? If so, where do you see it fitting in?

In today's AI landscape, digital literacy is no longer optional but a non-negotiable part of a school's learning pathway. International schools have a unique opportunity to lead by example, designing purposeful and authentic learning experiences, grounded in student voice, that equip students with the critical thinking skills essential to understanding both the technical and ethical nuances of generative AI.

Schools should also consider their reporting pathways for incidents of peer-on-peer harm that may involve generative AI, such as cyberbullying and online harassment. ICMEC recently released new guidance on anonymous reporting systems, which can be a valuable addition to schools' other pathways for students to report concerns or seek guidance and support.

The research studies also found that a significant barrier to reporting for international school students is not knowing what the school will do with the information or what actions it might take. Communicating with students ahead of time about likely or typical responses to disclosures of technology-related harm can reduce their worries about coming forward.

Finally, technology itself can guide students in the moment to reduce harmful comments and actions. The Rethink App, for example, helps students to pause and think before posting or commenting.

Community engagement

Like teachers, parents can also feel disconnected from their teens' online experiences. They may not realise the impact of generative AI on their children's daily lives, whether in academics or in interactions with peers.

Technopanic among parents can be a significant barrier to students reporting online harm. Students worry that their parents will remove access to their devices if they speak up about harmful online experiences, so they keep quiet to maintain that access.

Educating parents about the risks of generative AI and how they can respond appropriately and in ways that foster continued dialogue is key to any school’s safeguarding strategy. Childnet International provides a wide range of resources to support parents in talking with their children about technology, beginning with preschool children.

Developing and enhancing the partnership between schools, parents, and the wider community provides a unique opportunity for a collective voice to address the challenges and opportunities of generative AI in schools. Creating venues (online and offline) for shared voices and ideas, and leveraging the expertise within the parent community, offers an inclusive approach to building a common understanding of digital literacy, safeguarding and responsible AI use at home and in school.

What next for the international school community?

In response to the high prevalence of child sexual abuse material on the internet, the AI for Safer Children Global Hub for law enforcement was created to provide a collaborative space for those involved in detecting and prosecuting child abuse to share strategies and AI tools to make their work more efficient. 

Could the international school community collaborate similarly to explore and respond to safeguarding risks, challenges, and opportunities related to AI?

It is vital to recognise the impacts of AI tools and digital environments on the mental health, well-being and safety of students, teachers, and parents through a whole-school lens. As AI tools increasingly permeate our social and professional lives, being proactive in addressing these impacts is essential, yet keeping up to date with AI trends amidst constant change can feel daunting.

By adopting a community approach, international schools can support one another to stay ahead of the curve: exploring shared professional development opportunities and collaborative platforms, and sharing strategies for adapting agile curricula, teaching methods and safeguarding practices. This ensures we address risks and support present and future needs as a global community.

For more information on how the CIS Model: Whole-School Approach to Safeguarding can strengthen child protection and safeguarding in your international school, explore the CIS International Safeguarding Toolkit and CIS members can find out even more in the CIS Community portal.
