Opinion: Edtech

How to navigate new duties on AI usage in schools

New guidance and legislation mean safeguarding in the age of AI has gone beyond merely blocking harmful content

24 Aug 2025, 5:00

With the publication of the updated Keeping Children Safe in Education (KCSIE) 2025 guidance and the Online Safety Act now in force, school leaders face a pressing question: how can we embrace AI’s advantages while ensuring it’s safe, transparent and fit for the classroom?

Our schools are increasingly seeing the benefits of AI, including reduced administration time, richer resources and tailored student support. But as these tools are woven into everyday teaching, they also introduce new risks.

Even well-intentioned educational tools can, if not carefully designed, deliver outputs that misinform or, in extreme cases, cause harm.

So what do the new law and guidance expect of schools, and how can we go beyond mere compliance?

KCSIE 2025

For the first time, school safeguarding guidance explicitly references AI (although some feel the guidance could have gone further). This matters for schools’ existing filtering and monitoring duties, and for Designated Safeguarding Leads (DSLs) who must stay alert to new hazards.

While most schools already have web filtering in place, AI introduces fresh challenges. Students may find ways to bypass filters or encounter AI-generated content that appears trustworthy but is inaccurate.

The updated guidance calls on schools to ensure their filtering and monitoring is effective against these risks, which may mean updating policies and training staff, while working with edtech providers to ensure compliance.

The Online Safety Act

The new law puts duties on platforms likely to be accessed by children. For education, this has implications for both commercial tools and bespoke systems, especially if they incorporate AI.

Platforms must take “proportionate measures” to protect children from harmful content and contact.

For AI-powered systems, that includes but isn’t limited to safeguards against inappropriate outputs or accidental exposure to adult material. It also reinforces the need for clear reporting mechanisms so problems can be flagged and resolved quickly.

School leaders should check how providers meet these requirements by asking:

  • How are the AI models checked for harmful outputs?
  • What guardrails are in place to mitigate these risks? (A sketch of this pattern follows below.)
  • How are safety features built in, not bolted on?
  • Is there a clear, easily accessible process for reporting and addressing incidents?
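
To make those questions concrete, here is a minimal sketch of the check-first, show-second pattern a provider's answers should describe. It is illustrative only: `classify_output` is a stand-in for whatever moderation model or service a real product uses, and the names throughout are hypothetical, not any vendor's API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Stand-in topic list; a real guardrail would use a trained
# moderation classifier, not keyword matching.
BLOCKED_TOPICS = {"self-harm", "adult content", "violence"}

@dataclass
class Incident:
    timestamp: str
    student_id: str
    reason: str
    raw_output: str

# In practice this would be a persistent, auditable store the
# school's DSL can inspect, not an in-memory list.
incident_log: list[Incident] = []

def classify_output(text: str) -> set[str]:
    """Placeholder moderation step: flags blocked topics mentioned verbatim."""
    return {topic for topic in BLOCKED_TOPICS if topic in text.lower()}

def safe_response(student_id: str, ai_output: str) -> str:
    """Check the model's output *before* the student sees it."""
    flags = classify_output(ai_output)
    if flags:
        # Record the incident so it can be reported and resolved quickly.
        incident_log.append(Incident(
            timestamp=datetime.now(timezone.utc).isoformat(),
            student_id=student_id,
            reason=", ".join(sorted(flags)),
            raw_output=ai_output,
        ))
        return "This response was withheld and your teacher has been notified."
    return ai_output
```

The specific checks matter less than the order of operations: moderation sits between the model and the student, and every withheld response leaves a trail that can be followed up.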

Evaluating AI tools

When deciding whether to adopt an AI tool, school decision-makers can use four simple evaluation lenses:

Data safety

What personal data is collected? Where and how is it stored, and for how long?

Age-appropriateness

Is the AI system designed for educational use, with suitable language and curriculum alignment?

Human oversight

The safest systems keep teachers in the loop. Ask: can teachers review outputs before students see them?
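
In technical terms, "teachers in the loop" can be as simple as a holding queue: outputs wait for approval rather than reaching students directly. A minimal sketch, assuming hypothetical `submit_for_review` and `release_approved` helpers rather than any particular product's API:

```python
from typing import Callable

# Outputs are held here as (student_id, ai_output) pairs until
# a teacher has looked at them.
pending_review: list[tuple[str, str]] = []

def submit_for_review(student_id: str, ai_output: str) -> None:
    """Hold the model's output instead of showing it immediately."""
    pending_review.append((student_id, ai_output))

def release_approved(approve: Callable[[str], bool]) -> list[tuple[str, str]]:
    """Release only what the teacher approves; `approve` represents the
    teacher's judgement (e.g. a review screen), not another model call."""
    released = [(sid, out) for sid, out in pending_review if approve(out)]
    pending_review.clear()  # rejected outputs are discarded, not shown
    return released
```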

Transparency

Openness helps build trust and ensures staff can make informed decisions. Ask: is it clear how the tool works, what data it uses and where its limitations lie?

Teachers in the driving seat

No matter how advanced the technology, teachers understand their students’ needs best. Responsible adoption of AI depends on pedagogical alignment; the tool should serve the curriculum, not dictate it.

Teachers’ perspectives are crucial; they should be part of procurement decisions, pilot programmes and ongoing evaluations. Their feedback helps refine how AI is used and ensures it complements, rather than replaces, human judgement.

Professional development is also key. Teachers need time and support to understand what AI can and can’t do, and how to use it ethically. Without this, there’s a risk of either over-reliance (“the AI said it, so it must be right”) or under-utilisation (“it’s too risky to try”).

Balancing innovation and safeguarding

AI offers huge potential for schools, but that potential will only be realised if it’s implemented with safeguarding at its core.

The arrival of KCSIE 2025 and the Online Safety Act should be seen as guardrails, giving school leaders and providers a shared framework for setting appropriate boundaries.

Rather than simply blocking harmful content, safeguarding in the age of AI is about creating a culture where staff and students can explore technology confidently and responsibly.
