Opinion: Edtech

How to navigate new duties on AI usage in schools

New guidance and legislation mean safeguarding in the age of AI has gone beyond merely blocking harmful content

24 Aug 2025, 5:00

Government is exploring making education a fourth statutory safeguarding partner

With the publication of the updated Keeping Children Safe in Education (KCSIE) 2025 guidance and the Online Safety Act now in force, school leaders face a pressing question: how can we embrace AI’s advantages while ensuring it’s safe, transparent and fit for the classroom?

Our schools are increasingly seeing the benefits of AI, including reduced administration time, richer resources and tailored student support. But as these tools are woven into everyday teaching, they also introduce new risks.

Even well-intentioned educational tools can, if not carefully designed, deliver outputs that misinform or, in extreme cases, cause harm.

So what are the expectations from new laws and guidance, and how can we over-deliver on them?

KCSIE 2025

For the first time, school safeguarding guidance explicitly references AI (although some feel the guidance could have gone further). This matters for schools’ existing filtering and monitoring duties, and for Designated Safeguarding Leads (DSLs) who must stay alert to new hazards.

While most schools already have web filtering in place, AI introduces fresh challenges. Students may find ways to bypass filters or encounter AI-generated content that appears trustworthy but is inaccurate.

The updated guidance calls on schools to ensure their filtering and monitoring is effective against these risks, which may mean updating policies and training staff, while working with edtech providers to ensure compliance.

The Online Safety Act

The new law puts duties on platforms likely to be accessed by children. For education, this has implications for both commercial tools and bespoke systems, especially if they incorporate AI.

Platforms must take “proportionate measures” to protect children from harmful content and contact.

For AI-powered systems, that includes, but isn't limited to, safeguards against inappropriate outputs or accidental exposure to adult material. It also reinforces the need for clear reporting mechanisms so problems can be flagged and resolved quickly.

School leaders should check how providers meet these requirements by asking:

  • How are the AI models checked for harmful outputs?
  • What guardrails are in place to mitigate these risks?
  • How are safety features built in, rather than bolted on?
  • Is there a clear, easily accessible process for reporting and addressing incidents?

Evaluating AI tools

When deciding whether to adopt an AI tool, school decision-makers can use four simple evaluation lenses:

Data safety

What personal data is collected? Where and how is it stored, and for how long?

Age-appropriateness

Is the AI system designed for educational use, with suitable language and curriculum alignment?

Human oversight

Can teachers review outputs before students see them? The safest systems keep teachers in the loop.

Transparency

Is it clear how the tool works, what data it uses and what its limitations are? Openness helps build trust and ensures staff can make informed decisions.

Teachers in the driving seat

No matter how advanced the technology, teachers understand their students’ needs best. Responsible adoption of AI depends on pedagogical alignment; the tool should serve the curriculum, not dictate it.

Teachers’ perspectives are crucial; they should be part of procurement decisions, pilot programmes and ongoing evaluations. Their feedback helps refine how AI is used and ensures it complements, rather than replaces, human judgement.

Professional development is also key. Teachers need time and support to understand what AI can and can’t do, and how to use it ethically. Without this, there’s a risk of either over-reliance (“the AI said it, so it must be right”) or under-utilisation (“it’s too risky to try”).

Balancing innovation and safeguarding

AI offers huge potential for schools, but that potential will only be realised if it’s implemented with safeguarding at its core.

The arrival of KCSIE 2025 and the Online Safety Act should be seen as guardrails, giving school leaders and providers a shared framework for setting appropriate boundaries.

Rather than simply blocking harmful content, safeguarding in the age of AI is about creating a culture where staff and students can explore technology confidently and responsibly.
