Opinion: Edtech

The DfE’s new materials dangerously underplay AI’s risks

AI support materials published by the department last week effectively transfer responsibility for all of its risks onto schools

17 Jun 2025, 17:00

Last week, the Department for Education published materials to support safe and effective use of generative AI in education settings. Sadly, these underplay many of the technology’s risks and leave teachers exposed.

From the start, there is an alarming assumption that schools and teachers will use AI. Rather than offering a critical perspective on its use, the materials effectively endorse it.

It’s little surprise, then, that despite the words ‘safe’ and ‘safety’ appearing frequently, there is very little meaningful engagement with AI’s complex risks. Some important ones are listed, but many are not, and those that are included lack nuance.

Understanding the risks

MIT’s AI Risk Repository lists over 1,600 AI risks. Here are three that are key to education and not well covered by the DfE’s materials.

First, using AI to produce text risks narrowing writing standards.

Algorithms’ ‘preference’ for certain language features (say, em-dashes) risks reinforcing a standard of what ‘good writing’ looks like that is defined more by the tools’ training data than by thoughtful teaching or diverse perspectives.

Second, AI tools are commercial products designed to make users feel good.

They tell you what you want to hear. That is the opposite of the critical thinking and multiple perspectives that good education should foreground, and it risks creating bubbles of self-reinforcing thinking.

Third, and crucially, embedding AI risks making existing inequalities worse.

Lack of AI access due to gaps in infrastructure or training could become a gatekeeper that further excludes teachers and students who are already disadvantaged.

Busting the myths

Supporting AI use in schools sends a message that it is safe, ethical, objective, useful, inexhaustible and even inevitable. Technology companies have carefully crafted this message, and it’s working, but it’s highly questionable.

Safe?

In fact, AI can be used to create great harm. Ask the huge numbers of schoolgirls whose images have been used to create deepfake porn videos that their male peers circulate to humiliate and subjugate them.

The DfE materials acknowledge that there is no current law against this, and reports suggest schools often do nothing to punish the perpetrators.

Other harms are already documented: research finds that people are more demanding, judgmental and negative towards others after talking to AI chatbots.

Ethical?

Even the most innocent query has damaging implications (see ‘Inexhaustible?’ below).

Objective?

AI is biased in many ways, for example against women and people of colour. AI systems tend to take on human biases and amplify them, causing the people who use them to become more biased themselves.

Useful?

Our recent study found that AI is ineffective at finding and summarising research for teachers.

Reliable?

AI is sycophantic and lies. Indeed, it will often double down on its lies. This is worsened by the fact that humans tend to ‘fall asleep at the wheel’ when supervising automated systems. For example, lawyers were recently caught citing fake cases generated by AI in court proceedings.

Inexhaustible?

Every single query consumes electricity, water, minerals and labour, and accelerates climate change.

Meanwhile, experts have labelled claims that AI itself will solve the climate crisis (repeated in the DfE’s support materials) ‘misguided’ and ‘hype’.

Inevitable? 

AI tools (sometimes of poor quality) have often been embedded in systems without users’ consent or knowledge. It is important to question this and to draw evidence-based boundaries around their use.

It is particularly important that we support and encourage teachers and leaders who make the reasoned decision not to use it, especially in the current unregulated and over-hyped environment.

The idea of a carbon footprint was invented by companies to transfer responsibility from organisations and systems to individuals. In the same way, by superficially acknowledging that some issues do exist with AI, the DfE is effectively transferring responsibility for its practical and ethical implications directly to schools and teachers. 

We strongly suggest supplementing the published use cases with significantly more concrete examples of issues and risk-mitigation strategies, including explicit permission for schools and teachers to opt out of AI use entirely.

To join Dr Rycroft-Smith and Darren Macey’s free webinar, ‘How evidence-based are the DfE’s AI support materials?’ on Saturday 21 June, click here
