AI cheating: just how much is going on in schools?

An explosion in AI ‘study aids’ has armed pupils with the means to cheat their way through assignments. But how much is this happening in schools, and what’s being done to stop it?

Alex Kirkbride, the principal of Honiton College in Devon, recalls a recent conversation with a year 7 pupil who had answered open-ended homework questions using My AI, a chatbot available to users of the social media app Snapchat. “Look, sir, we don’t need to do homework any more,” said the youngster.

Powered by OpenAI’s ChatGPT language model, the chatbot is customised with human features to appear as a friend. It tells users to “ask [me] questions about anything”.

An Ofcom survey last year found My AI was used by 72 per cent of 13- to 17-year-olds. Nearly a third of 7- to 12-year-olds also said they use it, despite its 13-plus age restriction. It’s one of several third-party AI tools that have sprung up off the back of ChatGPT.

Daisy Christodoulou, director of education for No More Marking, said pupils she recently spoke to had “never heard of ChatGPT or large language models (LLMs) – but they told me, ‘if you ask Snapchat nicely it will do your homework for you’”.

Over on TikTok – another social media app popular with youngsters – influencers are endorsing AI tools, purportedly as study aids but sometimes more blatantly for cheating. Toby Rezio, an American TikToker with 91.8 million video likes, admitted to cheating using My AI, which launched in April last year.

Schools Week also found TikTok videos of students talking about cheating using ChatGPT itself, several of which have racked up millions of views. We’ve also seen a number of Snapchat posts in the last six months that appear to be by British secondary school-age children revealing how they use My AI for help with their homework.

One boy asked it to write him a 600-word essay on Viking culture, commenting: “This new Snapchat AI is about to save my life”.

A Snapchat spokesperson said the company monitors how the tool is used, and that parents can turn the function off.

AI arms race

Last month, the Joint Council for Qualifications (JCQ) refreshed its guidance on AI use in assessments to include an expanded list of detection tools.

But StealthGPT, which claims to be “the first AI-powered tool dedicated to generating undetectable AI content”, advertises how it can not only “elude the discerning eyes” of one particular detection tool, but also how it “enhances the writer’s voice, ensuring that the work reflects their unique style and intellect”.

In one video ad for an account named Tutorly.ai, viewed 16,400 times so far, a student complains that they “just got caught using ChatGPT for my essay and now I have to write double the length”. A narrator responds: “Tutorly can write plagiarism free essays in just a few seconds … this is ChatGPT on crack!”

Schools Week analysis found most of the “potential indicators of AI misuse” cited by the JCQ, such as default use of American spelling and a lack of direct quotations, can easily be overcome with further prompts instructing the chatbot to write in a specific style.

Harald Koch, the author of a book about AI cheating, said: “Before an AI checker has been rolled out in a meaningful way, the next level … of AI has already been released”.

A recent international study of 14 widely used detection tools found them to be “not accurate or reliable enough to use in practice”.  Even OpenAI (the company behind ChatGPT) shut down its own AI detector tool in July due to its “low rate of accuracy”.

Christodoulou believes AI is being used for cheating far more than most educators realise. When No More Marking ran an assessment of 50,000 eight-year-olds last year, it slipped in eight essays written by ChatGPT.

The teachers marking the scripts, who were incentivised with prizes for spotting the AI, were “more likely to flag human writing” as AI-generated than the essays from ChatGPT. They awarded one ChatGPT essay the highest marks.

“If you spot one AI-generated essay, there’s probably another 10 you haven’t,” Christodoulou added.

Caught in the act

ChatGPT was first released in November 2022. Two-thirds of 500 secondary teachers polled last year by RM Technologies believe they’re regularly receiving work written by AI.

JCQ states pupils who submit AI-generated assignments “may attract severe sanctions”, including disqualification and being barred from exams. Teachers with “doubts” about authenticity who do not “investigate and take appropriate action” can also “attract sanctions”.

Exam malpractice cases relating to tech devices that resulted in penalties jumped by almost a fifth, from 1,825 in 2022 to 2,180 in 2023 – although malpractice cases overall rose at a similar rate.

JCQ has highlighted examples of students caught misusing AI in their coursework, including two AQA A-level history students, one of whom was disqualified. Another two students on OCR’s Cambridge Nationals Enterprise and Marketing qualification confessed to cheating and received zero marks.

And a GCSE religious studies candidate lost marks for using AI in an exam taken on a word processor – something they denied doing. But detection software found “multiple components [of their assessments] were affected”.

In comments submitted to the Department for Education’s call for evidence on generative AI in education, Hasmonean High School for Girls, in London, said malpractice in assessed coursework had been “a challenge to manage”. Teachers reported a “sudden change in students’ essay styles, indicating plagiarism”.

The school is developing training to support appropriate pupil use of GenAI tools, and investing in plagiarism software to detect malpractice. Koch believes the solution lies in educators using “more oral performance reviews instead of written homework”.

JCQ advises educators to use more than one detection tool and to consider “all available information” when trying to detect use of AI.

It also suggests schools make students complete some coursework “in class under direct supervision” to prevent AI misuse. Reza Schwitzer, head of external affairs at AQA, says doing work under “exam conditions is more important than ever”.

Meanwhile, Christodoulou wants a “pause” on all assessed coursework. “If a pupil knows that their friend is using AI and getting away with it, that’s really destructive for the values you want to nurture.”

A US study of seven AI detectors found they wrongly flagged writing by non-native English speakers as AI-generated 61 per cent of the time, compared with 20 per cent for human-written essays overall.

In New Zealand last year, AI plagiarism detectors are believed to have falsely accused two high school students of cheating. One parent described the use of AI detection tools as playing “Russian roulette”.

The regulatory gap

Speaking at a Westminster Education Forum last week, the DfE’s deputy director of digital, Bridie Tooher, admitted “things are moving so fast that … the tech will always overtake the regulations”.

In the DfE’s AI consultation, educators raised concerns that developers are “often opaque” about how they use the data put into their platforms, including pupils’ identities, grades or behaviour.

Thea Wiltshire, the Department for Business and Trade’s edtech specialist, said of pupils’ work that “allowing generative AI to learn from it is an abuse of [pupils’] intellectual property”.

AI governance expert Kay Firth-Butterfield warns schools using open-source models will also be feeding pupils’ information into the “global data lake”. She points out that in the US, early adopters of AI in the business world are now having to “claw back what they’ve been doing because they didn’t put a good governance structure around AI in at the beginning”.

Another key concern is schools and young people not adhering to the age restrictions on AI platforms. An Ofcom report last year found that 40 per cent of 7- to 12-year-olds reported they’d used ChatGPT, My AI, Midjourney or DALL-E – all of which are prohibited for their age group.

Even for 13- to 18-year-olds, parental consent is required for ChatGPT.

A secondary school digital lead, who did not want to be named, said that despite selling the software, big firms were delegating safeguarding responsibility to schools.

Christina Jones, chief executive of the River Tees Multi Academy Trust, said “teachers being responsible for identifying use of AI” puts “huge pressure” on them, and wants “a wider debate about how teachers can be supported with that”.

AI inequalities

The rise of AI could also exacerbate existing inequalities.

Many GCSEs taught in state schools no longer include assessed coursework. But Christodoulou highlights that private schools mostly offer the English IGCSE, for instance, which can include up to 50 per cent non-examined assessment – the kind of tasks “ChatGPT is so good at”.

If that cheating goes undetected, it could further widen the attainment gap between state and private schools.

In a recent International Baccalaureate (IB) poll of 2,000 UK students, 86 per cent of those attending independent schools said they had used a chatbot, compared with 71 per cent of state school students.

AQA also warned that “without centralised planning or at least a central fund, schools that have the money will benefit the most [from AI] as they will be able to afford the most advanced systems, with schools with less money left behind”.

But Fiona Aubrey-Smith, a researcher on AI use in schools, says the AI “gap” is “now closing as groups of schools come together to support each other”. She’s part of a new research project exploring the system leadership implications of AI, involving 23 MATs and looking at issues including data and security, governance and ethics, and educational vision.

Many think chatbots have the potential to level the academic playing field by widening access to personalised systems of learning, previously only available to families who could afford tutoring.

But Michael Webb, technology director at digital education agency JISC, estimates the full set of AI tools required to do well academically costs a student around £80 a month, giving those who can afford them “a significant advantage … there’s no easy answer to that”.

Writing for Schools Week, Sir Hamid Patel, chief executive of Star Academies, said every child should have an AI tutor from the age of five “by the end of this decade”. Making such tools “free-of-charge” could “help eradicate educational inequality far more effectively than several decades of policy and funding,” he added.

What are schools doing?

A January report by the government’s own open innovation team concluded that a long-term strategy for the use of AI in schools was needed, including guidance and support for teachers to ensure the “digital divide” isn’t exacerbated – highlighting the emerging difference between state and private schools’ use of the technology.

Tooher admits that “there does need to be some support from government. We’ve still got primary schools in England without access to gigabit broadband. How do [we] make sure that … some schools are not left behind.”

Three in five of the 2,000 parents responding to a poll for Internet Matters last month said they had not been told whether their child’s school planned to use AI for teaching, nor spoken to about their children using such tools for homework. This “questions whether some schools are considering the impact of AI at all”, the organisation said.

Alleyn’s School, a private school in London, has abandoned traditional homework essays in favour of in-depth research. RGS Worcester, an independent school, has trained its own AI model – something Wiltshire suggests schools “with the time” should do to “restrict the data that [the LLM] is drawing on”.

Computer science teacher Charles Edwards is leading a working party on AI at Simon Langton Girls’ Grammar School to draw up policies. He said the school was “aware of chatbots being used for homework” and is responding by “placing new schemes of work in place ready for next year to combat how it is used and the ethics of how to use this as a tool in and out of school”.

Kirkbride says his pupils are “now regularly briefed around appropriate use of chatbots, including referencing sources of materials where courses include coursework”.

But at the same time as worrying about cheating, schools are being encouraged to embrace AI to generate lesson plans, crunch data and help mark assignments. Education secretary Gillian Keegan has said teachers’ day-to-day work could be “transformed” by it.

Aubrey-Smith recently met a year nine pupil who “saw it as the most immense injustice” that their teacher had been using AI to create lesson resources while pupils were “not allowed to use AI for their work”.

Making stuff up

As the sector gets to grips with the challenge, Christodoulou provides a note of caution.

Chatbots just “repeat the kinds of misconceptions and misinformation that are out there already on the internet”, including “basic maths errors” and “inventing completely new and plausible ‘facts’ that are totally incorrect.”

There are also deeper philosophical considerations about the impact of AI on young people’s faith in democratic systems, and how AI will influence their curiosity for learning.

Koch believes that “to protect against manipulation”, pupils need to be taught to “critically question the results of AI and to establish this as a normal process”.
