Education technology

Children misusing ChatGPT? It’s the adults you need to watch

Behind the hand-wringing over ChatGPT empowering children to cheat is a much bigger threat: adults misusing these nascent educational tools, says Cesare Aloisi

31 Mar 2023, 12:30

There’s nothing adults like more than blaming something on the kids, especially when it comes to technology. “Ergh, my children are always on their phones. Isn’t it appalling they’re so glued to TikTok these days? In my day, we used to talk to each other.” You get the picture.

Nowhere has that been more apparent than in the debate over artificial intelligence and ChatGPT. The Twittersphere has been awash with people arguing that children can’t be trusted and should be kept as far away from it as possible. My colleague Reza Schwitzer has already pointed out that, as long as we have externally-marked exams as part of our assessment toolkit, these predictions of impending doom are somewhat unfounded.

I would like to make another observation – that far from the problem being about not trusting children, it’s adult uses of AI in education that need greater scrutiny.  

There are many potential uses for tools like ChatGPT, particularly in the assessment space. Used well, they could revolutionise our education system, for example by quality assuring marking at scale to make it as fair and equitable as possible, or by crunching data and research to provide new insights for policymakers. Some might want to go even further, using AI (as Duolingo already does) to actually write and mark question papers. But this is where some of the problems also start.

These are still experimental systems. Despite the excitement, and the opportunities they offer, they need to be integrated into our education system incrementally, safely and responsibly. Current AI systems have several limitations, particularly around safety and ethics. These include:

Brittleness and unreliability

They are unable to deal with unusual situations, and sometimes do not work as expected.

Untrustworthiness

Current AI systems are generally over-confident about what they do and don’t know; they fabricate answers and present them as fact.

Lack of transparency and explainability

Most AI systems are ‘black boxes’. We don’t really know how they reach their conclusions, and they can’t explain them well. And when they can, as with ChatGPT, they may be making the explanation up. They can also develop capabilities they were not programmed for.

Bias and toxicity

AI systems are trained on real-world data and as such they are as biased and prejudiced as the real world, often more so.

All of these point to challenges with integrating AI into our education system. For example, if AI were used to mark student work, that might be acceptable when responses are short and predictable. However, AI cannot exercise academic judgement the way a teacher can, so it could give two similar responses very different marks because of superficial differences in the answers. Or it might make spectacular errors of judgement when faced with unexpected and original answers.

Even though AIs are meant to be objective, they are often more biased than people because they exaggerate human biases and see correlations where people don’t. So an AI might become much better than people at recognising responses written by boys, or by people of colour, or by affluent students, even if all responses were anonymised, and upmark or downmark them based on those biases and prejudices.

Similar problems would apply where AIs were used to assist with writing question papers. They would need a huge amount of quality assurance around them to ensure the questions they wrote were factually correct, non-toxic and unbiased.

All this means that we need to treat AI systems like we treat experimental medications: investing in research and development, but testing them in safe environments before rolling them out at scale.

We also need to learn from other industries that are further ahead than us, like healthcare, and develop ethical and safe AI frameworks to ensure AI developers follow certain rules.

And ultimately, we need to remember that when we ask how best to deal with AI, we aren’t only talking about children. It’s the adults we need to watch.
