Children misusing ChatGPT? It’s the adults you need to watch

Behind the hand-wringing over ChatGPT empowering children to cheat is a much bigger threat: adults misusing these nascent educational tools, says Cesare Aloisi

31 Mar 2023, 12:30

There’s nothing adults like more than blaming something on the kids, especially when it comes to technology. “Ergh, my children are always on their phones. Isn’t it appalling they’re so glued to TikTok these days? In my day, we used to talk to each other.” You get the picture.

Nowhere has that been more apparent than in the debate over artificial intelligence and ChatGPT. The Twittersphere has been awash with people arguing that children can’t be trusted and should be kept as far away from it as possible. My colleague Reza Schwitzer has already pointed out that as long as we have externally-marked exams as part of our assessment toolkit, these predictions of impending doom are somewhat unfounded.

I would like to make another observation: far from the problem being about not trusting children, it’s adult uses of AI in education that need greater scrutiny.

There are many potential uses for tools like ChatGPT, particularly in the assessment space. Used well, they could revolutionise our education system, for example by quality-assuring marking at scale to make it as fair and equitable as possible, or by crunching data and research to provide new insights for policymakers. Some might want to go even further, using AI (as Duolingo already does) to actually write and mark question papers. But this is where some of the problems also start.

These are still experimental systems. Despite the excitement, and the opportunities they offer, they need to be integrated into our education system incrementally, safely and responsibly. Current AI systems have several limitations, particularly around safety and ethics. They include:

Brittleness and unreliability

They are unable to deal with unusual situations, and sometimes do not work as expected.

Overconfidence and hallucination

Current AI systems are generally overconfident about what they do and don’t know; they can fabricate answers and present them as fact.

Lack of transparency and explicability

Most AI systems are ‘black boxes’. We don’t really know how they reach their conclusions, and the systems themselves can’t explain them well. When they appear to, as with ChatGPT, the explanation may itself be made up. They can also develop capabilities they were not programmed for.

Bias and toxicity

AI systems are trained on real-world data, and as such they are as biased and prejudiced as the real world, and often more so.

All of these point to challenges with integrating AI into our education system. For example, if AI were used to mark student work, that might be acceptable when responses are short and predictable. But AI cannot exercise academic judgement the way a teacher can, so it could give two similar responses very different marks because of superficial differences in the answers, or make spectacular errors of judgement on unexpected and original answers.

Even though AIs are meant to be objective, they are often more biased than people because they exaggerate human biases and see correlations where people don’t. So an AI might become much better than people at recognising responses written by boys, or by people of colour, or by affluent students, even if all responses were anonymised, and upmark or downmark them based on those biases and prejudices.

Similar problems would apply where AIs were used to assist with writing question papers. They would need a huge amount of quality assurance around them to ensure the questions they wrote were factually correct, non-toxic and unbiased.

All this means that we need to treat AI systems like we treat experimental medications: investing in research and development, but testing them in safe environments before rolling them out at scale.

We also need to learn from other industries that are further ahead than us, like healthcare, and develop ethical and safe AI frameworks to ensure AI developers follow certain rules.

And ultimately, we need to remember that when we ask how best to deal with AI, we aren’t only talking about children. It’s the adults we need to watch.
