If we teach students to write like bots, can we be surprised that bots can write like students? As we start a new year, the media has been crowded with articles about the new powers of artificial intelligence (AI). Some envisage that very soon teachers will be unable to distinguish between authentic and AI-generated essays.
While we have not yet reached that point, there are three reasons why we should take this possibility seriously. First, it is a question of values. Students should understand why passing off AI-generated work as their own is intellectual theft, and why that theft is wrong. Schools should make this plain in their codes of conduct and enforce the consequences should students choose to cheat.
Second, it returns us to the perennial question of what education is and should be for. The 2015 “reformed qualifications” have brought with them ever more teaching to the test, because the product (the qualification) has become more important than the process (the development of the intellect).
The combination of high-stakes exams and predictable question types makes our system more vulnerable to students successfully substituting AI for authentic writing. Awarding bodies and Ofqual must share the blame: every new specification comes with ever more detailed guidance.
In the quest for ever-better results, teachers are encouraged to make tasks predictable. Acronym-based planning (AFOREST anyone?), over-emphasis on paragraph structures (PEE, FART, etc.) and WAGOLLs ensure that students’ essays become the products of an educational assembly line. And the more predictable the expected outcome, the easier it is for an AI essay generator to reproduce it.
Third, and arguably most important, if students are to benefit fully from their education, they need to develop their own style. Making students more self-reliant prepares them to compete in the world of work, where original and workable ideas, inventions and enquiry are valued.
So how should schools respond?
First and foremost, staff should be trained, and kept up to date, on what AI products can actually produce and on the distinguishing features of their output.
Some will argue that the most obvious remedy is to take a low-trust, high-control approach whereby all essays become controlled assignments and all preparation is closely supervised. But low-trust strategies do not teach students to take moral or educational ownership of their work for its own sake. And is a treadmill of in-class assessment the best way to organise a learning journey?
It would be better to make more intensive use of teachers’ knowledge of assessment and of their students’ capabilities. Teachers know only too well the laboured style of sentences and paragraphs “lifted” from study guides. They can detect the more mature style that usually indicates a parent’s involvement. And they can spot deviations from a student’s normal output: handwriting, style, patterns of technical errors and even willingness to revise a draft. In classroom discussion, students’ idiosyncrasies and attitudes emerge that shape their written work.
To steer clear of answers that are easily replicated by AI, we should encourage our classes to experiment with different structures and give them the space to try, fail and rebuild. Using classroom time for students’ drafting and revising allows teachers to see what might be expected from each one, then compare it with what is handed in.
Finally, school leaders and awarding bodies must support teachers in holding the line against challenges from more assertive students and parents; otherwise inauthentic work will slip through. Where there is doubt about authenticity, for example, interviewing students about their essays could challenge those whose submissions are out of keeping with their previous work, and would add depth to assessment more generally.
One thing is certain: the solution is not to purchase detection software from the very AI creators whose bots were programmed to manufacture inauthentic essays in the first place. Instead, trust, agency, flexibility, integrity and accountability are how we will keep assessment humanly intelligent.