We can only prevent more ‘mutant algorithms’ and make edtech safe with a framework for the ethical use of AI in education, writes Rose Luckin

Last summer’s ‘mutant algorithm’ fiasco is an object lesson in how emerging technologies such as artificial intelligence (AI) can become the subject of myths and misinformation. It’s no wonder these technologies can feel frightening to educators.

When Sir Anthony Seldon, Priya Lakhani and I created the Institute for Ethical AI in Education (IEAIED) back in 2018, we sought to remove some of this trepidation. Our aim was to encourage the safe and ethical use of AI in education.

Last week, we published the culmination of our work, and with the sharp rise in the use of technology precipitated by Covid, it could not be more timely. Our final report and framework are designed to help heads and teachers make informed choices and maximise the potential of AI in all educational settings.

And that potential is huge. AI’s benefits include cutting teacher workload and improving assessment. Yet many educators still lack the knowledge and understanding they need to ensure the AI products they purchase enhance teaching and learning safely. By improving this knowledge and raising expectations, we can drive more ethical development of AI-based edtech for the benefit of teachers and students.

The framework we’ve published draws on insights gathered from a series of roundtables organised by the IEAIED last year – which included the views of young people – and from last November’s Global Summit on the Ethics of AI in Education.

It sets out a number of principles, along with the actions that educational settings should take to fulfil them. Settings should also demand and expect that suppliers help them to do so:

  • First, the evidence should lead the technology. AI should be used to achieve well-defined educational goals based on strong societal, educational or scientific evidence that it is for the benefit of learners.
  • Next, AI should be used to assess and recognise a broad range of learners’ aptitudes (including, for example, metacognition), not just those measured by formal assessments and exams.
  • Learners’ social and emotional needs should not be sacrificed for efficiency. AI should increase the capacity of educational institutions whilst respecting human relationships.
  • AI systems should promote equity between different groups of learners and aim to improve social mobility.
  • AI should be used to increase the control learners have over their learning and development. For example, it should enable learners to see how well they are progressing across a range of metrics, including non-traditional measures, such as motivation and self-regulation.
  • A balance should be struck between privacy and the legitimate use of data for achieving well-defined and desirable educational goals. It is vital that learners’ rights to privacy and anonymity are protected, but this should not prevent algorithms from learning and improving through the processing of appropriately anonymised and stored data.
  • Humans are ultimately responsible for educational outcomes and should therefore have an appropriate level of oversight of how AI systems operate. Teachers should feel confident asking suppliers to explain clearly what the AI is doing and why.
  • Learners and educators should have a reasonable understanding of artificial intelligence and its implications. They should expect resources to assist with this from suppliers of AI packages.
  • AI resources should be designed by people who understand the impacts of the technology.

AI can and should enhance the social skills and wellbeing of learners as well as their traditional outcomes. Likewise, it can and should assist effective teaching and learning without undermining the role of educators in the process.

AI is here now, and it is here to stay. It can bring enormous benefits to education, such as supporting each learner in ways that are best suited to their individual needs.

But to reap those benefits, we must help educators feel more comfortable embracing AI, and more confident in demanding high-quality, ethical AI products and services from suppliers.

The IEAIED framework exists to empower educators and to inform suppliers about what they must provide to their educational customers. By adhering to its principles, we can improve educational outcomes while keeping schools and students safe from any more ‘mutants’.