Opinion

Edtech’s success depends on demystifying AI

5 Apr 2021, 5:00

We can only prevent more ‘mutant algorithms’ and make edtech safe with a framework for the ethical use of AI in education, writes Rose Luckin

Last summer’s ‘mutant algorithm’ fiasco is an object lesson in how emerging technologies such as artificial intelligence (AI) can become the subject of myths and misinformation. It’s no wonder they can often feel quite frightening to educators.

When Sir Anthony Seldon, Priya Lakhani and I created the Institute for Ethical AI in Education (IEAIED) back in 2018, we sought to remove some of this trepidation. Our aim was to encourage the safe and ethical use of AI in education.

Last week, we published the culmination of our work, and with the sharp rise in the use of technology precipitated by Covid, it could not be more timely. Our final report and framework are designed to help heads and teachers make informed choices and maximise the potential of AI in all educational settings.

And that potential is huge. AI’s benefits include cutting teacher workload and improving assessment. Yet many educators still lack the knowledge and understanding they need to ensure the AI products they purchase enhance teaching and learning safely. By improving this knowledge and raising expectations, we can drive more ethical development of AI-based edtech for the benefit of teachers and students.

The framework we’ve published draws on insights gathered from a series of roundtables organised by the IEAIED last year – which included the views of young people – and from last November’s Global Summit on the Ethics of AI in Education.

It sets out a number of principles, as well as the actions educational settings should take to fulfil them. Settings should demand and expect that suppliers help them to meet these principles:

  • First, the evidence should lead the technology. AI should be used to achieve well-defined educational goals based on strong societal, educational or scientific evidence that it is for the benefit of learners.
  • Next, AI should be used to assess and recognise a broad range of learners’ aptitudes (including, for example, metacognition), not just those measured by formal assessments and exams.
  • Learners’ social and emotional needs should not be sacrificed for efficiency. AI should increase the capacity of educational institutions whilst respecting human relationships.
  • AI systems should promote equity between different groups of learners and aim to improve social mobility.
  • AI should be used to increase the control learners have over their learning and development. For example, it should enable learners to see how well they are progressing across a range of metrics, including non-traditional measures, such as motivation and self-regulation.
  • A balance should be struck between privacy and the legitimate use of data for achieving well-defined and desirable educational goals. It is vital that learners’ rights to privacy and anonymity are protected, but this should not prevent algorithms from learning and improving by processing data that has been appropriately anonymised and stored.
  • Humans are ultimately responsible for educational outcomes and should therefore have an appropriate level of oversight of how AI systems operate. Teachers should feel confident to ask suppliers to explain to them clearly what the AI is doing and why.
  • Learners and educators should have a reasonable understanding of artificial intelligence and its implications. They should expect resources to assist with this from suppliers of AI packages.
  • AI resources should be designed by people who understand the impacts of the technology.

AI can and should enhance the social skills and wellbeing of learners as well as their traditional outcomes. Likewise, it can and should assist effective teaching and learning without undermining the role of educators in the process.

AI is here now, and it is here to stay. It can bring enormous benefits to education, such as supporting each learner in ways that are best suited to their individual needs.

But to reap its benefits, we must help educators feel more comfortable embracing it and more confident in demanding high-quality, ethical AI products and services from suppliers.

The IEAIED framework exists to empower educators and to inform suppliers about what they need to be able to provide their educational customers. By adhering to its principles, we can improve educational outcomes while keeping schools and students safe from any more ‘mutants’.

Your thoughts

4 Comments

  1. Hi Rose,

    I hope you are well.

    “AI is here now”, you say. Not in schools it isn’t. There isn’t even any *unethical* AI out there. So I can’t help thinking that your article (and your Institute) is addressing a non-existent problem.

    Beyond all the “should”s, your main “how”s seem to be to reassure teachers about any qualms they might have about big bad tech and to help them to develop a better understanding of how AI works.

    In schools, AI doesn’t work. There is nothing useful out there, ethical or unethical, produced by Google or anyone else. Unless you were going to get teachers involved in the design of new AI products (which is clearly not going to help), why would you need them to develop a better understanding of AI? It’s like saying that the problem with the UK car industry is that drivers don’t understand how their carburettors work.

    AI requires data and the quality and quantity of attainment data in schools is abysmal. Solve the data problem and AI will follow.

    Best, Crispin.

  2. One issue that still stands out for me is who is making the decisions.
    As we see progress in how AI is structured to meet identified targets, we need to think about how much control the school has and who benefits from the decisions. I have never been able to get a clear answer to the same question about the general use of data for research, which worries me even more when it comes to AI.

    • Hi Tony,

      If I may hazard a reply (as I think these issues are massively under-discussed)…

      …it depends which decisions we are talking about.

      As I see it, the main potential for AI is in relation to feedback, which is otherwise very time-consuming for teachers to provide, at least if it is to be done well. All feedback involves some sort of intervention that is predicated on a prior assessment of student performance. I have listed 6 forms of extrinsic feedback (along with 4 non-extrinsic types, see https://edtechnow.net/2018/01/14/feedback/#table_1), but perhaps the most relevant are my #7 “criticism” (good & bad aspects of the student performance, hints & tips), #8 “dialog” & #9/10 adaptive teaching (essentially selecting the next exercise or “sequencing”).

      In my view, dialogic feedback is the last thing that we should be trying to automate (e.g. by chatbots) – 1) because it is difficult and 2) because it is tied up with relationships and role modelling (see my comment on https://schoolsweek.co.uk/the-real-edtech-revolution-is-yet-to-come/). It is what teachers do best. That leaves criticism & sequencing.

      From the technical point of view, criticism is activity-specific, so automation needs to be provided within the activity software/platform. Sequencing is not activity-specific, and so needs to be part of an overarching learning management system (LMS).
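
      To make that division of labour concrete, here is a rough Python sketch of the two roles (all names are invented for illustration, not taken from any particular product): the activity platform understands its own exercises well enough to criticise an attempt, while the LMS sees only summaries and confines itself to sequencing.

        # Hypothetical sketch: criticism lives in the activity platform,
        # sequencing lives in the LMS. All names are illustrative only.
        from dataclasses import dataclass

        @dataclass
        class AttemptSummary:
            activity_id: str
            score: float  # 0.0 - 1.0 is all the LMS needs to see

        class ActivityPlatform:
            """Knows the internals of its own exercises, so it can criticise."""
            def give_criticism(self, raw_attempt: dict) -> list[str]:
                comments = []
                if not raw_attempt.get("working_shown", True):
                    comments.append("Show your working for the middle steps.")
                if raw_attempt.get("score", 0) > 0.8:
                    comments.append("Strong answer - try the extension question.")
                return comments

        class LMS:
            """Sees no exercise internals; only selects what comes next."""
            def next_activity(self, history: list[AttemptSummary]) -> str:
                weak = sorted((a for a in history if a.score < 0.5),
                              key=lambda a: a.score)
                # Revisit the weakest area first, otherwise move on.
                return weak[0].activity_id if weak else "next-topic-intro"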

      When it comes to the issue of control, this appears to be at odds with the issue of expertise: the fact that my car is designed by someone else, someone who (unlike me) knows about the internal combustion engine, involves a loss of control on my part. Yet I can still exercise a reasonable level of control through my buying decision. If there is something about one sort of car that I don’t like, I can decide not to buy it, even though I do not understand how its good or bad features are actually implemented. Because cars are supplied through a regulated market, there are lots of experts looking at how cars work, and by a combination of regulation and market information the chance of unacceptable practices creeping in is (ultimately) reduced. It will certainly be lower than in kit cars knocked together by enthusiastic amateurs. So, where AI is used to automate activity-specific feedback, the teacher can exercise control by deciding whether or not to assign activities using a particular activity platform, with this decision assisted by regulators and third-party reviewers and made in a relatively transparent way.

      Feedback that is not intrinsic to an activity (i.e. sequencing) needs to be governed by the selection of learning objectives, which determine why the student is being asked to perform these activities in the first place. In the end, the selection of learning objectives is a matter for wider society and not for teachers, whose expertise lies in *how* to teach rather than in *what* to teach. Nevertheless, we need robust mechanisms to ensure that there is proper accountability regarding the *what*. We cannot provide this accountability unless we are able to *describe* our learning objectives in meaningful ways. Describing learning objectives in measurable ways is necessary both as a matter of principle (ensuring accountability and transparency) and of pragmatics (enabling automated sequencing engines to know what they are trying to achieve). An attempt to describe our learning objectives was made in the first National Curriculum after 1988, and just as quickly abandoned because what was proposed at the time didn’t work. We did not then have the capabilities of modern data analytics.

      A third aspect of control regards ensuring that AI developers adopt acceptable means of achieving their objectives – the ends do not always justify the means. The most obvious topic in this regard is data protection, an area that in industry creates a significant bureaucratic overhead and severe legal risk. There is no reason why acceptable data protection practices could not be encoded in machine-readable policies, whose implementation could be automated by default, reducing workload and legal risk, while encouraging consistent implementation.
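
      By way of illustration only (the field names are invented), such a machine-readable policy might be no more than a small document that the software enforces by default:

        # Hypothetical sketch of a data protection policy expressed as data
        # and enforced in code rather than in paperwork. Names are invented.
        from datetime import date

        RETENTION_POLICY = {
            "attempt_records": {"retain_days": 365},
            "contact_details": {"retain_days": 30},
        }

        def purge_expired(records, today):
            """Drop any record older than its category's retention period."""
            kept = []
            for rec in records:
                rule = RETENTION_POLICY[rec["category"]]
                if (today - rec["created"]).days <= rule["retain_days"]:
                    kept.append(rec)
            return kept

        # A record created two years ago is dropped automatically.
        old = [{"category": "attempt_records", "created": date(2019, 4, 5)}]
        print(purge_expired(old, date(2021, 4, 5)))  # -> []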

      So there are solutions at hand that address both pragmatics and principle, implementation and control (and that address our problems with assessment as well as with instruction). But these solutions only have a chance of succeeding if government starts to take these matters seriously, which, at the moment, it shows no sign of doing.

      Saying “leave it to the teachers” (which seems to be the underlying message of both Rose’s article and your comment) is really a way of saying that we should do nothing at all. That might be OK if you think that all is rosy in our classrooms and exam halls, or that this sort of thing is not going to be done anyway, at some point, by big tech (just without any political accountability).

      All the best, Crispin.

    • Tony,

      One further important point about data protection.

      In relation to in-activity data/feedback (i.e. criticism), activity platforms do not need any personal data about the student – a student identifier unique only to that student and that platform will be sufficient for most purposes.
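
      For what it is worth, one common way of minting such an identifier (a sketch with invented names, not a prescription) is for the LMS to derive it from its own internal student ID and a secret specific to each activity platform, so that the platform never sees personal data and no two platforms can correlate their identifiers:

        # Hypothetical sketch: the LMS derives a platform-scoped pseudonym.
        # The activity platform stores only this opaque string.
        import hashlib
        import hmac

        def platform_pseudonym(lms_student_id: str, platform_secret: bytes) -> str:
            """Stable, opaque ID unique to this student *and* this platform."""
            return hmac.new(platform_secret,
                            lms_student_id.encode("utf-8"),
                            hashlib.sha256).hexdigest()

        # The LMS holds a different secret for each activity platform.
        print(platform_pseudonym("student-001", b"secret-for-maths-platform"))
        print(platform_pseudonym("student-001", b"secret-for-reading-platform"))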

      A data interoperability framework (which is what all this boils down to) will distinguish between the LMS, which will hold personal information and will need to be tightly regulated with regard to security, and activity platforms, which will not hold personal data at all (these are much the more common sort of software, and where the real need for innovation lies).

      Crispin.