Artificial intelligence isn’t a threat to education – it’s an opportunity, says Stavros Yiannouka, as long as we’re careful about how we use it
The world is both fascinated by and fearful of artificial intelligence. Very few of us understand the technology behind it and even fewer can clearly articulate its social, political, and economic implications.
In my role as CEO of an education think-tank, I have given the topic a considerable amount of thought.
My first point is that, despite popular fiction, AI does not equate to conscious, intelligent androids poised, Terminator-like, to take over the world – at least not for the foreseeable future. Instead, my view of AI is of a form that is very much with us today: computer-based, algorithm-driven programs capable of making decisions, increasingly self-learning, and perhaps in the not-too-distant future, self-programming. In other words, the kind of AI that is being developed by Google’s DeepMind and others to perform a range of complex tasks, from driving cars to diagnosing cancer.
Secondly, I believe that the future of AI and its impact on humanity is not pre-determined. Technological determinism – the idea that technology is the principal driver of history and we are all mere passengers with a one-way ticket – has been fashionable for quite some time now. Many proponents of this view are the same people who once believed that globalisation was irreversible.
But I believe the future has yet to be written. And there is promise, peril, and everything in between.
AI holds the promise of personalised life-long learning
Education has long suffered from a dearth of evidence-based methods to determine what works and what doesn’t in terms of pedagogical approaches. Big data and powerful analytical algorithms hold the promise that we can finally crack the code of how and when optimal learning occurs.
Moreover if, as we intuitively understand, we have different interests and learn in different ways, then AI holds the promise of personalised life-long learning.
Imagine a world where AI, instead of pushing us towards one more turn at our favourite online game, or one more purchase from our favourite online store, directed us towards the learning we needed to acquire in order to enhance our career potential or pursue our particular creative passion.
I believe the ultimate promise of AI for education is that it can make viable multiple pathways for success for more and more individuals.
Of course we need to be concerned about where AI may ultimately take us. As more and more of our lives migrate online, those lives are being constantly monitored, recorded, and analysed to the point where, as the historian Yuval Noah Harari has speculated, algorithms may get to know us better than we know ourselves. Coupled with our growing understanding of neuroscience, the temptation to harness AI in order to control thought and behaviour could become irresistible.
Again, the future is probably already with us. Social media and gaming companies already employ “nudges” to keep us coming back for more. The developers of the popular game Toon Blast have made an advertisement featuring the Hollywood star Ryan Reynolds, which touts the addictive quality of the game.
The real question isn’t whether digital addiction is desirable; it is to what use we are going to put the addictive potential of AI-powered digital technologies. Are we going to harness it in the service of exercising control, of proscribing thought and behaviour, and promoting uniformity? Or are we going to use it to unlock opportunities for individuals and societies to realise their full potential and pursue alternative pathways to success?
All pathways are possible. We can move towards the promise of AI in education or we can allow ourselves to drift towards the peril. All who are involved in edtech have choices to make.
My prescription is that we in education must take a number of steps to “tame” the dragon of AI.
Firstly, we should make a commitment that we will develop AI in the service of humanity, and not – as I fear is the case now – use humanity simply as the raw material with which to construct AI, with no particular regard for the outcome. Secondly, we should make AI for education available to everyone; we should work with education professionals to ensure that the AI they are developing is designed to facilitate human learning and human flourishing regardless of the socio-economic status of the individual learner. Finally, we should push for global standards and a global treaty to forswear the “weaponisation” of AI.
We have done it for chemical and biological weapons, so why not for AI? And what better time to start this debate than right now?