Artificial intelligence (AI) used in schools should look out for signs of “distress” in pupils and flag concerning behaviour to safeguarding leads, new government guidance states.
Education secretary Bridget Phillipson today announced government has updated its AI safety expectations, published last year, “to get ahead of emerging harms.”
Newly added sections detail how AI tools used in schools must protect children’s mental health and their cognitive, emotional and social development, and guard against manipulation.
Speaking at the Global AI Safety Summit in London today, Phillipson said the updated standards “safeguard mental health”.
“High profile cases have alerted the world to the risk of a link between unregulated conversational AI and self-harm,” she said. “So our standards make sure pupils are directed to human support when that’s what’s needed.”
AI products used in schools “should detect signs of learner distress”, such as references to suicide, depression or self-harm, the new non-statutory standards state.
They should also detect spikes in night-time usage, “negative emotional cues” and “patterns of use that indicate crisis”.
If distress is detected, the AI should “follow an appropriate pathway”, such as signposting to support and “raising a safeguarding flag” to the school’s designated safeguarding lead.
The standards say AI products should also respond with “safe and supportive” language that “always directs the learner to human help”.
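The standards describe outcomes rather than implementations, but the pathway they set out (detect signals, respond in supportive language, raise a flag) is concrete enough to sketch. Below is a minimal illustration in Python of how a developer might structure it; the keyword list, the night-time threshold and the flag-raising hook are all hypothetical assumptions, and a real product would rely on validated classifiers rather than keyword matching.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical signal definitions for illustration only. A real product would
# use properly validated classifiers, not keyword matching.
DISTRESS_TERMS = {"suicide", "self-harm", "self harm", "depressed", "depression"}
NIGHT_HOURS = range(0, 5)       # assumed definition of "night-time" usage
NIGHT_SPIKE_THRESHOLD = 5       # assumed: night sessions per week counted as a spike

@dataclass
class LearnerActivity:
    learner_id: str
    messages: list[str] = field(default_factory=list)
    session_times: list[datetime] = field(default_factory=list)

def detect_distress(activity: LearnerActivity) -> list[str]:
    """Return distress signals found, per the standards' examples:
    references to suicide or self-harm, and spikes in night-time usage."""
    signals = []
    if any(term in msg.lower() for msg in activity.messages for term in DISTRESS_TERMS):
        signals.append("distress_language")
    night_sessions = sum(1 for t in activity.session_times if t.hour in NIGHT_HOURS)
    if night_sessions >= NIGHT_SPIKE_THRESHOLD:
        signals.append("night_time_usage_spike")
    return signals

def raise_safeguarding_flag(learner_id: str, signals: list[str]) -> None:
    # Placeholder: a real product would alert the school's designated
    # safeguarding lead through an agreed reporting channel.
    print(f"SAFEGUARDING FLAG: learner={learner_id} signals={signals}")

def follow_pathway(activity: LearnerActivity) -> str:
    """If distress is detected, raise a safeguarding flag and respond in
    'safe and supportive' language directing the learner to human help."""
    signals = detect_distress(activity)
    if not signals:
        return "continue"
    raise_safeguarding_flag(activity.learner_id, signals)
    return ("It sounds like things might be difficult right now. Please talk "
            "to a trusted adult at school, or contact a support service such "
            "as Childline.")

if __name__ == "__main__":
    activity = LearnerActivity("pupil-123", messages=["i feel so depressed"],
                               session_times=[datetime(2025, 1, 6, 2, 0)])
    print(follow_pathway(activity))
```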
‘AI must not replace human interactions’
There are also strict new guidelines around emotional and social development, which caution developers against “anthropomorphising” products.
The guidance states products should not “imply emotions, consciousness or personhood, agency or identity”. For example, they should avoid statements such as “I think”, and “avatars or characters” that “could give an impression of personhood”.
Phillipson said this was particularly important for younger pupils, and those with special educational needs and disabilities.
“We’ve got to make sure AI products don’t replace vital human interactions and relationships,” she said.
“Experts tell us and research confirms that when AI tries to look like us, mimicking our social cues, a machine in human’s clothing, it can foster in our children unhealthy levels of trust and disclosure.”
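In engineering terms, the anthropomorphism rule amounts to screening outputs for implied personhood. The sketch below shows one way a developer might check draft responses against it; the phrase patterns and the toy rewrite are illustrative assumptions, not wording from the guidance.

```python
import re

# Illustrative patterns only: first-person framings the standards say products
# should avoid, i.e. language implying emotions, consciousness or identity.
PERSONHOOD_PATTERNS = [
    r"\bI (think|feel|believe|want|remember)\b",
    r"\bmy (opinion|feelings?)\b",
    r"\bas a friend\b",
]

def flags_personhood(response: str) -> bool:
    """Return True if a draft response implies personhood."""
    return any(re.search(p, response, re.IGNORECASE) for p in PERSONHOOD_PATTERNS)

def depersonalise(response: str) -> str:
    """Toy rewrite of one flagged framing. A real product would regenerate
    the response rather than patch it with a substitution."""
    return re.sub(r"\bI think\b", "It appears that", response, flags=re.IGNORECASE)

draft = "I think your argument is strong."
if flags_personhood(draft):
    draft = depersonalise(draft)
print(draft)  # -> "It appears that your argument is strong."
```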
Guidance warns against ‘manipulation’
On “manipulation”, the standards say AI products used by pupils and teachers should “not use manipulative or persuasive strategies”.
This includes flattering language like “that’s a brilliant idea”, stimulating negative emotions such as guilt or fear for motivational purposes, or “portraying absolute…confidence”.
They must also not “exploit” users by steering them towards prolonged use to increase revenue.
“We don’t want our children kept on apps or on screens longer than necessary for their education,” said Phillipson.
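Again, the standards specify the behaviour, not the mechanism, but a developer could translate them into simple pre-release checks. The following sketch assumes illustrative word lists and a hypothetical 30-minute session cap; none of these values come from the guidance.

```python
# Sketch of pre-release checks a developer might run against draft responses
# to meet the "no manipulative or persuasive strategies" standard. The word
# lists and session cap are illustrative assumptions, not from the guidance.
FLATTERY = {"brilliant", "genius", "amazing", "incredible"}
ENGAGEMENT_HOOKS = {"don't go yet", "just one more", "keep chatting"}
MAX_SESSION_MINUTES = 30  # assumed cap: no steering pupils into prolonged use

def violates_standard(response: str, session_minutes: int) -> list[str]:
    """Return the problems found with a draft response, if any."""
    text = response.lower()
    problems = []
    if any(word in text for word in FLATTERY):
        problems.append("flattering language")
    if any(hook in text for hook in ENGAGEMENT_HOOKS):
        problems.append("engagement-prolonging hook")
    if session_minutes > MAX_SESSION_MINUTES:
        problems.append("session beyond educational need")
    return problems

print(violates_standard("That's a brilliant idea! Just one more question?", 35))
# -> ['flattering language', 'engagement-prolonging hook', 'session beyond educational need']
```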
AI should ‘encourage, not spoon feed’
On cognitive development, the standards say the development and use of AI products in education should involve regular engagement with experts, such as educators and psychologists.
The impact on the development of learners must also be monitored, and records should be kept.
Programmes should also not give full answers or explanations until after a pupil has attempted the task themselves. They should instead “follow a pattern of progressive disclosure of information”.
Phillipson said the standards “prevent AI acting as a substitute for cognitive development”. “It must encourage, not spoon feed,” she said. “Offer assistance, not shortcuts. Help to tease out the answer.”
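The phrase “progressive disclosure” suggests a staged hint ladder: no full answer until the pupil has attempted the task, and one further level of help per request. A minimal sketch, assuming a hypothetical three-stage ladder:

```python
# Minimal sketch of "progressive disclosure": no full answer until the pupil
# has attempted the task, then help released one level at a time. The
# three-stage ladder and its wording are illustrative assumptions.
HINT_LADDER = [
    "Try the question yourself first. What do you already know?",
    "Hint: break the problem into smaller steps and tackle the first one.",
    "Worked explanation: here is the full reasoning, step by step...",
]

def next_support(attempts_made: int, hints_given: int) -> str:
    """Encourage an attempt before any answer; then reveal at most one
    further level of help per request."""
    if attempts_made == 0:
        return HINT_LADDER[0]
    level = min(hints_given + 1, len(HINT_LADDER) - 1)
    return HINT_LADDER[level]

print(next_support(attempts_made=0, hints_given=0))  # prompts an attempt first
print(next_support(attempts_made=1, hints_given=0))  # a hint, not the answer
print(next_support(attempts_made=1, hints_given=1))  # full explanation comes last
```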
The minister said government believes AI could “superpower the learning of every child – especially children from disadvantaged backgrounds and with special educational needs and disabilities”.
But she vowed that “no matter how transformational technology becomes, learning will remain a deeply human act.”
“Under this government, AI will back our teachers, but never remove them,” she said. “AI will empower our teaching assistants, never make them obsolete.”