Schools could use AI to help write letters to parents, give feedback to pupils and come up with ideas for lessons, new government toolkits have said.
The guidance, published today and drawn up by the Chiltern Learning Trust and Chartered College of Teaching, also says schools should plan for “wider use” of AI – including to analyse budgets and help plan continuing professional development (CPD).
Government said the toolkits are part of a new “innovation drive”, which includes investment to “accelerate development” of AI marking and feedback tools. A new pilot has also been launched to trial tools in “testbed” schools.
The government previously produced guidance on “safety expectations” for the use of generative AI – artificial intelligence that creates content – in education, along with policy papers and research on the subject.
Education secretary Bridget Phillipson said: “By harnessing AI’s power to cut workloads, we’re revolutionising classrooms and driving high standards everywhere – breaking down barriers to opportunity so every child can achieve and thrive.”
Here’s what you need to know on the new toolkits (which can be viewed in full here)…
1. Marking, feedback and ideas for lessons
For teaching and learning, the documents state generative AI may be able to support ideas for lesson content and structure, formative assessments, analysis of marking data and creating “text in a specific style, length or reading age”.
On assessments, the guidance says this could include quiz generation from specific content or offering feedback on errors. AI could also “support with data analysis of marking”.
It can also produce “images to support understanding of a concept or as an exemplar”, exam-style questions from set texts, and visual resources, like “slide decks, knowledge organisers and infographics”, a slide in one of the toolkits adds.
2. Email writing and timetabling
The toolkits also say technology could support cutting down time spent on admin, like email and letter writing, data analysis and long-term planning.
One example given is producing a letter home for parents about an outbreak of head lice.
The toolkit says policy writing, timetabling, trip planning and staff CPD are other areas in which it could be used.
In smaller settings, AI can “help streamline administrative tasks such as rota management and ensuring staff-to-child ratios are optimised in line with statutory requirements, among other uses”.
3. Plan for ‘wider use’, like budget planning and tenders
But leaders have also been told to plan for AI’s “wider use”.
The writers of the reports say some “finance teams [are] using safe and approved” tools to analyse budgets and support planning. Business managers are also using it to generate “tender documents based on a survey of requirements”.
The guidance adds: “By involving all school or college staff in CPD on AI, you can help improve efficiency and effectiveness across operations – ultimately having a positive impact on pupil and student outcomes.”
The guidance suggests “integrating AI into management information systems”. This “can give insights that may not otherwise be possible, and these insights could support interventions around behaviour, attendance and progress”.
4. Adapt materials for pupils with SEND
According to the DfE, the technology “offers valuable tools to support learners with SEND by adapting materials to individual learning needs and providing personalised instruction and feedback”.
For example, it can “take a scene and describe it in detail to those who are visually impaired”.
But specialists and education, health and care plans (EHCPs) should be consulted to “help identify specific needs and consider carefully whether an AI tool is the most appropriate solution on a case-by-case basis”.
Meanwhile, many programmes are multilingual and “could be used with pupils, students and families who have English as an additional language”.
5. Critical thinking lessons, reconsider homework tasks
As the technology becomes more prevalent, “integrating AI literacy and critical thinking into existing lessons and activities should be considered”. For example, AI ethics and digital citizenship could be incorporated into PSHE or computing curriculums.
Some schools and colleges have promoted “AI literacy within their curricula, including through the use of resources provided by the National Centre for Computing Education”.
This ensures youngsters understand how systems work, their limitations and potential biases. Approaches to homework may also have to be considered, focusing on “tasks that can’t be easily completed by AI”.
The guidance added many systems “will simply provide an answer rather than explain the process and so do not contribute to the learning process”.
6. Draw up an AI ‘vision’
The guidance stresses “it’s essential” schools “are clear with staff around what tools are safe to use and how they can use them”. Those included on the list should “have been assessed” and allow schools “control over” them.
Writing “vision statements”, created in consultation with “a wide range of stakeholders”, has been recommended so “you can be clear on the benefits you expect to achieve and how you can do this safely”.
As part of this, schools have been warned about two issues “inherent” in AI systems: hallucinations and bias.
The former are “inaccuracies in an otherwise factual output”. Meanwhile, bias can occur if “there was bias in the data that it was trained on, or the developer could have intentionally or unintentionally introduced bias or censorship into the model”.
7. Transparency and human oversight ‘essential’
Schools should also “consider factors such as inclusivity, accessibility, cost-effectiveness” and compliance with internal privacy and security policies.
A “key consideration” listed in the guidance is whether a tool’s “output has a clear, positive impact on staff workload and/or the learning environment”.
It is also essential “no decision that could adversely impact a student’s outcomes is based purely [on] AI without human review and oversight”.
An example of this is “generating a student’s final mark or declining their admission based on an AI-generated decision”.
The guidance said: “Transparency and human oversight are essential to ensure AI systems assist, but do not replace, human decision-making.”
The toolkits also issue warnings over mental health apps, which they say “must be regulated” by the Medicines and Healthcare products Regulatory Agency (MHRA).
8. Beware AI risks: IP, safeguarding and privacy
There are also broader warnings about using AI.
The guidance notes that pupils’ “work may be protected under intellectual property laws even if it does not contain personal data”.
To protect that work, schools should be certain AI marking tools do not “train on the work that we enter”, and should obtain parental consent.
Copyright breaches can also happen if the systems are “trained on unlicensed material and the outputs are then used in educational settings or published more widely”.
Schools should ensure AI systems comply with UK GDPR rules before using them. If it “stores, learns from, or shares the data, staff could be breaching data protection law”.
Any AI use must also be in line with the keeping children safe in education guidance.
Most free sites “will not be suitable for student use as they will not have the appropriate safeguards in place and the AI tool or model may learn on the prompts and information that is input”.
Child protection policies should “be updated to reflect the rapidly changing risks from AI use” as well.
The guidance also says newsletters and school websites could “provide regular updates on AI and online safety guidelines”. Parental workshops “can extend the online safety net beyond school or college boundaries”.
9. Be ‘proactive’ in educating kids on deep-fakes
The “increasing accessibility of AI image generation tools” also presents new challenges to schools, the guidance added.
“Proactive measures”, like initiatives to educate students, staff and parents about this risk, have been identified as “essential to minimise [this] potential harm”.
Schools have also been told to conduct regular staff training “on identifying and responding to online risks, including AI-generated sexual extortion”. These sessions should be recurring “to address emerging threats”.
“Government guidance for frontline staff on how to respond to incidents where nudes and semi-nudes have been shared also applies to incidents where sexualised deep-fakes (computer-generated images) have been created and shared,” the guidance continued.