Ministers are being urged to clarify how schools should check the online behaviour of prospective staff after some schools asked teaching applicants for all their social media usernames, pseudonyms and “any websites you are involved with…or named on”.
Updated safeguarding guidance last year states schools “should consider carrying out an online search” on shortlisted candidates, with anything it uncovers explored during interviews.
It followed the 2021 murder of Sarah Everard in south London – and her killer’s use of social media – as well as more online issues cropping up in Teacher Regulation Agency (TRA) cases.
An application form for Chesham Grammar School, in Buckinghamshire, asks teacher candidates to provide account names and handles for “all of your” social media accounts, including any under a nickname or pseudonym.
Applicants are also told to provide “any websites you are involved with, in or featured on or named on” and “any other publicly available online information about you of which the school should be made aware”.
Trust searches for info that might hurt ‘reputation’
The recruitment policy at Red Kite School Trust, which runs the school, states searches are only conducted for shortlisted candidates to provide the appointment panel with any information that may be relevant to their suitability or “have an impact on the trust’s reputation”.
The school said the form, provided by its lawyers, was now being reviewed, as was the way the [safeguarding] guidance was being interpreted.
Arnold House School, a prep school in London that uses the same wording on application forms, did not respond to a request for comment.
Dr Aimee Quickfall, who runs initial teacher training at a northern university, said such policies could make the profession “even less attractive when the first experience is having your digital identity trawled”.
“My worry is what will come up is things that should never come into that decision-making process. Is that going to cloud people’s judgements?”
The Recruitment and Employment Confederation has already raised with government officials the “lack of practicality” in the updated Keeping Children Safe in Education (KCSIE) guidance.
Leaders want further advice on approach
Geoff Barton, the general secretary of the school leaders’ union ASCL, said employers should decide how to use the guidance based on HR and legal advice. But he said further advice on a recommended approach would be helpful.
The Department for Education, in response to a consultation on the changes, said it wanted recruiters to “use their professional curiosity and use a search engine as another tool that provides more information about the suitability of an individual to work with children”.
Schools should use “professional judgement” in considering any information that comes to light, it added.
Seventy per cent of respondents to the consultation agreed the plans to suggest schools consider online searches would be helpful, while just 15 per cent disagreed.
The department said searches might help to identify any incidents or issues that have happened and were publicly available online, which the school might “want to explore” during interviews.
It follows “increasing prevalence” of online behaviour in TRA hearings. Primary teacher Thomas Heayel was banned last year after posting inappropriate pictures of himself and offering to sell sexual services online.
Ofsted checks how schools manage safe recruitment, but it does not have specific rules on what level of checks it expects.
‘Where is the line?’
George Mair, director of EdEx Education Recruitment, said checks before employment could protect staff from pupils finding information about them online, as such content could be made private or removed before they start.
“But where is the line?” he added. “How far do they take investigating someone’s personal life out of school before it becomes too invasive? It should be clear-cut guidance on what schools should and shouldn’t do.”
Dozens of multi-academy trusts now pay third-party companies to search applicants’ social media (see box).
Law firm Browne Jacobson suggested five years was a “sensible timeframe” for any search of popular social media platforms. Things to look out for included inappropriate or offensive behaviour, discrimination and drug or alcohol misuse.
HCR Law, another law firm, said the search should be done by someone not involved in any decisions on appointments to minimise the risk of discrimination and unconscious bias.
“Clearly defined parameters” were necessary to ensure “a consistent approach”.
A YouGov survey in 2017 found one in five employers had turned down a candidate because of their social media activity.
Red Kite’s policy states that information from online searches is held for as long as a successful candidate is employed, and for six months for applicants who did not get the post.
The Department for Education was asked for comment.
The companies paid to search your social media
Several companies provide online reports of prospective employees for less than £50.
Security Watchdog, part of outsourcing firm Capita, charges £30.
On its website it says its reports “complement traditional employment screening” and can help to find online activity that could damage a brand or reputation, leaks of confidential company information, or behaviour that encourages illegal activities.
It says it should only be used as part of the pre-employment process and should not be used for general candidate assessment.
Social Media Check, which provides reports based on automated software for about 40 multi-academy trusts, charges between £30 and £35 for each report, with a same-day turnaround.
The school can ask for checks on public or private profiles, but the applicant must first consent. The company said its consent rate was about 98 per cent for education requests.
The school then receives a report that highlights posts that might include hate speech, swearing, violent images and toxic language. The software can also scan pictures for words.
Nic Whelan, a director at the company, said the service helped to negate “risks with subjectivity and unconscious bias” and was secure and compliant.
‘My retweet about a choc ice was flagged as hate speech’
Reporter Samantha Booth signed up for an online search with Social Media Check. Here’s what it found …
About 3 per cent of my 13,000 Twitter and Facebook posts were flagged up as potential risks, including swearing, hate speech and toxic language.
I was shocked. I’ve been on social media for more than a decade, but I didn’t recognise myself in those definitions.
But the results show the potential false flags that can be thrown up.
For instance, some posts flagged as “toxic language” included factual tweets about murder and other criminal investigations that I covered as a local newspaper reporter.
A quote from a council meeting during which someone asked for “legal sites for gypsies and travellers” in 2016 was potential “hate speech”.
It also brought up a post I had retweeted in 2018 about “a man having a choc ice for breakfast”. Choc ice can be a racial slur, so I guess it was flagged for that reason.
It also pointed to pictures from the Women’s March I joined in London in 2017 after Donald Trump was elected as US president. A picture I posted showed a placard stating: “This pussy grabs back.”
There was nothing hugely concerning in my report, but it provided a real insight into what my digital footprint looks like.
Ultimately, I think the process is reliant on a human being looking at the context of posts. Mine wasn’t shared with my employer, either.
Perhaps I would feel a bit uneasy about some of my immature, younger-self posts being viewed – but at least they didn’t scroll through all my other social media posts of nights out, holidays and the like.
It has reminded me, however, of the importance of spring cleaning your social media every few years.