Ofsted plans to extend dual visits to secondary schools from September as a test of its new short inspections.
Schools Week revealed in January that the schools watchdog planned to conduct “double visits” during the spring term to see whether different inspectors reached different judgments about the same school.
The paper has now learned that the schools watchdog conducted six “double visit” pilots during the term in primary schools.
Two inspectors visited the schools concurrently, and completed different activities while in the schools. Both then came to independent conclusions and compared whether their differing methodologies had led to different inspection judgments.
In all six pilots the inspectors reached the same conclusions, although Ofsted said it anticipated more variation in a larger sample. It called the results encouraging, but said there were not enough visits to allow statistically valid conclusions.
According to Ofsted, the pilots were designed to ensure it had a reliable way of testing the methodology for its short inspections, which it was confident it now had. It would now work with university researchers to analyse the outcomes and refine the methodology further.
When Schools Week revealed the dual visits earlier this year, Ofsted’s national director for schools, Sean Harford (pictured), acknowledged that the watchdog could not ensure that different inspectors in the same school on the same day would give the same judgment. The pilot testing has not resolved this.
Tom Sherrington, head of Highbury Grove School in north London, said when the dual visit pilots were announced that he expected Ofsted to be frank about the level of reliability uncovered in them.
But Ofsted has now told Schools Week that if inspectors reach different judgments during double visits, this will not be communicated to the school as the process is only designed to test the methodology, not to produce multiple judgments.
A spokesperson said: “The validity of inspection judgments is of the utmost importance to Ofsted. We already have a rigorous quality assurance process to ensure that inspectors make judgments that are consistent and based firmly upon the evidence gathered.
“However, we are always looking at ways to further improve our approach, and this methodology testing forms part of this work.”
During the pilot, inspectors each came up with separate judgments and wrote up their findings separately. Their reports were moderated by a senior HMI before a single judgment was communicated back to the school.
Mr Sherrington said: “They are starting to explore if what the inspectors do has an influence on the outcome, but it does not seem like a robust evaluation. I think it sounds like they are trying to cut corners.
“If they want to do a proper trial they should sign up the school and communicate everything about it to the school – but not have it as a formal outcome – rather than trying to cut a corner and do it as a real inspection. A proper evaluation would need to factor out the data element of the judgment to see if the judgments made in school are critical at all. It does not sound as if they are testing that out.”
He also questioned how much inspectors could tell about a school from one day. “It’s an institutional delusion that they have the skills to judge a school beyond the data. I can honestly say that it takes weeks when you arrive at a school as a headteacher to evaluate a school and its strengths and weaknesses.”