An overreliance on standardised data undermines our efforts to keep schools improving. If we really want to help our pupils, then the best thing we can do is make sure that the people educating them also have the chance to keep on learning.

I recently completed a strategic review for one of the largest academy chains in the country, which involved scrutinising all of their school improvement activity.

After interviewing dozens of people – both within the trust and externally – who are deeply and personally involved in trying to make schools better, I concluded that much of the work being done relies on a fundamentally flawed premise. Measuring schools predominantly on the basis of what pupils achieve is not just questionable: it is counterproductive. It really isn’t that different from measuring firemen by the number of fires they put out.

This is not a knee-jerk reaction based on a single case study. In my position I have to know and apply the findings of credible, high-quality international research. But understanding how and why this situation came about provides an object lesson for government.

RAISEOnline, the school performance data management system used by government and schools, underpins almost everything the people I interviewed are doing. Others have done a more forensic job than me of exposing flaws surrounding the use of data in schools and why much of it is “garbage”.

For example, it is an illusion that National Curriculum subject levels are linear and reflect a growing depth in learning as they progress. They aren’t and they don’t. To progress through them only really means to assimilate more information and express it better, so using them in any way to extrapolate either an individual pupil’s progress or a school’s overall performance is, at best, crude and, at worst, meaningless.

So how did we reach this point? The origin and influence of RAISEOnline should be critical to our understanding. The international drive to use data in education has been driven not by teachers or schools, but by technology companies whose employees are steeped in data. They cannot understand how schools don’t operate in the same way but, more dangerously, they are not in the least interested in understanding why.

RAISEOnline was a comprehensive dataset used by a business to better understand its customers: schools. Reskinned and repurposed, it was sold to a gullible, lavishly funded government quango, which seized on it as a key means to drive change and exert influence. As is the case with the overwhelming majority of educational ICT, RAISE was a clever and successful sales pitch, nothing more.

Now add to this picture how Ofsted performs. It relies heavily on the same, school-derived data. According to the Public Accounts Committee report in January this year: “Of the schools rated ‘inadequate’ in 2012/13, 36 per cent had previously been rated ‘good’ or ‘outstanding’… Of schools inspected by Ofsted in 2012/13, 48 per cent of those which had received some kind of formal intervention improved at their next inspection.”

But the killer point is this: “Meanwhile, 59 per cent of schools that received no formal intervention also improved.”

It is self-evident that an ‘outstanding’ from Ofsted is not a standard; it is merely a licence to continue working uninterrupted for a while.

So what should schools measure instead? Michael Fullan argues that external approaches to instructional improvement are rarely “powerful enough, specific enough, or sustained enough to alter the culture of the classroom and school” and an equally serious researcher, Richard Elmore, asserts that “improvement above all entails learning to do the right things in the setting where you work”.

Discomforting as it is to hear, all schools are different and all great schools are unique.

Research also tells us the greatest impact school leaders can have on pupil outcomes occurs when they are actively involved in promoting teacher learning and development. This has an effect size almost four times greater than that for ensuring an orderly and supportive environment.

Put all this together and the inescapable conclusion for anyone serious about driving school improvement is that you need to ensure heads are predominantly engaged in promoting and participating in teacher learning, while measures of success need to be decided at the individual school level.

Which is why I would like to see each school measuring its own performance in terms of teaching quality. Simply initiating such a conversation in schools where it is most needed is likely to lead to improvements, and will encourage all schools to generate data that (unlike RAISEOnline) has value because it is unique to them.

The onus is then on Ofsted and other external agencies to demonstrate sufficient professional knowledge and understanding to be in a position to quality assure that data.

Joe Nutt is an international educational consultant with an unusual range of experience. While working for the MAT in this article, for example, he also found himself teaching a Pre-U class at Eton. He can be contacted via Schools Week.