Progress 8 is an embarrassment, says Tom Sherrington: a measure far removed from any sense of what the quality of schooling should be

Every time I blog about Progress 8 and the disproportionate impact a few students can have, or the depressing curriculum decision-making that it drives in schools, someone will try to explain the maths to me. As if I don’t fully understand what the measure is. I do; as a piece of data engineering, it’s a masterpiece. Imagine: we can represent the average progress made by every child in an entire secondary school in a single number to two decimal places.

However, I would argue that in Progress 8 (P8) we have created a measure so far removed from what learning is, from what an education is, from any sense of what the quality of schooling should be, that we should be embarrassed. We’re descending into a vortex of delusional algorithmic data-worship.

P8 is deeply flawed at every turn. To begin with, it depends on measuring progress from a baseline that is formed from the average scores from a couple of tests in year 6, scores that are the product of an intense accountability-driven system with significant variation around the country. A small change in the key stage 2 baseline makes an enormous difference to expected outcomes at key stage 4 to the point that, arguably, the validity and reliability of the key stage 2 scores for any given year 7 cohort are the biggest variables in the whole calculation. It’s a house of cards with a very shaky foundation.
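To make that sensitivity concrete, here is a minimal sketch of the pupil-level calculation in Python. The published method divides the gap between a pupil’s Attainment 8 and the national estimate for pupils with the same baseline by ten; the estimated-A8 lookup values below are invented for illustration, but the steep slope is the real point.

```python
# Illustrative only: the estimated-A8 lookup is invented, but the published
# tables have the same shape, with expected A8 climbing steeply as the
# key stage 2 baseline rises.
ESTIMATED_A8 = {103: 44.0, 104: 46.5, 105: 49.0}  # hypothetical KS2 score -> estimate

def pupil_p8(actual_a8: float, ks2_score: int) -> float:
    """Published form: (pupil's A8 minus the estimate for their baseline) / 10."""
    return (actual_a8 - ESTIMATED_A8[ks2_score]) / 10

# The same exam results measured against three slightly different baselines:
for baseline in (103, 104, 105):
    print(f"KS2 {baseline}: P8 = {pupil_p8(45.5, baseline):+.2f}")

# KS2 103: P8 = +0.15
# KS2 104: P8 = -0.10
# KS2 105: P8 = -0.35
```

Identical GCSE results, and the verdict swings from “above average” to “well below”, purely on the baseline.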


At the other end, we create an Attainment 8 (A8) score from an arbitrary combination of subject components, using an arbitrary scale of scores, and assert that this is a measure of a student’s aggregate attainment. This masks a host of major assumptions. There is the illusion of linearity – that the jumps from one grade to the next are of broadly equal size in terms of attainment, even when grade boundaries turn on the decision to award just one mark more or less out of hundreds as we seek to capture some sense of “standards” in an examined performance.
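The aggregation itself is easy to reproduce. In the simplified sketch below the grades are hypothetical and the “best of English language and literature” rule is glossed over; the point scale (grade n is worth n points) and the double weighting of English and maths follow the published rules.

```python
# Simplified A8: grade n is worth n points; English and maths count double.
# (The real rules also pick the best of English language/literature and
# police which subjects may fill which slots; grades here are hypothetical.)

def attainment8(english, maths, ebacc3, open3):
    return 2 * english + 2 * maths + sum(ebacc3) + sum(open3)

# Two scenarios one raw exam mark apart, at the grade 4/5 boundary in maths:
just_below = attainment8(english=5, maths=4, ebacc3=[5, 4, 4], open3=[5, 4, 3])
just_above = attainment8(english=5, maths=5, ebacc3=[5, 4, 4], open3=[5, 4, 3])

print(just_below, just_above, just_above - just_below)  # 43 45 2
```

One mark out of hundreds becomes two whole points of “attainment”, doubled because it happened in maths.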

Ultimately, all exam grades are nothing more or less than bell-curve markers. Essentially, we are simply gauging where our students lie in a national ranking exercise compared with where they started. And that is in a context where, by design, roughly 30 per cent of students must fail to achieve “pass” grades – they’re not all allowed to succeed; that’s how it works.
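A deliberately crude model of that ranking exercise, with invented marks: hold the pass rate fixed and the “boundary” is simply a percentile cut on the cohort. Comparable outcomes is more sophisticated than this in practice, but the underlying logic is the same.

```python
import numpy as np

# Invented raw marks for a national cohort; the boundary falls wherever it
# must to preserve a fixed pass rate.
rng = np.random.default_rng(0)
marks = rng.normal(loc=100, scale=20, size=10_000)

pass_rate = 0.70  # roughly 30 per cent below a "pass", as above
boundary = np.percentile(marks, 100 * (1 - pass_rate))

print(f"boundary: {boundary:.1f} marks; passing: {(marks >= boundary).mean():.0%}")
```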

The P8 measure is riddled with arbitrary elements: the double weighting of maths and English – with the weird “choose your best from language and literature”; the exclusion of sociology from counting on the same footing as geography; the third-class status of arts subjects that only count in one “bucket”. And then we average it all out. This means that five students with a -2.0 score and five with +2.0 are no different to ten with a score of 0.0.

We’re not interested in the spread or the profile of scores – just the average, outliers and all.
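To see how completely the mean hides the profile, here is the example above in a few lines of Python, alongside the spread the measure throws away:

```python
from statistics import mean, stdev

polarised = [-2.0] * 5 + [2.0] * 5   # five pupils at -2.0, five at +2.0
uniform = [0.0] * 10                 # ten pupils at 0.0

print(mean(polarised), mean(uniform))    # 0.0 0.0 -- identical school P8
print(stdev(polarised), stdev(uniform))  # ~2.11 0.0 -- very different schools
```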

It’s here that the great data machine loses credibility, even on its own terms. Many schools have confidence intervals several times bigger than the scores themselves; we kid ourselves that 0.3 must be “better” than 0.2 or even 0.1 when, within that margin of error, it may well not be. Not in terms of real learning, real achievement and real progress – whatever that means.
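The published intervals have the familiar form: score ± 1.96σ/√n, where σ is the pupil-level spread of scores nationally and n is the cohort size. A rough sketch, assuming a σ of about 1.2 (an order-of-magnitude figure for illustration, not the official value for any particular year):

```python
from math import sqrt

def p8_interval(score: float, cohort: int, sigma: float = 1.2) -> tuple[float, float]:
    """95% CI in the published style: score +/- 1.96 * sigma / sqrt(cohort).

    sigma is an assumed national pupil-level spread, not the official
    figure for any given year.
    """
    half = 1.96 * sigma / sqrt(cohort)
    return score - half, score + half

lo, hi = p8_interval(0.30, cohort=120)
print(f"P8 = +0.30, 95% CI ({lo:+.2f}, {hi:+.2f})")  # (+0.09, +0.51)
```

On these numbers, a school at +0.1 or +0.2 is statistically indistinguishable from one at +0.3.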

There are numerous other plausible rules and algorithms we could devise to combine subjects that would lead to very different A8 scores, rank orders and P8 scores. This tells us that there is nothing that is inherently “true” about P8. It does not measure anything with objective, intrinsic meaning; it’s an arbitrary construct with a loose association to the learning journey our students go on – on average.
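One way to see this: invent two pupils and two equally arbitrary weighting rules, and watch the rank order flip. Everything below is hypothetical.

```python
# Two invented pupils and two equally defensible weighting rules.
pupils = {"A": (4, 4, 9), "B": (6, 6, 3)}  # (english, maths, arts) grades

def core_weighted(eng, maths, arts):
    return 2 * eng + 2 * maths + arts      # P8-style: double English and maths

def arts_weighted(eng, maths, arts):
    return eng + maths + 2 * arts          # an equally arbitrary alternative

for label, rule in (("core-weighted", core_weighted), ("arts-weighted", arts_weighted)):
    scores = {name: rule(*grades) for name, grades in pupils.items()}
    print(label, sorted(scores, key=scores.get, reverse=True), scores)

# core-weighted ['B', 'A'] {'A': 25, 'B': 27}
# arts-weighted ['A', 'B'] {'A': 26, 'B': 18}
```

Same exam results, different rule, opposite ranking. Neither ordering is more “true” than the other.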

Why not ditch this fetish for made-up aggregated numerical measurement and try to develop more intelligent, more nuanced qualitative and quantitative ways to determine the quality of the educational experiences and outcomes we actually value?

Tom Sherrington is a former head and education consultant