Progress 8 is an embarrassment, says Tom Sherrington, a measure far removed from any sense of what the quality of schooling should be

Every time I blog about Progress 8 and the disproportionate impact a few students can have, or the depressing curriculum decision-making that it drives in schools, someone will try to explain the maths to me. As if I don’t fully understand what the measure is. I do; as a piece of data engineering, it’s a masterpiece. Imagine: we can represent the average progress made by every child in an entire secondary school in a single number quoted to two decimal places.

However, in doing so, I would argue that in Progress 8 (P8) we have created a measure that is so far removed from what learning is, from what an education is, from any sense of what the quality of schooling should be, that we should be embarrassed. We’re descending into a vortex of delusional algorithmic data-worship.

P8 is deeply flawed at every turn. To begin with, it depends on measuring progress from a baseline formed from the average of scores in a couple of tests in year 6 – scores that are the product of an intense accountability-driven system with significant variation around the country. A small change in the key stage 2 baseline makes an enormous difference to expected outcomes at key stage 4, to the point that, arguably, the validity and reliability of the key stage 2 scores for any given year 7 cohort are the biggest variables in the whole calculation. It’s a house of cards with a very shaky foundation.
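
To make the mechanics concrete, here is a minimal sketch of the pupil-level calculation in Python. The expected-A8 lookup is hypothetical – the real figures come from the DfE’s published attainment estimates for each cohort – but the shape of the calculation, and its sensitivity to the baseline, is the point:

```python
# Minimal sketch of the pupil-level Progress 8 calculation.
# EXPECTED_A8 is a hypothetical lookup; the real values come from the
# DfE's published KS2-to-KS4 attainment estimates for each cohort.
# KS2 "fine level" (average of the reading and maths tests) -> expected A8 points
EXPECTED_A8 = {4.7: 42.0, 4.8: 44.5, 4.9: 47.0, 5.0: 49.5}

def progress8(ks2_fine_level: float, attainment8: float) -> float:
    """Pupil P8 = (actual A8 points - expected A8 for the baseline) / 10."""
    return (attainment8 - EXPECTED_A8[ks2_fine_level]) / 10

# Identical exam results; the verdict flips on a tiny shift in the baseline:
print(progress8(4.8, 46.0))  # 0.15 - "above average" progress
print(progress8(4.9, 46.0))  # -0.1 - "below average" progress
```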

At the other end, we create an Attainment 8 (A8) score from an arbitrary combination of subject components, using an arbitrary scale of scores, and assert that this is a measure of a student’s aggregate attainment. This masks a host of major assumptions. There is the illusion of linearity – that the jumps from one grade to the next are of broadly equal size in terms of attainment, even when grade boundaries turn on the decision to award just one mark more or less out of hundreds as we seek to capture some sense of “standards” in an examined performance.
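
For anyone who hasn’t watched the machinery turn, the construction runs roughly like this – a sketch with illustrative grades, following the published slot rules (English and maths double-weighted, three EBacc slots, three “open” slots):

```python
# Sketch of the Attainment 8 construction; the grades (9-1) are illustrative.
english = max(7, 6)      # best of language and literature, double-weighted
maths = 8                # double-weighted
ebacc = [7, 6, 6]        # three slots from sciences, humanities, languages
open_slots = [8, 5, 4]   # three slots from any approved subject, arts included

a8 = 2 * english + 2 * maths + sum(ebacc) + sum(open_slots)
print(a8)  # 66, out of a maximum of 90
```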

Ultimately, all exam grades are nothing more or less than bell-curve markers. Essentially, we are simply gauging where our students lie in a national ranking exercise compared with where they started. That is in a context where, by definition, 30 per cent of students must fail to achieve “pass” grades – they’re not all allowed to succeed; that’s how it works.

The P8 measure is riddled with arbitrary elements: the double weighting of maths and English – with the weird “choose your best from language and literature”; the exclusion of sociology from counting on the same footing as geography; the third-class status of arts subjects that only count in one “bucket”. And then we average it all out. This means that five students with a -2.0 score and five with +2.0 are no different to ten with a score of 0.0.

We’re not interested in the spread or the profile of scores – just the average, outliers and all.
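
The equivalence is trivial to demonstrate with two illustrative cohorts:

```python
# Two very different cohorts, one indistinguishable school-level score.
from statistics import mean, stdev

polarised = [-2.0] * 5 + [2.0] * 5   # five pupils far behind, five far ahead
uniform = [0.0] * 10                 # ten pupils exactly on expectation

print(mean(polarised), mean(uniform))    # 0.0 0.0 - the published scores
print(stdev(polarised), stdev(uniform))  # ~2.11 vs 0.0 - the profiles we discard
```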

It’s here that the great data machine loses credibility, even on its own terms. Many schools have confidence intervals for their scores many times bigger than the scores themselves; we kid ourselves that 0.3 must be “better” than 0.2 or even 0.1 when, within the error, it may well not be. Not in terms of real learning, real achievement and real progress – whatever that means.
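
A back-of-the-envelope version of that error bar, with illustrative numbers (the DfE’s published method differs in detail, but the order of magnitude is the point):

```python
# Rough 95 per cent confidence interval for a school-level P8 score.
# All three inputs are illustrative assumptions, not real school data.
from math import sqrt

p8_score = 0.2   # the published headline figure
spread = 1.2     # assumed standard deviation of pupil-level P8 scores
cohort = 150     # pupils in the year group

half_width = 1.96 * spread / sqrt(cohort)
print(f"{p8_score} +/- {half_width:.2f}")  # 0.2 +/- 0.19
# An interval running from roughly 0.01 to 0.39: the gap between
# "0.2" and "0.3" schools dissolves into noise.
```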

There are numerous other plausible rules and algorithms we could devise to combine subjects that would lead to very different A8 scores, rank orders and P8 scores. This tells us that there is nothing that is inherently “true” about P8. It does not measure anything with objective, intrinsic meaning; it’s an arbitrary construct with a loose association to the learning journey our students go on – on average.
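
One hypothetical illustration: double-weight English and maths, as the official rule does, and one pupil comes out ahead; weight every subject equally, which is at least as defensible, and the other does:

```python
# Two defensible aggregation rules, two different rank orders (grades illustrative).
pupil_a = {"eng": 9, "maths": 9, "rest": [4, 4, 4, 4, 4, 4]}
pupil_b = {"eng": 4, "maths": 4, "rest": [7, 7, 7, 7, 7, 7]}

def official(p):   # the actual A8 rule: English and maths double-weighted
    return 2 * p["eng"] + 2 * p["maths"] + sum(p["rest"])

def flat(p):       # an equally arguable rule: every subject counts once
    return p["eng"] + p["maths"] + sum(p["rest"])

print(official(pupil_a), official(pupil_b))  # 60 58 - pupil A "better"
print(flat(pupil_a), flat(pupil_b))          # 42 50 - pupil B "better"
```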

Why not ditch this fetish for made-up aggregated numerical measurement and try to develop more intelligent, more nuanced qualitative and quantitative ways to determine the quality of the educational experiences and outcomes we actually value?

Tom Sherrington is a former head and education consultant

Your thoughts

2 Comments

  1. Chris Oskis

    The standard deviation – the bell curve – is disgusting. The exam boards use it to calculate grade boundaries, and they have the power to move those boundaries, which in turn controls how many pupils pass or fail. So much for every child mattering. It’s a mathematical device that stops or hinders hundreds of children from achieving anything – after seven years of education. The exam boards should be shamed for this flexibility in the marking and in the percentages of passes and fails.
    If every child across the country passed at grade C, the exam boards would – and regularly do – move the goalposts, rendering those who have just scraped a pass as failures.
    Schools are constantly fighting the KS2 data at one end and, at the other, the exam boards’ flexibility to adjust grade outcomes.
    It’s unfair. It also means that teachers who work damn hard with difficult pupils have even more of a fight to get them to pass, on top of the professional pressure from headteachers for success rates.
    Teachers try to play the numbers game, I have no doubt, while the rules change in an ever more pressured job. It seems this numerical debacle is another way to control the social progress of a child – social engineering gone mad.

  2. Many headteachers would agree with Tom here, as do I. The system is counterproductive for anyone wanting to educate our children. But will headteachers and headteacher organisations do anything about it?

    As Tom points out, the system is about sorting students on a bell curve and deciding which children and schools fail. Headteachers are focused on making sure their pupils are in the higher reaches of the bell curve. It is a bit like a group of people trying to climb out of a swamp. You can only get out of the swamp by standing on other people and pushing them into the mud. You only get above the water line by drowning others. As a headteacher you cannot make a principled stand on this because other headteachers might then stand on you and drown you. So it is a fight for survival of the fittest.

    The headteacher organisations could take a lead on this for the collective good. But they don’t. In fact, they make the situation worse by supporting systems like PiXL – methods of helping you climb on the backs of your colleagues – while charging you lots of money to do so.

    The government’s role seems to be to act as whip master, exhorting headteachers to climb on each other’s backs even faster.

    Ofqual’s role is to make the slope out of the swamp even steeper, so that schools now start the climb with younger and younger students.

    So, what is the solution? No educationalist wants to be in this swamp! We can only get out of this mess by some form of collective and consensual agreement on what education should be about, and how we achieve that for all our young people. Saying we don’t like the swamp is a starting point, but it will not make it go away.