Progress 8 scores offer a better way of looking at school performance than did the A*-C percentage GCSE pass rate, says Stuart Kime, but be aware of how you interpret them.
Removing the A*-C percentage pass rate statistic from the vernacular of English schooling was an eminently sensible decision. But is the new Progress 8 measure any better? In some ways, yes, but in others, I’m not so sure.
Yes, because it drives a conversation focused on pupil progress; but maybe not, chiefly because of the potential for drawing incorrect inferences from it.
The increased focus on progress is an exciting and important tick in the box for Progress 8. Conversations about children on the C/D borderline are replaced by discussions about monitoring and supporting individual pupils, irrespective of their starting point. For this reason, I am supportive of Progress 8. But is the measure technically capable of performing what it is supposed to do?
Progress 8 has two stated purposes: to measure pupil progress from the end of primary school to the end of secondary school, and to incentivise good teaching among all pupils across eight qualifications. Can it do these things?
For a variety of reasons, not all children will have a key stage 2 score, and not all key stage 4 qualifications count in the calculation of Progress 8. Even if we ignore the ideological drivers behind the exclusion of certain key stage 4 qualifications, the problem that we’re dealing with an incomplete set of data remains.
While this is certainly an issue when looking at data for an individual pupil, it’s not insurmountable; it becomes a huge headache when you want to look at an aggregated Progress 8 score for a school, and want to make accurate judgments about school performance. To what extent can we rely on a score that is derived from incomplete data?
The first problem, then, is that data are missing from the calculation, whether by happenstance or by design. The second, and more serious, problem relates to the use and interpretation of the 95 per cent confidence intervals that surround a school’s score.
Ninety-five per cent confidence intervals (CIs) are defined by the Department for Education as “a proxy for the range of scores within which a school’s underlying performance measure can confidently be said to lie”. This is an incorrect definition of confidence intervals.
CIs are often used in social science analyses as a way of capturing variability in measurements and, from a technical standpoint, they should only be used with random samples drawn from complete sets of data. Progress 8 data are incomplete (at key stage 2 and – due to the exclusion of certain qualifications – at key stage 4) and there’s no random sampling involved. In this instance, there is no technical argument that supports the use of 95 per cent confidence intervals. So should they still be used?
Progress 8 scores and their CIs carry real consequences. They play a part in determining whether a school is above or below the floor standard, and a school will be defined as “coasting” if, over three consecutive years, its score is below -0.25 and the upper band of the confidence interval is below zero.
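The coasting rule described above can be sketched in a few lines of code. This is an illustrative reading of the rule as stated here, not the Department for Education’s official calculation; the function name and the input format are my own.

```python
# Hypothetical sketch of the "coasting" rule as described in the text:
# a school is flagged if, in each of three consecutive years, its
# Progress 8 score is below -0.25 AND the upper confidence limit is below zero.

def is_coasting(years):
    """years: a list of (score, upper_confidence_limit) pairs,
    one per year, covering three consecutive years."""
    return len(years) == 3 and all(
        score < -0.25 and upper < 0 for score, upper in years
    )

# Three years below -0.25 with upper limits below zero: coasting.
print(is_coasting([(-0.4, -0.10), (-0.30, -0.05), (-0.35, -0.02)]))  # True

# One year's interval reaches above zero: not coasting.
print(is_coasting([(-0.4, 0.10), (-0.30, -0.05), (-0.35, -0.02)]))   # False
```

Note that both conditions must hold in every one of the three years; a single wide interval that crosses zero is enough to escape the definition, which matters for the small-cohort effect discussed below.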
If the imprecision of Progress 8 scores and their CIs is understood, and we recognise that they provide a crude, first-order check rather than a definitive answer, there is a place for their cautious use. But will they be used in this way, or will their precision be inflated inappropriately?
CIs also tend to be wider – literally, the lines will be longer – when they relate to small cohorts. Without a deep understanding of CIs, two schools with similar Progress 8 scores may well be judged differently by virtue of the number of children in their key stage 4 cohort: of the two, the small-cohort school with a negative Progress 8 score may well have an interval so wide in each of three consecutive years that its upper limit passes through zero, exempting it from the coasting definition.
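A toy calculation makes the cohort-size effect concrete. The DfE’s published formula differs in its details, but the width of a conventional 95 per cent interval behaves roughly like 1.96 times the standard deviation divided by the square root of the cohort size, so halving the cohort widens the interval. The standard deviation and cohort sizes below are assumed values for illustration only.

```python
import math

def ci_95(score, sd, n):
    """Conventional 95% interval around a mean score: an approximation,
    not the DfE's exact Progress 8 formula."""
    half_width = 1.96 * sd / math.sqrt(n)
    return (score - half_width, score + half_width)

# Two schools with the same Progress 8 score of -0.3 (assumed sd = 1.0):
small = ci_95(-0.3, 1.0, 30)    # cohort of 30 pupils
large = ci_95(-0.3, 1.0, 200)   # cohort of 200 pupils

print(small[1] > 0)  # True: the small school's upper limit crosses zero
print(large[1] > 0)  # False: the large school's stays below zero
```

On identical scores, the small school would escape the coasting definition while the large school would not, which is precisely the asymmetry the paragraph above warns about.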
Progress 8 scores have arrived, and they offer a better way of looking at school performance than did the A*-C percentage pass rate, but I urge real caution in the inferences drawn from them, especially when confidence intervals come into the discussion.
Stuart Kime is director of education at Evidence Based Education