Grade predictions are unreliable – so why do we still use them?

Too many school leaders rely on having their teachers make predictions about pupils’ grades – but no-one’s any good at it, explains Ben White

We recently launched a ‘grade predictions challenge’, offering a bottle of sparkling wine to teachers who could accurately predict 80 per cent of the A-level results of a class.

We provided some pretty solid information – actual AS results, class assessment average, and a mock paper grade. Alongside this was a brief bio for each student.

Despite the fact that grade forecasting is something we teachers do fairly frequently, after more than 650 entries no-one has come close. The average score is just over 4/14; the top score is 10/14.

These inaccurate predictions were essentially made at the last possible moment: they drew on data from March 2017 about exams taken that June.

Rather worryingly, almost a third of the teachers admitted that they are asked to predict GCSE outcomes for students currently in year 7, which is five years away from the actual exams. A number of these teachers are even asked to predict fine grades: i.e. not just a grade 6 but a 6a, 6b or 6c.

We would argue that these predictions are then often used in ways which ignore their limitations; they contribute significantly to workload but don’t significantly improve pupil learning.

Grade predictions often inform individualised interventions, for example, where teachers are asked to work with their borderline or threshold pupils in order to raise their grades. Sometimes these interventions are run by staff with little or no subject expertise – they are focused on encouraging these borderline students to make the leap to the next grade.

At the same time, the headline figures produced by these predictions inform senior leaders and governors about likely outcomes. Is progress being made? Is this year group likely to show a dip compared to previous years? Governors are sometimes briefed on likely Progress 8 scores for a year group, guiding them on whether this year’s headline results are likely to be up or down on the last.

For this to be useful, teacher predictions need to be accurate. The problem is, they aren’t accurate enough for the uses to which they are put.

Unthinking use of this sort of data can lead teachers to spend time either worrying about or working with the “wrong” students. Furthermore, encouraging teachers or pupils to focus on the abstract goal of raising grades may distract from approaches more likely to have an impact – namely teaching all students the ideas and content which they don’t understand and (arguably) supporting effective study behaviours.

I’m not anti-data. Used appropriately it can provide insights which individual teachers and school leaders could not reach alone. However, I am against using data in a statistically illiterate manner. The more precise our predictions, and the further ahead of the actual exam they are made, the less accurate and useful they will be.

Time spent focusing on these speculative numbers and letters is arguably time which could be spent on something more meaningful – or even on not working at all given the current drive to reduce workload, and the number of hours which teachers report working each week.

There is already some fairly large-scale research suggesting that grade predictions (even relatively late in a course) have limited reliability. However, there are still plenty of teachers and leaders who believe that while this may be true in general it isn’t for them.

Some of this research involves predictions shared with UCAS or exam boards, and in both cases there are reasons why teachers might pitch their predictions higher than they really believe.

To get a much broader sample of data and find out exactly how accurate predictions are, we’ll be running a real-time grade predictions challenge on this year’s A-level and GCSE results – do get in touch if you’d like to be involved.

Ben White is a psychology and sociology teacher, and co-founder of Walden Education

Your thoughts

  1. If predictions are so unreliable, how do you know that the final exam results are reliable either? Reliability is about achieving consistency of data – but if you never corroborate different sources of data, then nothing can be said to be reliable. You are merely trusting to the authority of the exams to give the right answer. See my https://edtechnow.net/2017/11/20/curriculum-matters/#slide_69.

    Predictive accuracy is therefore a hallmark of reliability. Calling it “speculation” is merely an admission that our assessment (formative as well as summative) is lamentably inaccurate. When is the profession going to start taking this problem seriously?

    This article is one more example of a common argument which is to be heard frequently in our current education discourse: “because we do it badly, it can’t be worth doing at all”.

  2. christopher morrell

    The article problematises, quite rightly, the statistical reliability of grade prediction. But the greater concern is surely the distortive effect that predictions can have. It appears to be accepted that university admission policy has been negatively affected: lower actual results the consequence for some of unconditional offers. Yet this is surely less serious than the motivational impact of grade prediction on younger pupils. My own fourteen-year-old, entering Year 10 in a new school (where even the medium of instruction is not the same; Welsh > English) was, after just one month, presented with GCSE grade predictions in subjects where he has not yet been tested, nor his books looked at, nor any allowance made for changes of syllabus. This suggests a situation where over-worked, micro-managed teachers are coerced into making illiterate decisions. But what, in a system where challenge is frowned on and popular discussion almost non-existent, is the likely effect on the average parent and the average pupil? Will it be, in a case like my son’s, of ‘go ahead, prove them wrong, John’; or will it be – ‘John is a ‘B’ student, John finds Maths too hard, John’s no good at languages’?
    Can anybody suggest to me a rationale for supporting grade prediction?
    Chris Morrell