Understanding the limitations of the data we hold, and the inferences we will have to draw, is crucial to a GCSE and A-level awards system based on trust, writes Ben White

Teacher assessment is not as accurate as some of us think. If you’re not convinced, feel free to have a go at this competition. You will be provided with mock grades, two years of modular assessment data, an AS grade and a brief pupil biography. All you have to do is predict each pupil’s final outcome. With over 2,000 entries, the modal score is 4 out of 15. The top score is 9.

However, as Prof Becky Allen shared again last week, this limited agreement between exam grades and teacher predictions is rooted not just in our limited capacity to predict the future but also in the limited reliability of exams themselves. Ofqual research explored the likelihood of a second trained examiner awarding the same whole grade as the first marker across A-levels and GCSEs in a range of subjects. It’s not reassuring reading, even in normal times, but nor is it surprising for those of us who teach or manage subjects at the less reliable end (English, history, sociology, for example).

Given the above, perhaps students have much the same chance of a fair grade as in normal years. The cohort will be awarded a similar proportion of each grade as in any year; the difference is that these will be linked to school assessments instead of external exams. Our efforts must therefore be aimed at ensuring a fair relationship between those assessments and final grades.


The salient difference for us as teachers is how it feels. We have been pushed from our familiar role of striving to give our students the best chance of success into one in which we have to make a definitive judgment for each of them. In the end, far as it might be from ideal, most will receive a reasonably fair grade, and most will proceed to the next step in their education and lives as they always have done.

If we are aware of them, any new process can sidestep some of the more obvious pitfalls, and coherence will also make it easier to deal with appeals. These are likely to be more numerous than ever, and we must ensure that they are neither meant nor received as threats to ourselves or our professionalism. When we do allocate grades, therefore, it’s important that we all be aware of certain factors that will influence the validity of the entire effort.

First, relying on vague teacher judgment is inherently unfair: it is likely to betray biases and to be more easily swayed by students’ needs or by pressure from parents. With regard to bias, it is also worth noting that setting additional tasks now to evidence grades seems highly likely to disadvantage those with less access to technology, less self-confidence, or high honesty thresholds.

Second, for some of the same reasons, as well as the limited sample, the unreliability of previous grade boundaries, and the fact that not all students will have experienced the same assessment regime, the grades in our mark-books do not have a high degree of precision. However, a range of quantitative assessment results is a sensible base on which to build.

For these reasons, I will not predict individual outcomes based on what I know of how individual students “would” have worked, nor will I share any grades with students. Instead, my provisional model for allocating grades to our A-level students goes as follows:

  • Sort students from strongest to weakest using assessment data, awarding 50 per cent for a recent mock exam, 25 per cent for last year’s end-of-year exam, and 25 per cent for three other assessments completed in exam conditions by all classes since Christmas;
  • Use a combination of the prior cohort’s results and this cohort’s ALPs A-level targets to allocate grades from top to bottom.
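For readers who like to see the arithmetic, the two steps above can be sketched in a few lines of Python. Everything here (the function names, marks assumed to be out of 100, and the example grade distribution) is an illustration of the model described, not a prescribed implementation.

```python
def composite_score(mock, end_of_year, recent_avg):
    """Weighted score from the three evidence sources (all marks out of 100):
    50% recent mock, 25% end-of-year exam, 25% average of other assessments."""
    return 0.5 * mock + 0.25 * end_of_year + 0.25 * recent_avg

def allocate_grades(students, grade_distribution):
    """Rank students by composite score, then hand out grades from the top down.

    grade_distribution maps grades to the number of each to award (derived,
    in the model above, from prior results and targets), listed best grade
    first; modern Python dicts preserve that insertion order.
    """
    ranked = sorted(students, key=lambda s: s["score"], reverse=True)
    grades = {}
    queue = iter(ranked)
    for grade, count in grade_distribution.items():
        for _ in range(count):
            student = next(queue, None)
            if student is None:
                return grades  # ran out of students before grades
            grades[student["name"]] = grade
    return grades
```

The key design point is that no individual grade is judged directly: a student’s grade depends only on their rank within the cohort and on the distribution allocated to the cohort as a whole.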

Guidelines for how Ofqual will decide grades are not yet clear. The current guidance suggests the grades teachers allocate will be significant but not definitive. Whatever they decide, we have to trust them as much as they have to trust us for our make-do-and-mend system to work.