Opinion

Appeals can’t be allowed to fuel further disadvantage

12 Aug 2021, 14:00

Misinformation could lead to mass dissatisfaction with this year’s results and the best remedy is an appeal to reason, writes Yvonne Williams

The media run-up to this week’s results days has put teachers and students under greater pressure than ever. In an already high-stakes system, the stakes have just kept rising, not least because higher and further education places, apprenticeships and jobs are all in shorter supply as a result of the economic downturn.

In response, parental discontent is on the up. Some parents have already armed themselves against disappointing results by threatening schools with legal proceedings.

Amid all of this, it is all too easy to lose objectivity as the appeals procedure begins to unfold. Regrettably, media coverage has tended to fuel the anxiety, when what is needed is a deeper understanding of the complexity of marking.

The public would like to believe that assessment is straightforward. Parents and students assume (understandably) that there are simply right judgments and wrong ones. What students who appeal are looking for is a re-mark of their work and a higher grade.

Ofqual’s 2021 Guidance makes that sound simple, but a closer look at JCQ’s documentation on appeals reveals that this is far from the case. “There may be a difference of opinion as to the assessed grade without there being an unreasonable exercise of academic judgement,” it states, then goes on to explain that: “The reviewer will only conclude that there has been an unreasonable exercise of academic judgement if the Teacher Assessed Grade was clearly wrong – i.e. there was no basis upon which the grade could properly have been given.”

 

No ‘definitive’ marks

For the first time, the public may have to confront the reality of assessment – that it is not and never has been an exact science. Ofqual’s own blog in 2019 stated: “We talk about ‘definitive marks’, but there are often other plausible marks that are equally correct.”

Somehow, exam boards have to accommodate this reality to establish a standard for an exam series. Throughout a ‘normal’ marking window, this is the job of the principal examiner, who sets and maintains that standard. All other examiners align their judgements with the principal examiner’s, so that every student in a cohort receives the same treatment.

But this year there is no single standard-setter. Instead, the role of principal examiner has been delegated to every head of department in every school, or to each subject lead across each trust, who has carried the standard for their centre. As long as the marking is consistent, and aligned to the national criteria and the exemplar scripts provided by exam boards, it should be deemed fair.

The problem is evident: subject leaders cannot see the bigger picture of national patterns. As a result, schools can only make grades comparatively fair within their own organisations.

By extension, exam boards can only make outcomes fair within their own organisations. In normal times, Ofqual reviews the statistics at the end of the grade awarding process and requires exam boards to make adjustments to grade boundaries if they are out of line with the rest. No one ever really knows what individual grades look like until after the whole marking period.
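
To picture what that alignment involves, here is a minimal sketch, with entirely hypothetical figures, of the underlying idea: if one board’s examiners have marked a little generously, its boundary is set higher so that a comparable share of its cohort reaches the grade. This is a simplification for illustration, not Ofqual’s actual procedure.

```python
import random

# Hypothetical illustration, not Ofqual's real method: align each board's
# top-grade boundary so that roughly the same share of its cohort reaches
# the grade as in the national reference outcome.

random.seed(1)
TARGET_TOP_SHARE = 0.20  # assumed national share achieving the top grade

# Two boards whose examiners marked to slightly different internal standards.
board_a = [random.gauss(58, 12) for _ in range(10_000)]
board_b = [random.gauss(62, 12) for _ in range(10_000)]  # ~4 marks more generous

def aligned_boundary(marks: list[float], target_share: float) -> float:
    """Return the mark that the top `target_share` of this cohort reaches."""
    ordered = sorted(marks, reverse=True)
    return ordered[int(len(ordered) * target_share) - 1]

for name, marks in (("Board A", board_a), ("Board B", board_b)):
    boundary = aligned_boundary(marks, TARGET_TOP_SHARE)
    share = sum(m >= boundary for m in marks) / len(marks)
    print(f"{name}: top-grade boundary {boundary:.1f} -> {share:.1%} of cohort")

# The more generous board ends up with a higher boundary, so outcomes stay
# comparable across boards even though the raw marks are not.
```

In reality the adjustment covers every boundary, not just the top grade, but the principle is the same: comparability is imposed statistically, after marking.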

This year, however, boards can only check the application of grade descriptors, the extent to which schools have used the standardisation materials and how consistently they have applied their marking policies.

 

Inevitable trade-offs

But in truth, for all its apparent rigour, the exam system is not perfect, even under the steadiest of conditions. It is far easier to detect and rectify errors in science and mathematics, for example, than in the humanities and the arts. Examiners can and do disagree, and use a ‘best-fit’ approach that means a final mark is often a trade-off between an answer’s strengths and weaknesses.

The files of work presented by schools this year will be similarly uneven, and also require a best-fit judgement. Departmental moderation will iron out the differences between in-school markers, but an outsider who has not participated in the earlier discussions might still disagree.

Ofqual’s 2017 research indicated that two examiners could legitimately arrive at different marks simply on the basis of how far they ignored or penalised irrelevant material. Would it be fair for an exam board reviewer to impose their own mark on a file on that basis?

To add another twist, Ofqual research also identified 12 per cent of scripts as ‘hard to mark’. These were generally English literature scripts, and they resulted in “examiners in the experiment [being] unable to jointly decide upon a ‘definitive’ review mark”.

For all these reasons, there are marking errors and successful appeals in every exam series. And despite the meticulous scrutiny this year’s results rely on, mistakes will happen.

The difference this year is that, perhaps for the first time, the general public is acutely aware of the system’s flaws. And the greatest mistake would be to succumb to the anxiety over these results and allow appellants to seize the advantage over the rest of their cohort.

 

Your thoughts


One comment

  1. “We talk about ‘definitive marks’, but there are often other plausible marks that are equally correct.”

    Yes. There is no ‘right’ mark, and so no ‘right’ grade. ‘Accuracy’ is impossible to achieve, for there is no ‘right’ against which to measure ‘accurate’.

    But it is possible for assessments in general, and grades in particular, to be ‘reliable’ – that is, to have a high probability that a second, fair re-mark will result in confirmation of the original assessment EVEN THOUGH that second mark is different. To achieve that, we need to be imaginative as to how to determine assessments/grades from an original mark, AND ALSO as regards how appeals work (a rough simulation of this distinction appears after this comment).

    But it’s very easy. There is a hint at one way to do that in the Ofqual blog of 5 March 2019 cited above; the idea is developed here (https://www.sixthformcolleges.org/1412/blog-6/post/31/exam-grades-can-never-be-accurate-but-they-can-be-reliable), and some 20 other possibilities are briefly described here (https://www.hepi.ac.uk/2019/07/16/students-will-be-given-more-than-1-5-million-wrong-gcse-as-and-a-level-grades-this-summer-here-are-some-potential-solutions-which-do-you-prefer/).

    Exams will come back. And when they do, it is essential that the assessments/grades are “reliable, full stop”; not “reliable to one grade either way” – as has been the case with all exams during the 2010s.
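
To make the commenter’s distinction concrete, here is a minimal Monte Carlo sketch of reliability as defined above; the grade boundaries and marker noise are hypothetical assumptions chosen purely for illustration. Two fair markers almost never give the identical mark, yet they can still agree on the grade far more often.

```python
import random

# Illustrative sketch of 'reliability' as defined in the comment above: the
# chance that a fair re-mark confirms the original GRADE, even though the
# second MARK differs. All parameters are hypothetical, not real exam values.

random.seed(1)
GRADE_BOUNDARIES = [20, 35, 50, 65, 80]  # assumed mark thresholds for grades 1..5
MARKER_SD = 4.0                          # assumed spread of a single fair marker
TRIALS = 100_000

def grade(mark: float) -> int:
    """Count the boundaries a mark reaches: 0 = lowest grade, 5 = highest."""
    return sum(mark >= b for b in GRADE_BOUNDARIES)

same_mark = same_grade = 0
for _ in range(TRIALS):
    quality = random.uniform(10, 90)           # the script's underlying quality
    first = random.gauss(quality, MARKER_SD)   # original examiner's mark
    second = random.gauss(quality, MARKER_SD)  # independent fair re-mark
    same_mark += round(first) == round(second)
    same_grade += grade(first) == grade(second)

print(f"identical marks:  {same_mark / TRIALS:.1%}")   # rare
print(f"identical grades: {same_grade / TRIALS:.1%}")  # far more common
```

Narrowing the marker spread, or keeping boundaries away from where scripts cluster, pushes the grade-agreement figure towards the “reliable, full stop” standard the comment calls for.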