As Ofqual publishes further details about this summer’s exams, Tim Oates lifts the lid on the ‘black box’ of standardised centre assessment
It’s not a subject that usually grabs headlines, so it’s pleasing to see so many column inches dedicated to standardised centre assessment. Yet for some, the process being used to generate millions of awards for students this summer may still seem like a ‘black box’. If teachers, parents and students are to have confidence in this summer’s final awards, they need more insight.
Since the first inkling that exams would be cancelled, assessment specialists have been working intensively together and with Ofqual to set out a totally new set of procedures for schools and colleges and for exam boards. The most obviously new ingredient is the requirement to draw up a rank order of pupils in each subject. It’s a vital part of the model, and new in this context, but it’s not a new-fangled measure. In fact, it has been the subject of thorough research since the 1930s.
For each subject, schools and colleges will think carefully about the grades which their pupils would most likely have gained had they sat the exams as usual, and they will assign those grades to their pupils. For each subject, they will also place pupils in a rank order. Thinking about grades will help with the rank order, and the rank order will help teachers to reflect on the grades. Of course, we don’t underestimate the workload this represents for schools, so carefully presented guidance makes clear the steps to be taken and the evidence which will best inform these decisions.
It’s a jigsaw, not a Pandora’s Box
The rank order is important because it gives exam boards the basis for taking into account vital data – each school’s previous exam performance and the ability profile of its pupil groups – when determining the final grades to be awarded. In other words, once schools’ grades and rank orders are submitted, exam boards will link these to data on the historical performance of each school and the prior ability of pupils in each subject.
But why not just go with the ‘raw grades’ assigned by teachers? Well, work with thousands of assessors and candidates tells us that teachers are really rather good at relative judgements – ‘this essay is better than that one’, ‘this pupil is likely to get a better grade than that one’ – but that it is much, much harder for them to link individual performance to a national standard, which is necessary for ensuring awards are as fair as possible.
The prior attainment data tells us whether a school is dealing with a group which differs from the previous year in terms of their starting point. Fortunately, we have vast amounts of this data from national assessments in primary and from previous exam series. In turn, the centre history gives us a sense of the typical results any given centre manages to achieve for pupils of a given prior ability. Put together, they enable us to locate students’ grades in the fairest way possible. Each part plays its role in producing a coherent picture. It’s a jigsaw, not a Pandora’s Box.
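For readers who want a more concrete sense of the mechanics, the sketch below illustrates the general idea in code. It is emphatically not the exam boards’ actual model – the real standardisation process is far more sophisticated, adjusting the historical distribution for the prior attainment of this year’s cohort. All names, figures and the function itself are hypothetical, showing only how a rank order plus a centre’s historical grade distribution could, in principle, combine to locate grades.

```python
# Illustrative sketch only: NOT the exam boards' actual standardisation model.
# Assumption: a centre's historical grade distribution (hypothetical figures)
# is applied directly to this year's rank order, best pupils first.

def assign_grades(ranked_pupils, historical_shares):
    """Map a rank order onto a historical grade distribution.

    ranked_pupils: pupil names, highest-ranked first (hypothetical).
    historical_shares: (grade, proportion) pairs, best grade first,
                       with proportions summing to 1 (hypothetical).
    """
    n = len(ranked_pupils)
    results = {}
    cumulative = 0.0
    index = 0
    for grade, share in historical_shares:
        cumulative += share
        # Pupils up to this cumulative share of the cohort get this grade.
        cutoff = round(cumulative * n)
        while index < cutoff and index < n:
            results[ranked_pupils[index]] = grade
            index += 1
    # Any pupils left over by rounding receive the lowest listed grade.
    lowest_grade = historical_shares[-1][0]
    while index < n:
        results[ranked_pupils[index]] = lowest_grade
        index += 1
    return results

# Hypothetical centre: five pupils, and a history of roughly
# 20% grade A, 40% grade B, 40% grade C.
ranked = ["A. Khan", "B. Jones", "C. Smith", "D. Patel", "E. Brown"]
shares = [("A", 0.2), ("B", 0.4), ("C", 0.4)]
print(assign_grades(ranked, shares))
```

Even this toy version shows why the rank order matters so much: the historical distribution fixes how many of each grade are plausible for the centre, while the rank order determines which pupil receives which grade.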
As Ofqual has already stated, there may well be variation between the grades assigned by teachers and the final grades awarded by exam boards. But this variation is not arbitrary or random – it is the product of systematically linking pupils’ performance to a national standard for each subject.
We are fortunate in this country to have an established examination system that can inform our awards based on a long history of accumulated data. We have teachers who are passionate about their subjects and care deeply about all of their students. And in our exam boards and Ofqual we have many of the best minds in educational assessment drawn from around the world. We are all pulling together as a team, and as such, I am confident that this summer’s grades will be as fair as they possibly can be.