“Don’t replace levels with re-hashed levels” is the advice given to teachers by Ark’s head of education research, Daisy Christodoulou.
Kicking off the first session in the main hall at last Saturday’s packed researchED conference, which drew more than 700 delegates to London, Ms Christodoulou suggested multiple-choice questions and comparative judgment as better options for assessing pupil progress.
The government scrapped the national levels system, used across all subjects, last September. It said at the time that levels had become an “over-generalised label”.
The report of an independent commission, set up to recommend how teachers should assess pupils without levels, was due to be published at the end of the summer term but has been delayed.
Ms Christodoulou claimed that multiple-choice questions, if used in the right way, could provide meaningful information about pupils’ attainment.
Her ideas caught the attention of delegates. Robert Brooks (@r_brooks1) tweeted: “#red15 MCQ [multiple-choice questions] good for testing knowledge in lessons providing there is challenge. Some great ideas by Daisy.”
Ian Anderson (@IanW_Anderson): “Multiple-choice question design can be much cleverer than I thought… thanks @daisychristo #rED15”
On written essays, Ms Christodoulou argued that the previous approach of checking answers against a set of criteria was a “terrible way” to mark.
“Normally when you mark an essay you are encouraged to take the essay, take the criteria… [and] you go for a tick box approach. That’s a terrible way of marking. Once you have done a number of these, you go back to the first one and realise it was wrong. Because really, when you are marking, you are not comparing yourself against the criteria, you are actually in your head sizing the essay up in comparison to the others. And that is a much more effective way of doing it.”
The comparative judgment approach to marking essays has been advocated and trialled by No More Marking, a company set up by Dr Chris Wheadon; more than 500,000 judgments have been made using its software.
Dr Wheadon said comparative judgment took the “human error” out of marking. “All the research shows that marking suffers from all sorts of problems – the halo effect, for instance, where if you start seeing good pieces of work you keep thinking they are good even if they are not. And there is tunnel vision where you stop using the extremes of the mark scheme.
“Comparative judgment takes away the severity or leniency of some markers because you have more than one person marking each piece.”
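Neither speaker described the statistics behind the software, so the sketch below is a minimal illustration rather than No More Marking’s own method: comparative judgment tools typically combine many pairwise decisions into a single rank order, for example by fitting a Bradley-Terry model. The essays, judgments and update scheme here are invented for illustration (Python).

# Illustrative sketch only (not No More Marking's software): comparative judgment
# tools typically fit a statistical model such as Bradley-Terry to judges'
# pairwise decisions, turning many "which essay is better?" calls into one rank
# order. The essays and judgments below are invented.

import math
from collections import defaultdict

# Each judgment records which of two essays a judge preferred: (winner, loser).
judgments = [
    ("essay_A", "essay_B"),
    ("essay_A", "essay_C"),
    ("essay_B", "essay_C"),
    ("essay_C", "essay_D"),
    ("essay_B", "essay_D"),
    ("essay_A", "essay_D"),
    ("essay_B", "essay_A"),  # judges will not always agree
    ("essay_D", "essay_C"),
]

essays = sorted({e for pair in judgments for e in pair})
strength = {e: 1.0 for e in essays}  # Bradley-Terry "quality" parameters

# Simple minorisation-maximisation updates for the Bradley-Terry model.
for _ in range(200):
    wins = defaultdict(float)
    denom = defaultdict(float)
    for winner, loser in judgments:
        wins[winner] += 1.0
        pair_total = strength[winner] + strength[loser]
        denom[winner] += 1.0 / pair_total
        denom[loser] += 1.0 / pair_total
    strength = {e: wins[e] / denom[e] for e in essays}
    # Fix the scale by normalising the geometric mean of the strengths to 1.
    g = math.exp(sum(math.log(s) for s in strength.values()) / len(strength))
    strength = {e: s / g for e, s in strength.items()}

# Rank essays from strongest to weakest; grade boundaries can then be attached
# to positions in the rank order rather than to marking criteria.
ranked = sorted(strength.items(), key=lambda kv: kv[1], reverse=True)
for rank, (essay, s) in enumerate(ranked, start=1):
    print(f"{rank}. {essay}  (strength {s:.2f})")

Because each essay is judged by more than one person, an individual marker’s severity or leniency has less influence on the final rank order, which is the point Dr Wheadon makes above.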
Ms Christodoulou urged delegates to get “the right software” to ensure they had a reliable series of grades.