Exams

Ofqual’s second in command leaves after 7 years at watchdog

Dr Michelle Meadows leaves her post for Oxford University professor role

Exclusive


Ofqual’s second-in-command during the 2020 grading fiasco has left after seven years at the watchdog.

Dr Michelle Meadows was deputy chief regulator and executive director for strategy, risk and research. She left her post last month to join Oxford University as associate professor of educational assessment.

Julie Swan has now been appointed as deputy chief regulator alongside her role as executive director for general qualifications. She’s been with Ofqual since it was set up in 2008.

New chief regulator Dr Jo Saxton said Swan’s “excellent leadership skills and valuable contribution” to Ofqual’s work for more than a decade made her “the ideal person to step into this important role”.

She added: “I would like to take the opportunity to thank Dr Michelle Meadows. We are extremely grateful for the enormous contribution her work has made at Ofqual during the past seven years, and we wish her every success in her new appointment, which had been planned for some time.”

Swan said she looks forward to supporting new chief regulator Dr Jo Saxton “as we transition out of the extraordinary arrangements in place during the pandemic”.

Cath Jadhav, director of standards and comparability, has taken over Meadows’ executive director role on an interim basis.

Sean Pearce remains as chief operating officer as does Catherine Large as executive director of vocational and technical qualifications.

Meadows joined Ofqual in 2014 and was appointed as deputy chief regulator in 2016. She was grilled by MPs last year after the government’s U-turn on awarding grades moderated by an algorithm in 2020. Students were instead given centre assessed grades.

She said the grading model was tested “thoroughly” and said she didn’t believe that the algorithm “ever mutated”. It followed prime minister Boris Johnson blaming a “mutant algorithm” for the fiasco.

More recently, Meadows said that “flexibility of thinking [was] required” to make a move to online exams in the future, as has been mooted for several years. She was previously a research director at exam board AQA.

Your thoughts

One comment

  1. During the “grilling by MPs” at the Education Select Committee hearing on 2 September 2020, Dr Meadows also stated “There is a benchmark that is used in assessment evidence that any assessment should be accurate for 90% of students plus or minus one grade. That is a standard benchmark. On average, the subjects were doing much better than that. For A-level we were looking at 98%; for GCSE we were looking at 96%, so we did take some solace from that.”

    Yes, that does say “plus or minus one grade”, and “solace” too. (See question 997, https://committees.parliament.uk/oralevidence/790/pdf/). That may indeed be a “benchmark”. But is that “benchmark” acceptable?

    Dr Meadows has considerable knowledge concerning the (un)reliability of exam grades: this is an extract from page 70 of a report, of which Dr Meadows is a co-author, published by AQA in 2005:

    “However, to not routinely report the levels of unreliability associated with examinations leaves awarding bodies open to suspicion and criticism. For example, Satterly (1994) suggests that the dependability of scores and grades in many external forms of assessment will continue to be unknown to users and candidates because reporting low reliabilities and large margins of error attached to marks or grades would be a source of embarrassment to awarding bodies. Indeed it is unlikely that an awarding body would unilaterally begin reporting reliability estimates or that any individual awarding body would be willing to accept the burden of educating test users in the meanings of those reliability estimates.”

    (https://filestore.aqa.org.uk/content/research/CERP_RP_MM_01052005.pdf)