Exams

Ofqual’s second in command leaves after 7 years at watchdog

Dr Michelle Meadows leaves her post for Oxford University professor role

Exclusive

Ofqual’s second in command during the 2020 grading fiasco has left after seven years at the watchdog.

Dr Michelle Meadows was deputy chief regulator and executive director for strategy, risk and research. She left her post last month to join Oxford University as associate professor of educational assessment.

Julie Swan has now been appointed as deputy chief regulator alongside her role as executive director for general qualifications. She’s been with Ofqual since it was set up in 2008.

New chief regulator Dr Jo Saxton said Swan’s “excellent leadership skills and valuable contribution” to Ofqual’s work for more than a decade made her “the ideal person to step into this important role”.

She added: “I would like to take the opportunity to thank Dr Michelle Meadows. We are extremely grateful for the enormous contribution her work has made at Ofqual during the past seven years, and we wish her every success in her new appointment, which had been planned for some time.”

Swan said she looked forward to supporting new chief regulator Dr Jo Saxton “as we transition out of the extraordinary arrangements in place during the pandemic”.

Cath Jadhav, director of standards and comparability, has taken over Meadows’ executive director role on an interim basis.

Sean Pearce remains as chief operating officer, as does Catherine Large as executive director of vocational and technical qualifications.

Meadows joined Ofqual in 2014 and was appointed as deputy chief regulator in 2016. She was grilled by MPs last year after the government’s U-turn on awarding grades moderated by an algorithm in 2020. Students were instead given centre-assessed grades.

She said the grading model was tested “thoroughly” and that she did not believe the algorithm “ever mutated”. This followed prime minister Boris Johnson blaming a “mutant algorithm” for the fiasco.

More recently, Meadows said that “flexibility of thinking [was] required” to make a move to online exams in the future, as has been mooted for several years. She was previously a research director at exam board AQA.

Your thoughts

One comment

  1. During the “grilling by MPs” at the Education Select Committee hearing on 2 September 2020, Dr Meadows also stated “There is a benchmark that is used in assessment evidence that any assessment should be accurate for 90% of students plus or minus one grade. That is a standard benchmark. On average, the subjects were doing much better than that. For A-level we were looking at 98%; for GCSE we were looking at 96%, so we did take some solace from that.”

    Yes, that does say “plus or minus one grade”, and “solace” too. (See question 997, https://committees.parliament.uk/oralevidence/790/pdf/). That may indeed be a “benchmark”. But is that “benchmark” acceptable?

    Dr Meadows has considerable knowledge concerning the (un)reliability of exam grades: this is an extract from page 70 of a report, of which Dr Meadows is a co-author, published by AQA in 2005:

    “However, to not routinely report the levels of unreliability associated with examinations leaves awarding bodies open to suspicion and criticism. For example, Satterly (1994) suggests that the dependability of scores and grades in many external forms of assessment will continue to be unknown to users and candidates because reporting low reliabilities and large margins of error attached to marks or grades would be a source of embarrassment to awarding bodies. Indeed it is unlikely that an awarding body would unilaterally begin reporting reliability estimates or that any individual awarding body would be willing to accept the burden of educating test users in the meanings of those reliability estimates.”

    (https://filestore.aqa.org.uk/content/research/CERP_RP_MM_01052005.pdf)