Opinion

Why did some results dip last year? Ofqual reveals all…

Last year, on GCSE results day, panicked phone calls came through to the Schools Week office. There were dips – big dips – in the results of schools with outstanding records and well-respected leaders. What was going on?

Many callers said the exams regulator, Ofqual, must have moved the boundaries of English and maths GCSEs, echoing what happened in 2012 when English GCSE results dropped dramatically. Others asked if there was an issue with iGCSEs. Or was it just the law of averages catching up with over-achievers?

For the past three years, Ofqual has published a “volatility” chart showing how many schools had large drops or gains in their results.

The graphs showed a normal pattern: a small shift of a few percentage points for most schools, fewer schools moving more than 10 percentage points, and only a few hundred with substantial drops or gains.

So why the concern?

Big drops in maths and English tend to freak schools out because their overall pass rate is affected. If music GCSE takes a battering it doesn’t affect the headline measure. Headteachers who saw their maths or English results plummet were therefore in shock and wanted answers, even though the overall rate of dips looked normal, in line with previous years.

“It’s because we have more free school meals kids in our school,” some said. Others worried that having pupils with English as an additional language, or doing the EBacc, had ruined their rate.

Schools Week began requesting information from Ofqual to see who was correct. First, we asked for (and received) grade boundary reports flagging how and why decisions were made to change them. Second, we asked for the data behind the volatility charts. We wanted to see if schools with big drops had anything in common. Ofqual, equally concerned to see if certain schools were being treated harshly, worked with us to answer this question. The report published today is the fruit of that labour.

What the report shows is that only one factor correlates significantly with volatility in results: the more a school shifts its exam entries, entering or removing large numbers of pupils, the more likely it is to see large increases or decreases in results. Other factors – pupil demographics or subjects entered – don’t appear to make a difference.

First lesson, then: the more you change exam entries, the bigger the risk your results will change – positively or negatively.

An alarming-sounding part of the report is that schools with more C-grade pupils have more volatile pass rates. But this is not because Ofqual are harsher on those schools. Instead, it’s a quirk of the accountability measures decided by government.

If a school has lots of pupils working around the D/C borderline, then a one-mark change in the grade boundary will affect lots of pupils and can cause a big dip, or rise, in its five A*-C rate.

Schools with lots of A-grade pupils, notably in the independent sector, also face volatility. But theirs is in the A/A* range. Having lots of pupils working around the A-grade boundary makes them sensitive to changes at the higher grades. Big dips in their A grades, however, don’t grab the headlines so much because that score isn’t routinely published.
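To make the arithmetic concrete, here is a rough Python sketch (not drawn from the Ofqual analysis; the mark distributions, boundary values and school sizes are invented for illustration). Raising the C boundary by one mark barely touches a school whose pupils all sit well above it, but knocks a sizeable chunk off the pass rate of a school whose pupils cluster at the C/D borderline; the same high-attaining school is just as exposed once you measure it against the A boundary instead.

```python
# Illustrative only: invented marks and boundaries, not real exam data.
import random

random.seed(1)

def pass_rate(marks, boundary):
    """Share of pupils scoring at or above the grade boundary."""
    return sum(m >= boundary for m in marks) / len(marks)

# Hypothetical mark distributions out of 100 for two schools of 200 pupils:
# one clustered near an assumed C boundary of 60 marks, one clustered
# near an assumed A boundary of 80 marks.
borderline_school = [random.gauss(60, 4) for _ in range(200)]
high_attaining_school = [random.gauss(80, 4) for _ in range(200)]

cases = [
    ("Borderline school, C boundary", borderline_school, 60),
    ("High-attaining school, C boundary", high_attaining_school, 60),
    ("High-attaining school, A boundary", high_attaining_school, 80),
]

for name, marks, boundary in cases:
    before = pass_rate(marks, boundary)
    after = pass_rate(marks, boundary + 1)  # boundary raised by one mark
    print(f"{name}: {before:.0%} -> {after:.0%} "
          f"({(after - before) * 100:+.1f} points)")
```

The one-mark shift moves the borderline school’s pass rate by several percentage points, while the high-attaining school is untouched at the C boundary and only becomes volatile when you look at the A boundary it actually sits on.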

Lesson two: all types of schools have volatile results; it’s just that certain dips (in C grades) affect certain schools (those with lots of C-grade pupils), and those matter more because the government focuses on them.

Headteachers calling us last year were therefore seeing the combined impact of three things. One, volatility caused by moving pupils between exams. Two, genuine decreases in pupil ability. Three, having lots of pupils whose ability straddles the government’s main target. The first two are things heads can change; the last one, not so much.

Thankfully, performance measures are changing. As schools shift to Progress 8 and Attainment 8 this pattern will change. How? That’s not yet clear.

In the meantime, all school leaders should know that if lots of pupils switch exam entries, or lots of pupils work at the borderline of performance measures, they are more likely to see great leaps in results, but also reductions. So, do not despair if results dip, but also – perhaps – do not congratulate yourself too much on gains either.

Your thoughts

4 Comments

  1. Ben Gibbs

    Not despairing at a significant dip is one thing. Imagine losing one’s job because you run a school with lots of borderline attainers affected by grade-boundary shifts like this. Actually, imagine a poor year attracting Ofsted, who decide that because you didn’t expect the dip, you didn’t have sufficient control over your data. So they put you in special measures, and the trust or LA decide to sack you and your vice principals, dissolve the board, and install an inexperienced team. But of course, because they don’t know the school, all they have to go on are the false premises in the Ofsted report, so they spend a year pulling levers and cracking whips, and wonder why everyone – students, parents, teachers, Ofsted – tells them things are getting worse. “How could it possibly be worse?”, they ask. But it is. So the school is rebrokered, but a year’s goodwill and progress has been lost, and £500k has been spent on trying to fix problems that didn’t exist. Just imagine!

  2. Students vary. Within and between years. What schools need to keep an eye on is the upper and lower limits of their own year-by-year variation in exam results. If a particular year’s results are within the limits, then a rise or dip is most likely to be due to natural variation. If they are outside the limits, that’s the point at which they need to look for causes.
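A minimal sketch of that idea, assuming simple control-chart-style limits of roughly two standard deviations around a school’s own recent results (every figure below is invented):

```python
# Illustrative only: invented results, and an assumed two-standard-deviation rule.
from statistics import mean, stdev

# Hypothetical percentage of pupils achieving 5 A*-C in previous years.
history = [58, 62, 55, 60, 57, 61]
this_year = 48

avg = mean(history)
sd = stdev(history)
lower, upper = avg - 2 * sd, avg + 2 * sd  # one common choice of limits

if lower <= this_year <= upper:
    print(f"{this_year}% sits inside {lower:.1f}%-{upper:.1f}%: "
          "most likely natural variation")
else:
    print(f"{this_year}% sits outside {lower:.1f}%-{upper:.1f}%: "
          "worth looking for causes")
```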

  3. Billy Downie

    There are a number of other issues here that need to be considered, but that support your editorial, Laura. I did an analysis of the West Midlands results this year for our Headteacher Board. We felt it was important to have some context, comparing results from 2013 with 2015 (in essence the impact of the first wave of reforms). I found that for schools with an APS on entry of below 27, 63% of schools saw more than a 5% dip (5ACEM). For schools with an APS above 29 (not including grammars), 32% saw a dip of 5% or more. Interestingly, for schools with an APS below 27, 13% saw an improvement of over 5%, whilst that figure was only 9% for schools with an APS of 29 or more (not grammars). There seems to be an emphasis on iGCSE English and ECDL in the article, but it is worth considering the impact of maths also, which saw far less board switching than English. For schools with an APS below 27, the average across the schools in terms of expected progress saw a dip of 6%. For schools with average prior attainment (either between 27 & 28 or 28 & 29) this figure was a dip of 4%. For schools with an APS above 29 this figure was a dip of 3%.

    I also looked at FSM data and there was a similar differential there. Schools with greater than 27% FSM saw a dip in 5ACEM of 8%. This gets progressively less until you reach those schools with between 1 and 6% of pupils eligible for FSM, who saw a dip of only 3% as a result of the reforms.

    I know data isn’t everything but it is sometimes worth broadening the focus and digging a bit deeper.

    • Nick Linford

      Hi Billy – useful stuff. We did look at FSM and the demographic profile of schools as an explanatory factor and the figures were *remarkably* low. Pretty much 0 in all instances. Based on the data I’ve viewed, I would say that the explanatory factor in what you’re seeing is likely to be that the schools you are talking about have more C-grade students in them – which makes the school more vulnerable to dips because more students are clustered at the C grade and this is where the accountability focus lies (as mentioned in the piece), not because Ofqual are harsher on those students. If you ran those figures again using an A/A* measure I suspect they would invert and the low-FSM schools would have bigger dips and spikes.