The number of primary schools set to be deemed “failing” this year by falling below government floor standards is likely to be much higher than first thought, after Nicky Morgan mixed up statistics in a key speech.
The education secretary’s announcement last month that no more than 1 per cent more schools would fall below the new standard this year than did last year was welcomed by heads, who celebrated the news that just a handful of schools would be affected, rather than thousands.
But the government has today confirmed that the rise is capped at 1 percentage point, rather than 1 per cent as first announced by Morgan. It means the number of failing primaries could actually rise by more than 160.
The mistake has led to an offer of lessons in basic statistics for Department for Education staff and criticism from a maths teacher who said she would expect 11-year-olds to know the difference.
The Royal Statistical Society told Schools Week the blunder showed the need for MPs to be statistically literate, and confirmed that Morgan had not attended either of two training sessions offered to politicians.
Hetan Shah, the society’s executive director, said: “The Royal Statistical Society has run training for MPs in basic statistics and we would be happy to put something on for the DfE if they wanted it.”
In response to concerns about harder tests and their impact on schools, the education secretary told the National Association of Head Teachers annual conference in Birmingham last month that the number allowed to fail would be capped at “no more than 1 per cent more…than last year”.
In 2015, 676 schools, about 4 per cent of England’s 16,766 primaries, fell below the standard. Based on these figures, a 1 per cent cap would have meant no more than seven more schools failing this year.
But a cap of 1 percentage point could allow up to 843 schools to fall below the standard, an increase of more than 160.
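For readers who want to check the arithmetic, here is a minimal sketch in Python of the two readings, using the figures quoted above (the variable names are ours, purely for illustration):

    # A minimal sketch of the two readings, using the figures quoted above;
    # names are illustrative, not from any official source.
    TOTAL_PRIMARIES = 16_766   # primary schools in England
    FAILING_2015 = 676         # schools below the floor in 2015 (about 4%)

    # Reading 1: "1 per cent more" schools, i.e. 1% of last year's 676.
    cap_per_cent = FAILING_2015 * 1.01                 # 682.76

    # Reading 2: a rise of 1 percentage point in the failing share,
    # i.e. an extra 1% of all 16,766 primaries on top of last year's total.
    cap_point = FAILING_2015 + 0.01 * TOTAL_PRIMARIES  # 843.66

    print(round(cap_per_cent))   # 683 -> about seven more schools
    print(int(cap_point))        # 843 -> 167 more schools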
The government confirmed the cap was based on a percentage point increase, but declined to explain why both the final draft and delivered versions of Morgan’s original speech referred to 1 per cent rather than 1 percentage point.
Rachel Collander-Brown, who teaches maths to 11 to 18-year-olds at Wandsworth’s Southfield Academy, said she would expect her pupils to grasp the difference between per cent and percentage point by the time they were in year seven.
She added that government staff releasing figures should “know the difference and write about it properly, not in a way that’s easier to understand if that way of writing it is actually wrong”.
“I would definitely expect [year sevens] to know the difference between raising something by 1 per cent or 1 percentage point,” she added.
Collander-Brown said she often taught the difference to pupils analysing statistics in newspaper articles, using employment rates and tax as examples.
“If unemployment was at 5 per cent and went up to 6 per cent, you could say it had risen by 20 per cent or by 1 percentage point, but to say it had gone up by 1 per cent would be wrong.
“Also if tax has gone from 10 per cent to 12 per cent, it would be wrong to say it had risen by 2 per cent.”
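Her distinction is easy to verify mechanically. The short Python sketch below, using hypothetical helper functions of our own naming, computes both measures of change for her two examples:

    # Hypothetical helpers contrasting the two measures of change in a rate.
    def percentage_point_change(old: float, new: float) -> float:
        """Absolute change in the rate, in percentage points."""
        return new - old

    def per_cent_change(old: float, new: float) -> float:
        """Relative change of the rate itself, in per cent."""
        return 100.0 * (new - old) / old

    # Unemployment rising from 5% to 6%:
    print(percentage_point_change(5, 6))    # 1.0  -> a 1 percentage point rise
    print(per_cent_change(5, 6))            # 20.0 -> a 20 per cent rise

    # Tax rising from 10% to 12%:
    print(percentage_point_change(10, 12))  # 2.0  -> two points, not "2 per cent"
    print(per_cent_change(10, 12))          # 20.0 -> the rate is up 20 per cent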
Russell Hobby, the general secretary of the NAHT, said his union had “always assumed the worst case interpretation of the announcement in the first place”, adding that it was “important to put a limit on the number of schools that could fall below the floor”.
“It is also important to see this as a cap, not a target. However, increasingly we believe that it would be inappropriate to use assessment data for floor standards at all this year. The uncertainty around assessment is too great for it to be used for high stakes accountability,” he added.
A Department for Education spokesperson said: “While we are raising standards, the secretary of state has reassured schools that the proportion falling below the floor will not rise by more than 1 percentage point this year.”
Whether it is 1 per cent or 1 percentage point, the system is wrong. The figures are purely arbitrary. It means that the government has decided that however good schools become, they plan to fail a particular number. They can pick any number they wish, and this is what they are doing.
Imagine if we did the same with driving tests. You could be a good driver but the authorities could decide that a set percentage always have to fail. Why would you do that? Would the population feel that was reasonable or fair? To continue the analogy, the driving test could fail you if you did not know what a crankshaft was, even though it had no bearing on your ability to drive. Driving instructors would have to give lessons in crankshafts, and drivers would be more likely to pass the driving test if they knew about crankshafts than if they were good drivers.
Something is seriously wrong with this policy.
Whether it is a 1% or a 1 percentage point cap, there seems to be a naivety in assuming that only 160 more schools will go below the floor target. Given the new requirements of the curriculum, particularly in writing, which is much more challenging, and the shortage of time to bring current Year 6s to the required standard, I think that 160 more schools is an understatement.
There is a fundamental problem with ‘benchmark’ standards. They are OK for ‘training’, where the principle is criterion-referenced ‘mastery’. Training requires memorising the correct responses to a finite number of workplace scenarios. A train driver is a good example. To ‘pass’, the trainee has to give the correct response to a very high (but still arbitrary) proportion of the very large (but finite) list of contingencies set out in the ‘rule book’.
All training is like that, e.g. the driving test (which also includes some physical skills like hill starts and parallel parking), or airline pilot training – rule book plus physical skills like taking off and landing the aeroplane.
The ‘benchmark’ approach is fine for such things.
But education is not the same as training. There is not a finite list of things to be learned in being a good writer. There are an infinite number of possible questions that can be asked in a maths or science exam. The PISA questions are deliberately constructed to use contexts that the candidate will NOT have been specifically taught the ‘answers’ to. That is one important reason why English pupils perform so badly.
Education is about developing and promoting understanding. Good educational tests produce large variations (usually Normal Distributions). Here ‘benchmarks’ make no sense.
At the selective boys’ school I attended as a pupil in the early 1960s, the winner of the ‘House’ athletics competition was decided by team performances on ‘sports day’, plus individual achievement of benchmark ‘standards’ by all pupils. This is a direct parallel with the problem of the government’s approach to SATs: where to place the benchmark in a continuous Normal Distribution of attainment? I recall the ‘putting the shot’ standard. I was a summer-born, physically undeveloped, weedy boy. I could barely lift the shot from the ground, let alone ‘put’ the thing the required distance to get the ‘standard’.
It is the same with SATs, especially KS1, where age differences are hugely significant in developmental terms. Benchmarking has been borrowed from the behaviourist business culture of ‘training’. It is nonsensical when applied to all forms of proper education.
Its ideological imposition is profoundly destructive.
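The arbitrariness described here can be illustrated numerically. In the Python sketch below, attainment is assumed to follow a normal distribution with an invented mean of 100 and standard deviation of 15 (illustrative values, not official figures); small movements of the cut score swing the ‘failure’ rate sharply:

    from math import erf, sqrt

    # Illustrative only: assume attainment ~ Normal(mean=100, sd=15);
    # these parameters are invented for the sketch, not official figures.
    def share_below(cut: float, mean: float = 100.0, sd: float = 15.0) -> float:
        """Proportion of a normal attainment distribution falling below the cut."""
        z = (cut - mean) / sd
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    # Small movements of an arbitrary benchmark swing the "failure" rate widely:
    for cut in (95, 100, 105):
        print(cut, f"{share_below(cut):.0%}")   # 95 37%, 100 50%, 105 63%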
In this case, it’s not a benchmark – which in theory everyone could pass. It’s the old problem of absolute versus relative standards, which can never be solved politically, as the S of S for Education [sic] is nowadays forced to simultaneously express two contradictory aims: “we will always encourage the best” and “but no one will be allowed to fail.” Hence my cynical “sic” as education consists of developing whatever abilities and interests are there, but modern schooling is entirely driven by measurement that requires preset objectives and operationally simplistic marking schemes.
If you don’t agree, try the sample English grammar KS2 SAT paper on the Department’s website.
It’s the hyenas chasing the antelope, picking off the weakest. Morgan’s comment exposes the reality of a system based on bullying by numbers, followed by Ofsted and schools commissioners going in for the kill. The entire paraphernalia of assessment and accountability is exposed here as simply a means of privatising control of schools.
Is anyone surprised by the apparent poor grasp of statistics by the DfE? Time and again the DfE has been caught out in publishing dodgy data. Here are a few examples:
1 The UK Statistics Authority criticised DfE use of PISA data after Gove deliberately ignored an OECD warning not to use the UK’s flawed PISA 2000 stats for comparison;
2 The UK Statistics Authority twice censured Morgan for her misleading use of KS2 literacy and numeracy figures. She and Gibb now get round the criticism by adding the word ‘properly’ when talking about 11-year-olds’ competence in reading, writing and doing sums at the end of the Labour years.
3 The DfE’s continual comparison of improvement rates of sponsored academies with those of all other schools, when rates in the former are calculated from a lower base and so would likely be higher.
4 Again, the DfE compares the achievement of converter academies with that of all other schools, when the former were already mainly high performers.
5 In October 2014, the UK statistics watchdog said a ministerial statement linked to secondary school statistics gave the impression it was part of impartial official data. The statement did not meet ‘professional standards’.
I could go on. Instead, try my quiz to match your grasp of stats with ministers’. http://www.localschoolsnetwork.org.uk/2014/10/can-you-handle-stats-as-well-as-schools-ministers-and-the-dfe-find-out-by-taking-the-lsn-stats-quiz