News

Morgan says ‘rate our success by PISA’ – but is it a swizz?

A potential shake-up of the PISA rankings caused by a shift to computerised tests could undermine the government’s assertion that its exam reform success should be judged on the UK’s position in the tables.

The influential Programme for International Student Assessment (PISA) tests, taken every three years, provide a global snapshot of how 15-year-olds are performing in maths, science and reading.

Nicky Morgan said last year she would use the rankings to judge the success of her department’s extensive exam reforms. The Conservative manifesto also states PISA tables will be used to show educational progress.

But a study by Dr John Jerrim of the Institute of Education suggests this year’s results face a major shake-up as the majority of countries took tests on a computer instead of on paper.

His analysis of 32 countries that took both computer and paper tests as part of a trial run in 2012 found “substantial differences” in results. A third of countries differed by more than 10 test points – which Dr Jerrim said had previously been described as a substantial amount.

The country most affected was Shanghai, which dropped by 50 points – the equivalent of more than an entire year of schooling.

Shanghai has long been held up as a beacon of excellence by the schools minister, Nick Gibb.

Its PISA success led the government to encourage Shanghai maths teaching and fund a teacher exchange programme with the region.

Other countries saw their results rise as a consequence of the computer tests. Scores for the United States rose by 17 points and for Brazil by 25 points.

Dr Jerrim said that the notable differences highlighted issues that would be important when it came to interpreting future results.

He told Schools Week: “Whether they will be comparable is the million-dollar question. But it’s too early to say.”

The UK was not involved in the 2012 study and Dr Jerrim said it would be too much of a leap to draw conclusions from other countries about how it would fare in computer tests.

But he said it did pose questions – particularly as the government is set to use the rankings from 2018 to judge if its reforms to GCSEs and A-levels have had the desired effect.

He said explanations could include the different cognitive processes needed to perform tasks on computers rather than paper, or that pupils can no longer use test-taking strategies such as leaving the most challenging questions to tackle at the end.

Dr Jerrim added that the Organisation for Economic Co-operation and Development (OECD) – the body that administers the tests – could use a statistical adjustment to smooth out what he called the “mode effect”.

Peter Adams, senior project manager at PISA, told Schools Week the organisation was analysing its own data and researching options for statistical adjustments. He said it was not yet at a stage to comment further.

But he did stress that the 2012 survey was not designed to “address the questions asked by Dr Jerrim nor does it support his conclusions”.

He said it was done purely to inform the development of vital computer-based assessment for future PISA tests.

A Department for Education spokesperson said the UK did not take part in the optional computer-based maths assessment in 2012, adding: “While this analysis raises interesting questions, it does not and cannot provide any insights into the PISA 2015 findings for England or the rest of the UK.”

 

Your thoughts

One comment

  1. Janet Downs

    Interesting that Shanghai would have dropped 50 points, which equates to more than a year of schooling (40 PISA points are allegedly worth one year). Add to that the fact that 25% of Shanghai’s cohort was missing, and the score would likely fall further.
    That said, computer tests can distort results because pupils can’t go back and attempt questions they found difficult and would rather have left until other questions had been tackled. This could result in them lingering too long over such questions, knowing they won’t get another chance. This is a problem that needs sorting if computer tests are to have any reliability as a measure of achievement.