Scaled-up literacy scheme fails to produce positive results

A scaled-up scheme to train teaching assistants (TAs) to deliver literacy interventions to struggling pupils has not produced the positive results of an earlier pilot.

Switch-on, a 10-week programme in which TAs were trained to deliver intensive reading interventions, had a “promising” initial trial in 2014, achieving an extra three months’ progress for year 7 pupils.

However, pupils made no additional progress after it became the first project to be expanded and retested by the Education Endowment Foundation (EEF) – raising the possibility that full roll-outs of policies do not always replicate initial successes.

The first trial had involved TAs at 19 schools who were trained by Switch-on’s original developers to deliver reading interventions, and earned a score of three padlocks on the EEF’s five-point scale (with five being the best).

However, findings published last week from a second trial – in which 184 schools took part and TAs delivered both reading and writing interventions – showed that pupils made no additional progress compared with those in the control schools. The original developers were not involved this time.

Emily Yeomans, a senior programme manager at the EEF, a charity spun out of the Department for Education that aims to improve attainment among disadvantaged pupils, told Schools Week that replicating studies often leads to different results, even when conditions are kept constant.

Stephen Gorard, professor at Durham University’s school of education, said that the disappointing results could be due to different methodologies, or even simple chance.

For instance, the earlier trial had randomised pupils – with a two-per-cent dropout rate – while the second trial chose instead to randomise schools.

However, because one whole school dropped out, the control group’s dropout rate was much higher, at 13 per cent.

Gorard added that more replication of trials is to be welcomed, but raised concerns over the potential costs, saying that if funds are limited it is best to focus on the most promising trials.

The EEF says it wants to continue tests, and is discussing future options with Switch-on, which is delivered by Nottinghamshire County Council.

Any further tests would look to recreate conditions similar to the first trial, including the involvement of the original developers, Yeomans said, insisting that the findings from the earlier trial had not been “erased”.

“The challenge now is to find an effective way to scale the model so that it can be delivered to large numbers of schools with similarly positive impacts to those seen in the first trial,” she said.

The majority of EEF-funded projects are first tested in an “efficacy trial” – the method used during the first test of Switch-on in 2014.

A total of 14 projects that posted positive initial results have been tested again under an “effectiveness trial”. The retesting results for Switch-on are the first of these to be published.

Yeomans, writing in a blog on the charity’s website, said she believed retesting is positive, as it allows programmes to be tested under different conditions, either strengthening a programme’s effectiveness rating or prompting further innovation to achieve consistent results.

One comment

  1. An EEF project is rated 3 padlocks for effectiveness but on a larger scale is shown to have zero effect. How many other EEF projects claiming positive results are really a waste of time and money? I guess even EEF cannot tell us this but it is unlikely that the Switch-on Project is unique in promoting something that turns out to be untrue.

    The initial Switch-on Project cost £70,000, and involved professionals running the project. It is now shown to have produced a conclusion that cannot be supported.

    What is the point of the latest fad of encouraging teachers to be researchers? What weight can we give to results by teachers producing their own research?

    Is the search for the Holy Grail of getting a quart out of a pint pot becoming ridiculous? We might get far better results by focusing on keeping things the same, and relying on improving the professional skills of teachers.