The government has abandoned the reception progress measure, but schools should not

I’m struggling to get caught up in the heat of debate over the government’s recent decision not to use the results of this year’s reception baseline testing to compare pupils’ progress.

Why? Three different assessments were never going to provide comparable results, and comparability is, seemingly, at the heart of this decision.

But maybe it’s not that simple.

Reception baseline discussions began under the coalition government, and implementation of the policy carried over into the current Conservative administration.

With a secretary of state keen to develop more positive relations with teachers and school leaders than her predecessor did, it doesn’t seem implausible to think that there is more to this decision than a lack of assessment comparability.

The three baseline tests – from Early Excellence, the Centre for Evaluation and Monitoring (CEM), and the National Foundation for Educational Research (NFER) – could never have produced comparable results. Different tests give different results, so it seems rather odd that comparability is the whole foundation for the decision to abandon their use as a progress measure.

Moreover, comparability was never a requirement. The 2014 criteria for potential assessments made no mention of comparability across different assessments, though that’s not surprising: it’s not a feasible goal.


But enough of my scepticism and rumination. What should schools do now?

Before going any further, I should declare my vested interest in the assessment sector: my organisation delivers training to schools on the use and interpretation of CEM assessments, so I have a professional bias. Nevertheless, I think that early years assessment, when used well, can provide huge benefits to schools and pupils alike.

And with the government continuing to fund reception baseline assessments for most schools in the 2016 to 2017 academic year, schools have a choice over how those assessments will be used.

With that in mind, I offer these three points to inform that choice:

1 – Assessments can, and should, promote learning

Assessment is “one of the most potent forces influencing education” (Crooks, 1988) yet it remains something of an unclaimed prize.

The removal of the accountability function from the reception baseline assessments can actually help schools claim some of that prize for learning.

By using a valid and reliable assessment, teachers can gain valuable insight into pupil ability and progress, and use this to begin asking questions about next steps in their teaching.

2 – A good baseline assessment is a tool, not an end in itself

For myriad reasons, many schools have developed a culture of “data-driven decision-making”, but being driven by data is highly problematic (especially if the quality of the data is unknown) and not advisable.

A more fruitful approach is to focus on “decision-driven data usage”: make choices about the education your school aims to provide, then choose assessments that generate valid and reliable data to help you achieve your ends.

A robust baseline assessment and follow-up progress measure used to support learning should be a prime candidate here, but it should feature as just one item in a school’s assessment toolbox.

3 – A good assessment is one which is consistent over time, place and context; is a valid test of the thing it claims to measure; is valuable to those who use data derived from it; and has a clearly defined and defensible purpose.

Prof Rob Coe wrote a blog post on the CEM website, Would you let this test into your classroom?, which offers a 47-question checklist. It gets quite technical in parts, but it is about the most complete list available for schools to use in assessing the quality of the assessments they adopt.

Techy, yes, but why not use it to challenge assessment providers to be transparent about what their tests can and cannot do? If you want a robust measure of pupils’ progress in the reception year – and you really should – you could do worse than pick up the phone and ask any one of the providers (or all three): “How valid and reliable is your assessment for the purpose of measuring pupil progress?” Do it.

In my opinion, the DfE has done a great – though unintended – service to learning by removing the progress measure function of reception baseline testing. Some may call it an own goal; I call it a prime opportunity for schools to begin reclaiming the prize of assessment.

Stuart Kime is a director of evidencebased.education, an education innovation and training company.


