There are valid reasons for concerns about reception baseline assessment. But don’t be intimidated: it’s a chance to pick a good option that works for your school.
Primary schools in England are about to be presented with a choice of commercial schemes to act as a baseline for the new primary accountability policy. As I write, the cogs and wheels will be grinding in the Department for Education (DfE) to finalise which of the proffered schemes will be accredited for use this September.
Is this policy a good one? In part, I think it is. Every school should be held accountable for the progress each pupil makes whilst in their care. But the way that progress is evaluated must be reliable – and the only way to get reliable information is to gather high-quality data on at least two occasions.
There should be enough of it to be representative of the national picture; it should be gathered in the same way to minimise variation between pupils; it must be reliable enough that the same results would be obtained if the assessment were repeated a few days later, or administered by a different teacher; and, importantly, it must focus on the outcomes in which we hope to see progress, although the content may differ on the two occasions. The introduction of a quality-controlled reception baseline is the first step in that direction.
But there are some problems with this accountability-by-progress idea. There is considerable woolliness around the outcome measure. How can we run a race without knowing where the finish line is? Who is held to account for a progress measure that spans such a long period, when key players may no longer be at the school? And, of course, it is not yet clear how these progress measures will be used.
Any sort of league table would be detrimental to pupils, to schools and to the system. To really evaluate quality in the system we need a range of indicators, not a single figure, and there are wider issues around inspection that need to be addressed. Policy colleagues could do worse than look to Scotland, where inspectors, policymakers and senior teachers work together without the distances we see between the different agencies in England. This smaller educational space fosters a more supportive relationship in which negotiation replaces confrontation.
Putting aside the outcome measure for a moment, there are many positives to be gained from reception baseline assessments when they are used formatively.
They can be quick to administer; they can be fun and engaging for the children; they can give highly reliable and useful information; and they can inform the reception teacher about the child’s developmental progress above and beyond what might reasonably be achieved by observation alone. A reception teacher can triangulate this data with other information, such as high-quality observation and knowledge of the child’s home background, to ensure that each child gets the best start to their academic life.
There will, of course, be concerns. People will question the validity of any such assessment, the potential for “labelling”, the likelihood of game-playing, and the lack of authenticity. But none of these issues is particular to any specific assessment and many of them are likely to be raised whatever flavour of assessment is used in our classrooms.
Accepting that reception baseline is here to stay, for a while at least, I would challenge teachers not to be intimidated. Use it as an opportunity to pick a good option that works for your school, and as a springboard to reassess assessment.
Add value to your chosen system by looking at it alongside other information about the child. Ask yourself questions, explore trends and analyse the evidence. Start measuring progress from that baseline as soon as you can. It is motivating and empowering and it puts you in control. Discuss, evaluate and explore new ideas about assessment. Make assessment an everyday part of school life. Regardless of what the DfE calls “progress”, schools can make their own model of progress – and make it work for them.
Kate Bailey is Director of Applied Research at the Centre for Evaluation and Monitoring (CEM) at Durham University