If we don’t know when teaching interventions have failed, we’ll never improve

3 Dec 2018, 5:00

In theory everyone thinks it’s a good idea, but why are schools not embracing evaluation more fully when its impact can be so huge, wonders Stuart Kime

The English education system has a pretty strange – perhaps dysfunctional – relationship with evaluation, although I don’t think that we’re unique in this. We talk about it a lot, and we seem to think it’s important, yet we don’t make the most of what it can offer us. As such, most educational initiatives and innovations, given sufficient enthusiasm and buzz, are doomed to success.

How can evaluation help schools?

Evaluation methods are not a panacea, but they can help teachers, school leaders and policymakers be more effective and efficient by:
• helping improve a programme or practice as it’s being developed;
• helping decide if we should amend, continue or cease a programme or practice;
• informing the decision to scale up a programme or practice;
• identifying inefficient aspects of a programme or practice’s delivery;
• communicating accurately the impact of programmes and practices;
• influencing policy.

Evaluation can help us stop doing so many good things, so that – as Dylan Wiliam would put it – we can focus on even better things. But until we have an increased awareness, knowledge and understanding of evaluation in our education system, I’m afraid this is a pipe dream. And the ramifications are clear for a workforce trying hard to eradicate ineffective programmes and practices.

Teachers’ and school leaders’ occupational self-regulation – in other words, their ability to budget their personal resources effectively in the professional context of education – is a critical feature of effective teaching. As one study by Kunter et al on the professional competence of teachers puts it: “People with strong self-regulatory skills demonstrate a level of occupational engagement that is commensurate with the challenges of the teaching profession while at the same time maintaining a healthy distance from work concerns and conserving their personal resources.”

Robust evaluation can help teachers and school leaders figure out how best to expend their personal resources in the professional context.

Done well, evaluation may reveal that something didn’t work

Having worked in schools, in the civil service and in research, I am confident in saying that I don’t think I’ve ever met someone who thought that evaluation was a bad idea. It’s hard to argue against it, really. So, given its potential benefits and public enthusiasm for it, why is it not a core component of every government policy decision? Why is it not at the heart of CPD providers’ development and delivery processes? Why do we not heed John Hattie’s plea to “know thy impact”?

While the answers to these questions often include a lack of time, money, training and tools, one fact sits above all: done well, evaluation may reveal that something didn’t work. Personally, I see this as a good thing – knowing what hasn’t worked is just as valuable as knowing what has – but I also recognise that teachers and leaders invest their time, effort and energy in interventions and initiatives, and CPD providers have organisations built on the success of their products and services; the threats to reputation and self-esteem can be very high for all.

At a policymaking level, consider the attraction of evaluation: knowing the impact of public funds invested in a new initiative would enable better policy decisions. But consider also the political fallout for the minister responsible when we find that the initiative has no effect (or is worse than doing nothing at all). And consider that in the context of a fractured minority government with a working majority of precisely 0.

There are arguments against evaluation – too costly, too slow, too hard – but none is sufficient to persuade me that it’s not worth persevering with. To help schools develop in-house evaluation capacity, the Education Endowment Foundation published The DIY Evaluation Guide in 2013. Take a look!

Kunter, M., Baumert, J., Blum, W., Klusmann, U., Krauss, S., & Neubrand, M. (Eds.). (2013). Cognitive activation in the mathematics classroom and professional competence of teachers: Results from the COACTIV project. Springer Science & Business Media.

Coe, R., Kime, S., Nevill, C., & Coleman, R. (2013). The DIY Evaluation Guide. London: The Education Endowment Foundation.
