Opinion

How you can evaluate the impact of your decisions

With the government increasingly asking for evidence that initiatives have had “impact”, Stuart Kime offers a step-by-step guide to gathering it

Professor John Hattie’s entreaty for teachers to “know thy impact” is a laudable and important one. But knowing is hard. How can a teacher or school leader know what impact their choices had on valued student outcomes?

For most people working in education – from teachers to policymakers – the impact of our decisions and actions on valued outcomes is, frankly, unknown. To make a statement about impact is to assert a causal relationship between an input and an output. But only in a minority of instances can any of us make that statement honestly.

That’s why we created the Education Endowment Foundation’s DIY Evaluation Guide, a free resource that shows you how the impact of any initiative can be measured.

For speed, here are the steps we take you through in the pack:

Step 1: Ask your question

There are three components to a good evaluation question: the choice (to be evaluated); the outcome (what will be measured); and the context (the people whose outcomes will be measured). Even when we think there is no choice (giving feedback, for example, is not optional), there often is one (there are multiple ways of giving feedback). Robust evaluation is therefore more often plausible than we first think. Thinking through these parts of the question, you would hopefully end up with something like this: “What impact does using comment-only marking have on students’ reading comprehension over one year?”

Step 2: Decide what measure you will use

Good evaluation depends on good measurement. Having a reliable, valid assessment of the outcome you’re interested in is crucial: like healthy eating, this really is a case of getting out what you put in. Generally, you won’t need to add extra tests; you can often use ones already in place, but remember that the higher the quality of your assessment, the more reliable and useful your findings will be. National assessments (past papers), standardised tests (from the likes of GL, Hodder, Pearson or CEM) or home-grown tests can all be used, though each comes with trade-offs.

Step 3: Give a pre-test

The pre-test establishes where everyone is starting from on the outcomes you’re interested in. It’s important to do this before you go on to step 4: testing before allocation means the results can’t be influenced by knowledge of who is in which group, which helps to reduce bias, and the baseline scores let you check that the two groups started from a similar point.

Step 4: Create a comparison group

You need to be able to compare what happens to those students who receive the intervention with students who don’t. In reality, the only way that we can get close to drawing a causal link (and being able to say that X caused Y) is to randomly allocate students to either a treatment group (they get the intervention) or a control group (business as usual). This is the point at which most people have a sharp intake of breath. Surely, I’m asked, it’s unethical to give the intervention to some children and not to others? My answer? Well, the rationale for evaluation is that we don’t know the impact of the intervention, so surely it’s unethical to give it to everyone without evaluating its impact?
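
If you want to do the allocation yourself, drawing names from a hat works, but a few lines of code make the process transparent and repeatable. Below is a minimal sketch in Python; the pupil names are invented for illustration, and the guide itself does not prescribe any particular tool:

    import random

    # Invented class list for illustration; substitute your own pupils.
    students = ["Aisha", "Ben", "Chloe", "Dev", "Ella", "Finn", "Grace", "Hugo"]

    random.seed(42)           # fixing the seed makes the allocation reproducible
    random.shuffle(students)  # shuffle the list, then split it in half

    half = len(students) // 2
    treatment = students[:half]  # these pupils receive the intervention
    control = students[half:]    # these pupils carry on as usual

    print("Treatment:", sorted(treatment))
    print("Control:  ", sorted(control))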

Step 5: Implement the intervention

With the students involved pre-tested and then randomised either to receive the intervention or not, it’s time to deliver the thing you’re evaluating. Importantly, though, you should keep a close eye on what is actually delivered and how, so that your conclusions are as accurate as possible.

Step 6: Give a post-test

Giving a valid, reliable test that measures the outcome of interest is the next step. If the intervention has had an impact, this is the tool that should reveal it.

Step 7: Analyse

The final step! By comparing the two groups’ average scores and calculating an effect size (the DIY Guide includes an Excel sheet for this), you get a measure of the impact the intervention has had.
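
The guide’s Excel sheet handles the arithmetic, but the calculation itself is simple: subtract the control group’s mean from the treatment group’s mean and divide by the pooled standard deviation. Here is a minimal Python sketch, with invented post-test scores purely for illustration:

    from statistics import mean, stdev

    # Invented post-test scores, purely for illustration.
    treatment_scores = [68, 74, 71, 80, 77, 69, 75, 72]
    control_scores = [65, 70, 66, 73, 71, 64, 69, 68]

    def effect_size(treatment, control):
        """Cohen's d: difference in means divided by the pooled standard deviation."""
        n_t, n_c = len(treatment), len(control)
        # The pooled SD weights each group's variance by its degrees of freedom.
        pooled_var = ((n_t - 1) * stdev(treatment) ** 2 +
                      (n_c - 1) * stdev(control) ** 2) / (n_t + n_c - 2)
        return (mean(treatment) - mean(control)) / pooled_var ** 0.5

    print(f"Effect size: {effect_size(treatment_scores, control_scores):.2f}")

Because an effect size expresses the difference in units of standard deviation, it can be compared across different tests and studies.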

educationendowmentfoundation.org.uk

Your thoughts



2 Comments

  1. John Smith

    “How can a teacher or school leader know what impact their choices had on valued student outcomes?” Er, ask, look, listen, observe etc. etc.

    “For most people working in education – from teachers to policymakers – the impact on valued outcomes of the decisions and actions we take is, frankly, unknown.” Because the EEF and RCT specialists such as Hattie say it is so? What utter tosh this modern-day ‘scientific’ management really is, and such a terrible waste of public money.

    The EEF emperor has no clothes.

    • James

      John – what about all the biases associated with just asking, looking, listening and observing? It’s all useful but still subjective. Surely an objective measure is also required?