Opinion

How to test interventions in the classroom

Don’t get hung up on methodology – good classroom research starts by establishing meaningful, measurable outcomes, explains Dr Lauren Bellaera

It’s August in a classroom in west London and eight new teachers are discussing ways they can use education research to improve outcomes for their pupils.

All of them are PhDs – mainly mathematicians and physicists – who are training as teachers through the Researchers in Schools programme, so they are no strangers to advanced research methodologies. Still, translating these to schools comes with its own set of challenges. “My head of department wants to know if this particular intervention works or not,” says one participant. “How can I give her a clear answer without being over-simplistic?”

It’s a good question, and one which speaks to a common (though false) dichotomy: on one side, scientific research methodologies that are impractical for most schools most of the time; on the other, “softer” methodologies that help schools reflect but don’t really establish whether something actually worked.

Carrying out research in schools is particularly challenging because of the “noise” – the other things that are happening besides what you are trying to measure – and the difficulties associated with defining and measuring pupil outcomes. But by carefully selecting what we measure, schools and teachers can use research and evaluation for practical decision-making. This is the “what”.

Firstly, identify pupil outcomes that relate to the intervention in question. It is easy to end up assessing outcomes that seem relevant but are not actually aligned with your intervention: defining key-stage attainment as your primary outcome, for example, when your intervention in fact taps into a broader range of skills and behaviours.

Then, explicitly define those outcomes, remembering that there is often more than one way to define an outcome, so select a definition that works for your context.

Finally, ensure that the outcomes can be meaningfully measured. Using existing, reliable measures is preferable, but this will not always be possible: there may be no existing measure for the outcome, or the way the outcome is measured may be so specific to your context that existing measures cannot be used.

If these are the guiding theoretical principles, then what does this look like in practice – the “how”?

One way for schools to define pupil outcomes is to engage with the research literature, though this can be particularly hard given teachers’ workloads.

However, there are quick and easy mechanisms, such as the Education Endowment Foundation’s Teaching and Learning Toolkit, which has been used by 51 per cent of senior school leaders.

Another way is for initial teacher training to train individuals to become intelligent consumers of research, as outlined in a recent Teach First report.

Engaging with research helps teachers to consider which outcomes are best to focus on for a particular intervention, but this still leaves the question of how those outcomes can be measured. Pre-existing measures are available for a range of outcomes, including critical thinking, creativity, grit and meta-cognition.

Where measures are not available, assessments can be developed for bespoke purposes. However, designing assessments that are reliable and valid is time-consuming and requires knowledge of psychometrics. One way to check that an assessment you have designed measures something meaningful is to see whether it correlates with an existing, related measure.

For example, in a study on critical thinking, I helped to develop a test for the social sciences because one did not already exist, and used it alongside a standardised general critical thinking test. Comparing patterns of results on the two tests allowed me to see whether the new social sciences test was genuinely measuring critical thinking.
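
As a rough sketch of that correlation check, the snippet below (Python; the pupil scores and variable names are invented for illustration, not taken from the study) compares the same pupils’ scores on a newly designed test and on an established measure using a Pearson correlation.

    # Illustrative only: checking whether a bespoke assessment correlates
    # with an established measure of the same construct. Scores are invented.
    from scipy.stats import pearsonr

    # Scores for the same ten pupils on each assessment
    new_test_scores = [12, 15, 9, 18, 14, 11, 17, 13, 10, 16]
    established_measure_scores = [48, 61, 40, 70, 58, 45, 66, 52, 43, 63]

    r, p_value = pearsonr(new_test_scores, established_measure_scores)
    print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
    # A reasonably strong positive r suggests the new test taps a similar
    # construct; a weak r is a prompt to revisit the items.

A correlation on its own does not establish validity, but it is the kind of quick sanity check a busy department can realistically run.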

There’s never been a better time for practitioners to engage with research, but in the first instance do not get hung up on research methodology.

Instead, make sure that you start by looking in the right place for the right thing: meaningful, measurable outcomes related to your intervention.

Lauren Bellaera is director of research and impact at the Brilliant Club

