Counter-intuitively perhaps, being research-informed does not negate engaging in debate. Cat Scutt explains why
The Chartered College of Teaching has recently launched its Certificate in Evidence-Informed Practice, which culminates in teachers engaging in some of the most complex debates in education. In a way this might seem strange – surely acting based on evidence negates the need for debate?
Yet, in a recent research-engagement survey I found myself strongly agreeing with the statement “Education research can be used to support any opinion.” Why? Well, we see it happening all the time.
Differing research methods
Different research often has conflicting findings, sometimes due to the research methods used. One piece of research might use attainment as a measure of effectiveness, while another might use student or teacher reports. These won’t always have the same outcomes!
For example, experiments on ‘the testing effect’ found that students don’t always know what works best for them. Students who studied material repeatedly were more confident about how much they remembered than those who had studied it only once and then undertaken recall activities – but they actually remembered less.
Being ‘evidence-informed’ requires expert teachers to guard against poor implementation
As we seek to understand what might work best in distance learning – the topic of the Chartered College’s latest report – self-report is a key source. But we need to recognise its limitations. This includes looking carefully at who participated in the research and any biases they might have. For example, teachers working in an online school are perhaps more likely to report positive outcomes for online learning, given where they have chosen to work.
Likewise, schools that have invested heavily in a particular approach to distance learning might be more likely to report positively on its impact – an example of sunk cost bias. This applies to researchers, too; they are not without their own biases. And then there’s the risk of publication bias, a topic covered recently in this very feature by Baz Ramaiah. Studies that find an impact or cover a ‘hot topic’ are more likely to be published than those that don’t!
We can also usually find research that backs up our opinions because, as Dylan Wiliam argues, “everything works somewhere, and nothing works everywhere”. Context matters, and boundary conditions are important to understand.
This is more nuanced than simply recognising that approaches that work in higher education might not work with young children. Things that work ‘on average’ might affect individual students differently. For example, the EEF reports that setting and streaming have a small positive impact for higher-attaining students but a negative impact overall. Meanwhile, approaches that work with novice learners may become counter-productive when used with experts: the ‘expertise reversal effect’.
Given such varied research findings, confirmation bias becomes a crucial concern. We tend to accept uncritically research that confirms our existing beliefs, while seeking fault in research that contradicts them. The government regularly demonstrates this, selectively adopting research findings that suit its priorities!
Our views can also influence the messages we take from research. Dylan Wiliam once tweeted a summary of a paper that found that “Increased use of student-centred teaching methods is linked to increased student wellbeing but lower achievement, which in turn, link to increased adult life satisfaction, but lower earnings”. The paper itself interested me less than people’s reaction to the tweet. Some immediately seized on this as justification that ‘student-centred teaching methods’ shouldn’t be used, while others saw it as a strong argument for them – depending on the outcomes they prioritised.
Given the importance of gaining buy-in to the effective implementation of any intervention, there’s also the risk of a vicious cycle. If we believe something won’t work but are asked to do it anyway, it’s unlikely to be implemented effectively. It thus won’t have the hoped-for impact, reinforcing our belief that it doesn’t work.
So it’s by no means easy to be ‘evidence-informed’. It requires expert teachers to stand as a bulwark against poor implementation – teachers who are confident in identifying robust research, are aware of possible biases, and understand how results might transfer (or not) to their contexts. That’s exactly what our new certificate seeks to develop and recognise, and why it so strongly embraces difficult debates.