Three new studies call into question the evidence base for feedback, growth mindsets and deliberate practice, reveals Harry Fletcher-Wood
The potential of feedback, growth mindset and deliberate practice to boost learning has become familiar through repetition in books, blog posts and training sessions. We don’t always have time to unearth the roots of these claims, but three recent studies have, each shedding new light on the strengths and limits of these approaches.
The EEF Toolkit ranks feedback as the single most powerful teaching technique. In a thoughtful review, Stefan Ekecrantz traces the origins of this claim. The crucial source is a 1996 meta-analysis of feedback interventions by Kluger and DeNisi. Ekecrantz examines it closely, and raises a couple of concerns about applying its conclusions in schools.
First, Kluger and DeNisi focused on the way feedback affects behaviour – not how it affects learning. Later authors extended Kluger and DeNisi’s conclusions to argue that feedback has powerful effects on learning – but this isn’t fully justified by the original research.
Second, Kluger and DeNisi drew on a wide range of studies, including ones testing the effect of feedback on workers’ use of ear protection, hockey players’ body checks, and people’s extra-sensory perception (apparently feedback helps). Only nineteen of the 131 studies were conducted in schools, and most focused on changing classroom behaviour, not learning.
Just one study looked at students aged 15-18: it examined the effect of feedback on high-achieving students. It didn’t help. Clearly, feedback can influence people’s behaviour – but the effect on learning may be weaker than we believed.
Growth mindset theorists have argued that students with a growth mindset are better at pursuing learning and overcoming challenges. In this study, Alexander Burgoyne, David Hambrick and Brooke Macnamara tested six claims made by growth mindset researchers about the differing beliefs and behaviours of people with growth and fixed mindsets.
Assessing students’ beliefs and efforts, they found that three claims produced non-significant results; two produced statistically significant but trivially small results; and one produced a significant result in the ‘wrong’ (unexpected) direction: people with a ‘fixed’ mindset responded better to feedback than people with a growth mindset.
Clearly, how students see themselves influences what they do. But – as the researchers conclude – this does not mean that growth mindset is a robust application of this idea: “Our results suggest that the foundations of mindset theory are not firm and, in turn, call into question many assumptions made about the importance of mindset.”
Anders Ericsson consistently argued that deliberate practice is all it takes to develop expertise. In a paper that launched ten thousand others, he found that elite violinists had spent more time in deliberate practice than weaker violinists. (This paper was misinterpreted by Malcolm Gladwell as implying that it takes ten thousand hours to become an expert.)
Twenty-five years later, Brooke Macnamara and her colleagues ‘replicated’ the experiment. They copied the original study’s approach with a few improvements, such as ensuring the interviewer didn’t know whether they were interviewing an ‘elite’ or a ‘good’ violinist, since knowing might have influenced the results.
Surprisingly, they found that ‘good’ violinists had practised more than ‘elite’ ones; both groups had accumulated ten thousand hours’ deliberate practice. Deliberate practice still made a substantial difference to how well they played, accounting for around a quarter of the difference between violinists, but its influence was less clear-cut, and less substantial, than had been claimed.
Testing existing claims, and improving upon them, is central to science. Yet at the time of writing, Macnamara’s replication study has been cited 16 times. The original, 11,825 times. Granted, the original has a 25-year head start, but Macnamara’s paper is more recent, better conducted and almost certainly more accurate.
This is why I think it’s worth discussing these critiques. Each of the original arguments has merit: feedback, deliberate practice, and how students see themselves all affect their learning. But these limited claims can morph into bolder slogans. “Feedback should be our top priority!” “Deliberate practice is all it takes.” “Growth mindset is everything!”
To use evidence well, we must acknowledge the merits – and the limits – of good ideas.
Thanks Harry. I skimmed Kluger & DeNisi to check Hattie’s claim about feedback, and the thing that stuck out was that nearly 40% of feedback studies reported a decrease in performance, which contradicts Hattie’s claim that “nearly everything works”, so “all you need is a pulse” to teach. But the devil is in the detail: I hadn’t noticed that these studies measured behaviour, not performance. It seems the EEF and Hattie mixed this up a lot.
Thanks for making time to draw readers’ attention to the references in this piece. Norman Denzin’s work on triangulation highlights that when researching complex phenomena such as teaching, the use of alternative perspectives is likely to reveal confirmatory, contradictory and contrasting results. It comes as no surprise, really, to find studies that shed new light on the three approaches that are the focus of this piece. Indeed, we should not be perplexed if we encounter confirmatory, contradictory and contrasting results from research into any education intervention.
All who work in education settings should be very wary of claims that one approach to teaching is by nature better than another. Alluring as it may be, the notion that it is possible to undertake research that can identify what works, or doesn’t work, in classrooms is generally misleading and counterproductive. It really is time for prominent, influential organisations and individuals to stop peddling such propaganda.