A recent study on the effects of peer observation found it made no difference, but there may be a reason for that, argues one of the researchers

A recent Schools Week article reported that a trial in which teachers observed their peers did not raise pupil attainment. I was part of the team that conducted the study, and, if you’ll allow me, I’d like to show you that it should not necessarily be written off.

First, a brief detour into study design. The analysis adopted the “intention to treat” format that the Education Endowment Foundation uses for all its studies, which measures not what each teacher actually does, but whether or not they were offered the intervention. In the context of schools, since teachers cannot be forced to participate, and accurate monitoring is often problematic, this is definitely the right approach to take.

Evaluation can give a null result for two reasons

So in this case, instead of asking “do students taught by teachers who participate in peer observation have better outcomes?” the study asked, “does a policy of offering teachers peer observation lead to improvements in students’ outcomes?”

In this approach, the evaluation can give a null result for two reasons. At one extreme, the intervention might truly be ineffective: all those eligible for the treatment took it, but it made no difference to the outcome. Alternatively, it might be that no-one eligible for the treatment took it up, so the evaluation compares two untreated groups and again finds no difference.
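The point can be illustrated with a toy simulation (all numbers here are hypothetical and chosen only for illustration, not drawn from the study): even if an intervention genuinely raises scores for participants, the intention-to-treat comparison of everyone *offered* it against controls shrinks towards zero as take-up falls.

```python
import random

random.seed(0)

def itt_estimate(take_up, n=10000, true_effect=3.0):
    """Intention-to-treat estimate under partial take-up.

    Hypothetical set-up: pupils score 50 on average with some noise;
    the intervention adds `true_effect` points, but only for the
    fraction `take_up` of the offered group who actually participate.
    """
    control = [50 + random.gauss(0, 10) for _ in range(n)]
    offered = [50 + random.gauss(0, 10)
               + (true_effect if random.random() < take_up else 0)
               for _ in range(n)]
    # ITT compares everyone offered with controls, participants or not
    return sum(offered) / n - sum(control) / n

full_take_up = itt_estimate(1.0)   # everyone offered participates
low_take_up = itt_estimate(0.1)   # only 10% participate
```

With full take-up the estimate recovers roughly the true 3-point effect; at 10% take-up it is diluted to roughly a tenth of that, and in a real study of modest size such a diluted effect is easily indistinguishable from zero.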

Returning to our own intervention, schools that were randomly selected for treatment were provided with observation software licences, iPads and training, and were asked to carry out a set number of lesson observations. While we could quickly see through the software whether an observation had taken place, we had no leverage to ensure that it did. Even persuasion was diluted – we had no direct access to teachers, so our messages had to be relayed indirectly.

In fact, the number of observations carried out was very low. A third of teachers in the treatment group were involved in none at all, and fewer than a quarter of the schools got all the relevant teachers involved. Non-participation may be a major explanation for the lack of impact: the evaluation compared outcomes for a control group with a “treated” group that was, in reality, barely treated. That raises questions about why so few observations were done, which we discuss below.

Can we still learn anything about the potential for teacher peer observation from this study? Yes, we can. The provisional (unreviewed) results – found by Shenila Rawal of the Oxford Partnership for Education Research and Analysis, Eric Taylor of Harvard University and me – show that schools which carried out some observations did see significant and sizeable increases in maths and English GCSE scores. If these provisional and preliminary results are confirmed, then these gains are definitely worth having for an inexpensive intervention.

It is important to note that these effects may not be all causal

It is important to note that these effects may not be all causal – and our future analysis will use standard statistical approaches to address this. But the results suggest that we should not give up on peer observation yet.

We return to the question of why so few observations were carried out. First, even though only one or two 20-minute observations per half-term were required, using a simple interface on a networked mobile device, teacher workload could still be a factor. Second, while an experimental setting usually gives an intervention a better chance than real life, it was not so in this case.

Our research grant contained no funding for teaching cover, so schools had to organise it themselves. We were also working under strict anonymity conditions and could not contact teachers directly. We had a teacher acting as project liaison in each school, but those people were typically not senior.

So before we ditch forever the idea of teacher peer observation, we would urge caution in responding to the headline results. We do not argue that the NFER report is wrong; we confirm the results. Our aim in writing this piece is not to claim that peer observation definitely works. It is to claim that we cannot say for sure from this study that it does not.

Simon Burgess is a professor of economics at the University of Bristol