Applying metascientific methods to education research reveals some worrying biases but simple steps can help us navigate our way to a more secure grasp of its results, writes Baz Ramaiah

There have been some pretty provocative article titles in the history of science. But few come close to John Ioannidis’ 2005 essay “Why most published research findings are false”.

The content is even more provocative than the title. Ioannidis uses probability theory and statistical modelling to argue that three everyday features of scientific practice undermine its ability to produce accurate and useful findings.
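The essence of that probabilistic argument can be sketched as a positive predictive value (PPV) calculation: given a significance threshold, statistical power, and the prior odds that a tested hypothesis is true, how often is a "significant" finding actually correct? The sketch below is illustrative only; the parameter values are hypothetical, not taken from the article.

```python
# Sketch of the positive predictive value (PPV) argument from
# Ioannidis (2005): how often is a "significant" finding actually true?
def ppv(prior_odds, alpha=0.05, power=0.8):
    """PPV = true positives / all positives, before any bias is added."""
    true_positives = power * prior_odds   # true effects that reach significance
    false_positives = alpha               # nulls that reach significance anyway
    return true_positives / (true_positives + false_positives)

# If only 1 in 10 tested hypotheses is true, a significant result is
# correct only about 62% of the time -- nowhere near the 95% that a
# p < 0.05 threshold seems to promise.
print(round(ppv(prior_odds=0.1), 3))  # 0.615
```

Ioannidis's point is that questionable research practices and publication bias push the effective false-positive rate well above the nominal 0.05, dragging the PPV lower still.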

First, “questionable research practices” are ubiquitous in scientific research. Like anyone in any job, scientists cut corners to hit their targets. And the one metric that matters for researchers is the number of articles published in high-impact journals. To produce findings that get published in these journals, scientists engage in a wide range of methodological and statistical gymnastics to contort their results into a publishable form.

Second, there are profound issues with academic journals. Their publication policies tend to favour statistically significant findings with large effect sizes. That means they are full of claims that particular social programmes or medical treatments are effective, but largely devoid of the countervailing evidence that those same interventions might not be. The net result is a scientific literature that is, at best, incomplete and, at worst, comprehensively biased.

Third, journals prefer exciting, novel findings rather than replications of studies that have already been published. This disincentivises scientists from replicating previous studies. With most studies in several disciplines (from molecular genetics to psychology) never having been replicated, entire academic literatures could be composed of lucky one-off findings.

Of the few replication studies conducted within education, nearly half were unsuccessful

This reflexive practice of using science to study scientific practice and process is known as “metascience”, and Ioannidis’ 2005 paper is considered its founding document. Over the past 15 years, metascience has matured into a scientific discipline in its own right, going on to expose even more problematic research and reporting practices. It has also helped re-engineer systems of reporting, publication and evaluation within some scientific disciplines to ensure that higher-quality scientific research is produced.

What about educational research? While Ioannidis’ paper is targeted at the natural sciences, recent work has shown that his findings generalise to our own field.

A recent survey study of nearly 1,500 educational researchers concluded that most academics in the field have engaged in research and reporting processes that misrepresent the validity of their results.

A sequence of studies over the course of the 2010s has shown that the major educational research journals are skewed towards publishing statistically significant results with large effect sizes, while rejecting those that do not meet these criteria.

The field is also beset by a bias against replication. A 2014 systematic review found that only 0.13 per cent of articles published in the top 100 educational research journals were replications. To put that into context, the field of psychology was plunged into a “replication crisis” when its replication rate was found to be 1.1 per cent. What’s more, the 2014 study found that of the few replication studies that had been conducted within education, nearly half were unsuccessful.

While we may find these claims unsettling, there are steps that consumers of research can take to safeguard themselves against being led too far astray.

First, pay attention to the publication bias estimates reported at the start of good meta-analyses. Funnel plots are the most common method used to generate these estimates. As a rule of thumb, a symmetrical distribution in a funnel plot suggests there is little publication bias in the underlying literature. Anything other than that should prompt cautious consumption.
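Funnel-plot asymmetry can also be checked numerically. One common choice (not named in the article, so treat this as an illustrative assumption) is Egger's regression test: regress each study's z-score (effect divided by its standard error) on its precision (one over the standard error); an intercept far from zero hints at small-study or publication bias. A minimal sketch, assuming NumPy and SciPy are available and using simulated study data:

```python
# Hedged sketch: Egger's regression test for funnel-plot asymmetry.
# The study effects below are simulated, not drawn from any real meta-analysis.
import numpy as np
from scipy.stats import linregress, t as t_dist

def egger_test(effects, ses):
    """Return (intercept, two-sided p-value) of Egger's regression."""
    precision = 1.0 / np.asarray(ses)          # 1 / SE for each study
    z = np.asarray(effects) / np.asarray(ses)  # effect / SE for each study
    res = linregress(precision, z)
    t_stat = res.intercept / res.intercept_stderr
    p = 2 * t_dist.sf(abs(t_stat), df=len(z) - 2)
    return res.intercept, p

# Example: a simulated symmetric (unbiased) literature of 40 studies.
rng = np.random.default_rng(0)
ses = rng.uniform(0.05, 0.5, 40)      # standard errors of 40 studies
effects = rng.normal(0.2, ses)        # effects scatter symmetrically
intercept, p = egger_test(effects, ses)
print(f"intercept={intercept:.2f}, p={p:.3f}")
```

A high p-value here is consistent with a symmetric funnel; a small p-value would be the numerical counterpart of the lopsided plot the article warns about.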

Second, we need to avoid enabling journal biases by sharing any negative findings or replication studies we come across. By bleat or tweet, you can resist the hegemony of positive over negative findings.

Initially controversial, Ioannidis’ paper is now canonical. Educational research should do as other fields have done: look beyond the paper’s iconoclasm and embrace it as an opportunity for evolution.