We need to become more nuanced in our reactions to research findings, says Timo Hannay

We are awash in data about education. The Department for Education routinely releases information about schools in England, covering everything from academic attainment and pupil demographics to school staffing and finances. (Schools in other parts of the UK are administered separately and data provision there is patchier.)

Many other government bodies, from Ofsted to the Office for National Statistics, do much the same. Increasingly, private companies are collecting and analysing data too. This is an admirable contribution to transparent government and an empowered citizenry: if none of us truly knows what is happening in our schools then how can we hope to hold to account those who run them? It is also (full disclosure) a godsend for data analysts like me.

The rise of social media has rightly generated concern about personal privacy (not to mention its effects on social discourse and democracy). Data about education is no different. We must all abide by the EU’s General Data Protection Regulation (GDPR), which empowers individuals against organisations that might otherwise undermine their autonomy or privacy.

Yet data protection should not be our only concern. If we’re going to be faced with more information about schools, then we also have to become more discerning in our reactions to it. A recent piece of research in which SchoolDash was involved provides a case in point.

We conducted an analysis of academic performance at high-deprivation primary schools, as well as a comparison of boys’ and girls’ performance at reading and maths. The deprivation results were picked up and recirculated by a campaigning organisation to illustrate the way in which poorer pupils were being let down. In contrast, results that showed girls falling slightly behind in some maths topics during later primary years were discreetly criticised by another advocate, concerned that they would reinforce gender stereotypes. The graph that showed boys underperforming in reading received, to my knowledge, no significant response at all.
Three statistically identical analyses, but three very different reactions. In a sense, though, all were correct. The deprivation campaigners were right to imply that differences in average performance between groups can provide important insights into our education system. The expert concerned about gender stereotyping was also correct that such collective differences effectively tell us nothing about individual students. And anyone who merely shrugged at the graph showing boys behind girls in reading is arguably right to be unconcerned in the sense that such inter-group differences tend to be modest in the overall scheme of things.

Yet, while such different responses are understandable in their own terms, it’s unfortunate that they tend to come from different people. If we’re being intellectually honest, we should feel all of these sentiments simultaneously: concern at persistent inter-group differences, recognition that such gaps tell us nothing about likely individual performance, and a sense of perspective in appreciating that most such differences tend to be relatively small.

So would we be better off ignoring or even banning such analyses? Emphatically not. Most people’s understanding of school effectiveness is based on a worryingly narrow and potentially misleading set of indicators. Exam league tables mostly tell us how able pupils were when they arrived at a school, not how much they benefited from being there – and they don’t go beyond narrow measures of academic performance. Ofsted reports are more nuanced, but only provide a snapshot and are often out of date.

If we want to improve education then we must do better than this, which means embracing the increasingly rich and varied sources of information now at our disposal. But that will also require us to be more discerning and balanced in our judgements. It won’t be easy, but positive change rarely is.