As we move into an increasingly numerical future, many areas of public life, including much of the world of education, appear to be defined by numbers. This short history of the rise of data in education systems reveals some surprises. It is not a punchy read: squarely aimed at academic readers, its dry and dense writing makes it unlikely to feature on many teachers’ reading lists. But maybe it should, as it offers new insights into the way in which information about schooling has developed.
You might not have guessed, for example, that one of the early drivers for producing data on education systems was the Victorian mania for world exhibitions. In editor Martin Lawn’s chapter on the history of the centralised collection of data, the United States Bureau of Education emerges as a cutting-edge provider of annual education reports. These were used to create displays for events such as the Paris Exhibition of 1878 and, as a result, directly inspired the increasingly sophisticated collection and analysis of education data in other countries.
At this early stage, the data was actually real, honest-to-goodness, factually (if vaguely) accurate countable data, too. There were endless lists of “number enrolled” and “number in daily attendance”, as well as information on the number of teachers in each school. That said, even in the 1870s the “data was incomplete and impressive only in its collection and not in its accuracy”, a criticism that many level at data collection today.
Lawn charts the rise of the International Association for the Evaluation of Educational Achievement (IEA), founded in 1958. The association brought together a range of people responsible for a whole new set of numbers: test scores based on measures of knowledge, which have become “a key element in assessing productivity and efficiency”. As Lawn notes, with some understatement, these numbers have “become key to the governing of education in several European states”.
The somewhat surprising driver for the shift from counting to measured test scores, in which numbers are used to summarise what children have learned (there being no way to “count” knowledge), was the Second World War and the needs and experiences of the military. The chapter on “(Mis-)Trust in Numbers”, based on the history of education data in Sweden, reveals that the development of standardised tests of student outcomes was driven by those trained at the Swedish Institute of Military Psychology.
Post-war education was blinded by the glare of those purporting to use, in the authors’ phrase, “educational science”. This happened elsewhere, of course, not least in the US, where many of the driving forces behind the controversial value-added measures movement came from military backgrounds. The authors note that “standardised measurements became associated with the goods to come, without anyone really reflecting about possible backwash effects”, and they chart the rise, and fall, of the first phase of test score use in Sweden in the 1960s and 1970s.
Of course, test score “data” has come back in Sweden, and risen inexorably elsewhere. We are living through an era of new public management, in which public goods such as health and education provision are subjected to “markets, management and measurement”.
The lesson of Sweden, and of this book, is that using test scores splits students, teachers and systems into winners and losers, and so creates a “language of crisis” that would not be possible if countable data were used to inform our view of education. Many of us are keen to challenge the “crisis” narrative, in part because we don’t recognise any “crisis”, and in part because the use of test scores is flawed for reasons that this book does not explore.
Children, schools and countries cannot be reduced to numbers. Comparisons between different populations make no sense. A companion volume will, eventually, have to be written: The Inevitable Fall of Test Score “Data” in Education Systems. Let’s hope it is written soon.