The government needs to stop and think before rushing into an expansion of the alternative provision census, argues Jen Persson
From the disastrous effects of Universal Credit on children and the pupil premium, through the SEN crisis, to the upbeat people of edTechUK, we constantly hear the call for more data. But more isn’t always better.
From January, the Department for Education will record pregnancy, mental health and young offender status as new items on the Alternative Provision census, retain them indefinitely, and distribute them to third parties without pupils’ consent.
Nick Gibb has told parliament that the census already collects a range of information about individuals, so there is no new privacy risk. He’s wrong.
The privacy impact assessment ahead of any change in data collection is an opportunity to stop and think. Without it, the government is making mistakes in its directions on data.
The main pool that almost every child ends up in at the DfE is made up of over 20 different datasets: the National Pupil Database. It holds the named records of 23 million people aged 36 and under who have been state educated since 1996, as well as private pupils who have sat exams. It is now one of the richest datasets in the world.
The DfE gives these records away, but for how many children since 2012, it doesn’t know. According to our analysis, it’s in the order of millions every month.
After five years, it’s shocking that the government has still not told every family in England that it hands our children’s personal confidential data out to people we’ve never heard of. Identifying information has been distributed over 1,000 times since March 2012 to a wide range of recipients, who receive pupil-level data but promise not to publish it. A private tutoring company and commercial data consultancies are among the registered users.
There is no transparent oversight of how the DfE responds to research requests, whether ethical standards are applied, or what societal benefit the uses of the data deliver.
There are real risks with labels for life
Algorithms use the data to predict a child’s risk of becoming NEET (not in education, employment or training) and target them for intervention. With consent, this might be beneficial. But as academics have identified in the troubled families programme, there is a risk that “any family could be made to fit”. The same data has been requested by researchers for use in predictive policing. A request from the Ministry of Defence for pupil-level information for targeted recruitment marketing was one of only 23 ever refused.
The government isn’t collecting statistics, but the personal life stories of millions of children aged two to 19, adding more each term throughout their education, then joining it to student loans, tax, and DWP records to create “destinations data”. What policy is this shaping, and with what error rate?
The handling and transparency of pupil data must change, as NHS patient data did in 2014 after an audit found that our confidential records had been given to reinsurers, and that some uses were never recorded.
First, the distribution model must be made safe. The sector also needs a review of data collection, its value and costs. Is school census data turned into useful knowledge? Is it necessary and proportionate to store hundreds of attributes on every pupil at national level forever, and copy and distribute them thousands of times, for millions of children? Everyone needs to be told exactly who has their personal confidential data and why. Our rights to access and correct errors must be restored.
Mistakes in school-collected data harm pupils and families. Children’s confidentiality is not a commodity. Their digital integrity will be necessary for trusted interactions throughout their lives.
The government needs to stop and rethink the expansion of its Alternative Provision Census as a matter of urgency.
Jen Persson is director of defenddigitalme