Young people have taught us that the use of algorithms and data-driven technology must work to promote equality, not undermine it. Government should respond to the lessons of the Ofqual grading algorithm and improve accountability for collective harms to rebuild public trust in technology. It’s also time to think about new protections for those disadvantaged by their socio-economic background, writes Anna Thomas, Co-Founder and Director of the Institute for the Future of Work.
Young people are on the front line of our battle with COVID-19. Even before last week’s A-level fiasco, our young people had been shown to be far more financially and psychologically vulnerable to the consequences of the pandemic. We have already seen a range of workplace inequalities in access, conditions and quality of work – and this is before they enter a job market which will be characterised by ‘high and persistent levels of unemployment,’ according to Nobel laureate and Co-Founder of the Institute for the Future of Work Sir Chris Pissarides.
But the public outcry over the past few days about the Ofqual algorithm is about more than the threatened prospects of hundreds of thousands of hopeful job entrants across the country. It also reflects frustration at the absence of meaningful human accountability and governance over automated systems which can project, en masse, past inequalities into the future.
So we need to look behind the undoubtedly good news that the systematic downgrading of 40% of A-level students last week has been overturned, and ask how and why this could happen in the first place.
Technology is only as good as the humans developing, applying and governing it; and the human values that underpin these activities. The Ofqual algorithmic standardisation system didn’t have mysterious powers: it was a socio-technical system. Humans decided to deploy the algorithm, and determined its remit and purpose. Humans designed the algorithm (Ofqual or, more likely, a specialist contractor); humans selected the data points, variables and weighting; and humans decided how historic data, which reflects historic inequalities, should be used. And at a political level, humans decide how to regulate, govern and oversee these decisions too.
The debate, so far, has focused on the need for a technical review of the automated process. But technological solutions are not enough, and cannot be seen in isolation. To avoid a form of ‘techno-chauvinism,’ we need to ask wider questions about the legal, social and political infrastructures in which this situation could come about.
Young people, previously the demographic group most likely to accept big data analysis and data-driven technology by both public and private bodies, have been rudely woken up to the potential for collective harms and inequalities to determine pathways and shape the future. In real time, and with real results, they have experienced what happens when automated systems, fed by large data sets reflecting past patterns of behaviour and resource, are used to make predictions: pupils and schools in socio-economically disadvantaged areas were marked down most harshly by the statistical model used to replace A-levels.
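To see how this can happen in practice, the sketch below is a deliberately simplified, hypothetical stand-in for distribution-based standardisation, not Ofqual’s published model; the grade shares, the standardise function and the pupil names are invented for illustration. It shows how, once this year’s pupils are mapped onto their school’s historic grade distribution, no amount of individual merit can lift a result above what that school achieved in the past.

```python
# Illustrative sketch only: a toy stand-in for distribution-based standardisation,
# not Ofqual's actual model. It shows how fitting this year's cohort to a school's
# historic grade distribution can cap individual results regardless of merit.

from typing import Dict, List

# Hypothetical historic grade shares for a single school (these numbers are invented).
HISTORIC_SHARES = {"A*": 0.05, "A": 0.15, "B": 0.30, "C": 0.35, "D": 0.15}

def standardise(teacher_rank_order: List[str]) -> Dict[str, str]:
    """Map a teacher-supplied rank order of pupils onto the school's historic
    grade distribution: the top 5% get A*, the next 15% get A, and so on."""
    n = len(teacher_rank_order)
    grades: Dict[str, str] = {}
    cumulative, assigned = 0.0, 0
    for grade, share in HISTORIC_SHARES.items():
        cumulative += share
        upper = round(cumulative * n)
        for pupil in teacher_rank_order[assigned:upper]:
            grades[pupil] = grade
        assigned = upper
    for pupil in teacher_rank_order[assigned:]:  # rounding remainder falls into the lowest band
        grades[pupil] = "D"
    return grades

# A school that historically sent few pupils to the top grades can award only
# one A* to a cohort of twenty, however strong that cohort actually is.
cohort = [f"pupil_{i}" for i in range(1, 21)]
print(standardise(cohort))
```

However exceptional an individual pupil, their result in a scheme like this is bounded by their school’s past, which is precisely how historic disadvantage gets projected into the future.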
Against this background, there was no remedy or appeal procedure for individuals or for groups; nor was any person identified as accountable for the wrong. If there had been a route for individual challenge, those most likely disadvantaged by the downgrading would have been least able to pursue redress. The absence of substantive and procedural ‘fairness’, as it is understood in administrative law, was felt acutely. On the other hand, ‘fairness’ was being approached by the technologists and leaders of Ofqual in an entirely different way.
Similarly, we have seen a gap in how ‘equality’ was considered by different parties. Students, lawyers and social scientists have pointed to striking inequalities of outcome between pupils from different socio-economic backgrounds. On the other hand, Ofqual seem to have regarded a literature review on unconscious bias by teachers as an adequate substitute for building equality into the standardisation process and for a proper equality impact assessment of their own model, with appropriate adjustments.
Perhaps predictably, street protests (‘the algorithm stole my future’) and two Judicial Reviews (brought by Foxglove and the Good Law Project) followed.
This experience highlights the disconnect between the technical, legal and social infrastructures in which the Ofqual algorithm did its business - and the consequential accountability gaps experienced by Generation COVID-19.
So what are the first lessons policy-makers can learn from the 2020 A-level debacle?
First, more inclusive methods of systems design and testing are needed urgently. Transparency about technical aspects of the model is important, but was not the main problem. To counter the uncomfortable truth that automated standardisation will pay no attention to individual merit or personhood, the algorithm must not be outsourced to the Ofqual technologists: it must be co-developed.
Social scientists and students would have pointed out the problems of using ranking and historic data; lawyers would have pointed to equality impact assessments, and the principles of fairness in administrative and data protection law; experts, like the Royal Statistical Society, would have pointed to the technical limitations of the model; teachers’ unions would have identified the unequal outcomes for demographic groups and schools.
The Institute for the Future of Work is developing an inclusive social policy methodology and tool with world leaders in systems and design thinking which might help support co-development of tools like the Ofqual algorithm.
Second, policy-makers across Departments must reduce silos and work much more closely to align policy and regulation. It undermines the Chancellor’s support for young people in his Plan for Jobs if job entrants are unfairly deprived of the grades they need to enter work; or if the DWP concurrently requires this unfortunate year group to prove they are looking for jobs which do not exist, in order to avoid sanctions. And it undermines the Cabinet Office’s commitment to address structural inequalities for the DfE to proceed with the Ofqual algorithm in the face of warnings about unequal impacts.
Closer working between departments would also have drawn attention to obvious obstacles or conflicts, such as the dual role of the Ofqual Chair, who also heads the Centre for Data Ethics and Innovation.
To help align work in different spheres, the Commission on the Future of Work has proposed a cross-department Work 5.0 Strategy which would re-orient policy-making towards a central, cross-cutting policy objective: good work.
Last, policy-makers must be prepared to think big and entertain proposals for new routes to algorithmic accountability. We should now be rigorously critiquing relevant legal and other accountability frameworks to make sure they are fit for the age of the automated system – and ensure the accountability gaps exposed over the last week are filled. We can do better than require individual students to object to unfair grading ex post facto.
Technology policy should require advance assessment, consideration and remedies for collective, as well as individual, harms and impacts on equality. This should now include socio-economic disadvantage, which is not currently protected by the Equality Act (section 1 of the Equality Act is in force only in Scotland).
And we will need proactive, affirmative duties for the public and private sector to identify, monitor and report on adverse impacts so that adjustments can be made and we can ensure that new forms of collective harm are not boldly projected into the future. Equality must be an overarching principle, alongside the FAST principles, built into design and implementation - not an afterthought.
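By way of illustration only, and with hypothetical group labels, a made-up tolerance threshold and synthetic data, a routine group-level check of the kind such a duty might require could look something like this:

```python
# Illustrative sketch only: one way a proactive monitoring duty might be discharged.
# Group labels, thresholds and data below are hypothetical and synthetic.

import statistics
from collections import defaultdict

# Each record pairs a pupil's socio-economic group with their grade change after
# standardisation, where a negative number means a downgrade.
records = [
    ("disadvantaged", -1), ("disadvantaged", -1), ("disadvantaged", 0),
    ("advantaged", 0), ("advantaged", 0), ("advantaged", -1),
]

def downgrade_rates(rows):
    """Return the share of pupils downgraded within each group, so that gaps
    between groups are surfaced and reported before results are released."""
    by_group = defaultdict(list)
    for group, change in rows:
        by_group[group].append(change)
    return {group: statistics.mean(1 if c < 0 else 0 for c in changes)
            for group, changes in by_group.items()}

rates = downgrade_rates(records)
print(rates)  # here: disadvantaged roughly 0.67, advantaged roughly 0.33

# A hypothetical tolerance: flag the model for adjustment or human review if the
# gap between the most and least affected groups exceeds ten percentage points.
if max(rates.values()) - min(rates.values()) > 0.10:
    print("Adverse impact gap exceeds threshold: adjust before releasing results.")
```

The point is not the particular metric but the discipline: adverse impacts on groups are measured, reported and acted on before decisions take effect, rather than discovered by those affected after the fact.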
Eyes will now turn to the CDEI’s forthcoming Bias Review, which will be expected to rise to these new challenges.
Technology is needed more than ever to help meet the challenges of the pandemic. It should be used to bring people and policy-makers together, leverage human strengths, and improve access to work and wellbeing. Policy-makers must now work hard to do this – and regain public trust in technology.
The Institute for the Future of Work’s consultation on equality impact assessments is here - readers are invited to complete it before 27 August. IFOW’s Equality Task Force will be reporting in October 2020. Anna Thomas is also an independent member of the CDEI Bias Review Steering Group.