It’s telling that a single word can define an entire report.
The Centre for Data Ethics and Innovation (which was established with a unique mandate to develop a governance regime for data-driven technologies) was tasked by the Government to advise on how to address ‘bias’ that may be created or amplified by algorithmic decision-making. A landscape summary was released in July 2019. An interim report, which focused on statistical bias, followed. The final report is published today, 27th November 2020.
Terms of reference which hinge on the word ‘bias’ - a contested term primarily associated with individual prejudice - have led to a report with many welcome conclusions, but one which does not go far enough.
I have an interest to declare: I am one of three independent advisors to the CDEI Review (with Dr Reuben Binns and Robin Allen QC). And although I support the findings of the report, I do think it should have gone further. Rather than advising the Government to provide leadership and coordination by seeking further guidance, the review should have advised that fresh legislation is needed in order to achieve its stated aims.
Last month, IFOW's Equality Task Force, chaired by Helen Mountfield QC, published a report on bias, inequality and accountability for algorithms: Mind The Gap: How to Fill the Equality and AI Accountability Gap in an Automated World.
The pandemic has seen an explosion of digital technologies at work. Over the summer, public frustration about harms and accountability boiled over in the wake of the Ofqual A-level grading farrago. Even today, a new survey suggests 1 in 5 employers are tracking workers online or planning to do so.
Invisible and pervasive, automated technologies involving mass data processing have taken over an extraordinary variety of tasks traditionally carried out by people - HR professionals, vast numbers of managers and many others - in response to drives to meet new demands and increase efficiency.
Against this background, the CDEI Bias Review has much to offer. Three things, in particular, stand out:
First, the report rightly proposes moving from an after-the-fact approach, driven largely by individuals with limited access to relevant information after adverse impacts have hit, to pre-emptive action and governance by decision-makers, from the earliest point in the technology innovation cycle and right through its deployment. As the report says, a ‘more rigorous and proactive approach’ to identifying and mitigating bias is now required.
This is a significant shift - and the main message of IFOW’s recent Mind The Gap report, which recommends that a new statutory duty should be introduced to ensure that developers and users of algorithms have to think ahead and systematically assess their wider impacts, so they can make reasonable adjustments to avoid adverse effects, especially equality impacts.
Second, it recognises the scale and breadth of both individual and collective harms posed by the use of automated technologies trained on data that may embed or amplify historic inequalities and patterns of behaviour and resource. In turn, as the scale and speed at which these tools are adopted increase, so too must the pace, breadth and boldness of our policy response to meet these challenges and rebuild public trust. This response should include algorithmic audit and equality impact assessments (as we have argued), helped by regulators, who should start developing compliance and enforcement tools.
The report’s recognition of this issue should not be downplayed: it is a milestone. Challenges connected to the potential of algorithmic systems to amplify and project different forms of individual and collective structural inequality into the future have too often been minimised, or avoided altogether.
Understanding and responding to adverse equality (and other) impacts will mean building cross-disciplinary capabilities and expertise at the CDEI itself and more widely within government, regulators and industry, as IFOW and the Ada Lovelace Institute have recently modelled.
The CDEI report recognises that ‘bias’ in algorithmic decision-making systems (which are inherently socio-technical) reflects wider problems in society: “as work has progressed”, it says, “it has become clear that we cannot separate the question of algorithmic bias from the question of biased decision-making more broadly”.
The CDEI have demonstrated an admirable willingness to develop their own expertise and to engage a wider stakeholder base and the public as part of follow-up work, because society as a whole will need to be engaged in assessing the trade-offs and reasonable adjustments required to counter the harms caused by bias. Specifically, the next step will be a formal forum and mechanisms for wider engagement to give effect to this purpose and to ‘foster effective partnerships between civil society, government, academia and industry.’
Third, many of the recommendations to improve public sector transparency beyond the requirements of the existing legal regimes - including a new, mandatory transparency duty and new procurement standards - are strong and supported by a detailed summary of existing legal requirements. But the report, whilst recognising that decision-making is dispersed and traditional divisions do not always stand up, stops short of extending this recommendation to the private sector.
This takes us to the report’s Achilles’ heel. The harsh truth is that voluntary guidance, coordination and self-regulation have not worked. Further advisory or even statutory guidance will not work either. In spite of striking moves in the right direction, the CDEI review stops short of making the next logical leap.
Poorly designed systems do not just prejudge: they actively and always compound the inequalities encoded in their training data. That is the fundamental, outstanding problem. The ethico-legal principle of equality is much more than a niche aspect of bias; it is fundamental. And applying it properly means stopping the amplification of structural inequalities by algorithmic systems, not just understanding them.
If strong, anticipatory governance is indeed crucial - as both the IFOW and CDEI reports suggest - then new regulatory mechanisms and a legal framework are required to ensure that the specific actions which have been identified as necessary are taken. Our cross-disciplinary analysis of case studies - which highlights the harms acutely felt by workers in the most insecure jobs - has shown that existing legal frameworks have not kept pace with the use of algorithmic systems trained on data encoding structural inequalities. There is abundant evidence that principles-based approaches do not translate into practical action in this context. Protection is patchy, inconsistencies abound and enforcement is weaker still. At work, where asymmetries of power and information are already felt acutely, these gaps can be seen quite clearly.
That is why our Equality Task Force recommends fresh legislation: a ‘public interest’ Accountability for Algorithms Act to ensure that algorithmic systems used at work are built and deployed to promote fairness and equality; that equality impacts are rigorously assessed and reasonable adjustments made to counter any adverse effects detected; that the ethical principles which have given us a normative basis for regulation are placed on a statutory footing; and that human agency is affirmed. Humans must be properly accountable for decisions in the design and deployment of algorithms that are trained on data encoding the inequalities of the past and that have far-reaching consequences.
The wider importance of well-governed and law-based institutions to both society and the economy is well established, as our Chair Chris Pissarides, Daron Acemoglu, Paul Romer and others have argued. This is worth remembering at a time when the Government’s Spending Review is primarily aimed at boosting jobs and the economy.
Being first to regulate in this area could have considerable advantages for the UK. The UK will preside over the G7 next year; the Council of Europe's Ad Hoc Committee on AI is examining 'the feasibility and potential' of a new legal framework for AI; and the UK is co-chairing the GPAI's Data Governance Working Group. Regulating thoughtfully, and early, should inspire better innovation, give clarity to both developers and users of algorithms, and build trust. In the same way that medical regulation makes the UK an attractive proposition for the life sciences industry, developing high quality regulation would help foster new industries and jobs in responsible technology. These strengths should be leveraged to make the most of ‘first mover’ advantage.
The report is right that we are facing a small window of opportunity, and right to recognise that the law should be improved over time. We say: roll on the second phase of the debate.
A version of this blog was published by the Ada Lovelace Institute on 27 November 2020.
Anna Thomas MBE