Last month’s Queen’s Speech announced that the UK Government will introduce a Data Reform Bill, which is expected to make significant changes to the UK GDPR and the Data Protection Act 2018, both of which have been seminal in safeguarding privacy and wider fundamental rights in the digital era.
The move to reform the UK GDPR sits within an emergent landscape of AI governance measures being considered both in the UK and internationally. Policies mandating algorithmic accountability, for example, have been gaining traction across the globe, resulting in the proposed EU AI Act, the proposed Algorithmic Accountability Act (AAA) in the USA and the 2019 Directive on Automated Decision-Making in Canada.
The time is therefore ripe for the UK to adopt its own legislation for algorithmic accountability.
The Institute for the Future of Work has previously written of the need for a systematic approach to algorithmic accountability in both the public and private sectors in the UK. While it has become increasingly commonplace for algorithms to make critical decisions about people's lives, as seen in semi-automated hiring and management in the world of work, regulation has not kept pace.
Central to our proposal is an overarching Accountability for Algorithms Act: a 'hybrid act' that combines principles from the Data Protection Act, the Health and Safety at Work Act and the Environmental Protection Act to give well-established norms in AI governance a statutory base.
Experts at a recent All-Party Parliamentary Group (APPG) on the Future of Work event, which explored the future of algorithmic accountability with a focus on international learnings, emphasised this need for novel legislation in the UK by drawing on the experience of how algorithmic impact assessments (AIAs) have been used in the USA and Canada.
Unlike data protection impact assessments (DPIAs), currently the only other required form of impact assessment that explicitly applies to algorithmic systems, AIAs involve mandatory public disclosure. This requirement would allow for greater public engagement and empower people to counter obscure and unaccountable algorithmic decision-making. As Brittany Smith, Policy Director of Data & Society, pointed out at the event, algorithms present a particular challenge because their harms are often unevenly distributed within the population, detected only after the fact and discoverable only in the aggregate.
This makes mechanisms for ex ante detection of harms increasingly warranted. Conducting an AIA at different stages of the decision-making process, such as at the points of design and deployment, would hold actors accountable throughout the innovation cycle and supply chain. Currently, the main incentives for developers or employers to test the equality and human rights impacts of algorithmic systems, or to disclose these insights to affected individuals, hinge on goodwill alone.
David Leslie, Director of Ethics and Responsible Innovation Research at the Alan Turing Institute, highlighted three components of an AIA that he proposed should be integrated into a UK Accountability for Algorithms Act.
The first component, meaningful consultation, would mean that workers affected by algorithmic management systems are actively involved in algorithmic auditing and impact assessments. We agree, and are establishing a methodology for this in a project with the ICO.
Leslie highlighted how algorithmic surveillance deployed at work, and the increased use of automated decision-making to allocate jobs and dictate working conditions, can reduce worker autonomy, erode relationships of accountability between workers and managers and place excessive focus on quantifiable productivity.
Scrutiny over algorithms deployed at work is therefore particularly warranted. Benoit Deshaies, Acting Director of Data and Artificial Intelligence for the Treasury Board of Canada Secretariat, recognised the importance of work as a key area of AI regulation, and cited the Good Work Charter as potential inspiration for the design of work-oriented AIAs in the public sector.
Smith echoed the need for transparency and for public disclosure of the outcomes of AIAs, so that citizens can be clear about when, how and why algorithms are being deployed to make decisions about them, and can hold companies to account.
All panellists at the event also emphasised that the assessments should be carried out horizontally, across different industries and applications, rather than vertically, applying to only one industry or type of algorithm, and that the overarching legislation should address the cross-sectoral nature of algorithmic harms, such as threats to equality, privacy and wellbeing.
A notable development in this vein involves the Digital Regulation Cooperation Forum (DRCF), which is composed of multiple regulators with differing remits. The DRCF has recently published two discussion papers, on algorithmic processing and on the landscape of algorithmic auditing, which highlight the need for its member regulators (the CMA, the FCA, the ICO and Ofcom) to explore collaboration in areas of algorithmic harm that cut across regulatory scopes, and to support the development of AIAs.
As this regulatory space develops at pace, we hope to see the regulation of AI in the workplace at the centre of these debates.
Watch back, or read the full transcript from the APPG event on international learnings on algorithmic impact assessments.