Last night, before his interview with Elon Musk, Prime Minister Rishi Sunak gave a summary speech reflecting on the AI Safety Summit that had just closed. His remarks dismissing calls for swift action to regulate AI invite a more detailed response, but it was his comments on work that we feel need urgent attention.
Work wasn’t on the formal agenda for the Government’s first global AI Summit. However, it did feature heavily in the US’s AI Executive Order, announced as a ‘world-leading’ template a day ahead of the summit, and in Vice President Kamala Harris’s speech - which, in interesting political optics, she left Bletchley to make.
Whether as a result of this American influence, the civil society voices who were (after wide concerns about an overly narrow guest list) invited to the Summit, or the strong focus on workplace impacts across a range of official Summit Fringe events this week, it seems Sunak did consider it important to recognise AI’s impact on work, and how workers should perceive AI:
‘We should look at AI much more as a co-pilot than something that necessarily is going to replace someone’s job. AI is a tool that can help almost everybody do their jobs better, faster, quicker, and that’s how we’re already seeing it being deployed. I’m of the view that technology like AI which enhances productivity over time is beneficial for an economy. It makes things cheaper, it makes the economy more productive.’
Many have perceived in this a nod to Daft Punk’s ‘Harder, Better, Faster, Stronger’ – and perceptions are vital. Whether or not Sunak was enlisting the French pop duo as a means of courting Macron – notable by his absence – his framing of AI as a high-energy personal assistant matters, because it colours the space within which future regulation – if and when it comes – will be constructed. Painting AI in this rather shallow way, as a mere ‘co-pilot’, risks a similarly narrow regulatory framework.
Against this depiction, it is critical that policymakers recognise, identify, and understand the breadth of different ways ‘cognitive technologies’ can make things cheaper, and how this links to changes in work. If savings are not made by a worker being displaced, how is this new value captured, and at what cost to good work? This is exactly what we have considered in our recent publication Reframing Automation. A partial view won’t deliver an adequate and comprehensive response.
This week, in collaboration with Warwick Business School and as part of the AI Summit Fringe, we held our Making the Future Work conference. There we heard from academics, regulators, unions, industry figures and frontline workers themselves how all of these different automation impacts of AI are being felt now, and about the urgent need for regulation.
While Sunak’s view of AI as a co-pilot represents a rosy ideal, the evidence suggests this relationship tends to be designed in only for tools built to serve workers already in jobs characterised by greater autonomy and higher wages. For many others, AI is not a benevolent co-pilot. Instead, organisations have deployed it in ways which take over the controls and eject them without a parachute. In our sector-focus panel on the Creative Industries, we heard from a voice artist whose work has been decimated by AI models trained on voice recordings gathered at casting sessions – without disclosure of this use, without consent, and without appropriate compensation.
A representative from the Communication Workers Union has seen the ‘faster, quicker’ experience of AI, but explained how postal workers are experiencing it at the expense of their wellbeing, facing demands for more deliveries within each shift with no compensating increase in wages. This is a classic example of work intensification, where an organisation deploys AI to increase task density and enforces the new standard through penalties such as loss of access to work or pay, elsewhere described as ‘robo-firing’. And it is not just delivery workers – some in financial services are unpaid for toilet breaks, with every minute of billable hours ‘on record’ at their desks.
As our research into the adoption of AI through the period of Covid-19 has demonstrated, algorithmic management tools are being used to track work methods and to instruct new, less experienced or untrained workers to deliver tasks at a cheaper rate. These kinds of interventions drive new types of workplace ‘health and safety’ risk, requiring the UK to recognise and regulate for immaterial impacts – what other legislators recognise as ‘psycho-social’ harms. As Professor Phoebe Moore and I highlighted at our conference, the EU is legislating for this through its Platform Work Directive, and California, Washington and New York have created laws to govern the use of algorithmic quotas to manage workers.
Another form of automation overlooked by Sunak’s depiction, but which international legislators are taking far more seriously, is matching – a practice that raised serious concerns among contributors to our panel on regulation. Matching is what happens when a delivery rider is given the job of picking up your food order and getting it to you. But it is also becoming more prevalent in hiring and recruitment – determining, for example, whether a social media site will even ‘show’ you a particular job advertisement – and is increasingly used to match employees with tasks within a job they already hold. These matching processes present very significant risks to equality, fair pay, and learning.
Rebecca Thomas of the Equality and Human Rights Commission highlighted the hard work underway at the EHRC to devise guidance for businesses on how to ensure that they are complying with Equality Law when deploying AI matching tools. But - as highlighted at the People’s Panel convened for the AI Summit Fringe by Connected by Data - without requirements for transparency, people are often unaware that a system has discriminated against them and are left unable to exercise their existing rights to challenge and contest decisions that an AI system has made. New York has already legislated to address this through a dedicated AI in hiring law.
The workplace is where people will most likely see and experience impacts from AI. Work is important – it is a golden thread that runs through individual lives, communities and economies, binding together people’s talents, capabilities and collaborations. With the many and varied ways that AI is putting stress on the fabric of work and the experience of workers, it is not enough to present this sprawling set of powerful tools in the rather narrow way that Sunak summarised it.
As Wilson Wong of CIPD shared, businesses are crying out for clear red lines, and are frightened of what they could be held responsible for. In the absence of regulation, many organisations are ‘moving fast and breaking things’, and this poses real and present risks to the quality of work available, as well as to the amount of it. At the very least – as concluded by a cross-party group of MPs following an inquiry into workplace AI – there should be a requirement for companies to pause and consider how a new system could impact workers. We have created a framework – our Good Work Algorithmic Impact Assessment – to support firms in managing this process. Importantly, new research that we published in September as part of the Pissarides Review into the Future of Work and Wellbeing, funded by the Nuffield Foundation, confirms that when this engagement process happens, the needle is moved to net positive impacts on job numbers and job quality.
Sunak is right: AI can lead to a more productive society if we design it to promote human capabilities and, in turn, achieve more than the ‘so-so’ benefits that come from squeezing labour. As the AI Safety Summit closes, it is vital that the government understands – as so many other leading nations already do – that the greatest risks from this transformational technology are in the here and now, in workplaces.
To deploy Daft Punk again, ‘We've come too far, to give up who we are, so let’s raise the bar...’
What that means in practice is a dedicated focus on good work in debates about next steps in regulation. If we can get that right, then a future where innovation and social good advance together is possible.
Dr Abigail Gilbert