The recent AI Seoul Summit, while less high-profile than its predecessor at Bletchley Park, set the stage for critical advancements and collaborations in the realm of artificial intelligence. Unlike the Bletchley Summit, where delegates included government leaders, senior industry executives and civil society representatives, the Seoul Summit saw more subdued attendance, with companies sending legal representatives rather than CEOs and civil society having minimal representation.
In my new role as a researcher at IFOW focusing on AI’s impacts on work, I attended a gathering in London last week that brought together a diverse range of voices to reflect on the AI Seoul Summit. Panelists looked back over the six months since the UK’s AI Safety Summit at Bletchley Park in November and ahead to the summit that will be held in France next year. I join IFOW from the Public Law Project, and am looking forward to exploring the impacts of AI and algorithmic systems on workers, as well as the developing AI governance framework.
One notable shift at the Seoul Summit was the removal of 'safety' from the title and as the focal point of the discussions. Many panelists at the AI Fringe viewed this as a positive change, allowing for more clearly defined conversations about AI's broader impacts. The discussions expanded to address known, present-day harms rather than focusing solely on existential risks (X-risk).
The Seoul Summit saw the publication of the interim International Scientific Report on the Safety of Advanced AI, led by Daniel Privitera from the KIRA Center. The report has already been credited as a valuable, authoritative source for policymakers seeking an overview of the scientific literature and expert opinion on general-purpose AI. The interim report lays the groundwork for ongoing international collaboration on the scientific understanding of advanced AI safety.
Scheduled for 10th and 11th February 2025, the France AI Action Summit promises to build on the groundwork laid in both Bletchley Park and Seoul. The final publication of the International Scientific Report on the Safety of Advanced AI will be released ahead of this summit, providing a comprehensive foundation for the discussions.
Henri Verdier, France’s Ambassador for Digital Affairs, outlined the five tracks and working groups planned for the Paris Summit.
Verdier promised that the France Summit will emphasise inclusivity, highlighting the need to involve creators, innovators, NGOs, and coalitions of smaller nations. He acknowledged that AI has become a race for power between companies, nations and regional groupings.
Whilst stressing the need to be prepared for X-risk, Verdier was resolute that the summit must also address more immediate and everyday concerns such as bias, the future of work, human rights, and cybersecurity. At IFOW we are pleased to hear this commitment, but will be looking for how it is delivered in practice.
Encouragingly, the France Summit aims to open up its preparation phase, inviting participation from diverse stakeholders to engage with the five tracks. However, Verdier did not detail the pathways to joining the proposed working groups.
The AI Seoul Summit laid essential groundwork for future international cooperation and safety measures in AI development, building on the first summit of its kind held at Bletchley Park late last year. As we look towards the France AI Action Summit, there is clear momentum towards inclusive, comprehensive discussions that address both the immediate and long-term challenges posed by AI. The collaborative spirit fostered in Seoul is expected to continue and expand in Paris, paving the way for a more responsible and innovative AI future.
IFOW welcomes the shift to broader discussions on the impact of AI, particularly the recognition of the future of work as an immediate and pressing issue. We will be looking to engage actively in the preparatory phase of the France Summit, and look forward to seeing how the process is opened up to stakeholders from academia, civil society and beyond.
Mia Leslie