The second global artificial intelligence (AI) summit in South Korea saw dozens of governments and companies double down on their commitments to safely and inclusively develop the technology, but questions remain about who exactly is being included and which risks are given priority.
Speaking with Computer Weekly about developments during the AI Seoul Summit, tech experts and civil society groups said that while there was a positive emphasis on expanding AI safety research and deepening international scientific cooperation, they remain concerned about the domination of the AI safety field by narrow corporate interests.
They said while the summit ended with some concrete outcomes that can be taken forward before the AI Action Summit due to take place in France in early 2025, there are still a number of areas where further movement is urgently needed.
In particular, they stressed the need for mandatory AI safety commitments from companies; socio-technical evaluations of systems that take into account how they interact with people and institutions in real-world situations; and wider participation from the public, workers and others affected by AI-powered systems.
However, they also said it is “early days yet” and highlighted the importance of the AI Safety Summit events in creating open dialogue between countries and setting the foundation for catalysing future action.
While there was general consensus among attendees of the first AI Safety Summit – held by the UK government at Bletchley Park in November 2023 – that it was a good step forward, particularly because of China’s involvement amid rising tensions with western governments, many were concerned about what comes next and whether more diverse perspectives would be included in future.
AI Seoul Summit developments
Over the course of the two-day AI Seoul Summit, a number of agreements and pledges were signed by the governments and companies in attendance.
For governments, these include the European Union (EU) and a group of 10 countries signing the Seoul Declaration, which builds on the Bletchley Declaration signed six months earlier by 28 governments and the EU at the UK’s inaugural AI Safety Summit. They also include the Seoul Statement of Intent Toward International Cooperation on AI Safety Science, which will see publicly backed research institutes come together to ensure “complementarity and interoperability” between their technical work and general approaches to AI safety.
The Seoul Declaration in particular affirmed “the importance of active multi-stakeholder collaboration” in this area and committed the governments involved to “actively” include a wide range of stakeholders in AI-related discussions.
A larger group of more than two dozen governments also committed to developing shared risk thresholds for frontier AI models to limit their harmful impacts in the Seoul Ministerial Statement, which highlighted the need for effective safeguards and interoperable AI safety testing regimes between countries.
The agreements and pledges made by companies include 16 global AI firms signing the Frontier AI Safety Commitments, a specific set of voluntary measures for how they will develop the technology safely, and 14 firms signing the Seoul AI Business Pledge, a similar set of commitments made by a mixture of South Korean and international tech firms to approach AI development responsibly.
One of the key voluntary commitments made by the AI companies was not to develop or deploy AI systems if the risks cannot be sufficiently mitigated.
In line with the commitments to advance international cooperation on AI safety – including through setting shared red lines around acceptable risk thresholds and creating a collaborative network of research institutes – the UK government also announced an £8.5m funding programme, which will be run by the UK’s AI Safety Institute (AISI) established in the run-up to the last AI Safety Summit.
The UK government said the grants would be awarded to researchers studying how to best protect society from the risks associated with AI, including deepfakes and cyber attacks, and that the overall programme had been designed to broaden the AISI’s remit to include the field of “systemic AI safety”, which aims to understand and mitigate the impact of AI at a societal level.
Boosting international cooperation
The consensus among those Computer Weekly spoke with is that the biggest achievements of the Seoul Summit were the commitments to proliferate AI safety research bodies and have them cooperate across national boundaries.
“We’re starting to see a consensus emerge on the importance of AI safety institutes, but it’s not really a consensus about the institutes, it’s a consensus on the importance of collaboration between the safety institutes,” said Ima Bello, the Future of Life Institute’s AI summit representative, who described deeper cooperation between already existing research bodies as “a very practical outcome” of the Seoul Summit.
She said that to strengthen these efforts, the next step would be to establish some kind of coordinator for the safety network.
Bello also highlighted that the interim international state of the science report – which assesses existing research on the risks and capabilities of frontier AI, and which countries agreed at Bletchley should be headed up by AI academic Yoshua Bengio – was unanimously welcomed by governments participating in the summit. “We can use that as the basis for informed policy recommendations and informed policy-making,” she said.
Bello said the AI Summits should also be praised because they represent the only international forum where AI safety dominates the agenda, as well as one of the few forums where geopolitical rivals like the US and China sit down together to discuss the issues.
Jamie Moles, a senior technical manager at cyber security firm ExtraHop, said while it may not sound like a big deal, the appearance of more safety institutes internationally and their agreement to work together “is probably the biggest and most significant thing to come out of this”.
Committing to inclusivity
While those Computer Weekly spoke with praised the emphasis on scientific cooperation, they also felt more must be done to ensure that AI research and cooperation are widened to include more diverse perspectives and worldviews.
Eleanor Lightbody, chief executive at Luminance, for example, said while the commitments from 16 AI companies and 27 countries were a “promising start”, more needs to be done to ensure a wider range of voices at the table outside of big tech.
“Indeed, these companies have skin in the game and can’t be expected to mark their own homework,” she said. “While larger AI companies may be able to quickly adjust to regulations, smaller AI companies may not have that ability. That’s why incorporating AI startups and scaleups in these conversations is so important.”
Noting the commitments of AI safety institutes to collaborate with AI developers, as well as the amount of talent these bodies are looking to hire from industry, Adam Cantwell-Corn, head of campaigns and policy at Connected by Data, said there is an education-to-employment pipeline in the tech sector that allows only a narrow set of ideas through.
Describing this as an “insidious and pervasive form of [regulatory] capture that needs to be grappled with”, he said tech firms play a huge role in shaping academic research agendas on AI via funding and access to compute that only they have.
“The companies largely determine what happens in academic circles and therefore the people that actually get to that level of expertise, their intellectual and knowledge set has been shaped by quite a limited and narrow set of perspectives and interests,” he said, adding that this leads to “technical and technocratic” approaches being favoured over “socio-technical” approaches that take into account the wider societal effects of AI systems in the specific contexts they’re deployed.
He added there needs to be a plurality of voices and perspectives when considering questions about whether and how to introduce new AI: “That includes civil society, it includes affected communities, it includes workers who are going to be implementing it, but also workers who are going to be on the sharp end of it.”
To get around these issues, Cantwell-Corn said governments need to foster independent expertise that isn’t reliant on funding or sponsorship from the companies. “We’ve got a risk of creating groupthink here, [so] we need to galvanise and build the capacity of civil society to contest that,” he said.
Bello largely agreed, noting that while there are many “amazingly talented people” in the AI sector who “would be thrilled to work for the common interest”, the independent funding mechanisms to allow them to do this simply don’t exist right now.
She added that governments should therefore focus on creating “conditions for their freedom of thought and freedom of manoeuvre”, but noted this would be a process rather than an overnight change.
Matt Davies, the UK public policy lead at the Ada Lovelace Institute, said while the commitments and agreements signed at the summit help set a direction of travel and send signals out about the need for responsible, safe and inclusive AI, the problems at the last summit around the lack of trade union and civil society representation are still very much present.
“We’d really like to see more civil society and public engagement in these processes. This was a big criticism we and others made of the Bletchley Summit, and it’s not necessarily been remedied this time around,” he said.
“That also goes for the institutes being set up. So far, there’s been relatively limited civil society engagement with the leadership of those institutes, or presence at board level that could help to set priorities and ensure that public concerns [are] really centred.”
Davies also said there is an urgent need for more socio-technical expertise to be brought to the global network of safety research bodies being set up, which can then work with “regulators and enforcement agencies on contextual downstream evaluation and how these systems are actually working in society”.
Davies shared his hope that the money put aside by the UK government to fund research into systemic AI risks would be a step towards adopting a more socio-technical approach to AI safety.
Considering systemic risks
A major issue with the last AI Summit at Bletchley Park was the balance of risks under discussion, which both industry and civil society voices criticised as too narrowly focused on speculative future risks over real-world harms already occurring.
Commenting on the balance of risks being discussed this time around, Bello said that every risk – whether speculative or already occurring – must be discussed: “We should talk about all of this risk in a really serious manner, and for that we need mental agility … we need all visions.”
For Cantwell-Corn, the main positive of the Seoul Summit was how the frames of reference had been expanded to incorporate more known harms rather than focusing so heavily on speculation.
Pointing to the AI safety research grants announced by the UK government, which contain its first commitment to examining the “systemic risks” of AI systems, Cantwell-Corn said this opens up the opportunity to consider how AI will affect the present and future of work, people’s ability to effectively exercise rights and protections, the distribution of public services, market concentration and corporate governance.
“This is less to do with AI per se, and more to do with long-standing conversations around monopolies and oligopolies, and how they distort markets and create bad social outcomes,” he said. “It’s also about political choices around how AI technologies get adopted, who are the winners and losers of that, and who carries the burden of things like disruption and reskilling.”
On systemic risk, Moles similarly highlighted the impacts of AI on people’s daily lives, and particularly their jobs, if the technology does reach the stage where it can reliably replace a broader range of human labour.
“If we get to the stage where AI can negate many, many jobs and make them not need a human being, then we’re going to have mass unemployment, mass civil unrest and a whole generation or more of people who, because they don’t have the skills, cannot do anything productive for society and earn themselves a decent living,” he said. “That’s the time when universal basic income starts becoming a reasonable proposition.”
However, like Cantwell-Corn, he noted this isn’t so much a problem with AI as it is with the shareholder-focused capitalist system in which it is being developed and deployed.
Highlighting the historical role of Milton Friedman – an influential American economist who argued the main responsibility of a business is to maximise its profits and increase returns to shareholders – Moles said that capitalism has shifted in the past 40 years away from a stakeholder model, whereby people actively contribute to businesses and are looked after as a result, to one where they are disposable.
“We keep hold of them when we need them, and then in the bad times, even if we’re profitable, we let them go, because letting them go makes the bottom line look better and increases value to shareholders,” he said. While this is not an AI-specific issue, he added, there is the risk that such trends are exacerbated by the wider roll-out of these technologies throughout the global economy.
Voluntary commitments
Moles “wasn’t so impressed” by the voluntary commitments made by the 16 AI companies. He highlighted the specific example of OpenAI recently disbanding its unit focused on dealing with long-term AI risks after just a year in operation. “It’s a complete contradiction, they’re saying one thing and doing the other,” he said.
Going forward, Moles said companies need to publicly and transparently show concrete examples of when they are putting the brakes on AI, building trust by moving beyond the need to take their word for it.
“I want to see specifics. I want to see big companies and the AI institutes coming out and saying, ‘We’ve found this specific issue, let’s work together to ensure we have global mitigation for this’,” he said.
However, given the short-term outlooks of many politicians, Moles said he also wouldn’t trust them to effectively regulate for long-term problems they don’t fully understand: “I would much rather the leaders of the AI companies got together and very publicly and very explicitly stated what they’re going to do to make this technology safe.”
While Moles acknowledged the companies are not always best placed to know when to pull the handbrake on AI, due to being so heavily invested in building and profiting off the technology, he said the technology needs to develop further before governments intervene: “Do we trust the companies? I think we’ll have to wait and see.”
Chris McClean, global lead for digital ethics at Avanade, agreed that industry should not let governments drive the conversation around AI regulation.
“Regulation has never been able to keep up with innovation, but if the tech industry truly wants AI to deliver on all the promises it brings, we need to build and deploy systems that people trust and want to engage with. If people aren’t comfortable with AI systems processing their data, guiding important decisions, or streamlining critical workflows, its power starts to dissolve very quickly,” he said.
“Pleasingly, we’ve seen a very clear shift over the past six months in business and technology leaders getting serious about responsible AI policies and practices. While there are still some valid concerns that companies are talking about responsible AI more than actually investing in it, the trend is clear: responsible AI and AI governance efforts are accelerating in organisations of all sizes, regardless of sector.”
Others were similarly critical of the voluntary commitments, taking the view that harder rules would help set the direction of travel for AI.
Davies, for example, said that in the historical development of other sectors like pharmaceuticals, governments did not wait for an evidence base to legislate, and instead gave regulators and agencies meaningful powers to intervene, which then led to the development of the standards and agreements that we have for the sectors today.
“In various jurisdictions, governments introduced a really high bar the companies would have to clear to meet market approval, and then empowered regulators to decide whether evidence was meeting that bar,” he said, adding that while the safety testing regime for that industry does include input from the companies, they have crucially never been allowed to set the terms for themselves in the way technology companies are currently doing for AI.
“The [AI developers] need to meet the approval of a regulator that has the statutory power to remove things from the market, and that fundamentally changes not only the accountability, but also the ways in which the rules are set in the first place.”
He added it also means public interest is taken much more seriously from the outset: “You can look at [the Seoul agreements] in one sense as ‘we’ve got these great agreements, we just now need to bring in accountability mechanisms’ – and we welcome that – but I think there’s a more fundamental problem, in that, if you don’t do that sooner rather than later, then you end up potentially with a set of rules that have largely been set by industry, without governments having the leverage to reshape them.”
He said it was therefore key to give regulators some kind of statutory powers to deny market approval before entering into further negotiations with the companies.
“Governments need to give themselves the tools and the leverage to be able to enter into negotiations with companies, and then develop the right commitments and the right standards,” he said. “At the moment, in a voluntary system, companies have all the leverage, because they can simply decide to walk away.”
For Cantwell-Corn, “the whole idea of voluntary and self-regulation is a classic play by powerful industries to deflect and stave off mandatory regulation, i.e. effective regulation”. He further described the voluntary commitments as a form of “regulatory capture” that represents “a subordination of democracy to the goals and incentives for a really tiny handful of the industry.”
Commenting on the lack of mandatory AI safety rules for companies, Cantwell-Corn added there was a “great power games” element, as rising geopolitical tensions between powerful countries mean they are reluctant to regulate AI if it means they “lose the technological edge”.
Bello also noted there was an element of regulatory capture in the voluntary commitments, adding: “It just makes sense from a sociological perspective to have legislative bodies create walls for companies. If we do a parallel with the pharmaceutical industry, we would never ask laboratories to create their own rules, and never make those voluntary. It’s safe and practical to think of AI systems and models as experimental objects … and for [medicine] experiments we have tests and evaluations and independent bodies to ensure safety standards are met before they can hit the market.”