Half of the industry representatives at the AI Safety Summit hail from the US, the government’s published attendee list shows, as Prime Minister Rishi Sunak looks to make the UK the “geographical home of global AI safety regulation”. Is this a case of putting the cart before the horse, given that the UK’s technology industry is itself at a crossroads (see our earlier coverage of Arm Holdings’ decision to list on the US Nasdaq rather than the LSE)?
The full guest list for the two-day summit shows 40 “industry and related organisations” attending, with 20 of those headquartered in the US. From that list of 40, there are 34 technology companies, with the remaining six including trade bodies such as TechUK.
Analysis shows that 18 of the 34 technology companies are larger established firms, ranging from publicly listed cybersecurity firm Darktrace to US tech giants such as Meta and Google. The remaining 16 tech companies are startups and scaleups, but skew towards older and larger firms. They include Germany’s defence tech startup Helsing, founded in 2021, and the UK’s Faculty AI, founded in 2014.
The government published the full list of AI Safety Summit attendees on Tuesday, a day before the summit takes place at Bletchley Park in Milton Keynes. Topics for the two-day summit include the use of AI in election disruption, the erosion of social trust and the exacerbation of global inequalities.
Government officials from 27 countries will be attending, including US Vice President Kamala Harris. Multilateral organisations including the United Nations, the European Commission and Organisation for Economic Co-operation and Development (OECD) will be represented at the AI summit.
Elon Musk, the billionaire CEO of Tesla and owner of X, formerly Twitter, will be attending and is set to join Rishi Sunak for a conversation after the summit. Chinese tech giants Alibaba and Tencent are also sending representatives, with China agreeing to work with the United States, the European Union and other countries to collectively manage the risks from artificial intelligence.
Academic representation includes the University of Oxford and the University of Birmingham. Civil society groups include the Ada Lovelace Institute and Centre for AI Safety.
Some attendees were already publicly known in the run-up to the summit. Earlier guest lists indicated greater representation from larger US tech companies, attracting criticism from the UK’s startup community, which argued that smaller firms were being left out of the discussion and that the omission came across as a snub.
“As it stands, this ‘only-giants-can-speak’ approach carries across a worrying message to the AI startup community,” said Dr Roeland P-J E Decorte, CEO of Decorte Future Industries.
The final guest list shows a more even balance of Big Tech companies and startups at the AI Safety Summit. However, with the exception of France’s Mistral, founded earlier this year, there is little representation from earlier-stage startups.
Nigel Toon, CEO of Graphcore, a British company developing hardware and software for AI applications, told the UKTN Podcast that over-relying on input from larger tech companies risks commercial interests interfering with regulation. “I think one of the things we’ve got to be really cautious of… is we’ve got to be very careful of AI tech leaders who throw their hands up and say ‘regulate me, regulate me’,” said Toon, who is attending the AI Safety Summit.
“Because the risk is that governments are not in a good position to actually do that and they will instantly go back to those same people and ask for them to help them come up with the regulation – and that could easily result in commercial interests coming to the fore.”

For some, the sheer size of the US tech sector meant a large presence from across the Atlantic was inescapable.
“There is merit to bringing together a room full of knowledge and expertise, and at the moment lots of that comes from the US,” said Frances Spooner, partner at law firm Marriott Harrison.
“So it was almost inevitable the numbers would skew towards the bigger companies; we can probably learn from them. But if the UK is to truly ‘own’ the regulatory landscape it will need to ensure fair representation is given to different interests and not just Big Tech.”
Opting for the symbolism of Bletchley Park, which was home to World War II codebreakers, has limited the size of the summit’s guest list. Matt Clifford, Entrepreneur First CEO and the prime minister’s representative for the AI Safety Summit, revealed in October that the conference would be limited to “about 100” attendees. He said, however, that the size limitations gave the tech department a “real focus and discipline around attendance and around agenda”.
US President Joe Biden has issued an executive order requiring AI developers to share safety results with the US government, in a move that some saw as undermining the UK’s summit by placing the US at the heart of the debate. A case, perhaps, of size mattering.