How can the UK strengthen the role of people and communities in “Sovereign AI”?

In recent months the UK government has agreed Memoranda of Understanding (MoUs) with a range of large, US-based AI companies. The latest is with OpenAI. It says that:
“this partnership will support the UK’s goal to build Sovereign AI in the UK: ensuring that the UK continues to drive critical AI research and participates actively in development of this unique technology”
The MoU was negotiated by the UK Government’s new Sovereign AI Unit, created following the AI Opportunities Action Plan.
Setting aside the risk that "Sovereign AI" is just a PR term, we see three possible interpretations of what the UK government means by it:
1. Sovereign AI means that the UK government remains in control of AI infrastructure as it is built by a mix of UK-based and overseas suppliers. In this case, embedding non-UK companies in critical functions would need to be avoided so that sufficient control can be retained, especially in the current turbulent geopolitical climate. To give a simple example, the government's Redbox AI tool, which summarises official documents for Ministers, should not be using cloud-hosted models from OpenAI, particularly when the US government has said it will remove climate change from the list of risks that model developers should recognise and act on.
2. Sovereign AI means that the UK government invests in and builds its own commercial and public AI ecosystem. The UK would then benefit from any future economic growth in the AI sector, and could deliver public projects without paying foreign companies that offshore profits and avoid taxes. It would also remove the risk of foreign interference in both critical and non-critical functions.
3. Sovereign AI means that not only the government but also people in the UK have a strong influence over how the state approaches the future of AI infrastructure and services, both now and on an ongoing basis. In that case, where is the commitment to democracy and human rights in the current headlong rush into AI adoption across public services and wider society? Where is the ongoing engagement with, and empowerment of, the public, communities, workers and individuals?
Perhaps the responsible government minister, Peter Kyle, can clarify, but as it stands the current government position seems to be a blend of 1 and 2.
We think more focus needs to be given to 3.
Power is at the heart of how AI will continue to be developed and deployed. As the government and businesses look to hand decision-making processes over from people to technology, we need to ensure that this doesn't deepen democratic deficits.
At the Sycamore Collective we believe technology should give people more power to shape their lives, not less.
We know how important it is to abide by democratic principles and to empower communities and workers to live better lives. We see a vital need to regulate how technology giants and governments can use technology. This matters for combating monopolies and resisting authoritarianism - whether at home or abroad.
When we look across the Sovereign AI Unit, the MoUs with AI companies, and wider AI policy, there are obvious gaps.
There is still no clear plan for regulating any of the many aspects of AI. Instead, the government has weakened data protection rules, while the Chancellor recently described regulation as a "boot on the neck" of businesses. The UK government's inaction risks the country taking Donald Trump's rules on AI rather than making its own, regardless of its proclamations of 'sovereignty'.
The Sovereign AI Unit is engaging narrowly: there is no place for consumers, workers, civil society, or the UK's four nations and many regions. To truly unlock the opportunities of AI, and other digital technologies, we need to support and grow capabilities across the whole economy and in our vital public services.
For example, achieving Net Zero remains a priority for the public. So any future plans for AI infrastructure development must offer clarity on energy use and carbon footprint, as well as local environmental and public health impacts. If the government is determined to support the widespread development of data centres, then public backing needs to be built into the strategy, the choice of where to build, and the ongoing operation of the centres. Are data centres going to be backed if the taps in people's homes run dry, or if the UK's already terrible record on water pollution gets even worse?
Regulating effectively and growing useful capabilities will take time. We are still learning how to do these things for decades-old technologies like the internet and social media. AI just adds more complexity.
The government could help us all understand its plans by being clearer about what it means by "sovereign AI" and why it is signing these MoUs with large US firms. In the meantime, we think there is one question that the government, and technology professionals more broadly, could usefully start by asking and debating:
How do we strengthen the role of people and communities in shaping and exerting their power over not just AI policy, but also the AI infrastructure and services that the government hopes many of us will build and use?