"I have a plan": Comparing AI action plans in the EU, the UK and the U.S.
Will economic competition overshadow AI safety?

IN politics, plans should not be overestimated – but they do reveal intent. That the United Kingdom, the EU and the United States have all published “AI action plans” in recent months is a sign that the AI policy debate has shifted and that policymakers increasingly see artificial intelligence as a growth engine.
But is there any other common ground between these plans? 2023 and 2024 were characterised by a surprisingly large amount of international cooperation – at the G7 and OECD level and even in the halls of the Vatican. Does the new focus on economic impact mean that cooperation on AI safety will be replaced by competition?
Each of the three plans – the UK’s “AI Opportunities Action Plan”, the EU’s “AI Continent Action Plan” and “America’s AI Action Plan” – has a different purpose: The U.S. plan is an executive strategy; it is the basis for subsequent Executive Orders directing agencies, procurement rules and standards guidance. The EU, on the other hand, can mobilise programme funding and shared infrastructure – and the EU AI Act already sets a common rulebook. The UK’s plan is a mix of government as a “first customer” and proposed regulation.
This means that both the tone and the content of these plans differ slightly, and yet they all touch on a number of key themes: AI infrastructure, data, AI adoption, skills, industrial policy and regulation.
Let's look at each of these topics in more detail.
AI Infrastructure
Infrastructure is a key component of all three plans. The UK has the most internationalist approach and differentiates between “sovereign compute”, “domestic compute” and “international compute”. The U.S. and the EU, on the other hand, focus on expanding domestic compute – through public investments in the EU and streamlined permitting in the U.S. Both the UK and the EU intend to use the new compute power to help domestic companies grow without having to worry about access to high-end compute.
WHO’S AHEAD? The United Kingdom has the least protectionist and most pragmatic approach to building new data centres: The idea of securing access to compute power in other countries is an interesting and pro-trade approach, and the Brits’ plan to use national compute to advance the Labour government’s “missions” and projects of national importance promises better bang for the buck than the EU’s heavy focus on providing access to SMEs. The UK also proposes “AI Growth Zones” with lighter permitting while still taking environmental concerns into account.
Data
Data features prominently in the EU’s “AI Continent Action Plan”. The U.S. merely commits to developing standards for scientific data and to incentivising researchers to make more data available, while the UK likewise only proposes more open government data (which certainly is very helpful) and a “British media asset training data set” which I hope will contain “Yes, Minister”. The EU, on the other hand, has already adopted legislation to incentivise data sharing across industries (albeit with limited success) and is currently simplifying and modernising these rules.
WHO’S AHEAD? Not only did the EU show foresight by including the text-and-data-mining exemption in the EU Copyright Directive (something the Brits want to copy), but the Continent’s thinking about data marketplaces is also more advanced than in the UK and the U.S. However, there is a big gap between the EU’s intentions for data marketplaces and their implementation: It remains unclear what should incentivise companies to share data with others and what added benefit marketplaces would bring.
AI Adoption
Harnessing AI to kick-start growth is the key theme in all three plans, starting with the U.S., which sees itself in a geopolitical race with China. The U.S. therefore proposes a “try first” culture across all industries and wants to advance adoption in government through interagency collaboration, talent exchange and the sharing of best practices. In addition, the U.S. wants to quickly adopt domain-specific standards to drive AI adoption.
The UK's “AI Opportunities Action Plan” sees the UK government as the “first customer” and urges public administrations to use their purchasing power to support the British AI ecosystem by “moving fast and learning things”. The EU focuses its adoption efforts on SMEs and wants to transform the “Digital Innovation Hubs” created over the past five years into “Experience Centres for AI”. That is probably not going to go fast and will be hard to scale. The EU’s announcement that it will host “structured dialogues” to foster adoption in key sectors sounds more promising, even if not very innovative.
WHO’S AHEAD? The U.S. government gives officials and companies the license to integrate AI deeply into their operations. And while this is not without risks (contrary to popular belief, AI is not unregulated in the U.S.), this will surely lead to faster adoption.
Skills
When it comes to skills, there is one big difference between the U.S. plan and the visions of the UK and the EU: immigration. While the UK and the EU want to attract international talent (the former even with a dedicated headhunting unit), the U.S. plan does not mention the term once. But with such a density of talent (and large paychecks), there is probably no need for a dedicated AI immigration push in the U.S.
The need to upskill workers for the age of AI is a no-brainer, and again the UK has the most pragmatic approach, suggesting publicly funded AI skills programmes like those in Singapore, South Korea – or France.
WHO’S AHEAD? The UK has the most pragmatic approach. When it comes to education, public money is almost always well spent, and that is also true for AI skills development.
Industrial Policy
Each jurisdiction starts from a very different position with regard to its industrial base, so naturally the focus differs considerably. To the surprise of many, the U.S. plan praises the role of open-source AI and is also the most advanced when it comes to accelerating scientific progress through AI. The UK, for its part, wants to appoint “AI champions” in sectors where it already has a strong presence and – ultimately – support the creation of “national champions”. The EU has the most comprehensive understanding of industrial policy – from building infrastructure to supporting European generative AI models.
WHO’S AHEAD? It is hard to compare the different approaches directly, but clearly each jurisdiction plans to push adoption in its key industries.
Regulation
The perspective on regulation is where the three jurisdictions differ most. After two and a half years of close cooperation on AI safety, governments are starting to explore different paths.
In terms of regulation, the EU has already cast its die with the EU AI Act, which has started to come into force. But in an effort to speed up AI adoption, Brussels is planning to follow up with an “Apply AI” strategy and to simplify compliance with the AI Act.
The U.S. plan promises to revise or repeal regulations and rules that could hinder AI development and suggests tying federal funding for states to their stance on AI regulation. In other areas, the Trump Administration adds compliance burdens, such as the obligation for federal agencies to only procure AI that is “free of ideological bias”.
The UK seeks a middle way by maintaining the AI Safety Institute’s mandate to do research on model evaluation, foundational safety and societal resilience. They also want to oblige regulators to regularly report how they “supported growth” by enabling safe AI adoption (I can already imagine Sir Humphrey writing this report).
WHO’S AHEAD? The transatlantic partnership. It was perhaps a bit surprising that “America’s AI Action Plan” featured a whole section on international diplomacy (even if under the premise of maintaining U.S. leadership in AI). And the EU, too, has committed itself to working with like-minded partners.
The first era of the internet was characterised by liberal policies such as Section 230 and the European E-Commerce Directive, followed by the Brussels Effect, when European rules like the General Data Protection Regulation started to have global impact, counterbalancing the Wild West years of the world wide web.
The AI Safety Summits of 2023 and 2024 showed that there might be a third way: While every country is racing to reap the benefits of AI adoption, there is still a lot of interest in international cooperation on safety and standards. That can only lift all boats.
Publisher’s note: In a previous post, I have compared the UK’s “AI Opportunities Action Plan” and the EU's approach towards AI.