What the EU can learn from the UK's "AI Opportunities Action Plan"
The UK's mission-driven plan challenges the EU's more cautious approach
THIS Monday, the UK government published its “AI Opportunities Action Plan” (“AI Plan”), a set of 50 policy proposals drafted by British entrepreneur Matt Clifford. As the UK is trying to develop a “third way” for AI policy, it is worthwhile to have a closer look at how the British plan compares to the EU's approach.
The AI opportunity
The AI Plan follows a vision that is very similar to the EU’s: The UK aims to be an “AI maker, not an AI taker”. In other words, the UK wants to create national champions along the whole AI value chain. To achieve this, the UK pledges to invest in AI foundations (particularly computing and data infrastructure) and push the adoption of AI. With regard to adoption, the plan’s focus is on the public sector with the hope that this will subsequently encourage private sector investments in AI.
WHAT THIS MEANS FOR THE EU: While the vision is similar, the sequence of priorities is different in the EU. EU digital policy is driven by the implicit tension between creating European champions and driving adoption, and sometimes the EU creates champions without customers. The UK tries to mitigate this risk by leveraging the purchasing power of the UK government, which is more difficult (although not impossible) to achieve for the European Commission, as national governments will be hesitant to coordinate their procurement decisions.
Building AI infrastructure
In the context of the AI Plan, things like compute power, training data, skills and talent, but also regulation and AI safety are summarised under the broad umbrella of AI infrastructure.
According to Clifford, “access to compute will be a key pillar to economic security” which makes “ownership of critical strategic assets” essential. To achieve this, the AI Plan suggests a tiered approach that distinguishes between “sovereign AI compute”, “domestic compute” and access to “international compute” through agreements and partnerships.
In relation to the EU’s own ambitions in AI, a few policy proposals in the AI Plan stand out:
Clifford proposes to expand the capacity of the AI Research Resource (AIRR) twentyfold by 2030. While he emphasises that this does not equal a twentyfold increase in budget (the AIRR's budget is currently around one billion euros), it is still possible that the UK’s investments in AI compute will surpass the EU’s current financial commitments.
Dedicated AIRR programme directors with significant autonomy shall decide on the allocation of the UK’s sovereign AI compute, based on the five “missions” (or priorities) that Labour has set.
“AI Growth Zones” shall be established to speed up the building of new data centres; and
international compute agreements with likeminded partners (including the EU) should be negotiated.
WHAT THIS MEANS FOR THE EU: The tiered approach to compute infrastructure could be a model for the EU, given the current deadlock in the sovereignty debate. It admits that some level of sovereignty is useful, but does not attempt to make sovereignty a de-facto standard and thus a protectionist measure.
The proposal to designate “compute czars” who help would-be champions get access to AI compute in strategic areas such as health, manufacturing or green technologies is an interesting variation on the EU’s “AI Factories”, which support initiatives that promise a high return on investment.
Access to data
The AI Plan acknowledges that data is a key ingredient of AI-driven growth and proposes the creation of a National Data Library that not only publishes data sets, but also shapes which data is being collected for the future training of AI models. Access to proprietary data sets could be linked to the allocation of compute power and incentivise research and development. The UK government is also encouraged to license a copyright-cleared data set with media assets – a controversial proposal which has already attracted some criticism.
WHAT THIS MEANS FOR THE EU: With regard to data access, the EU is probably ahead of the UK with initiatives such as the Common European Data Spaces and laws like the General Data Protection Regulation (GDPR) and the Data Act that regulate data access. But while the effectiveness of the Data Act still needs to be tried and tested, the GDPR has its own flaws, including a high bar for data anonymisation (which is a key element of the UK’s plans to make health data accessible to research).
Teaching skills and attracting talent
The amount of AI talent that the UK needs to train according to the AI Plan is enormous: Within five years, Britain is supposed to “train tens of thousands of additional AI professionals”. Given that a Bachelor's degree already takes three years, these professionals would essentially have needed to start studying yesterday. That is why there is also an urgent need to broaden educational pathways into AI through professional education and upskilling.
An interesting approach to convincing talent to relocate to the UK is the proposal to establish an “internal headhunting capability on par with top AI firms to bring a small number of elite individuals to the UK”.
WHAT THIS MEANS FOR THE EU: Given that the EU is not a nation state but a union of states, it will be politically difficult to mirror the UK’s efforts in AI headhunting at the EU level. But I can certainly imagine leaders from France, Germany, Poland or Portugal jetting off to the West Coast, poaching founders to build their businesses in Europe ...
Regulating AI (or not)
The chapter on regulation is full of ambiguity from a European point of view: While the EU gets a shoutout for the innovation-friendly text and data mining exemption in the Copyright Directive and for the idea of regulatory sandboxes for innovative AI solutions, the AI Plan generally proposes a more conservative approach to regulation.
The work of the AI Safety Institute (which among other things conducts pre-deployment evaluations of new foundation models) is praised by Clifford, too, while the AI Plan nevertheless suggests formal regulation of foundation models (a consultation on this topic is expected soon).
In general, however, the UK is likely to follow a sectoral approach to AI regulation. In addition, the UK government pledges to support the “AI assurance ecosystem”, which essentially means a self-regulatory approach to most AI use cases.
WHAT THIS MEANS FOR THE EU: The EU has of course already implemented AI regulation. With regard to foundation models and low-risk AI, the UK seems to be following an approach similar to the EU AI Act's, while the regulation of riskier AI models will follow a sectoral approach. The EU, on the other hand, has both horizontal laws (the AI Act) and sectoral rules, e.g. for AI in medical technology.
Driving the adoption of AI
The adoption of AI is seen as a critical element of the AI Plan. Central to this goal is the rapid adoption of AI in the public sector, following a “scan, pilot and scale” approach where the public sector identifies an AI use case, creates a pilot and quickly scales adoption when the pilot succeeds.
WHAT THIS MEANS FOR THE EU: Given that the EU has only very limited say in public procurement, this approach cannot be easily used as a blueprint for the EU, even though member states could adopt this model for themselves.
Driving adoption in the EU will in reality be slower than in the UK, because existing laws make adoption more complex. The AI Plan, for example, states that AI helped teachers cut down on the 15+ hours per week they spend on lesson planning and marking. Under the EU AI Act, however, the use of AI to grade an essay could be a “high-risk” use case. Guidelines on how AI can be safely used in education and other sectors should therefore be a priority to accelerate the adoption of AI in Europe.
Comparing the UK’s and EU’s approach to AI
Of course, the AI Plan cannot be a blueprint for the EU: the EU has only limited powers and a lot of the proposals made by Clifford would have to be adopted by EU member states rather than the European Union.
And still, it is valuable to compare the approaches and acknowledge both the similarities (e.g. on the importance of developing national champions along the AI value chain) and differences (a sectoral rather than horizontal approach to regulation). Even though the EU might not be able to scour the world for AI talent, it can create standards for data collection and use, for safe anonymisation of personal data for AI training and for the use of AI in schools.
Finally, what I found striking is the inspiring language of the AI Opportunities Action Plan, which is written in the spirit of Mariana Mazzucato’s “mission economy”. Unfortunately, this language does not seem to resonate well with decision makers in the European Union, as the Financial Times reporter Martin Sandbu wrote recently. Sandbu diagnoses a “commitment phobia” in the EU: while it often surpasses itself in times of crisis, it is less good at setting long-term goals and then pursuing them vigorously.
This seems to be a great moment in time to learn this skill.
📚 Read on
In this noteworthy essay, Carl-Benedikt Frey and others sketch out how policy choices influence the adoption of AI and productivity growth.
This slightly older (September 2023) but still relevant focus group study by Milltown Partners sheds some light on the public’s expectations for responsible AI. The interviewees were based in the UK, the US and Germany.
In this video, Google’s Nicklas Lundblad speaks about the delicate balancing act of regulating AI for both safety and economic benefits, drawing on many examples from the (contemporary) history of tech regulation.