AI policy is a relatively new field. Given the long-run strategic importance of artificial intelligence, it’s important for Governments to ensure the field continues to grow and remains competitive. One way to do this is to look at barriers and foster an environment that is more favourable to innovation and breakthroughs. One of these barriers at the moment is the availability of compute resources, and in Europe, the slow adoption of cloud technology.
Over the past few years, Governments have announced a number of national and supranational projects relating to cloud provision. The German economy minister Peter Altmaier has described the GAIA-X proposal as the EU’s most “important digital aspiration in a generation”. Congresswoman Anna Eshoo claims the new National AI Research Resource Task Force Act will “accelerate and strengthen AI research across the US by removing the high-cost barrier to entry of compute and data resources”.
Here I want to briefly explore what the UK, the EU, and the US are doing in this space: what are the differences, and which looks best placed to succeed? I then offer some thoughts on why I think the US proposal is more sensible than the EU plan, as well as some broad recommendations.

So who is doing what?
THE UK APPROACH: STRATEGIES!
This piece mostly focuses on the EU and the US - but for completeness, a few words on the UK. The UK, as far as I am aware, is mostly silent on cloud/compute infrastructure. The AI policy landscape has ossified a bit here, and the substance behind a lot of announcements remains opaque and difficult to scrutinise. On cloud and compute, the UK Crown Commercial Service recently announced an agreement with Google, although this mostly seeks to help the digitisation of public sector agencies. This is great but doesn’t address the compute gap or help academics access compute at scale.
Unfortunately, a lot of AI policy in the UK at the moment is focused on announcements, quangos, and infinite ‘strategies’. There are many reasons for this, including the Brexit-related work that has paralysed the civil service over the past two years. In July, the UK Government also briefly expressed some intentions in an R&D Roadmap, but this is the only sentence about cloud infrastructure: “And we want to develop a National Digital Research Infrastructure to link supercomputers, clouds, networks, software and people.”
THE EU APPROACH: A SUPRANATIONAL VISION
The EU has proposed Gaia-X, a Franco-German project first presented to the public at the Digital Summit in October 2019. The idea is to create a European cloud data infrastructure. It also comes with its own (non-binding) principles.
Gaia-X will reportedly offer a secure environment for B2B data-sharing within the EU and function as a platform for businesses to search for data storage providers – in Orange’s words, a “repository of existing services”. Other reports claim it will be a “platform joining up cloud-hosting services from dozens of companies allowing business to move their data freely with all information protected under Europe's tough data processing rules”. The Technical Architecture, however, seems to point towards something a bit more sophisticated than a database or a search engine. Is there actually a need for any of this? I’m not really convinced yet.
One question is how foreign cloud providers will be treated. Europe should avoid the Chinese model and resist unduly limiting foreign competition (this harms consumer welfare). The Gaia-X documentation does state that “Users and providers will have equal and non-discriminatory access to the GAIA-X ecosystem”, but some techno-nationalist voices at the Commission feel differently.
THE US APPROACH: A RESEARCH-FIRST INTERVENTION
On June 4, 2020, Anna G. Eshoo (D-CA), Anthony Gonzalez (R-OH), and Mikie Sherrill (D-NJ) introduced the National AI Research Resource Task Force Act. This seems to go in a better direction. The national artificial intelligence research resource they seek to build is defined as “a system that provides researchers and students across scientific fields and disciplines with access to compute resources, co-located with publicly available, artificial intelligence-ready government and nongovernment data sets and a research environment with appropriate educational tools and user support”.
Data Center Dynamics reports that “The platform would likely run on commercial cloud systems, marking a shift from federal research having traditionally taken place in government labs and supercomputers. Funding levels, how cloud providers would be paid, and who would own the data would be decided by the task force and Congress.” Details are scarce, so it’s difficult to assess the specific mechanisms and frameworks.
The proposal is backed by leading companies (NVIDIA, AWS, Google, etc.) and, importantly, organizations such as the Allen Institute for AI, IEEE, and OpenAI.
Comparing the two approaches
Good policy starts with identifying good objectives
In a sense, both initiatives try to achieve a common objective: supporting a national cloud infrastructure to fuel R&D and growth. To do so, the EU’s policies ultimately target SMEs and the private sector, whereas the US targets academia and R&D centres. The EU also seems to have an additional desire to create a homegrown cloud champion as part of its digital sovereignty agenda. And this is why I think Gaia-X will not succeed:
1. It does not address a market failure: it’s intended to help SMEs, rather than the R&D ecosystem. In Europe, SMEs struggle with cloud adoption for reasons that are usually related to skills or cost, and Gaia-X addresses neither. During the EC’s data strategy consultation, 75% of cloud users indicated they had the necessary flexibility to procure other cloud services.
2. If it’s just a cloud provider database/search engine, it’s probably not necessary and a waste of resources. SMEs don’t need help finding cloud providers; they need help digitising and growing. Some commentators described Gaia-X as an attempt to create a publicly funded European cloud giant. That doesn’t seem to be the case, but if it is, it’s doomed to fail (as Quaero did).
To me, targeting academics makes more intuitive sense, given that it is more difficult for them to access compute than it is for private sector labs. As Stanford HAI notes, “There is a wide gulf between the few companies that can afford these resources and everyone else. To put this in perspective, Google used nearly $1.5 million in compute cycles to train the Meena chatbot it announced earlier this year. Such costs for a single research project are out of reach for most corporations, let alone for academic researchers.”
So already the US approach seems more sensible. The proposal also differs from the Gaia-X project in significant ways:
The US Government is addressing a market failure by investing in areas where the private sector won’t.
The platform will run on commercial cloud systems, so market incentives are not skewed.
This marks a shift away from the tradition of federal research taking place in government labs and on government supercomputers.
This could accelerate and strengthen AI research across the US by removing the high-cost barrier to entry of compute and data resources. As Eric Schmidt explains, “this infrastructure would democratize AI R&D outside of elite universities and big technology companies and further enable the application of AI approaches across scientific fields and disciplines, unlocking breakthroughs that will drive growth in our economy and strengthen national security.”
There is still a lot to be worked out, such as how the data flowing through this resource will be treated, but overall, this seems promising. The EU should probably try to emulate this, not AWS. But regardless of the stated objectives, there are other reasons not to be too optimistic about Gaia-X:
It doesn’t help European competitiveness: Saku Panditharatne makes a very salient point in her newsletter: “Unfortunately, the core idea is correct, countries will have to build their own technology in order to preserve their independence and way of life. The way to actually do that is to support commercial culture and rule of law, so home-grown entrepreneurs can build effective technology companies.” Policymakers should remain laser-focused on economic growth more generally – an open immigration policy, for example, will likely be more effective than an AI-specific industrial package. As venture capitalist Michael Jackson points out, “Officials already have all the powers they need to help local startups grow. They could revamp the Continent's complicated labor laws that vary widely between countries, overhaul how workers for fledgling companies pay tax on potentially lucrative stock options and invest heavily in the region's often creaking digital infrastructure. But such practical policymaking doesn't win headlines or votes.”
It feels protectionist: Peter Altmaier claims that Gaia-X will help “cut overreliance on U.S. or Asian providers.” The 22 suppliers participating are either French or German. This is a shame, as strengthening the trans-Atlantic bond between the US and Europe is now more important than ever (although admittedly this is pretty difficult with someone like Trump in power). This is true for economic reasons, but also from a foreign policy perspective - GPAI is a good step in this direction. A de facto exclusion of US companies is counterproductive, and I don’t think it will help make the EU a leader in anything. If the EU wants hyperscalers, it should focus on economic policy rather than try to create one from scratch. Some people argue that infant industries deserve some protection. This might be true in some cases for some developing countries, but the EU is not a developing country, and I suspect its failure to create tech giants is mostly due to the investment environment, attitudes towards technology, and culture/risk aversion. Industry protection is also frequently counterproductive or misapplied in any event.
It’s bad economic policy: top-down Government projects like these frequently fail. These initiatives follow a dangerous recent trend, like France trying to build a state-run travel site for tourists meant to rival Airbnb and TripAdvisor. This is misguided: Governments struggle to build apps, let alone tech infrastructure. The Economist warns: “Instead of pursuing an activist industrial policy, Europe should put consumers first. That means enforcing competition.” This isn’t just about protectionism: megaprojects are extremely difficult to do well, and public authorities are famously inefficient at managing large, costly projects, particularly when markets are better suited to deliver them. See for example this excerpt from James Q. Wilson’s excellent textbook on public policy:

It’s structurally unsound: as Benedict Evans writes, it’s more interesting to think about the ways tech has changed to make such a thing a fool's errand: “the shift from telcos, large industrials and governments deciding the future to decentralised 'permissionless' innovation by huge numbers of software companies operating all over the stack.” In the UK, rather than creating a national digital bank, policies like Open Banking allowed the creation of challenger banks like Revolut and Monzo – this is much better for consumers. I don’t see any reason to treat cloud any differently. And with an allocated budget of €1.5m a year, I’m not sure Gaia-X will be particularly effective. The Congressional Budget Office doesn’t provide a cost estimate for the US proposal, but my guess is that it will cost a lot more than €1.5m per year.

So what should the EU do?
Creating a national tech champion from scratch is doomed to fail, and creating a centralised database of pre-selected cloud providers is unlikely to do anything useful. Gaia-X tries to do a bit of both: it seeks to establish federated data and infrastructure ecosystems. To me, it’s unclear whether there is actually a need for this, and if there is one, whether this need is best filled by the EU (rather than organically through usual market mechanisms).
The Gaia-X proposals try to achieve too many goals at the same time: data sovereignty, facilitating data transfers and access to compute, creating standards, helping SMEs digitise, and so on. Ultimately, what this will look like in practice remains unclear. GDPR made many lawyers rich, and I’m not yet convinced that Gaia-X will make the EU landscape any clearer. But while it’s early days and I could be wrong, I feel like Gaia-X should probably go for a narrower approach.
First, it should focus on harmonising laws, enabling APIs, and supporting industry efforts on standardisation and interoperability. This would actually help SMEs significantly: if a Polish cloud company can easily enter the Spanish market without spending huge amounts on compliance, it will be able to grow far more quickly than if it merely had its website listed in an EU database. In February, the Economist hoped that Gaia-X could be a tool to implement granular national data policy, instead of resorting to crude digital protectionism. Right now, however, data policy in the EU continues to be very fragmented.
One of Gaia-X’s outputs will be the creation of an ‘Architecture of Standards’. This is potentially positive, since national standards on cloud computing can create market barriers which prevent smaller cloud SMEs from scaling up. But a lot will depend on how this is designed in practice and by whom. The US Chamber of Commerce correctly observes that “As far as governments are concerned, their role should focus on supporting industry-driven standardisation efforts.” Indeed, only 39.8% of respondents to the EU data white paper considered that Governments should take an active role in the prioritisation and coordination of standardisation needs, creation and updates. Digital Europe also suggests that “Global standardisation activities are preferred over European-centred activities, and even more over national activities as domestic standards create further market fragmentation and technical trade barriers.” Something like the CE standard seems like a good model to emulate, although this definitely isn’t my area of expertise.
Second, it should be redesigned to focus on providing cloud compute resources to researchers. As described above, tackling this barrier will help ensure a more balanced playing field between universities and private research labs, and also democratise the scrutiny of machine learning models. The German Federal Ministry for Economic Affairs and Energy noted that Gaia-X “will be accompanied by facilitating research & development (R&D) programs where needed” – this is not enough.
Back in April, a group of researchers published an excellent report on mechanisms to support verifiable claims in AI research and development. It’s one of the few papers that offers concrete recommendations and goes beyond principles, case studies, and overly general statements. One of the problems they identify is the gap in compute resources between industry and academia: “In recent years, a large number of academic AI researchers have transitioned into industry AI labs. One reason for this shift is the greater availability of computing resources in industry compared to academia. This talent shift has resulted in a range of widely useful software frameworks and algorithmic insights, but has also raised concerns about the growing disparity between the computational resources available to academia and industry”.
The National Security Commission on Artificial Intelligence had also highlighted this risk a month before – and while this relates to the US, the same can easily be said about Europe. This is important now and will remain a problem in the near future: analysis from OpenAI shows that while we’re getting more efficient at training neural nets, since 2012 the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time.
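To get an intuitive sense of what a 3.4-month doubling time means for anyone funding research compute, here is a minimal back-of-the-envelope sketch in Python. The 3.4-month figure is the one from OpenAI’s analysis; the time horizons are purely illustrative assumptions of mine.

```python
# Back-of-the-envelope: what a 3.4-month doubling time in training compute implies.
# The 3.4-month doubling time is OpenAI's published estimate for the largest AI
# training runs since 2012; the horizons below are illustrative assumptions only.

DOUBLING_TIME_MONTHS = 3.4

def compute_growth(months: float, doubling_time: float = DOUBLING_TIME_MONTHS) -> float:
    """Multiplicative growth in training compute over `months`, given the doubling time."""
    return 2 ** (months / doubling_time)

if __name__ == "__main__":
    print(f"Growth over 1 year:  ~{compute_growth(12):.1f}x")    # ~11.5x
    print(f"Growth over 2 years: ~{compute_growth(24):.0f}x")    # ~133x
    print(f"Growth over 6 years: ~{compute_growth(72):,.0f}x")   # ~2.4 million x
```

In other words, even allowing for the training-efficiency gains that the same OpenAI work documents, any scheme designed to give academics meaningful access to compute will need funding that scales aggressively over time rather than a fixed, modest annual budget.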
Like a lot of initiatives in AI policy, addressing this requires the participation of many different players, and the coordination of Government. Doing so well can help empower researchers to scrutinise AI systems for ethical and safety considerations and enable the development of open source alternatives. Indeed, there is a rich literature on bias, fairness, transparency, accountability and so on. But for this kind of research to continue growing at the same pace as machine learning developments, academics need sufficient access to compute.
Lastly, it should acknowledge that companies operate in an international environment where extra-European data flows are key. Just like talent, labour, goods, and services, data is more useful when it can flow freely. There are of course some legitimate concerns around security, privacy, and governance, and these should undoubtedly be tackled where and when appropriate. But as the US Chamber of Commerce rightly warns, “Restrictions on the use of technology developed outside of the EU risks disadvantaging Europe’s own AI capacity, as many businesses benefit from partnerships with non-EU organizations, including those providing cloud capabilities and AI-related components, datasets, and software.”
This is crucial: protectionist and inward-looking policies will most likely backfire and harm the European Union in the long term. As the Center for Data Innovation writes, there are a number of dubious proposals for a “European Internet” circulating across EU institutions, trumpeting intentions to discriminate against foreign providers and boost European firms, while using China as an example to follow. This is dangerous, and Europe should resist “America First”-type policies for its tech sector. Nor should policymakers present this as a false dichotomy between ‘privacy/security’ and ‘free trade/openness’. The former values can adequately be protected without resorting to isolationist tendencies.
Should the EU take these steps, and devote more resources to tackling the underlying policy bottlenecks that have so far restrained the EU tech sector, I would be far more optimistic about the EU’s digital future. It is still early days, and designing large-scale policies on data, cloud and AI is very difficult in practice, so I expect a lot of these proposals to morph over time. Hopefully for the better.
NB: While I read extensively about artificial intelligence, institutional economics and foreign policy, I’m no expert on cloud technology; so if you have comments, critiques, or suggestions, I’d be super grateful if you shared them with me!

Illustrations: Clouds made from canvas and wood, scenery for a production of Jean-Philippe Rameau's opera, 'Dardanus'. The Palace of Fontainebleau, 1783