Europe’s unbreakable dependency on American AI

Nuclear power plant in Pennsylvania being refurbished for $1.6 billion: 'The energy demands of these data centres are so great that Microsoft, for instance, is taking over and restarting a nuclear reactor in Pennsylvania, delivering over 800 megawatts, aside from other energy investments.' (Photo by Chip Somodevilla/Getty Images)


The global AI race has become the defining feature of our times. It is now unfolding at unfathomable scale and speed, cutting across all domains. The latest example is SpaceX’s acquisition of xAI, with Elon Musk placing his largest bet yet on orbital data centres that could deliver vast amounts of compute powered by hundreds of gigawatts, and eventually terawatts, of space-based solar energy. The idea is that better AI models lead to faster innovation and discovery, higher productivity and smarter technology – overall, to a stronger national economy and military, and thus a massive strategic advantage. It’s increasingly clear to most observers that the future, writ large, will be shaped by AI power to a decisive degree.

Certainly, some doubt – or at least uncertainty – still hangs over this narrative. AI tech is still new, the hype is off the charts, and the practical and societal difficulties ahead are unprecedented. Nonetheless, even if there are setbacks or even crashes along the way – as the dotcom bubble was for the internet – AI is here to stay and to upend the world as we know it. There is no escape from the deep questions it raises for decision-makers.

Finding the right response to the strategic challenge of AI, therefore, is now probably the single most important problem facing any geopolitical player with great power ambitions – or perhaps even with the more modest ambition of long-term independence and survival. The task is made harder because there is something unique about this technology, compared to all that’s gone before. The stand-out feature of advanced AI on the horizon is its recursive growth cycle, in that AI will soon be able to improve itself autonomously. More AI begets more AI, in an emerging self-reinforcing loop limited only by compute-industrial capacity. This is why the penalty of falling behind in this race can be perpetual strategic subordination – and why the window of time in which this will be decided will likely not last more than a few years.

It is worth clarifying a bit more what this strategic challenge raised by AI actually is – and also how it is changing. The essential aspect of the AI race at the moment is the competition between different “foundational models” – the core technology at the heart of frontier AI. It is a short list, and the names are familiar by now: OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, xAI’s Grok, Meta’s Llama and Mistral; and, on the Chinese side, DeepSeek, together with the lesser-known Alibaba Qwen and Moonshot Kimi. They all use the same basic underlying design: transformer architectures trained to predict the next token.

The main differentiator among them is the training process: the combination of the scale and quality of the data used, plus post-training techniques. Very roughly speaking, the more data is fed into the model, and the better that data is (and the faster it can be processed), the “smarter” and more capable the AI model gets. It is training capacity, therefore, that is the core problem at the present time. There is nothing elegant or clever about the solution. It comes down to a brute-force application of industrial resources: large-scale data centres running countless high-performance chips, requiring vast amounts of electricity to keep going.

The term of art for this is “hyperscaling”. Given the staggering resources and investment required, this has only really been a two-horse race so far, between the US and China. For example, the four American hyperscalers – Microsoft, Google, Amazon and Meta – are collectively planning to spend around $650 billion on AI-related infrastructure in 2026 alone. The energy demands of these data centres are so great that Microsoft, for instance, is taking over and restarting a nuclear reactor in Pennsylvania, delivering over 800 megawatts, aside from other energy investments. OpenAI’s power requirements for the new data centres it is already building under its Stargate project stand at some 7 gigawatts, the equivalent of about five to seven nuclear reactors.

Needless to say, Europe’s efforts fall far short of any of this. It has only between five and 10 per cent of global AI compute capacity – compared to 60-75 per cent in the US – amounting to less than 10 gigawatts. Europe’s largest single project, Mistral’s 1.5 gigawatt AI campus near Paris, is only 20 per cent the size of OpenAI’s Stargate, which in turn is just one of America’s multiple AI infrastructure projects in development. 
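As a rough back-of-the-envelope check on these comparisons – assuming, as a working figure not cited above, that a single large nuclear reactor delivers on the order of 1 to 1.4 gigawatts – the arithmetic works out as follows:

\[
\frac{7\ \text{GW (Stargate)}}{1\text{–}1.4\ \text{GW per reactor}} \approx 5\text{–}7\ \text{reactors},
\qquad
\frac{1.5\ \text{GW (Mistral campus)}}{7\ \text{GW (Stargate)}} \approx 21\% \approx \text{one fifth}.
\]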

As for money, the EU’s flagship InvestAI initiative aims to mobilise €200 billion for AI (€150 billion of it from private sources), but over an unspecified number of years, likely until 2030. As a rough estimate, Europe’s total annual AI investment across all sources is currently perhaps between 10 and 15 per cent of what is being spent in the United States. And despite Europe trying to up its game, the US is forging ahead even faster, so that the AI gap between the two is widening, not narrowing.

Europe’s supreme misfortune is that the AI problem is hitting it at the same time as the defence problem and the energy problem. Rearmament is already a costly priority – as a result of political choices freely made by European capitals – while high energy costs are wreaking havoc with Europe’s industrial competitiveness and adding an expensive premium to any attempt at AI “hyperscaling”. And this is to say nothing of all the other domestic pressures on European budgets in terms of social spending, public services and so on. On current trends, and with the same political system and approaches in place, there is obviously no real way of solving this interlocking set of problems; the best that can be hoped for is managed decline into increasing irrelevance and subordination to the greater powers of the world – the US, China and, in military terms, at least for a while, Russia.

It is in this unforgiving context that European opinion has turned increasingly strongly against the United States since Donald Trump’s return to office. There is a clear and perhaps justifiable recognition – and fear – not only that Europe can no longer rely on the US alliance for both trade and defence as it used to, but also that America is becoming a potentially hostile power. The Greenland affair was a tipping point in this respect and forced even some of America’s most loyal European friends to see the sense of reducing dependency on the US as much as possible.

The problem is that, among the hotter heads in the European governing classes, these sentiments sometimes translate into virulent anti-Americanism and foolish calls for a “break” with the US and a turn to full European “autonomy” in every key respect. It is hard enough to execute such a decoupling in the defence domain, for reasons often covered by this column. But it is even harder to part with the US, or seriously “challenge” it, when it comes to AI and data.

It is Europe’s critical inferiority in AI that has now emerged as its fundamental Achilles’ heel and the unbreakable chain – or leash – that ties it to America. European officials and posturing politicians can agitate all day long for various “bans” on US Big Tech, waving the EU rules and regulations passed by Brussels to make life hard for foreign companies.

But the truth is, Europe holds no real cards on AI against the US: It needs American AI tech in order to keep pace economically and militarily, and cannot afford a total rupture or some kind of US AI embargo on the EU – which is ultimately where the logic of confrontation leads. Even Europe’s own efforts at scaling its frontier AIs depend largely on US chips like those made by Nvidia. Deprived of access to the best (US) AI tech, Europe would rapidly lose any economic competitiveness it still has left and tumble out of the first ranks of global power – right at the moment when AI is beginning to be implemented at scale across modern economies and its transformational power is starting to show, including in the military field.

There is a fragile and narrow path forward, though. Anti-Trump rants aside, Europe’s actual current strategy seems to be to continue to use the best available AIs (i.e., the US ones) for competitiveness and cross-economy adoption, while systematically reducing long-term structural dependence on non-European providers. Essentially it’s an attempt to balance cooperation and openness with sovereignty and regulatory control – a kind of strategic pragmatism where Europe naturally tries to impose its own values and protections (for transparency, safety etc – nothing intrinsically wrong there) while also aiming for sovereignty and “tech autonomy” through its own, more modest, AI efforts.

The only hope of this working – in the sense of Europe being able to scale domestic foundation models in a relevant time-frame and avoid falling into permanent AI subordination to the US or China – is if “hyperscaling” itself, as practised by the Americans, slows down. Or, better said, if the marginal gains from continued scaling of US frontier models begin to diminish. There are some tentative indications that this may be starting to happen. Over the past year there has been growing discussion of the rate of AI model improvement slowing down, or even plateauing, despite ever more training capacity (compute) being thrown at it. However, there is no clear consensus on this, and other reports note continued improvement while still holding out the possibility of a downturn by 2030.

If diminishing returns in pre-training scaling become more pronounced – as some prominent voices (including Ilya Sutskever) and analyses suggest is already happening or likely in the near future – the burden of driving further advances in frontier AI could increasingly shift from sheer training compute to post-training innovations, inference-time scaling, algorithmic breakthroughs, and more efficient techniques. In this scenario, Europe would still require substantial (and expensive) investments in data centres and compute infrastructure to train foundational models competitive with current global standards. But at least it would not have to chase after US companies that are continuously hyperscaling their pre-training capabilities.

If the Americans – and indeed the Chinese – begin to plateau, Europe could potentially close the gap over a few years of (very expensive) investment, and the contest would then become more balanced – at least as far as the development of AI tech itself is concerned. Of course, the real value of AI is unlocked through its practical application and deployment across the economy – which Europe would also struggle with – but that is a separate issue.

Unfortunately, this most positive of scenarios is rather unlikely. The hard fact is that the AI competition swallows so many resources that only the US and China can truly afford to play strongly in this game – while Europe struggles under its many other burdens, risks and disadvantages. 

AI is now a much stronger area of European dependency on the US than defence was – or still is – and there is little use in trying to pretend this is not the case. Whether they like it or not, European leaders must find ways to continue cooperating closely with America. A comprehensive “reset” of European-US relations could achieve a great deal, if only there were the political vision and willpower to drive it.