EU’s vain desire to be AI regulatory boss: 7.5bn customers say No

AI safety? No, says EU, only fear and regulation (Photo by Alastair Grant - WPA Pool/Getty Images)


Brussels is again full of self-congratulation after the EU Parliament and Council passed the Artificial Intelligence Act, presented as the world’s first regulatory framework for AI. 

The new law parses AI applications into categories ranging from Unacceptable Risk (which are banned outright) down to Minimal Risk (which are not regulated). High Risk applications that “pose significant threats to health, safety, or the fundamental rights of persons” must be evaluated before they are permitted in EU markets.

The law exemplifies the EU’s commitment to the precautionary principle, which mandates regulation to mitigate plausible risks before they have actually materialised. By now, the downside of the precautionary principle should be manifest: it enfranchises worst-case scenarios as the basis for public policy.

Panic over “Frankenfoods” led the EU to ban a number of useful GMOs, including drought-resistant seeds and Golden Rice, which offers critical nutrients to children.

Even worse, the EU bullied poor countries suffering the impact of extreme weather and poor nutrition into rejecting these innovative GMOs.

“Fracking” is another scary word that led Europe to ban a technology that could have made the EU self-sufficient in natural gas for decades.

The German government phased out nuclear power after the Fukushima disaster, presumably to mitigate the risk of a similar tsunami swamping German reactors.

In each of these cases, the precautionary principle made an implausible worst-case scenario the basis of laws at odds with subsequent reality. GMOs, nuclear power and fracking have proven to be safe, and the risks associated with them manageable.

The post-material neuroses pioneered by the German Green Party have become European law despite their lack of a firm scientific foundation. The only real harm was imposed on EU citizens deprived of cheap power and gas, and agricultural varieties better suited to climate change.

The new AI Act reflects a more nuanced approach to risk by giving the Commission expert panels that will determine how new AI applications are assigned to the law’s risk categories. The law anticipates that most AI applications will fall into the unregulated Minimal Risk category, encompassing, for example, the use of AI in video games.

However, the neat categories established by the new law are at odds with the messy reality of AI’s evolution. Artificial Intelligence code will migrate freely up and down the EU’s categories, often serendipitously, without any explicit attempt by its developers to evade EU law.

AI developed for innocuous data analysis in one country could be used for intrusive social scoring in another. Every social media platform has various AI-powered utilities embedded in its programming, and we now know that social media poses a threat to the psychological health of teenage girls.

Does this mean that Instagram is now a High Risk application subject to a “Fundamental Rights Impact Assessment?” Are the harms associated with this and other social media platforms an acceptable precedent for future AI, or will the EU impose swingeing fines based on an expanded threat assessment of these applications?

The recent history of the GDPR privacy law shows the EU will impose massive fines for the transfer of data to less regulated jurisdictions without needing to prove actual misuse of the data there. The use of proprietary AI tools in San Francisco to analyse data transferred from the EU will presumably attract similar penalties.

Yet AI code will soon become so ubiquitous, useful and unremarkable that the EU’s attempts to fine companies into submission for failing to abide by the law’s rigid categories will only deter them from offering their innovative technologies in the EU market.

In its vanity, the EU assumes that the laws it adopts to protect its citizens will compel companies to adopt EU standards worldwide in order to retain access to its 450 million consumers. The theory is that companies will embrace the EU’s restrictive regime as their global standard as a means of selling a uniform product everywhere.

It may be pretty to think so, but there are still seven and a half billion other consumers out there beyond EU borders who might use new AI applications no matter their EU risk designations. The EU might hold a critical mass of the world’s wealthiest consumers today, but its share of global buying power is declining.

The future prosperity of Europe’s declining population will depend on productivity-enhancing technologies like AI, which will allow fewer workers to generate more wealth. The bloc simply cannot afford to regulate its way out of competing in yet another leading technological sector.

Why start a new company based on an innovative machine learning algorithm in the EU, where your sales strategy depends on satisfying the arbitrary risk rulings of Commission bureaucrats? Better to build your business in the less regulated ecosystems of Silicon Valley or Bangalore and freely explore the broadest possible range of market applications outside the EU. 

The European Union’s vain desire to become a regulatory superpower will not make the EU the world’s rule-giver, but only fuel the further fragmentation of the global economy into separate fiefdoms, governed by conflicting regulatory regimes.

In this scenario, the lightest touch regime will host the most innovative technologies. A Europe isolated by regulatory fiat from critical innovations elsewhere will become a region mired in chronic slow growth, in other words, an economic backwater.