The EU is striking the right balance on regulating AI


As artificial intelligence continues to develop, its potential for abuse is alarming the public. Will it replace human labour entirely? Will it become dangerous if it gets out of control, for example in military applications? Will it eventually be turned against us and threaten humanity?

The panic is not wholly justified. AI will likely fail to replace areas of human endeavour such as art, craftsmanship and artisanship. When industrial production developed, it failed to wipe out handmade work. Handmade goods continue to sell at much higher prices and demand for them remains significant. High-end, luxury clothing brands still produce handmade items. It is precisely the fact that those items were made by hand, piece by piece, with the flaws that testify to the human labour behind them, that makes them unique and valuable to buyers.

The same is true for art. When the app Lensa AI generated stylised portraits of its users, they went viral for a few weeks on social media. Many enjoyed seeing portraits of themselves in different, unusual scenarios. But the work used to train the AI that generated those images was often copied from real-life artists.

The fad subsided after about a month: the images were often ridiculed and the hype dissipated quickly. The artists whose work the AI drew on continue to sell their art, made with their own hands.

Yet there are areas in which AI can be threatening. These include military uses and the infringement of citizens’ privacy and rights.

The U.S. military has been using AI since before it became known or common in public life. Over time, AI has developed to the point of performing more complex military tasks, and has almost eliminated the need for human input in some of them. From combat simulation to processing data, AI can be used to perform many different roles.

The military seeks to harness artificial intelligence in combat situations to create an advantage over the enemy, but in the event of technical failure, AI could lead to miscalculation and the unintended escalation of a conflict.

In terms of the defence applications of AI, the EU has not passed legislation regulating them. This is perhaps because military decisions are still made by member states independently. The recently proposed EU bill regulating AI states that “AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy.”

But the EU has done the right thing by debating legislation that attempts to regulate AI in areas that could infringe upon the individual rights of its citizens. In particular, it has made the point that it wants to ensure “that AI works for people and is a force for good in society.”

The relevant European parliamentary committee voted to further strengthen the legislative proposal ahead of the plenary vote in June, and is now focused on drafting the details of the “AI Act.” The bill is set to create a European legal framework for AI that addresses the fundamental-rights and safety risks posed by AI systems, a civil liability framework with liability rules adapted to AI, and a revision of safety legislation, such as the machinery regulation and general product safety directives.

In particular, there will be different levels of risk, ranging from unacceptable risk to high-risk applications.

The use of a social credit system, along the lines of China’s model, in which citizens are ranked from “good” to “bad” based on their recorded behaviour, will be banned. Real-time biometric identification in public spaces – where AI can scan faces and automatically identify people – will also be banned. These are positive developments that show the EU is seeking to protect the privacy of its citizens, and they should be encouraged.

Other systems, such as those applied to employment and education, will be flagged as high-risk. These include CV-scanning tools that rank job applications. Such measures show the EU intends to protect employees and workers. Violations will draw fines of up to 6 per cent of a company’s global annual revenue.

But EU lawmakers face challenges too. How do you define what AI is when the technology develops rapidly, outpacing any possible legislation? “A lot of lawyers out there work with laws and codes that are sometimes even hundreds of years old,” said Dragos Tudorache, a Romanian MEP co-sponsoring the bill. One way forward is to set core ethical obligations that hold over time, irrespective of how quickly AI transforms.


Alessandra Bocchi is Associate Editor of Brussels Signal