People visit the immersive exhibition 'Cybernetic Dali' at the Ideal Digital Art Center in Barcelona, Spain, 20 September 2021. The exhibition offers an immersive journey through the dreamlike scenes of Dali's paintings, using projections, interactive installations, holograms, virtual reality and artificial intelligence. EPA-EFE/Alejandro Garcia


EU lawmakers are struggling to keep up with Artificial Intelligence


Artificial Intelligence platforms are progressing so fast that EU lawmakers face an “impossible task”.

Lawmakers in Brussels are drafting the EU’s AI Act, the world’s first Artificial Intelligence law. But as they try to “keep pace” with AI, the technology simply “flies ahead,” says a law professor who has followed the act’s progress since 2021.

“AI technology keeps evolving and the EU lawmaker tries to keep pace, but that is impossible,” says Vera Lúcia Raposo from the NOVA School of Law in Lisbon. “Technology flies, while the law drags slowly.”

She is not without sympathy for EU lawmakers: “When it comes to regulations about such complex topics, it is almost inevitable to be the target of criticism.” Some of her original concerns with the act “have been solved” but “still, it seems that the plot thickens.”

She notes that while there will now be a “distinction” between general purpose AI and the latest so-called foundation models of generative AI, both will be “subject to very demanding requirements.”

Foundation models are sophisticated generative platforms, such as the system behind ChatGPT, that can power a wide range of other AI applications. Unlike AI built for one clearly defined task, such as facial recognition, which can be classified more easily, a foundation model is not tied to any single purpose.
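
To make that distinction concrete, here is a minimal Python sketch, not drawn from the article, of how one foundation model can be pointed at entirely unrelated tasks through the same generic chat interface. The OpenAI client library is used only as an illustration, and the model name and prompts are assumptions.

```python
# A minimal sketch (not from the article): one foundation model, reached
# through a single generic chat interface, handles unrelated tasks on demand.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and prompts below are purely illustrative.
from openai import OpenAI

client = OpenAI()

def ask(task: str) -> str:
    """Send an arbitrary task to the same underlying foundation model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": task}],
    )
    return response.choices[0].message.content

# Three very different "purposes", one model: there is no single intended
# use for a regulator to pin a risk classification on.
print(ask("Translate 'the law drags slowly' into Portuguese."))
print(ask("Summarise the EU AI Act in one sentence."))
print(ask("Write a Python function that reverses a string."))
```

That open-endedness is exactly what makes the intended-purpose test discussed below hard to apply.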

Amendments to the draft AI Act, agreed by lawmakers on May 5 after months of negotiations, focus on the role of foundation models and on assessing what poses a “high risk of causing harm,” according to Euractiv.

“Key to assessing whether the AI poses a risk is assessing whether it has an intended purpose,” said an EU source involved in negotiations over the act who wished to remain anonymous. “But some foundation model AIs don’t have a clear purpose, so what do you do in that case?”

Because of the difficulty in assessing that risk, the source says, the act is “only establishing a light framework” based on consulting with as many stakeholders as possible:

“I think we have gauged what are the biggest issues [while] keeping it balanced and getting consensus. That’s the right way to go.”

The European People’s Party (EPP), the largest political group in the European Parliament, has urged that a balance be struck between “risks” and “opportunities” when it comes to regulating AI.

“Some people seem to have a fear-driven approach to AI and this stifles the opportunities of the new technology,” the EPP said in a May 11 press release. “The EPP Group wants a harmonised and flexible regulatory environment that takes into account all needs and prevents unnecessary administrative burdens for SMEs and start-ups.”

The EPP says the AI Act is a “world first and a ground-breaking piece of legislation”. It “could become the de facto global standard to regulating Artificial Intelligence, ensuring that such technology is developed and used in a responsible, ethical manner, while also supporting innovation and economic growth.”

Cutting-edge natural language processing technology such as ChatGPT can answer just about any question, explain philosophy, write you a beautiful poem, “challenge incorrect premises”, write essays and do a whole lot more, explains ChatGPTonline.

It can be used by businesses to streamline customer service operations and provide customers with faster responses and more personalised, tailored services.
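
As a rough illustration of that kind of workflow, the hypothetical Python sketch below drafts a first-pass support reply for a human agent to review. The library call, model name, prompt and helper function are assumptions for the example, not anything described in the article.

```python
# A hypothetical sketch (assumptions, not from the article): drafting a
# first-pass customer-service reply with an LLM so a human agent only has
# to review and send it. Assumes the `openai` package and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a customer-support assistant for an online retailer. "
    "Reply politely and concisely, and personalise the answer with the "
    "customer's name and order details."
)

def draft_reply(customer_name: str, order_id: str, question: str) -> str:
    """Return a draft reply for a human agent to review before sending."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Customer {customer_name}, order {order_id}: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_reply("Vera", "A-1042", "Where is my parcel?"))
```

Keeping a human in the loop in this way is one means by which businesses can hedge against the kinds of errors and opacity the act is meant to address.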

Of greater concern to lawmakers are AI systems used to influence voting behaviour, which the AI Act deems high-risk. An exception was made for AI tools used to organise political campaigns, since their output is not seen directly by the public and so cannot influence it.

One of the act’s provisions requires transparency about whether content was generated by an AI foundation model or produced by a human.

At the same time as juggling the AI Act, the EU is rolling out its Digital Services Act (DSA), the cornerstone of the European Commission’s attempts to get to grips with our increasingly digital world.

On May 11 the European Parliament’s Internal Market Committee and the Civil Liberties Committee adopted a draft negotiating mandate for the AI Act, with 84 votes in favour, 7 against and 12 abstentions. The mandate now proceeds to a plenary vote in Parliament.

The EPP says it wants to maintain the option for law enforcement to “use biometric recognition” in criminal investigations, when searching for victims of crime such as missing children, and to prevent “imminent threats” such as terrorist attacks.

“No matter how many changes and updates will be carried out, the ultimate certainty that we have is that the AI [Act] will get inevitably outdated,” Raposo says. “Outdated the very same day it is finally approved.”