Germany, France and Italy have reached an accord on how artificial intelligence (AI) should be managed – an agreement that differs from the European Parliament version finalised in June.
Rather than sanctions and penalties, the three European nations opted for self-regulation via a voluntary code of conduct.
Sergey Lagodinsky, a German shadow rapporteur from the Greens, told Brussels Signal: “Self-regulation does not sufficiently reply to the concerns these high-impact AI systems raise.”
He said: “We need dependable and transparent, future-proof rules for this developing technology, while giving it enough room to develop in Europe – the Parliament proposal does this.”
The tri-national agreement comes as the European Commission, the European Parliament and the Council of the European Union negotiate how the EU will treat the development and use of AI.
In mid-June this year, the Parliament took a risk-based approach to the future rules, prohibiting AI applications that might pose an unacceptable danger and imposing stringent requirements on so-called high-risk use cases.
Regulations are intended to limit the potential negative aspects of AI applications and to avoid possible discriminatory outcomes while utilising AI’s innovative power.
France, Germany and Italy had a different view on the matter and threatened to derail the legislative process altogether.
In June, the European Parliament proposed that the code of conduct should be binding only for major AI providers, predominantly those based in the US.
The three countries expressed concern that this apparent competitive edge for smaller European providers could undermine trust in them and, in turn, reduce their customer base and sales.
They went to the Spanish presidency of the EU Council, which negotiates on behalf of the Member States, to push for a change in approach.
That led to a walkout by officials from the European Parliament who disagreed with altering the basic concepts of the legislation, leaving Spain to try to patch up relationships.
Despite that, the three countries seem determined to scupper the technology-neutral, risk-based approach of the proposed AI Act, which is designed to safeguard innovation and safety at the same time.
France, Germany and Italy say the regulations governing conduct and transparency must be universally binding.
Their agreed position is that no sanctions should be applied at first; they should be imposed only after a certain period of time and only if breaches of the code of conduct are detected.
They call for a designated European entity to supervise compliance with the future requirements.
German Minister for Digital Affairs Volker Wissing told Reuters that he was delighted an agreement had been struck with France and Italy to limit only the use of AI.
“We need to regulate the applications and not the technology if we want to play in the top AI league worldwide,” he said.