The German government has endorsed the European Union's Artificial Intelligence Act, clearing a key hurdle on its path to becoming law before June's European Parliament elections.
Some in the tech sector, however, are not convinced it is fit for purpose.
German approval comes after the Parliament and European Council reached agreement on the Act in December.
Berlin’s backing puts the AI law on track for final Parliament endorsement at its last pre-election plenary session, on April 22-25.
The Free Democratic Party (FDP), part of Olaf Scholz’s three-party ruling coalition, previously blocked German approval on civil liberties grounds. The FDP’s digital minister Volker Wissing then withdrew his objections, saying there had been “an acceptable compromise”.
That makes it unlikely other countries will try to form a blocking minority in the Council, meaning that body could adopt the final version of the Act on February 2.
As the AI Act lumbers on, though, not all in the tech sector share the European Commission’s enthusiasm for the new raft of regulations.
“It’s striking how much the EU AI Act is referred to as a ‘competitive advantage’ by everyone else than those building AI companies,” said Victor Riparbelli, founder of US tech start-up Synthesia.
“If the EU wants strong regulation, it will need to counterbalance with actual competitive advantages.”
That might include “access to public datasets, GPU subsidies, not state-owned ‘supercomputers’, and incentives-driven mass-adoption of AI in the public and private sector,” Riparbelli suggested.
Although the Parliament has strongly backed the Act, a coalition within the Council, including Germany, France, Italy, and Austria, had opposed crucial aspects of the proposal. Those included its provisions to regulate “foundation” AI models – general models that can be customised for a broad range of purposes.
Restricting these could undermine EU AI firms’ competitiveness compared with US rivals, those countries argued.
Wissing had also argued the AI Act’s safeguards regarding the use of facial recognition algorithms to track criminals and suspects were too weak.
The Act permits governments to use real-time AI facial recognition to search for an individual only where there is a risk to life or a threat of a terrorist attack involving that individual.
Critics noted, however, that the rules are laxer for material more than 48 hours old, arguing governments could still use AI-powered facial recognition to infringe civil rights.
A coalition of AI companies and European governments has also lobbied for about six months to remove references to “copyright” from the AI Act. “So far, they have failed,” said a representative of the European Guild for AI Regulation.
“A lot of things will have to be solved with lawsuits,” cautioned the official, adding: “the AI Act isn’t perfect.”
For example, from a copyright perspective, the representative said: “It isn’t precisely clear how effective it will be in repairing the damage that has already been done” by training existing AI models on copyrighted data.
The text as it stands, though, re-inserts a clause from the European Parliament that states: “Any use of copyright-protected content requires the authorisation of the rights holder concerned unless relevant copyright exceptions apply.”
In other developments, a 258-page consolidated version of the draft text shared by Parliament senior policy adviser Laura Caroli shows the EU plans to establish an AI Office to enforce the new law, should it pass.
The resulting compromise was “the one that made everybody equally unhappy”, Caroli said.