Material produced by generative AI is already widespread in Bangladesh ahead of elections in January, as well as in the Ukraine-Russia and Israel-Hamas wars.
Those are just testing grounds for a run of European elections. Austria, Belgium, Croatia, Finland, Iceland, Lithuania, Portugal, Romania, and Slovakia all hold nationwide elections in 2024. Germany, Ireland, Poland, and Malta will hold local elections, and Spain two regional ones. Worldwide, one-quarter of the population lives in countries going to the polls next year.
There is currently “a perfect storm of ill-prepared governments, and large-scale possibility for deception and fraud, spreading misinformation at speed”, Sara Ibrahim, a barrister at London’s Gatehouse Chambers who works and writes often on artificial intelligence, tells Brussels Signal.
In Europe, court systems have been an early warning. The legal system is “already experiencing people citing made-up cases hallucinated by ChatGPT, so a real stress to public resources at a time when the economy is hardly robust”, Ibrahim says.
Back in Bangladesh, an online news outlet called BD Politico posted a clip on X in September showing a studio news anchor for “World News” presenting footage of rioting and claiming that US diplomats were interfering in the country’s elections.
The video was made with HeyGen, a Los Angeles-based AI video generator. The anchor, “Edward”, is one of several avatars on offer to users, who can subscribe to the platform for $24 a month.
During the UK Labour Party’s September conference, an audio clip purporting to capture Sir Keir Starmer verbally abusing his aides was viewed 1.5 million times. Then in November, another clip circulated widely, this one of London Mayor Sadiq Khan calling for Armistice Day to be rescheduled because of a pro-Palestinian march.
Both clips proved to be AI-generated; in the second case, the Metropolitan Police decided that no offence had been committed.
Politicians “are especially vulnerable. There is a lot of training data in footage of them, and standing at podiums, sitting at desks, is something that is especially easy to fake,” said Henry Ajder, founder of a generative AI startup in Cambridge.
Pakistan’s Imran Khan has even taken this a step further, using AI-generated voice doubles of himself to call for support while in prison, says Ajder.
In this year’s violent conflicts, video clips from Gaza and Ukraine showing bloody, abandoned babies have on closer examination proven to be deepfakes, with fingers curled in anatomically impossible ways or unnatural eye colours, says Imran Ahmed, chief executive of the Washington, DC-based Center for Countering Digital Hate.
“The cost of producing and disseminating extremist material has never been lower,” says Ahmed.
AI-generated anti-Semitic images now appear frequently on X, and “unregulated AI will turbocharge hate and disinformation”, he says.
The coming year will see “the most people going to elections, at the same time we’re seeing a pretty lightning quick evolution and deployment of AI”, says Ajder.
Midjourney has just released version 6 of its visual AI generation tool, offering “incredible photorealism; it can target real people, but also realistic but non-personalised scenes of riots and migrant movements,” he says.
In the realm of text, deploying large language models like ChatGPT at scale “lets actors essentially flood social media to try to build a narrative, potentially sway opinions, not necessarily on open social media but also having direct conversations on forums,” adds Ajder.
Two noteworthy global centres of weaponised disinformation this year are an industrial park outside Tel Aviv (where “Team Jorge”, run by a former special forces operative, claims to have manipulated over 30 elections around the world) and troll farms in and around Manila, says Oxford Analytica.
Russia spends $1.2 billion a year on pro-Moscow disinformation abroad, according to European Commission estimates.
The effectiveness of the EU’s Digital Services Act “has yet to be proven”, Oxford Analytica says.
Regulators “can’t stop a person on their computer using an incredibly accessible tool to generate audio or video, share it on Twitter at an opportune moment, or spread it on Discord groups,” says Ajder.
But regulators can, and should, require that generative AI use by politicians and political parties “be clearly disclosed. Or at very least only permitted to be used without disclosure when it’s incredibly clear to audiences it’s not real, and is parody,” he says.