ICE agents are being demonised in Europe. (Photo by John Moore/Getty Images)


Liars: German national TV caught using AI images of fake ICE arrest


Germany’s ZDF public broadcaster has come under fire over an episode of its flagship heute journal news programme after it emerged that the show had broadcast an AI-generated video clip to illustrate alleged brutality by US Immigration and Customs Enforcement (ICE) officers.

The footage, clearly bearing the watermark of OpenAI’s Sora text-to-video tool, was aired without being labelled as synthetic content, prompting accusations of journalistic malpractice and fuelling wider debate over the use of generative AI in news reporting.

The incident occurred during the heute journal broadcast on February 15. Presenter Dunja Hayali introduced a segment criticising ICE practices under the administration of US President Donald Trump, stating that officers were “leading parents away in front of their children’s eyes”.

To underscore claims of excessive force during arrests and deportations, the programme aired a short clip depicting dramatic scenes of detentions.

Viewers quickly noticed the prominent Sora watermark superimposed on the footage, a telltale sign that the material had been created by OpenAI’s generative video platform rather than captured in reality.

Independent outlets, including Apollo News, NIUS and the ÖRR Blog, were among the first to highlight the discrepancy, pointing out visible AI artefacts such as inconsistencies in uniforms, hands and logos that only loosely resembled actual ICE insignia.

The clip’s inclusion appeared deliberate, as it aligned with the report’s narrative, yet no on-screen disclaimer identified it as artificial.

Following public outcry and media scrutiny, ZDF withdrew the full episode from its online archive and YouTube channel, then airbrushed it and re-uploaded a version without the fake footage, adding the editorial note: “Video subsequently changed for editorial reasons”.

In response to queries from Apollo News, a ZDF spokesperson acknowledged that the broadcaster had intentionally used the material for illustrative purposes, showing “that a climate of fear is created with both real and AI-generated images”, but admitted a failure in labelling, describing the omission as an oversight.

ZDF maintained that the segment’s overall reporting on ICE operations remained factually grounded in documented cases, although critics argued the synthetic clip undermined credibility by presenting fabricated visuals as representative.

Particularly embarrassing is the fact that Hayali had first warned viewers that not all videos published on social media about ICE operations were real, only for the broadcaster to then air exactly such AI-generated imagery itself.

The affair has drawn sharp criticism from conservative commentators and media watchdogs, who accused ZDF of attempting to sensationalise US immigration enforcement to fit an anti-Trump narrative.

This is not the first time generative AI has intersected with ICE-related reporting.

Since OpenAI’s Sora tool became widely accessible in late 2025, social media platforms have been flooded with synthetic videos purporting to show extreme ICE raids, often featuring dramatic arrests in public spaces.

Fact-checkers from AFP, Reuters, and others have repeatedly identified such clips as AI creations, many bearing Sora watermarks or exhibiting classic generation flaws such as unnatural movements.

In the US, viral Sora-generated footage has depicted raids in supermarkets and streets, stoking both outrage and scepticism about mass deportations.

Guidelines from bodies such as the European Broadcasting Union emphasise clear labelling of AI-generated content to preserve trust, yet enforcement remains inconsistent.