The internet is awash with AI-generated antisemitism that is slipping past traditional social media safeguards. Extremists are now using artificial intelligence to produce memes, music, and visuals with little to no human effort, making hateful content harder to detect and allowing much of it to remain online. AI-powered trends such as the “3,000 years ago” meme use fictional caricatures to delegitimise Jewish identity and reinforce harmful stereotypes.
In this interview, Brussels Signal host Justin Stares speaks with Tal-Or Cohen Montemayor, founder and CEO of the non-profit organisation CyberWell. CyberWell tracks, analyses, and fights online antisemitism, providing critical data and alerts to social media platforms and policymakers. Montemayor explains how extremists are exploiting AI to make anti-Jewish content more convincing, visual, and pervasive, often under the guise of humour. These AI-generated trends pose new challenges for platforms tasked with enforcing rules designed to protect minority groups.
The discussion dives into key issues facing the online world today. Should existing digital policies apply to AI-generated content in the same way they apply to human-created posts? How can social media platforms strike a balance between freedom of expression and the need to prevent hate speech? And are tech giants doing enough to enforce their own rules?
Montemayor highlights the importance of proactive measures: platforms must not only remove harmful content but also curb its reach through the recommendation algorithms that determine what users see, reducing the amplification of antisemitic messaging.
Montemayor also places the phenomenon in historical context, showing how cartoons and other visual media have long been used to stoke fear and hatred of Jews. She points out that while humour has often served as a cover for hate, AI amplifies the problem by making offensive content more realistic, more interactive, and harder to trace.
Watch this interview to understand how AI is reshaping antisemitism online.