Amnesty International has accused social media giant X of playing a key role in the spread of “racist and Islamophobic narratives” that helped ignite the violent riots which followed the Southport stabbings in the UK last year.
The 22-page report published on August 6 examined X’s open-source recommendation algorithm and said that the platform was “systematically designed to promote and amplify content provoking strong reactions and controversy”.
According to Amnesty International, this can pose “serious human rights risks”.
However, speaking with Brussels Signal on August 7, X emphasised its commitment to user safety.
“Our safety teams use a combination of machine learning and human review to proactively take swift action against content and accounts that violate our rules, including our Violent Speech, Hateful Conduct and Synthetic and Manipulated Media policies, before they are able to impact the safety of our platform,” the company said.
The company also highlighted the role of Community Notes: “Our crowd-sourced fact-checking feature, Community Notes, plays an important role in supporting the work of our safety teams to address potentially misleading posts across the X platform.”
But Pat de Brún, head of Big Tech Accountability at Amnesty International, said: “Our analysis shows that X’s algorithmic design and policy choices contributed to heightened risks amid a wave of anti-Muslim and anti-migrant violence observed in several locations across the UK last year, and which continues to present a serious human rights risk today.”
In the report, the NGO argued that in the 48 hours after the deadly Southport stabbings on July 29 last year, “incendiary posts by far-right influencers spread rapidly on X”.
Hashtags such as #Southport, #Stabbing and #EnoughisEnough quickly trended, driven by users pushing “unverified and inflammatory claims” that the attacker was “a Muslim” who had “come to the UK by boat”, the report stated.
The day after the UK’s Online Safety Act came into force, protests outside hotels housing asylum seekers were effectively erased from public view for youngsters on platforms including X. https://t.co/GK819qJc9W
— Brussels Signal (@brusselssignal) July 29, 2025
Amnesty explicitly linked this surge in hateful content to the platform’s drastic shift under US billionaire Elon Musk’s leadership since late 2022.
According to the report, Musk’s takeover saw the dismantling or severe weakening of critical safety guardrails, from mass lay-offs of content moderation staff to the reinstatement of previously banned accounts, such as those of notorious UK right-wing figures including Tommy Robinson.
Musk’s disbandment of Twitter’s Trust and Safety advisory council, his firing of trust and safety engineers and his publicly declared, more permissive approach to online commentary coincided with a documented spike in hate speech on the platform, according to the report.
The NGO also based its argument on a study by researchers at the US universities UC Berkeley, UCLA and USC, which found that hate speech levels remained approximately 50 per cent higher after Musk’s acquisition than before, while engagement with hateful posts, measured in “likes”, roughly doubled.
Amnesty argued that at the heart of the problem was X’s recommender system, which powers the “For You” timeline.
Unlike a chronological feed or one limited to accounts users follow, this machine-learning-driven system curates content designed to maximise user engagement.
The group argued the system constantly predicts “What content will this user likely interact with?” and boosts posts accordingly.
Amnesty stressed this model was not unique to X but reflected a wider industry trend where algorithms incentivised content that provoked strong engagement, including inflammatory, discriminatory, or harmful posts.
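The incentive structure Amnesty describes can be illustrated with a toy ranking model. The sketch below is a hedged illustration only, not X’s actual code: the Post fields, weights and function names are assumptions chosen for clarity. It shows how a feed ranked purely on predicted engagement surfaces whichever posts are expected to provoke the most reactions, regardless of what those posts say.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_like: float    # predicted probability the user likes the post
    p_reply: float   # predicted probability the user replies
    p_repost: float  # predicted probability the user reposts

# Illustrative weights only, not X's real parameters. An engagement-first
# objective rewards any strong reaction, so outrage counts as much as delight.
WEIGHTS = {"p_like": 1.0, "p_reply": 13.0, "p_repost": 2.0}

def engagement_score(post: Post) -> float:
    """Weighted sum of the predicted engagement probabilities."""
    return (WEIGHTS["p_like"] * post.p_like
            + WEIGHTS["p_reply"] * post.p_reply
            + WEIGHTS["p_repost"] * post.p_repost)

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order a 'For You'-style feed purely by predicted engagement."""
    return sorted(candidates, key=engagement_score, reverse=True)

# A divisive post with many predicted replies outranks a calm, factual one.
feed = rank_feed([
    Post("Measured local news update", p_like=0.10, p_reply=0.01, p_repost=0.02),
    Post("Inflammatory unverified claim", p_like=0.05, p_reply=0.08, p_repost=0.04),
])
print([p.text for p in feed])
# ['Inflammatory unverified claim', 'Measured local news update']
```

The point of the sketch is the objective function: nothing in it distinguishes outrage from approval, and it is that omission, rather than any explicit preference for hateful content, that the report argues makes periods of social tension so combustible on engagement-ranked platforms.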
The report reignited concerns about Big Tech’s responsibility in the current political climate.
It called for urgent reforms to X’s algorithmic design and safety policies.
“Without effective safeguards, the likelihood increases that inflammatory or hostile posts will gain traction in periods of heightened social tension,” said de Brún.
On June 3, the French National Digital Council warned that social media “in their current forms are not compatible with democracy” because their algorithms fail to foster moderate debate online.
Co-President of the French National Digital Council, Gilles Babinet, has warned that social media “in their current forms are not compatible with democracy”. https://t.co/Jd7ZhuZlcX
— Brussels Signal (@brusselssignal) June 3, 2025