After months of potshots, the EU’s war with Twitter / X has now officially begun.
On Monday, Brussels opened formal proceedings against Elon Musk’s social-media giant, its first under its new internet-regulation law, the Digital Services Act (DSA).
The probe will focus on X’s suspected non-compliance with DSA obligations to remove illegal content from the site in the EU. “We will make full use of our toolbox to protect our citizens and democracies”, said Thierry Breton, Commissioner for the Internal Market and the EU’s self-proclaimed “digital enforcer”.
The DSA, which came into force in August, obliges large online platforms like X, Meta, and YouTube to swiftly take down illegal content, hate speech, and so-called disinformation. And the law has teeth: firms that fail to comply can be fined up to 6 per cent of their annual global revenue and potentially even have their licence to operate in the EU revoked.
Under Elon Musk, X has a free-speech ethos quite unlike that of its previous management, or of other social-media giants, and it has long been in the European Commission’s crosshairs.
In September, Brussels singled out X for allegedly having an especially high level of disinformation compared with other sites. The following month, it launched an investigation into X over alleged disinformation in the wake of Hamas’s October 7 attack on Israel and the ensuing conflict.
After analysing X’s DSA transparency report last month, the Commission is now demanding further access to internal X data for its investigation. The probe will also examine suspected “deceptive design” in the user interface around the blue tick, which is available through a premium X subscription.
The report, which details X’s DSA compliance, reveals the root of the EU’s problem with X. Contrary to the frequent portrayal of X as a total free-for-all awash with hate speech, X does take down illegal content.
But it also seeks to balance content moderation with freedom of speech for users. As it explains: “The risks of getting it wrong at the extremes are great: on one hand, you can leave up content that’s really dangerous; on the other, you run the risk of censorship.”
Accordingly, the report shows that from August 28 to October 20, X received 35,006 reports of “illegal or harmful speech” from EU countries. In 23,061 of these cases – roughly two-thirds – it ruled that the content did not violate its policies and so took no action.
This reflects X’s free-speech policy, under which posts deemed “awful but lawful” may have their reach reduced, but will not be taken down. The Commission, however, is not happy with this middle-way approach, with Breton urging Musk in October to be more responsive to “relevant law enforcement authorities and Europol” – i.e., to take down more posts when requested.
The probe will also look at X’s new Community Notes feature. This crowdsourced fact-checking system allows eligible users to propose notes underneath viral posts that may be wrong or misleading; contributors then vote on whether those notes should be shown publicly. Posts that receive a community note are demonetised – disincentivising further misleading content – but are not taken down.
The Community Notes algorithm is also designed to shield notes from political bias, with notes more likely to appear if they have been voted helpful by users from a variety of political persuasions. The advantage of Community Notes is that it takes the important business of fact-checking out of the hands of pseudo-independent “fact-checkers”, instead empowering ordinary users to weigh up what they think is true.
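To make the “bridging” idea concrete, here is a minimal toy sketch in Python. X’s actual Community Notes ranking algorithm is open source and based on matrix factorisation, and is considerably more sophisticated than this; the viewpoint-group labels, thresholds and function names below are invented purely for illustration.

```python
# Toy illustration of a bridging-based note-ranking rule: a note is shown only
# if it is rated helpful by raters from MORE THAN ONE viewpoint group, so
# one-sided support is not enough. This is a simplified sketch, not X's
# actual matrix-factorisation algorithm.

from collections import defaultdict

# Each rating: (rater_viewpoint_group, rated_helpful) -- hypothetical labels.
ratings = {
    "note_a": [("left", True), ("right", True), ("centre", True), ("left", False)],
    "note_b": [("left", True), ("left", True), ("left", True)],  # one-sided support
}

def note_is_shown(note_ratings, min_groups=2, min_helpful_share=0.6):
    """Show a note only if enough raters found it helpful AND that support
    spans at least `min_groups` distinct viewpoint groups (the bridging idea)."""
    helpful_by_group = defaultdict(int)
    helpful = 0
    for group, rated_helpful in note_ratings:
        if rated_helpful:
            helpful += 1
            helpful_by_group[group] += 1
    share = helpful / len(note_ratings)
    return share >= min_helpful_share and len(helpful_by_group) >= min_groups

for note, rs in ratings.items():
    print(note, "shown" if note_is_shown(rs) else "not shown")
# note_a is shown (support spans left/right/centre);
# note_b is not shown, despite unanimous ratings, because its support is one-sided.
```

The point of the design, as the sketch suggests, is that cross-partisan agreement, not raw vote totals, determines what gets flagged.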
But this, it seems, is precisely what Brussels views as the problem with Musk’s X. After all, when it comes to alleged disinformation and hate speech online, the key question is who gets to decide what counts. The DSA’s disinformation Code of Practice, for instance, defines disinformation as “false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm”.
If this sounds like a rather vague definition, that is because it is – and the EU’s own bodies make clear that “disinformation” is interpreted very expansively.
Consider a recent publication by the European Digital Media Observatory (EDMO), a Commission-funded fact-checking hub, on “Disinformation narratives during the 2023 elections in Europe”.
The report lists so-called disinformation narratives that have been fact-checked this year by EDMO on various political issues. We learn that ahead of the Estonian election, posts claiming that renewable energy is more expensive than gas were “fact-checked” as false; that in Spain, where 43,000 illegal migrants landed in the first 10 months of this year, suggestions that migration may cause violence or be a “drain [on] public money” are listed as disinformation; and that so too is Vox leader Santiago Abascal’s criticism of Spain’s gender self-ID law, which he said poses a threat to women.
In other words, political “narratives” opposed to EU-favoured policies on climate change, immigration and LGBT issues are labelled “disinformation” by the EU’s own “fact-checker”. This from a body which works with the European Commission to draft and implement the DSA disinformation Code of Practice.
This should serve as a reminder that what lies behind this probe is not just the EU’s grudge against X, but its determination to control what its citizens can and cannot read online.
In response to the probe, X has said it is “co-operating with the regulatory process”, adding that it is “important that this process remains free of political influence and follows the law”. Yet the DSA was drawn up by the Commission with the express purpose of bringing companies like X to heel – and its enforcement will now be carried out by that same Commission.
In light of how politicised the issue of online disinformation has become, X’s hope for a fair-minded and non-political investigation can only be described as wishful thinking.