Elon Musk’s Twitter hit with Holocaust denial hate speech lawsuit in Germany

Twitter owner and self-proclaimed “free speech absolutist” Elon Musk is facing a legal challenge in Germany over how the platform handles antisemitic hate speech.

The lawsuit, which was filed yesterday in the Berlin regional court by HateAid, a group that campaigns against hate speech, and the European Union of Jewish Students (EUJS), argues that Musk-owned Twitter is failing to enforce its own rules against antisemitic content, including Holocaust denial.

Holocaust denial is a crime in Germany — which has strict laws prohibiting antisemitic hate speech — making the Berlin court a compelling arena to hear such a challenge.

“[A]lthough Twitter prohibits antisemitic hostilities in its Rules and Policies, the platform leaves a lot of such content online. Even if the platform is alerted about it by users,” the litigants argue. “Current studies prove that 84% of posts containing antisemitic hate speech were not reviewed by social media platforms, as shown in a study by the Center for Countering Digital Hate. Which means that Twitter knows Jews are being publicly attacked on the platform every day and that antisemitism is becoming a normality in our society. And that the platform’s response is by no means adequate.”

For his part, Musk has repeatedly claimed Twitter will respect all laws in the countries where it operates (including European speech laws), although he has yet to make any public comment on this specific lawsuit.

Since the Tesla CEO took over Twitter at the end of October, he has drastically reduced the company’s headcount, including in core safety functions like content moderation — also slashing staff in regional offices around Europe, including in Germany. He has also entirely disbanded Twitter’s Trust and Safety Council and reinstated scores of accounts that had previously been banned for breaking the platform’s rules — creating conditions that look ideal for hate speech to flourish unchecked.

Over Musk’s roughly three-month run as Twitter CEO, anecdotal reports — and some studies — have suggested an increase in hate on the platform, and many former users have blamed a rise in hate and abuse for their decision to abandon it since he took over.

Notably, the lawsuit focuses on examples of hate speech posted to Twitter in the three months since Musk took charge, per Bloomberg, which reported on the litigation earlier.

So it looks like an interesting legal test for Musk as the lawsuit applies an external lens to how the platform is enforcing anti-hate speech policies in an era of erratic (and drastic) operational reconfiguration under the new owner’s watch.

While the billionaire libertarian generally tries to deflect criticism that he’s steering Twitter into toxic waters — via a mix of denial, fishing for boosterism, targeted attacks on critics and ongoing self-aggrandizement (couching his Twitter speech ‘reforms’, aka ‘freeing the bird’, as a quasi-neo-enlightenment effort to serve the future of human civilization) — he did admit to an early surge in hate on the platform back in November.

At the time, he tweeted a chart to illustrate a claim that Twitter engineers had succeeded in cutting hate speech impressions to a third below “pre-spike levels” (as he christened the sudden uptick in hate seen in the period directly after his takeover of Twitter), although he also suggested the spike was linked to only a small number of accounts, rather than to any wider reduction in the efficacy of content moderation since he took over and set about ripping up the existing rulebook.

While Musk seems to enjoy cultivating an impression that he’s a “free speech absolutist”, the truth, as ever with the space cowboy, looks far less binary.

For example, at Twitter he has taken a series of apparently unilateral and arbitrary decisions on whether to censor (or not) certain posts and/or accounts — including, initially, unbanning Kanye West (aka Ye) and then re-banning him for tweeting an image of a swastika with a Star of David (the latter a symbol of Judaism, the former a Nazi emblem).

Or unbanning former US president Donald Trump’s account, which was suspended after the violent attack on the US Capitol by Trump supporters — while steadfastly refusing to reinstate InfoWars’ hate preacher Alex Jones, as Musk appears to object to Jones’ infamous conspiracy falsehood that children who died in the Sandy Hook school shooting were actors.

Other decisions taken by Musk around Twitter content moderation appear to be driven purely by self-interest — such as banning an account that tweeted the location of his private jet (which he dubbed “assassination coordinates”). Last year he also suspended a number of journalists who reported on the episode, arguing their reporting carried the same implications for his personal safety — before reversing course in the face of a storm of criticism that he was censoring the free press.

Yet when not banning journalists, Musk has invited a number of hand-picked hacks in to sift through internal documents — and publish what he’s dubbed the “Twitter files” — in what looks like a naked (but very tedious) bid to shape the narrative about how the platform’s former leadership handled content moderation and related issues, such as inbound requests from state agencies for tweet takedowns, and to throw fuel on conservative conspiracy theories claiming systematic shadowbanning and/or downranking of conservative content versus liberal views.

(Whereas research conducted by Twitter itself, pre-Musk, into its algorithmic amplification of political tweets found the contrary: its AIs actually give more uplift to right-wing views, concluding that “[i]n 6 out of 7 countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left.” But who cares about non-cherry-picked data, right?)

On abuse and hate, Musk is also quite capable of dishing it out himself on Twitter — megaphoning trolling and mockery of vulnerable groups (or of “wokism”) to toss red meat to his right-wing base at the expense of people at disproportionate risk of abuse, such as the trans and non-binary people whose pronouns he has deliberately mocked.

Musk has also stooped to tweeting and/or amplifying targeted attacks on individuals that have led to abusive pile-ons by his followers — such as the one that forced Twitter’s former head of trust and safety, Yoel Roth, to flee his own home. So hypocrisy about personal safety risks? Very much.

Even a casual observer of Musk-Twitter would surely conclude there’s a lack of consistency to the Chief Twit’s decision-making — which, if this arbitrariness filters through into patchy and partial enforcement of platform policies, spells bad news for the trust and safety of Twitter users (and RIP for any concept of ‘conversational health’ on the platform).

Whether Musk’s inconsistencies will also lead to a court order in Germany requiring Twitter to take down illegal hate speech, via this HateAid-EUJS lawsuit, remains to be seen.

“Twitter’s actions are based solely on its own, intransparent rules, relying on the fact that users have no chance to appeal — for example, when it comes to the non-deletion of incitements to hatred,” argues Josephine Ballon, head of legal for HateAid, in a statement.

“There has been no single case where a social network was prosecuted for this by the authorities. This is why civil society has to get involved, looking for ways to demand the removal of such content. We as an NGO act as representative for the affected communities which are subject to hostility and incitements of hatred on a daily basis. Thus we can build pressure on the platforms in the long term.”

Interestingly, the lawsuit does not appear to have been brought under Germany’s long-standing hate speech takedown law — aka NetzDG — which, at least on paper, gives regulators the power to sanction platforms up to tens of millions of euros if they fail to swiftly remove illegal content that’s reported to them.

But, as Ballon notes, there have not been any NetzDG prosecutions related to content takedown breaches (although messaging app Telegram was recently fined a small amount for breaches related to not having proper reporting channels or legal representation in place).

One local lawyer we spoke to, who is not directly involved in the HateAid-EUJS case, suggested there has been something of a tacit arrangement between federal authorities and social media firms that Germany won’t enforce NetzDG on the content moderation issue — also with an eye on incoming EU digital regulation, the Digital Services Act, which starts to apply to larger platforms later this year and harmonizes governance and content reporting rules across the bloc under a single, pan-EU framework that should replace the older German hate speech regulation regime.

For their part, the litigants in this hate speech case against Twitter say they want to get legal clarity on whether individuals (and advocacy groups) can sue in court for the removal of “punishable, antisemitic and inciting content” — such as Holocaust denial — even when they are not personally insulted or threatened by the content.

In an FAQ on a webpage detailing their arguments, they explain [emphasis theirs]:

Whether we can demand this is to be decided by the court. To date it is unclear to what extent Twitter users, on the basis of Twitter’s Rules and Policies, are entitled to demand the deletion of such content in cases where they are not themselves affected. We believe that Twitter has to abide by its own rules which it boasts about in its contract terms — to remove antisemitic posts and make sure that Jews can feel safe on the platform.  

With our action, we take Twitter up on its contractual promises. We believe that platforms must delete antisemitic content – obviously, the platform needs to be compelled into doing so. 

If they are successful, they hope it will become easier for users to assert their rights to the deletion of illegal content against other major platforms, too. So there could be wider implications if the suit prevails.

“With this fundamental process, we want to have the courts clearly establish that platforms like Twitter are already obliged to protect users from antisemitic digital violence based on their own user agreements,” they add. “Such a judgment will make it easier for users to assert their rights against the major platform operators in the future. The principle behind it is simple: If the terms of the contract state that hate speech is prohibited, then Twitter owes the user to remove it. This could then be enforced, for example, by NGOs such as HateAid to make the Internet more secure.”

Twitter was contacted for a response to the lawsuit — but since Musk took over, the platform has abandoned its routine external comms function and has yet to respond to any of TechCrunch’s requests for comment. (We asked anyway.)

It’s worth noting that, pre-Musk, Twitter wasn’t earning overwhelming plaudits for success in tackling illegal hate speech either.

Back in November, the most recent EU report monitoring the bloc’s anti-hate speech code — a voluntary agreement that Twitter and a number of other social media platforms have been signed up to for years — found that, prior to Musk’s takeover, Twitter was performing relatively poorly versus other signatories when it came to quickly responding to reports of illegal hate speech: the Commission reported that it removed just 45.4% of such content within 24 hours (versus an aggregate removal rate of 63.6%). Over the monitored period of March 28 to May 13, Twitter also received the second largest number of reports of illegal hate speech of any signatory (Facebook got the most), logging just under 1,100 reports. So it appeared to be both hosting a relatively large amount of illegal hate speech (versus peer platforms) and trailing its rivals in how quickly it deleted the toxic stuff.

So it will certainly be interesting to see the state of those metrics when (or if) Musk-owned Twitter reports a fresh batch of data to the Commission later this year.
