EU’s top diplomat fires warning shot at Elon Musk over Twitter disinformation


A controversial move by Twitter owner Elon Musk to end free access to the platform’s APIs by February 9 has drawn criticism from the European Union’s top diplomat, who warned it could threaten researchers’ ability to study disinformation at a critical time, with Russia aggressively weaponizing disinformation to try to provide cover for its war in Ukraine.

In a speech today detailing how the bloc has responded to Russia stepping up online disinformation campaigns since its invasion of Ukraine last February, the EU’s high representative for foreign affairs, Josep Borrell, urged more research into how social media platforms are being used to spread Kremlin propaganda.

“We have to do more research on the social media platforms. Study how [disinformation] flow[s], where does it come from, and which are the results,” he said in the speech to the European Union’s diplomatic service (EEAS).

Borrell singled out Twitter, and Musk as its owner, for naming and shaming, saying the EU is concerned by news that Twitter is planning to restrict free access to its APIs, a move he warned would be “a serious step back from early commitments”.

“Early commitments” is likely a reference to Twitter being a signatory of the bloc’s Code of Practice on Online Disinformation from the get-go, back in 2018, when the Commission unveiled the voluntary initiative to encourage social media platforms to tackle the ‘fake news’ crisis (as it was still sometimes referred to at the time).

The EU has also banned some Russian state media — so, since March, Twitter has been legally required to ensure its platform does not host the Kremlin-backed media outlets Russia Today (RT) and Sputnik (plus any subsidiaries). Any failure to block the channels from distributing content in the EU would breach the ban.

Disinformation research has come a long way since the bloc’s 2018 Disinformation Code was laid down — and plenty is now understood about the techniques and tactics used to manipulate information and to spread and amplify false narratives in order to sway public opinion, erode trust in democratic institutions and interfere with free and fair elections.

But — as is the case with cybersecurity — threats to the integrity of information don’t stand still. And Borrell’s speech emphasized the need for Western democracies to do more to protect themselves against malicious foreign info ops by increasing their investment in the study of information threats — with him also brandishing a copy of the EU’s first report on foreign information manipulation and interference threats, which he said showed “clear trends in the threats against our information space”.

A main finding of the report confirms “a new wave of disinformation techniques” — techniques which he said are being used by Russia to fabricate false images and videos in a bid to spread and amplify anti-Ukraine propaganda — as well as warning of what he described as “worrisome” cooperation between threat actors such as Russia and China.

“Diplomatic accounts and state-controlled channels manipulate perceptions about the European Union — blaming the West for all the consequences of the war in Ukraine and they amplify lies about military-led Western biolabs in Ukraine targeting its neighbours,” said Borrell, adding: “It is something that needs a response.

“We need to anticipate and deter such activities with concrete actions and measures. We need to continue supporting Ukraine… and finally we have to be more ambitious in building resilience to authoritarian regimes that try to create this disinformation and manipulate information.”

While propaganda as a tactic is nothing new in human history, Borrell said the threat attached to the manipulation of information has clearly been supercharged by the Internet and digital tools accelerating information distribution — arguing that liberal Western democracies therefore need to organize a commensurately serious response to such a fast-scaling disinformation risk.

Despite some fresh high-level attention to the issue from the EU — with its high representative seeking to build on existing efforts to raise awareness of Kremlin propaganda around the Ukraine war (such as the EU vs Disinformation campaign) — it’s fair to say the bloc’s lawmakers still haven’t managed to find a convincing ‘front foot’ for effectively countering online disinformation.

The issue is a tricky one, given that stronger action presents an easy target for bad faith claims that interventions to protect the integrity of information amount to censorship of free expression. However, a flood of manufactured disinformation is clearly the real threat to democratic free speech — as Borrell pointed out in the speech, the weaponization and systematic mass manufacture of fake speech by authoritarian regimes means real people’s opinions get drowned out.

“We have seen artificial networks being created to spread [Kremlin] disinformation. They have been flooding the information space in order to avoid that any other voice can be heard any more. We have seen attempts to confuse and mislead people with an ever-changing narrative and versions of the event. The aim is to make that nobody believes anything, any report, anymore,” he argued.

“To make people believe that, well, at the end, everything is a lie — how can I distinguish the lies and the truth? And they want to erode trust in all media and our institutions. And I want to say that — today — we have to take this very seriously. It’s not just a matter for the specialist. It’s not just a matter for the people working on the information system. It’s something that the citizen has to be aware — and we have to address this politically at the highest possible level.”

The EU’s Code of Practice on Disinformation is still not legally binding — and, unsurprisingly, over its run the nearly five-year-old initiative has failed to stem various waves of propaganda, whether related to coronavirus disinformation or Ukraine.

The bloc has at least partially recognized this failure. Hence it announced a strengthened Code last year — and also, most notably, said it would link observance of the Code to compliance with the (legally binding) Digital Services Act (DSA) — which entered into force last year and will start to apply to a subset of larger platforms later this year (with the bulk of digital services expected to be compliant in 2024).

So EU lawmakers will be hoping for better times ahead if the DSA acts as a stick to force platforms to take countering disinformation more seriously. 

For now, though, the gap between the EU’s efforts to-date to clean up disinformation and what’s actually going on online looks massive.

And, well, the disconnect is getting embarrassing.

Thing is, on paper, Twitter under Musk remains a signatory to the EU’s Code. Yet, in practice, Twitter’s new owner has taken a series of decisions that very obviously run directly counter to the initiative — dismantling existing misleading information policies on COVID-19, for example, and creating chaos around account verification soon after taking over last year by letting anyone pay him for a blue check mark, triggering an instant flood of malicious impersonation.

And for all their claims of being ahead of the curve on digital regulation, EU lawmakers have so far not been able to do much more than issue a few words of warning — telling Twitter to meet its “commitments”, or warning it of the “huge work” needed if Twitter is to be in a position to comply with the DSA, whenever that might start to apply to the platform.

Even more embarrassingly for the EU’s reputation as a digital rule setter, Musk has frequently been accused of personally amplifying Kremlin propaganda.

In one notorious instance last year this led to an intervention by Ukraine’s president himself, who tweeted a sarcastic poll asking his followers to vote on which @elonmusk they “liked more”: one who supports Ukraine, or one who supports Russia.

Needless to say, the Ukraine-supporting option won the poll. But Western democratic institutions continue to look like the big losers on the disinformation issue — as they appear powerless to prevent the likes of Musk, who is now literally the CEO of Twitter, from wilfully (or, at best, credulously) spreading the Kremlin’s lies.

Borrell’s swipe today — at “Twitter’s owner” — is perhaps the closest the EU has come to calling Musk out. And, more broadly, to recognizing the need for a more systematic approach if the increasingly toxic threat of authoritarian disinformation is to be rooted out — and driven out.

Musk, meanwhile, continues to use the platform he borrowed billions of dollars to buy last year to amplify Kremlin disinformation.

Only this week it fell to the Twitter users who haven’t already abandoned the platform over Musk’s trashing of it to call out another instance of the “Chief Twit” doing the Russian regime’s job for it — after he responded credulously to fake metrics purporting to show high rates of Ukrainian casualties, uncritically remarking on a “tragic loss of life“.

Musk’s response on being called out for amplifying the Kremlin’s Ukraine war propaganda was not to delete his rubberstamping reply to the false claims (which had been posted by an account literally using a picture of the Russian president, Vladimir Putin, wearing a halo); instead, he suggested that Twitter’s Community Notes feature could be used to “correct the numbers”.

As Twitter user David Rothschild quickly pointed out, you can’t correct a massive lie with a “small correction”. Doing so suggests you’re complicit in allowing a huge lie to keep circulating — one that paints a false narrative about Russia’s war in Ukraine and helps Russia’s efforts to undermine support for Ukraine’s continued resistance.

“We need more transparency and accountability, not less,” said Borrell today — as he called on Twitter and on Musk not to renege on earlier commitments to fight disinformation, adding: “I call on Twitter — and on its owner — to ensure that all obligations that they have taken will be honored.”

In the speech, he went on to urge those working on tackling information manipulation to get more organized — and come up with interoperable systems for sharing analysis and best practice — announcing that the EU would be doing more too by setting up a new central resource he said would be used for gathering info on disinformation threats and promoting the sharing of intelligence.

“This is a long-range fight. It’s not going to be won overnight,” he warned. “We have to have the tools. And this information sharing and analysis center will strengthen our responses and enable us to protect our democracies better.”

TechCrunch reached out to Twitter for a response to Borrell’s remarks — and to ask if it plans to reconsider ending free access to its APIs for researchers.

Following criticism that Musk’s plan to end free API access would likely kill off scores of useful Twitter bots, he recently announced an arbitrary reprieve for bots providing “good content that is free” — whatever “good” means in that context. But he appears to have been silent on the researcher API issue so far. (And on the threats to democratic interests posed by the sort of ‘bad content’ he has a penchant for spreading himself.)

At the time of writing Twitter had not responded to our questions about whether it will reconsider cutting off researchers in light of concerns about the impact doing so will have on the ability of experts to study disinformation. However another of Musk’s decisions on taking over Twitter was to liquidate its external comms department — and ignore media requests for comment — so we’re not expecting a response.
