UK to combat Russia’s ‘hostile online warfare’ by forcing internet firms to remove disinformation


The U.K. government is pushing to make “foreign interference” such as disinformation a priority offence under its proposed Online Safety Bill, forcing technology firms to remove contravening content shared by foreign state actors.

The move follows recent legislation announced by the U.K. that is designed to deter foreign state actors seeking to “undermine U.K. interests,” including heftier maximum penalties for attempts at foreign interference in elections. The proposed legislation comes shortly after MI5 warned that an agent with links to the Chinese Communist Party (CCP) had infiltrated Parliament, and as the U.K. has been ramping up its efforts to counter Russian “troll factories” seeking to spread disinformation around the war in Ukraine. And then there was the prank call to Ben Wallace, the U.K.’s Secretary of State for Defence, from Russian hoaxers pretending to be Ukrainian Prime Minister Denys Shmyhal.

It’s also worth noting that the U.K. is no stranger to disinformation controversy, perhaps most notably around Russia’s alleged interference in the 2016 Brexit referendum, in which the U.K. voted to leave the European Union. A subsequent report found that the British government and intelligence agencies didn’t conduct any real assessment of Russia’s attempts to interfere with the referendum, despite the evidence on hand.

Russia and ‘hostile online warfare’

While today’s announcement applies to disinformation from all foreign actors, the U.K.’s Digital Secretary Nadine Dorries pointed specifically to recent “hostile online warfare” emanating from Russia.

“The invasion of Ukraine has yet again shown how readily Russia can and will weaponise social media to spread disinformation and lies about its barbaric actions, often targeting the very victims of its aggression,” Dorries said in a statement published by the Department for Digital, Culture, Media & Sport. “We cannot allow foreign states or their puppets to use the internet to conduct hostile online warfare unimpeded. That’s why we are strengthening our new internet safety protections to make sure social media firms identify and root out state-backed disinformation.”

This essentially sees the U.K. draw closer ties between two new bills that are currently making their way through Parliament — the National Security Bill, which was introduced at the Queen’s Speech in May as a replacement for existing espionage laws, and the Online Safety Bill, which includes new rules on how online platforms should manage dubious content. Under the latter bill, which is expected to come into force later this year, online platforms such as Facebook or Twitter would be required to take proactive action against illegal or “harmful” content, and could face fines of up to £18 million ($22 million) or 10% of their global annual turnover, whichever is higher. On top of that, the communications regulator Ofcom would have new powers to block access to specific websites.

Priority offence

As a so-called “priority offence,” disinformation joins a host of offences already covered in the Online Safety Bill, including terrorism, harassment and stalking, hate crime, people trafficking, extreme pornography, and more.

With this latest amendment, social media companies, search engines, and other digital entities that host user-generated content will “have a legal duty to take proactive, preventative action” to minimize exposure to state-sponsored disinformation that seeks to interfere with the U.K.

Part of this will involve identifying fake accounts that have been set up by groups or individuals representing foreign states, with the express purpose of influencing democratic or legal processes. It will also include the spread of “hacked information to undermine democratic institutions,” which — while not entirely clear — may include accurate content that has been surreptitiously procured from the U.K. government or political parties. So this might mean that Facebook et al will be forced to remove content if it includes embarrassing reveals about prominent British politicians.

But if we’ve learned anything over the past decade of managing user-generated content online, it’s that it’s incredibly difficult to do at scale — and even then, it’s often not easy to tell whether a user is legitimate or a bad actor employed by a foreign government. Faced with the prospect of gargantuan fines, internet companies struggling to comply with the legislation could end up catching a lot of legitimate content or accounts in the firing line.
