While a French online hate speech law has just been derailed by the country’s top constitutional authority on freedom of expression grounds, Germany is beefing up hate speech rules — passing a provision that will require platforms to send suspected criminal content directly to the Federal police at the point it’s reported by a user.
The move is part of a wider push by the German government to tackle a rise in right-wing extremism and hate crime, which it links to the spread of hate speech online.
Germany’s existing Network Enforcement Act (aka the NetzDG law) came into force in 2017, putting an obligation on social network platforms to remove hate speech within set deadlines as tight as 24 hours for clear-cut cases, with fines of up to €50M should they fail to comply.
Yesterday the parliament passed a reform that extends NetzDG by placing a reporting obligation on platforms, requiring them to report certain types of “criminal content” to the Federal Criminal Police Office.
A wider reform of the NetzDG law is ongoing in parallel. It is intended to bolster user rights and transparency, including by simplifying user notifications and making it easier for people to object to content removals and have content restored after a successful appeal, among other tweaks. Broader transparency reporting requirements are also looming for platforms.
The NetzDG law has always been controversial, with critics warning from the get-go that it would lead to restrictions on freedom of expression by incentivizing platforms to remove content rather than risk a fine (aka the risk of ‘overblocking’). In 2018 Human Rights Watch dubbed it a flawed law, critiquing it for being “vague, overbroad, and turn[ing] private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal”.
The latest change to hate speech rules is no less controversial: Now the concern is that social media giants are being co-opted to help the state build massive databases on citizens without robust legal justification.
A number of amendments to the latest legal reform were rejected, including one tabled by the Greens which would have prevented the personal data of the authors of reported social media posts from being automatically sent to the police.
The party is concerned that the new reporting obligation could be abused, resulting in data on citizens who have not in fact posted any criminal content ending up with the police.
It also argues there are only weak notification requirements to inform authors of flagged posts that their data has been passed to the police, among sundry other criticisms.
The party had proposed that only the post’s content be transmitted directly to police, who would then have been able to request the associated personal data from the platform should there be a genuine need to investigate a particular piece of content.
The German government’s reform of hate speech law follows the 2019 murder of a pro-refugee politician, Walter Lübcke, by neo-Nazis, which it said was preceded by targeted threats and hate speech online.
Earlier this month police staged raids on 40 hate speech suspects across a number of states who are accused of posting “criminally relevant comments” about Lübcke, per national media.
The government also argues that hate speech online has a chilling effect on free speech and a deleterious impact on democracy by intimidating those it targets, meaning they are unable to express themselves freely or participate in society without fear.
At the pan-EU level, the European Commission has been pressing platforms to improve their reporting around hate speech takedowns for a number of years, after tech firms signed up to a voluntary EU Code of Conduct on hate speech.
The Commission is also now consulting on wider changes to platform rules and governance, under a forthcoming Digital Services Act that will consider how much liability tech giants should face for the content they host.