Twitter’s attempt to monetize porn reportedly halted due to child safety warnings

Despite serving as the online watercooler for journalists, politicians and VCs, Twitter isn’t the most profitable social network on the block. Amid internal shakeups and increased pressure from investors to make more money, Twitter reportedly considered monetizing adult content.

According to a report from The Verge, Twitter was poised to become a competitor to OnlyFans by allowing adult creators to sell subscriptions on the social media platform. That idea might sound strange at first, but it’s not actually that outlandish — some adult creators already rely on Twitter to advertise their OnlyFans accounts, since Twitter is one of the few major platforms on which posting porn doesn’t violate its guidelines.

But Twitter apparently put this project on hold after an 84-employee “red team,” designed to test the product for security flaws, found that Twitter cannot detect child sexual abuse material (CSAM) and non-consensual nudity at scale. Twitter also lacked tools to verify that creators and consumers of adult content were above the age of 18. According to the report, Twitter’s Health team had been warning higher-ups about the platform’s CSAM problem since February 2021.

To detect such content, Twitter relies on PhotoDNA, a Microsoft-developed tool that converts images into digital fingerprints and checks them against a database of known CSAM, helping platforms quickly identify and remove previously catalogued material. But newer or digitally altered images whose fingerprints aren’t already in that database can evade detection.
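
At its core, this kind of system computes a fingerprint, or hash, of each uploaded image and looks it up in a list of fingerprints of previously identified material. The toy sketch below is purely illustrative, not Twitter’s or Microsoft’s code; the KNOWN_HASHES set and function names are invented for the example, and it uses an exact cryptographic hash where PhotoDNA uses a perceptual one that tolerates resizing and minor edits. Either way, the fundamental limit the red team flagged is the same: a lookup can only catch material that has already been hashed into the database.

```python
# Illustrative only: a toy hash-lookup in the spirit of PhotoDNA-style matching.
# Not Twitter's or Microsoft's code. KNOWN_HASHES is a made-up placeholder for
# the database of fingerprints of previously identified CSAM.
import hashlib

KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder entry
}

def exact_hash(image_bytes: bytes) -> str:
    """Return a SHA-256 hex digest of the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_image(image_bytes: bytes) -> bool:
    """Flag an upload only if its fingerprint matches a known entry.

    With an exact hash, changing a single pixel yields a different digest, so
    edited images slip through. A perceptual hash like PhotoDNA's survives
    resizing and small edits, but neither approach can flag material that has
    never been hashed into the database in the first place.
    """
    return exact_hash(image_bytes) in KNOWN_HASHES

if __name__ == "__main__":
    upload = b"example image bytes"
    print(is_known_image(upload))  # False: this content was never catalogued
```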

“You see people saying, ‘Well, Twitter is doing a bad job,’” said Matthew Green, an associate professor at the Johns Hopkins Information Security Institute. “And then it turns out that Twitter is using the same PhotoDNA scanning technology that almost everybody is.”

Twitter’s yearly revenue — about $5 billion in 2021 — is small compared to that of a company like Google, which brought in $257 billion the same year. Google has the financial means to develop more sophisticated technology to identify CSAM, but these machine learning-powered mechanisms aren’t foolproof. Meta also uses Google’s Content Safety API to detect CSAM.

“This new kind of experimental technology is not the industry standard,” Green explained.

In one recent case, a father noticed that his toddler’s genitals were swollen and painful, so he contacted his son’s doctor. In advance of a telemedicine appointment, the father sent photos of his son’s infection to the doctor. Google’s content moderation systems flagged these medical images as CSAM, locking the father out of all of his Google accounts. The police were alerted and began investigating the father, but ironically, they couldn’t get in touch with him, since his Google Fi phone number was disconnected.

“These tools are powerful in that they can find new stuff, but they’re also error prone,” Green told TechCrunch. “Machine learning doesn’t know the difference between sending something to your doctor and actual child sexual abuse.”

Although this type of technology is deployed to protect children from exploitation, critics worry that the cost of this protection — mass surveillance and scanning of personal data — is too high. Apple planned to roll out its own CSAM detection technology called NeuralHash last year, but the product was scrapped after security experts and privacy advocates pointed out that the technology could be easily abused by government authorities.

“Systems like this could report on vulnerable minorities, including LGBT parents in locations where police and community members are not friendly to them,” wrote Joe Mullin, a policy analyst for the Electronic Frontier Foundation, in a blog post. “Google’s system could wrongly report parents to authorities in autocratic countries, or locations with corrupt police, where wrongly accused parents could not be assured of proper due process.”

This doesn’t mean that social platforms can’t do more to protect children from exploitation. Until February, Twitter didn’t offer users a dedicated way to flag content containing CSAM, meaning that some of the platform’s most harmful content could remain online long after users tried to report it. Last year, two people sued Twitter for allegedly profiting off of videos that were recorded of them as teenage victims of sex trafficking; the case is headed to the U.S. Ninth Circuit Court of Appeals. The plaintiffs claimed that Twitter did not remove the videos after being notified about them; the videos amassed over 167,000 views.

Twitter faces a tough problem: the platform is large enough that detecting all CSAM is nearly impossible, but it doesn’t make enough money to invest in more robust safeguards. According to The Verge’s report, Elon Musk’s potential acquisition of Twitter has also impacted the priorities of health and safety teams at the company. Last week, Twitter allegedly reorganized its health team to instead focus on identifying spam accounts — Musk has ardently claimed that Twitter is lying about the prevalence of bots on the platform, citing this as his reason for wanting to terminate the $44 billion deal.

“Everything that Twitter does that’s good or bad is going to get weighed now in light of, ‘How does this affect the trial [with Musk]?’” Green said. “There might be billions of dollars at stake.”

Twitter did not respond to TechCrunch’s request for comment.
