The next era of moderation will be verified

Since the dawn of the internet, knowing (or, perhaps more accurately, not knowing) who is on the other side of the screen has been one of the biggest mysteries and thrills. In the early days of social media and online forums, anonymous usernames were the norm and meant you could pretend to be whoever you wanted to be.

As exciting and liberating as this freedom was, the problems quickly became apparent — predators of all kinds have used this cloak of anonymity to prey upon unsuspecting victims, harass anyone they dislike or disagree with, and spread misinformation without consequence.

For years, the conversation around moderation has been focused on two key pillars. First, what rules to write: What content is deemed acceptable or forbidden, how do we define these terms, and who makes the final call on the gray areas? And second, how to enforce them: How can we leverage both humans and AI to find and flag inappropriate or even illegal content?

While these continue to be important elements to any moderation strategy, this approach only flags bad actors after an offense. There is another equally critical tool in our arsenal that isn’t getting the attention it deserves: verification.

Most people think of verification as the “blue checkmark” — a badge of honor bestowed upon the elite and celebrities among us. However, verification is becoming an increasingly important tool in moderation efforts to combat nefarious issues like harassment and hate speech.

That blue checkmark is more than just a signal showing who’s important — it also confirms that a person is who they say they are, which is an incredibly powerful means to hold people accountable for their actions.

One of the biggest challenges that social media platforms face today is the explosion of fake accounts, with the Brad Pitt impersonator on Clubhouse being one of the more recent examples. Bots and sock puppets spread lies and misinformation like wildfire, and they propagate more quickly than moderators can ban them.

This is why Instagram began implementing new verification measures last year to combat this exact issue. By verifying users’ real identities, Instagram said it “will be able to better understand when accounts are attempting to mislead their followers, hold them accountable, and keep our community safe.”

The urgency to implement verification goes beyond stopping the spread of questionable content. It can also help companies ensure they’re staying on the right side of the law.

Following an exposé revealing that illegal content was being uploaded to Pornhub, the company banned posts from unverified users and deleted all content uploaded from unverified sources (more than 80% of the videos hosted on its platform). It has since implemented new measures to verify its users and prevent this kind of content from infiltrating its systems again.

Companies of all kinds should treat this case as a cautionary tale: if verification had been in place from the beginning, the platform would have been far better positioned to identify bad actors and keep them out.

However, it’s important to remember that verification is not a single tactic, but rather a collection of solutions that must be used dynamically in concert to be effective. Bad actors are savvy and continually updating their methods to circumvent systems. Using a single-point solution to verify users — such as through a photo ID — might sound sufficient on its face, but it’s relatively easy for a motivated fraudster to overcome.

At Persona, we’ve detected increasingly sophisticated fraud attempts ranging from using celebrity photos and data to create accounts to intricate photoshopping of IDs and even using deepfakes to mimic a live selfie.

That’s why it’s critical for verification systems to weigh multiple signals, including actively collected customer information (like a photo ID), passive signals (like an IP address or browser fingerprint), and third-party data sources (like phone and email risk lists). When these data points are combined, a valid but stolen ID won’t pass through the gates, because signals like location or behavioral patterns will flag the identity as likely fraudulent, or at the very least as warranting further investigation.
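As a rough illustration of how combining these signals might work, here is a minimal Python sketch of a multi-signal risk score. Every signal name, weight, and threshold below is a hypothetical assumption made for the example, not Persona’s actual method or any vendor’s real model:

```python
# Hypothetical multi-signal verification scoring. All signal names,
# weights, and thresholds are assumptions for illustration; a production
# system would calibrate these against real fraud data.
from dataclasses import dataclass


@dataclass
class Signals:
    id_check_passed: bool                # active: submitted photo ID looks authentic
    selfie_matches_id: bool              # active: live selfie matches the ID photo
    ip_country_matches_id: bool          # passive: IP geolocation agrees with the ID
    device_seen_on_banned_account: bool  # passive: browser/device fingerprint reuse
    email_on_risk_list: bool             # third party: email flagged by a risk vendor
    phone_on_risk_list: bool             # third party: phone flagged by a risk vendor


def risk_score(s: Signals) -> float:
    """Combine independent signals into a 0-1 risk score (higher is riskier)."""
    score = 0.0
    if not s.id_check_passed:
        score += 0.40
    if not s.selfie_matches_id:
        score += 0.25
    if not s.ip_country_matches_id:
        score += 0.15
    if s.device_seen_on_banned_account:
        score += 0.30
    if s.email_on_risk_list:
        score += 0.10
    if s.phone_on_risk_list:
        score += 0.10
    return min(score, 1.0)


def decide(s: Signals) -> str:
    """Map the combined score to an action rather than trusting any single check."""
    score = risk_score(s)
    if score >= 0.5:
        return "reject"
    if score >= 0.2:
        return "manual_review"
    return "approve"


# A valid but stolen ID: the document checks pass, yet passive signals disagree.
stolen = Signals(True, True, False, True, False, False)
print(decide(stolen))  # -> "manual_review" (score 0.45)
```

The design point the sketch captures is that no single signal is decisive: the document check can pass while passive signals, like a device fingerprint previously seen on a banned account or a mismatched location, still push the account into review.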

This kind of holistic verification system will enable social and user-generated-content platforms to not only deter and flag bad actors but also prevent them from repeatedly reentering the platform under new usernames and emails, a common tactic of previously banned trolls and account abusers.

Beyond individual account abusers, a multisignal approach can help manage an arguably bigger problem for social media platforms: coordinated disinformation campaigns. Any issue involving groups of bad actors is like battling the multiheaded Hydra — you cut off one head only to have two more grow back in its place.

Yet killing the beast is possible when you have a comprehensive verification system that can help surface groups of bad actors based on shared properties (e.g., location). While these groups will continue to look for new ways in, multifaceted verification that is tailored for the end user can help keep them from running rampant.
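To make the shared-property idea concrete, here is a hypothetical Python sketch of the simplest possible version: bucketing accounts by properties they have in common. The field names, the keys chosen, and the minimum group size are all assumptions for illustration, not any platform’s actual detection logic:

```python
# Hypothetical grouping of accounts by shared properties to surface
# possible coordinated activity. Field names, keys, and the minimum
# group size are assumptions for illustration only.
from collections import defaultdict


def find_coordinated_groups(accounts, keys=("ip_address", "device_fingerprint"),
                            min_size=3):
    """Bucket accounts that share a value for any key; flag large buckets."""
    groups = defaultdict(set)
    for account in accounts:
        for key in keys:
            value = account.get(key)
            if value:
                groups[(key, value)].add(account["username"])
    # Large buckets are candidates for human review, not proof of coordination.
    return {k: sorted(v) for k, v in groups.items() if len(v) >= min_size}


accounts = [
    {"username": "a1", "ip_address": "203.0.113.7", "device_fingerprint": "fp_x"},
    {"username": "a2", "ip_address": "203.0.113.7", "device_fingerprint": "fp_x"},
    {"username": "a3", "ip_address": "203.0.113.7", "device_fingerprint": "fp_y"},
    {"username": "b1", "ip_address": "198.51.100.2", "device_fingerprint": "fp_z"},
]
print(find_coordinated_groups(accounts))
# -> {('ip_address', '203.0.113.7'): ['a1', 'a2', 'a3']}
```

Real systems would weigh many more properties and tolerate noise, but the principle is the same: accounts that look independent one at a time become conspicuous when examined as a cluster.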

Historically, identity verification solutions like Jumio or Trulioo were designed for specific industries, like financial services. But we’re starting to see rising demand for industry-agnostic solutions like Persona to keep up with new and emerging use cases for verification. Nearly every industry that operates online can benefit from verification, even ones like social media, where there isn’t necessarily a financial transaction to protect.

It’s not a question of if verification will become a part of the solution for challenges like moderation, but rather a question of when. The technology and tools exist today, and it’s up to social media platforms to decide that it’s time to make this a priority.
