Papercup raises $20M for AI that automatically dubs videos

Dubbing is a lucrative market, with Verified Market Research predicting that film dubbing services alone could generate $3.6 billion annually by 2027. But it’s also a laborious and costly process. On average, it can take an hour of recording studio time for five minutes of narration; one calculator pegs the price at $75 per minute for even a simple video.

The promise of AI in this domain, specifically natural language processing, is speeding up the task by creating human-sounding dubs across multiple languages. One British startup pursuing this, Papercup, claims its technology is being employed by media giants Sky News, Discovery, and Business Insider and was used to translate 30 seasons of Bob Ross’ iconic show, The Joy of Painting.

CEO Jesse Shemen estimates that more than 300 million people have watched videos translated by Papercup over the past 12 months.

“There is a significant mismatch between demand for localization and translation and the ability to fulfil the demand,” Shemen said. “Shows like [Netflix’s] ‘Squid Game’ validate the thesis that people will watch content created anywhere, in any language, if it is entertaining and interesting. This is why the sector is so primed for growth.”

To wit, Papercup today announced that it raised $20 million in a Series A funding round led by Octopus Ventures with participation from Local Globe, Sands Capital, Sky and Guardian Media Ventures, Entrepreneur First, and BDMI. It brings the London-based company’s total raised to date to roughly $30.5 million, most of which will be put toward research around expressive AI-generated voices and expanding Papercup’s support for foreign languages, Shemen told TechCrunch via email.

Founded in 2017 by Shemen and Jiameng Gao, Papercup offers an AI-powered dubbing solution that identifies human voices in a target film or show and generates dubs in a new language. Video content producers upload their videos, specify a language, wait for Papercup’s teams of native speakers to quality-check the audio, and receive a translation with a synthetic voiceover.
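Papercup hasn’t published a public API, so the following is only a rough sketch of the submit-and-review workflow described above, using invented function names (transcribe, translate, synthesize_voice, human_review) rather than anything from the company:

```python
from dataclasses import dataclass, field


# Hypothetical stand-ins for the three automated stages the article describes:
# speech recognition, machine translation, and synthetic voice generation.
def transcribe(video_path: str) -> list[str]:
    return ["Hello and welcome to the show."]  # placeholder transcript


def translate(lines: list[str], target_lang: str) -> list[str]:
    return [f"[{target_lang}] {line}" for line in lines]  # placeholder translation


def synthesize_voice(lines: list[str], voice: str) -> bytes:
    return b"FAKE_AUDIO"  # placeholder synthetic voiceover


def human_review(lines: list[str]) -> bool:
    # In practice a native-speaking translator edits the lines in a review UI;
    # here we simply approve anything non-empty.
    return all(line.strip() for line in lines)


@dataclass
class DubbingJob:
    video_path: str
    target_lang: str
    voice: str = "neutral"
    translated_lines: list[str] = field(default_factory=list)
    approved: bool = False  # set by a human reviewer, not by the model


def run_pipeline(job: DubbingJob) -> bytes | None:
    """Automated pass first; a human signs off before the voiceover is generated."""
    transcript = transcribe(job.video_path)
    job.translated_lines = translate(transcript, job.target_lang)
    job.approved = human_review(job.translated_lines)
    if not job.approved:
        return None  # send back for another correction pass
    return synthesize_voice(job.translated_lines, job.voice)


if __name__ == "__main__":
    audio = run_pipeline(DubbingJob("episode_01.mp4", target_lang="es"))
    print("dub ready" if audio else "needs another review pass")
```

The relevant point from Shemen’s description is where the human sits: reviewers correct machine output rather than translating from scratch, which is where the claimed speed advantage comes from.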

Shemen claims that Papercup’s platform can generate dubs at a scale and pace that manual methods can’t match. Beyond the custom translations it creates for customers, Papercup offers a catalog of voices with “realistic” tones and emotions. Many of these have been used in internal communications, corporate announcements, and educational materials, in addition to films and TV, according to Shemen.

“Our ‘human in the loop’ approach means that human translators provide quality control and guarantee accuracy, but need to be much less hands-on than if they were providing the whole translation, meaning they can work faster and across more translations,” Shemen said. “People watched more video content during the pandemic, which significantly increased demand for our services.”

The market for AI-generated “synthetic media” is growing. Video- and voice-focused firms including Synthesia, Respeecher, Resemble AI, and Deepdub have launched AI dubbing tools for shows and movies. Beyond startups, Nvidia has been developing technology that alters video in a way that takes an actor’s facial expressions and matches them with a new language.

But there might be downsides. As The Washington Post’s Steven Zeitchik points out, AI-dubbed content without attention to detail could lose its “local flavor.” Expressions in one language might not mean the same thing in another. Moreover, AI dubs pose ethical questions, like whether to recreate the voice of a person who’s passed away.

Also murky are the ramifications of voices generated from working actors’ performances. The Wall Street Journal reports that more than one company has attempted to replicate Morgan Freeman’s voice in private demos, and studios are increasingly adding contract provisions that allow them to use synthetic voices in place of performers “when necessary,” for example to tweak lines of dialogue during post-production.

Shemen positions Papercup as a largely neutral platform, albeit one that monitors how its technology is used for signs of abuse (like the creation of deepfakes). Work is underway on real-time translation for content like news and sporting events, Shemen revealed, as well as on finer-grained control over the expressivity of its AI-generated voices.

“The value of [dubbing] is clear: people retain 41% of information when watching a short video that’s not in their language — when subtitled they retain 50% and when dubbed through Papercup they retain 70%. That’s a 40% uplift on subtitling alone,” Shemen said. “With truly emotive cross-lingual AI dubbing, Papercup tackles all forms of content, making video and audio more accessible and enjoyable for everyone.”
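For anyone checking the arithmetic, the “40% uplift” compares dubbing with subtitling rather than with the untranslated baseline; a quick check of the quoted figures:

```python
# Retention figures quoted by Shemen (percent of information retained)
baseline, subtitled, dubbed = 41, 50, 70

uplift_over_subtitles = (dubbed - subtitled) / subtitled
print(f"{uplift_over_subtitles:.0%}")  # prints "40%", the figure cited in the quote
```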

Papercup currently employs 38 people in London and maintains a translator network spanning three continents. The company expects its headcount to double by the end of the year.
