AI is taking over the iconic voice of Darth Vader, with the blessing of James Earl Jones

From the cringe-inducing Jar Jar Binks to unconvincing virtual Leia and Luke, Disney’s history with CG characters is, shall we say, mixed. But that’s not stopping them from replacing one of the most recognizable voices in cinema history, Darth Vader, with an AI-powered voice replica based on James Earl Jones.

The retirement of Jones, now 91, from the role is of course well-earned. But if Disney continues to have its way (and there is no force in the world that can stop it), Vader is far from done. It would be unthinkable to recast the character, but if Jones is done, what can they do?

The solution is Respeecher, a Ukrainian company that trains text-to-speech machine learning models with the (licensed and released) recordings of actors who, for whatever reason, will no longer play a part.

Vanity Fair just ran a great story on how the company managed to put together the Vader replacement voice for Disney’s “Obi-Wan Kenobi” — while the country was being invaded by Russia. Interesting enough on its own, but others noted that the story also serves as confirmation that, from now on, the iconic voice of Vader will officially be rendered by AI.

This is far from the first case where a well-known actor has had their voice synthesized or altered in this way. Another notable recent example is “Top Gun: Maverick,” in which the voice of Val Kilmer (reprising his role as Iceman) was synthesized due to the actor’s medical condition.

That sounded good, but a handful of whispered lines aren’t quite the same as a 1:1 replacement for a voice even children have known (and feared) for decades. Can a small company working at the cutting edge of machine learning tech pull it off?

You can judge for yourself — here’s one compilation of clips — and to me it seems pretty solid. The main criticism of that show certainly wasn’t Vader’s voice. If you didn’t know otherwise, you would probably just assume it was Jones speaking the lines, not another actor’s voice being modified to fit the bill.

The giveaway is that it doesn’t actually sound like Jones does now — it sounds like he did in the ’70s and ’80s when the original trilogy came out. That’s what anyone seeing Obi-Wan and Vader fight will expect, probably, but it’s a bit strange to think about.

It opens up a whole new can of worms. Sure, an actor may license their voice work for a character, but what about when that character ages? What about a totally different character they voice that happens to sound similar? What recourse do they have if their voice synthesis files leak and people start using them willy-nilly?

It’s an interesting new field to work in, but it’s hardly without pitfalls and ethical conundra. Disney has already broken the seal on many transformative technologies in filmmaking and television, and borne the deserved criticism when what it put out did not meet audiences’ expectations.

But they can take the hits and roll with them — maybe even take a page from George Lucas’s book and try to rewrite history, improving the rendering of Grand Moff Tarkin in a bid to make us forget how waxy he looked originally. As long as the technology is used to advance and complement the creativity of writers, directors and everyone else who makes movies magic, and not to save a buck or escape tricky rights situations, I can get behind it.
