Last week, a song using AI deepfakes of Drake and the Weeknd’s voices went viral, though neither artist was involved in its creation. Meanwhile, Grimes took to Twitter to offer 50% royalties on any AI-generated song that uses her voice, then declared that she’s interested in “killing copyright,” which would probably undermine her ability to collect those royalties in the first place. We might be living in the weirdest timeline, but unless Grimes is working on any secret inter-dimensional transit projects (you never know), the music industry has to reckon with what to do next.
Musicians like Holly Herndon and YACHT have embraced AI as a tool to push the limits of their creativity. YACHT trained an AI on 14 years of their music, then synthesized the results into the album “Chain Tripping”; Herndon created Holly+, a website that lets anyone freely make deepfake music using her voice.
While Herndon may openly invite people to experiment with AI art using her likeness, most artists don’t find out that their voices have been modeled until it’s too late. Therein lies the problem.
In Spotify’s recent quarterly earnings call, CEO Daniel Ek spoke about the company’s approach to AI-generated music. Despite Spotify taking down “Heart on My Sleeve,” the AI song that uses deepfakes of Drake and the Weeknd, Ek seems cautiously optimistic about the fast-developing technology.
“[AI] should lead to more music,” Ek said on the call. “More music, obviously, we think is great culturally.”
For a big business like Spotify, that might be true: If more people use its streaming service to listen to more music, then it makes more money. But for many artists and music fans, AI poses a threat.
“When artists are already struggling, it seems like a dangerous step,” entertainment lawyer Henderson Cole told TechCrunch.
Between abysmal streaming payouts and the long-term impact of COVID-19 on the live music industry, musicians have been having a rough go of it, to say the least. Now, like visual artists, these performers have become guinea pigs for technology that appropriates their work without consent.
“Music has a special social role in the development of technology,” Kevin Erickson, director of the Future of Music Coalition, told TechCrunch. “It can be attached to any kind of emerging technology as a way of providing a use case or selling general interest and attracting investment.”
We saw this happen with the crypto industry, which at one point seemed poised to change the status quo of music royalties and ticketing, but has yet to reach anything close to mass adoption.
Sometimes these new technologies do take hold, though. As a historical example, Erickson points to sampling, the practice of reworking snippets of other artists’ recordings into new ones. So long as a musician gets permission from the artist and their label, sampling is fair game.
“It was centered in community rather than the technology itself,” Erickson said of sampling. Of course, in cases where music was sampled without the artist’s consent, some high-profile lawsuits ensued. Now, it’s only a matter of time before rights holders take AI-generated music to court.
Under certain circumstances, copyrighted material can be used without explicit permission if it is considered “fair use.” Fair use analysis weighs whether a work was created for profit, how much of the copyrighted material it uses, how transformative it is, and whether it might economically harm the original.
Though a fair use argument could be constructed in favor of AI music, Cole thinks it’s doubtful that it would hold much weight in practice.
“In a world where Ed Sheeran and Robin Thicke are getting sued just for sounding similar to a hit song, someone using AI to copy an artist’s voice or musical sound seems unlikely to be allowed,” Cole said.
It takes a long time for the legal system to catch up with new technology, but for now, major labels like Universal Music Group (UMG) have spoken out against generative AI models trained on their artists’ work.