
The global music industry is ramping up legal, technical, and political efforts to stop AI-generated content that uses copyrighted material without authorization. Despite some detection tools and lawsuits, progress remains slow and fragmented across platforms and jurisdictions.
The music industry is fighting on multiple fronts—through streaming platforms, in the courts, and with lawmakers—to prevent the looting and misuse of its content by generative artificial intelligence (AI). But the results so far remain underwhelming.
75,000. That’s the number of deepfakes that Sony Music says it has already requested to be taken down across the internet—a figure that reflects the scale of the phenomenon.
Many in the industry argue that the technology is now advanced enough to detect songs produced by AI software without an artist's participation.
“Even if they sound realistic, songs created with AI show subtle irregularities in frequency, rhythm, and digital signature that you don’t find in the human voice,” explains Pindrop, a company that specializes in voice identification.
But it only takes a few minutes to find on YouTube or Spotify—two of the biggest music streaming platforms—a fake 2Pac rap about pizza or a cover of a K-pop hit falsely attributed to Ariana Grande, a song she never performed.
“We take this very seriously and are working on new tools to improve AI fake detection,” said Sam Duboff, Head of Regulatory Policy at Spotify, in a video posted this week on the YouTube channel Indie Music Academy.
YouTube also stated that it is “refining its technology with partners” and may make announcements in the coming weeks.
“Bad actors are one step ahead of the industry, which now has to react after failing to anticipate the threat,” said Jeremy Goldman, an analyst at research firm Emarketer.
“YouTube has billions of dollars at stake,” he added, “so we can assume they’ll figure it out—because they don’t want to see their platform turn into an AI nightmare.”
Deregulation Concerns
Beyond deepfakes, the music industry is increasingly concerned about the unauthorized use of its content to train generative AI interfaces like Suno, Udio, and Mubert.
In June, several major labels filed a lawsuit in federal court in New York against Udio’s parent company, accusing it of training its software using “recordings protected by intellectual property rights with the ultimate goal of diverting listeners, fans, and potential paying users.”
More than nine months later, no trial date has been set in that case, nor in a similar suit filed in Massachusetts against Suno.
At the center of the legal debate is "fair use," a legal doctrine that, under certain conditions, permits the use of copyrighted material without the rights holder's permission.
“We’re in a true gray area,” said Joseph Fishman, a law professor at Vanderbilt University, referring to how courts might interpret the criteria for fair use.
Initial rulings may not close the chapter entirely. “If different courts start reaching different conclusions,” Fishman warned, “it could eventually end up before the Supreme Court.”
Meanwhile, major players in AI music continue to feed their models with protected data, raising the question of whether the fight is already lost.
“I’m not sure it’s too late,” Fishman added. “A lot of these systems have been developed using copyrighted music, but new models are constantly emerging—and they may have to comply with a future court ruling.”
So far, labels, artists, and producers have also had little success on the legislative front.
Numerous bills have been introduced in the U.S. Congress, but none have made any headway.
A few U.S. states, including Tennessee, have passed laws, though these mostly focus on regulating deepfakes.
President Donald Trump has positioned himself as a champion of deregulation—especially in the realm of AI.
Several AI giants, particularly Meta, have seized the opportunity. Meta has argued that “the government should make it clear that using public data to train models constitutes fair use.”
In the UK, the Labour government has launched a consultation on relaxing intellectual property laws to give AI developers easier access to data.
In protest, more than 1,000 artists banded together in late February to release a silent album titled Is This What We Want?
According to Jeremy Goldman, AI’s unchecked rise continues to haunt the music industry because “it’s very fragmented, which puts it at a disadvantage when it comes to solving this issue.”
With AFP