The music industry is fighting on the platforms, in court and with lawmakers against the plundering and misappropriation of its content by generative artificial intelligence (AI), but so far the results are limited.
The label Sony Music says it has requested the removal of 75,000 deepfakes from the Internet, a figure that illustrates the scale of the phenomenon.
Many say the technology exists to detect these songs, produced by AI software without the artist's participation.
“Although they seem realistic, songs created with AI show slight irregularities in frequency, rhythm and digital signature that are not found in the human voice,” explains Pindrop, a company specialized in voice identification.
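Pindrop does not disclose how its detector works, but the idea of frequency-level irregularities can be illustrated with a toy heuristic (an illustrative assumption, not the company's actual method): measure how much the spectral flatness of an audio signal varies from frame to frame, since an unnaturally regular spectrum can be one telltale sign of synthesis.

```python
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum (0 to 1)."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12  # epsilon avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def flatness_variability(signal: np.ndarray, frame_len: int = 1024) -> float:
    """Split the signal into frames and return the variance of per-frame
    flatness. The (assumed, simplified) premise: natural voices fluctuate
    more frame-to-frame than an overly regular synthetic signal."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    return float(np.var([spectral_flatness(f) for f in frames]))

# Toy comparison: white noise (spectrally irregular) vs. a pure tone
# (unnaturally regular). The noise shows far more frame-to-frame variation.
rng = np.random.default_rng(0)
noisy = rng.standard_normal(8192)
tone = np.sin(2 * np.pi * 220 * np.arange(8192) / 16000)
print(flatness_variability(noisy) > flatness_variability(tone))  # → True
```

A real detector would combine many such cues with trained models; this sketch only shows the kind of signal statistic such systems can build on.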
A few minutes on YouTube or Spotify, the two main music streaming destinations, are enough to find a fake 2Pac rap about pizzas or a cover of a K-pop hit by Ariana Grande that she never performed.
“We take this very seriously and we are developing new tools to improve” the detection of fakes, Sam Duboff, head of regulatory policy at Spotify, explained this week on the YouTube channel Indie Music Academy.
YouTube also said it is “refining (its) technology with (its) partners,” and could make announcements in the coming weeks.
Jeremy Goldman, an analyst at eMarketer, points out that “the malicious actors are one step ahead” of the industry.
“YouTube has billions of dollars at stake,” he adds, “so one would expect it to find a way to solve the problem (…) because it does not want to see its platform turn into an AI nightmare.”
“Fair use”
But more than deepfakes, what worries the music industry is the unauthorized use of its content to develop specialized generative interfaces such as Suno, Udio and Mubert.
In June, several major record labels filed a lawsuit in federal court in New York against Udio's parent company, which they accuse of developing its software using “copyrighted recordings with the ultimate goal of diverting listeners, fans and potential paying users.”
More than nine months after the lawsuit was filed, there is still no trial date. Nor is there one for a similar case against Suno in Massachusetts.
At the center of the legal debate is the notion of “fair use,” which can limit the application of intellectual property rights under certain conditions.
“We are in an area of genuine uncertainty” about how the courts will interpret the criteria, says Joseph Fishman, a law professor at Vanderbilt University.
However, the first rulings will not be the last word, the academic warns, because “if the courts begin to disagree,” the Supreme Court may have to weigh in.
Meanwhile, the main players in musical AI continue to train their models on protected data, raising the question of whether the battle has already been lost.
“I'm not sure” it is too late, says Joseph Fishman. Many of these interfaces were developed using copyrighted material, but new models keep being released, and they may have to reckon with an eventual binding court ruling.
For now, labels, artists and producers have had little success on the third front of this offensive: the legislative one.
Numerous bills have been introduced in the United States Congress, but so far all have gone nowhere.
Some states, Tennessee in particular, have passed laws that focus mainly on deepfakes.
To make matters worse, Donald Trump has set himself up as a champion of deregulation, particularly of AI.
Several artificial intelligence giants have jumped on the bandwagon, notably Meta, which argues that “the government should make it clear that the use of public data to train models is unequivocally fair use.”
If the Trump administration follows this advice, it would tip the balance against music professionals, although the courts are likely to have the final word.
The outlook is no better in the United Kingdom, where the Labour government has launched a consultation on reforming intellectual property law to facilitate AI developers' access to content.
In protest, more than 1,000 artists joined forces at the end of February to release a silent album entitled “Is This What We Want?”
In Jeremy Goldman's opinion, AI abuses continue to plague the music industry because “it is very fragmented, which puts it at a disadvantage when it comes to solving the problem.”