Japan warns that Sora 2 threatens the anime industry and accuses OpenAI of violating copyright

The Japan Commercial Broadcasters Association (JBA) has accused OpenAI of infringing the copyrights of Japanese anime studios and other content production companies with its Sora 2 model, by using their material to train the model without permission, after videos with “identical or very similar” content were disseminated.

OpenAI presented Sora 2, the new version of its generative AI model for video creation, at the end of September. Among other improvements, the model can generate highly realistic soundscapes and synchronize dialogue and sound effects.

The JBA, which represents 207 broadcasting companies in Japan, has denounced that, since its launch, Sora 2 has allowed users to generate videos with content “identical” to that created by some of these companies, content that is protected by copyright and that has been spread over the Internet.

As explained in a statement reported by the Asahi Shimbun, the association considers this to be a result of the model’s training, since it was trained on original content owned by the aforementioned companies during its development.

In this regard, it has stated that such use requires “the prior authorization of the rights holders” and that training generative AI services on content owned by these companies “not only infringes their copyright, but may also constitute civil violations such as trademark damage and defamation”.

The organization has also pointed out in the statement that this practice by OpenAI “could significantly harm” the “economic and personal” interests of the parties involved in producing commercial content, such as the original authors, scriptwriters, composers and the production team.

All of this “could destroy Japan’s culture and content production ecosystem”, the JBA has warned, adding that, beyond anime, if content similar to news programs broadcast by one of its member companies were generated, “there would be a significant disruption of public life”.

It has also emphasized that this type of behavior can lead to the creation of ‘deepfake’ videos, as well as false images of disasters or politicians, or hate speech. “This type of content could stoke public anxiety, distort good judgment and seriously undermine the value of impartial information from broadcasters” that are part of the association, it concluded.

In light of all this, the JBA has conveyed a set of considerations for AI development companies such as OpenAI, including applying measures to prevent the unauthorized use of other companies’ content, both for training models and for generating videos or images with similar content.

In addition to these guidelines, the organization has also detailed the need to remove such similar content “even though it has already been generated and distributed”, in particular from websites managed by the developers themselves.

Finally, it has also urged generative AI development companies to “respond honestly” to the claims of the affected companies regarding the infringement of their copyright.

By Editor
