A blow to ChatGPT's developer: first class action lawsuit filed against OpenAI

After rapid growth and widespread enthusiasm among technology lovers, ChatGPT has taken a hit. This morning (Monday), for the first time, a motion was submitted to the Central District Court in Lod seeking approval of a class action against OpenAI, developer and operator of the popular artificial intelligence software. The motion alleges violation of privacy, unlawful collection and use of users' information (especially that of minors), and failure to register a database.

The filing claims, among other things: “OpenAI neglected its duties towards its users, which could lead to a catastrophe. Minors can access and use the ChatGPT software freely, without any restrictions, and be exposed to inappropriate content.”

According to the filers, “OpenAI reserves for itself, contrary to the law, the possibility of transferring its users' personal information to an unlimited number of third parties, as it sees fit and without obtaining the users' prior consent. The company in fact holds a huge database containing the personal information of tens of thousands, if not hundreds of thousands, of Israelis; the company's use of their information constitutes a serious violation of their privacy.”

The lawsuit was filed by attorneys Shaul Zioni, Eli Philersdorf, Reut Zeitlbach and Chen Naman of the Zioni Philersdorf Phillip law firm. It was filed on behalf of all users of the chatbot, and in particular on behalf of minors.

Attorney Shaul Zioni, one of the attorneys who filed the lawsuit, explained: “The lawsuit claims that the artificial intelligence software is a very powerful tool, and the information it produces may have a significant impact on the conduct of its hundreds of millions of users. It was therefore to be expected that OpenAI would take extra care to verify that users of the software were at least of the appropriate age, and would exercise extreme care in preserving users' privacy.”

“But unfortunately, in light of the ‘race’ OpenAI ran to be the first in the world to introduce artificial intelligence software to the public, it completely neglected its duties towards users in many respects. Neglecting these duties could lead to a catastrophe, no less, with regard to user rights in a world of artificial intelligence. It is not for nothing that in recent weeks there have been calls to halt the development of these formidable products,” he said.

The motion for approval claims that although the company's own terms of use prohibit use of the chatbot by children under the age of 13, and by minors under 18 who do not have a guardian's permission, in practice the registration procedure neither asks users to declare or confirm that they are over 13 nor asks anyone under 18 to show permission from a parent or guardian. In effect, minors can access and use the ChatGPT software freely, without any restrictions, and be exposed to inappropriate content.

According to the applicants, this is illegal and reckless conduct by the company, particularly given that the company itself admits that ChatGPT sometimes provides incorrect answers and false or harmful content, since it currently has no source of truth; that when a question is unclear the software will usually guess; and that the software sometimes responds to offensive instructions or exhibits biased behavior.

In addition, the lawsuit explains that the company admits to using the enormous amount of information it collects about users, including minors (information that includes, among other things, the content of chatbot conversations), for the purpose of “improving” its services, but does not specify exactly what use is made of the information, how it is stored and secured, or for how long it is retained.

The company states that it “may” provide users' personal information to “third parties” without obtaining users' prior consent, yet it does not disclose the specific identity of those unknown “third parties,” describing them instead in vague, uninformative terms such as “suppliers” or “parties to the transaction.” In doing so, the company in effect reserves for itself, contrary to the law, the possibility of transferring its users' personal information to an unlimited number of third parties, as it sees fit and without obtaining the users' consent in advance.

The motion for approval was submitted a few days after Italy's privacy protection authority opened an investigation against OpenAI and decided to temporarily ban ChatGPT's activity in the country, citing serious concerns about harm to users' privacy and the lack of filtering mechanisms to verify that the tool's users are over 13 years old.

In addition, on March 29 an open letter was published by about 1,500 leading technologists, including Elon Musk and Steve Wozniak, demanding that the “training” of GPT-4 be halted for a period of six months, due to the dangers that irresponsible development of such software poses to humanity.

By Editor