A new report warns about the privacy risks that generative artificial intelligence poses for users. Based on a survey, Cisco indicated that “familiarity with AI is increasing, with 63% of users now highly active with generative AI,” but noted at the same time a growing concern about the “unintended risks” of uploading personal and private data to systems such as ChatGPT, Claude or Gemini.
“Conducted in 12 countries with responses from 2,600 privacy and security professionals, the eighth edition of the Data Privacy Benchmark Study demonstrates the growing importance of establishing solid foundations to unlock the full potential of AI,” explains the company, a pioneer in telecommunications technology. The full report can be read at this link.
Among the key findings, Cisco reported that “although many organizations report significant benefits from GenAI, data privacy remains a major risk. In particular, 64% of respondents worry about inadvertently sharing sensitive information publicly or with competitors, yet almost half admit to having entered personal or non-public data into GenAI tools.”
Data leaks involving AI tools are nothing new in the digital threat ecosystem. In March 2023, OpenAI acknowledged a vulnerability in ChatGPT that allowed some users to see other users’ conversation records and personal data linked to their accounts. More broadly, technology giants such as Facebook (now Meta), Google and Microsoft have also faced privacy incidents, from massive data breaches to poor handling of sensitive information used to train algorithms.
“Privacy and proper data governance are fundamental to responsible AI,” says Dev Stahlkopf, Cisco’s Chief Legal Officer. “For organizations working toward AI readiness, privacy investments establish an essential foundation, helping to accelerate effective AI,” Stahlkopf adds.
Other studies are more alarming: HackerOne, an offensive security company, says that 74% of organizations are already using generative AI, but only 18% understand the risks it poses for companies and their own systems.
Where is the data stored?
An interesting finding of the report is that respondents believe local data storage is safer, yet consider global providers more reliable when it comes to protecting their information.
“Despite the higher operational costs of data localization, 90% of organizations believe local storage is inherently safer, while 91% (up five percentage points year over year) trust global providers to offer better data protection. These two data points reveal today’s complex landscape: global providers are valued for their capabilities, but local storage is perceived as safer,” the report explains.
“The push for data localization reflects a growing interest in data sovereignty,” says Harvey Jang, Cisco’s Chief Privacy Officer. “However, a thriving global digital economy depends on trusted cross-border data flows. Interoperable frameworks such as the Global Cross-Border Privacy Rules will play a vital role in enabling growth while effectively addressing crucial privacy and security concerns,” he adds.
The situation in Argentina
Over the last five years, Argentina has faced a large number of data leaks, cyberattacks and security breaches.
“The report highlights that 90% of respondents believe data is inherently safer when stored locally, that is, within the borders of their own country. Yet, paradoxically, 91% trust global providers to protect their data better than local ones. This tension between localization and global services catches my attention, and it seems relevant to Argentina given our regulatory situation,” Luis García Balcarce, a lawyer specializing in digital rights, tells Clarín.
Although Argentina was a pioneer in Latin America with its Personal Data Protection Law (Law 25,326) of 2000, “this legislation is already 25 years old and does not account for technological advances or new digital practices, even though the European Union still considers us a country with an adequate level of protection,” he warns.
“Argentina’s regulatory lag contrasts with the report’s findings that 86% of organizations believe privacy laws have a positive impact, and that 96% consider the benefits of investing in privacy to outweigh the costs. Argentina may be missing the opportunity to capitalize on these benefits by not modernizing its legal framework,” he continues.
As for where data is hosted, García Balcarce argues that the law needs an update: “On the other hand, cross-border data flows are essential for the Argentine economy, but they require a modern regulatory framework that combines legal certainty with operational agility. Current legislation imposes strict requirements, such as the registration of databases, which often hinder these flows without necessarily guaranteeing greater effective protection. This does not mean dispensing with regulation of data protection and cross-border flows, but rather focusing on key, updated aspects aligned with modern privacy trends.”
Finally, considering the growing familiarity with generative artificial intelligence reflected in the report, with 63% of users now highly active, “it is clear that Argentina urgently needs a data protection regulatory framework that covers these technologies. That framework must establish clear guidelines for ethical and responsible use, promoting innovation without neglecting the protection of fundamental rights,” the specialist concludes.
This idea aligns with the weight that legal matters carry for users when it comes to privacy and the use of these technologies: “Privacy legislation remains a cornerstone of these investments,” Cisco notes.
Marcelo Felman, director of cybersecurity for Microsoft in Latin America, offered a central idea for understanding the adoption of AI.
“The adoption of artificial intelligence in organizations is no longer a question of whether it will happen, but of how to do it safely. According to the 2024 Work Trend Index, 78% of employees are bringing their own AI tools to the workplace. This shows great interest and organic adoption of artificial intelligence, but it also reflects an urgent challenge: organizations must provide secure platforms that allow their employees to take advantage of this technology without compromising privacy,” he said in conversation with Clarín.
“That is why it is essential that business leaders provide easy-to-use tools that promote safe practices and allow them to decide, by design, which data should remain in a private environment. The key is to provide clarity on how and when to use AI, ensuring that sensitive information always remains protected in an exclusive and trusted environment,” the specialist concluded.
Ultimately, protecting data is a decision that begins with us: understanding the risks we run with AI helps us know how to use these tools responsibly.