ISACA points to a lack of governance and protocols in European companies

The rapid adoption of artificial intelligence (AI) by European companies and organizations is revealing a lack of governance and protocols around this technology: in a crisis, most do not know how quickly they could stop an affected AI system, and many would not be able to explain the failure afterwards.

This was announced by ISACA, the global professional association for audit, security and IT governance, which previewed some results of its new AI Pulse Poll 2026 study at a press briefing held this Monday, although the full study is scheduled to be officially published on May 5 of this year.

Specifically, ISACA wanted to preview some of its conclusions, which highlight that, although most companies are adopting AI tools in their workflows, they are currently unable to govern the technology and therefore face various problems with its security and use.

This was stated during the briefing by Pablo Ballarín, ISACA expert and cybersecurity consultant at Balusian, who said that AI is being adopted "too quickly, without protocols and with ignorance" by organizations and companies.

"This rapid integration of AI," he argued, leads to companies "not being aware of what type of tools they are using," and he appealed for a commitment to responsibility, since "it is not a technological problem, but a problem of control and governance."

This highlights a "significant and growing gap" between AI adoption and organizational readiness to manage the risks involved.

Of the 681 digital trust professionals surveyed in Europe, almost three fifths (59 percent) say they do not know how quickly their organization could stop an AI system if a security incident were identified.

In fact, only 21 percent said they could do so in less than half an hour, and among them only 5 percent could stop it in about a minute. This shows that, in a large percentage of cases, a compromised AI system could operate unchecked for more than 30 minutes, which can lead to various security problems.

According to Ballarín, this picture is related to companies' lack of knowledge about the AI tools they deploy in their workflows, since they do so without knowing how the tools work or how they can impact the organization internally. "If we don't know how they work, we don't know what they can achieve and what they can cause," the expert pointed out.

ISACA also pointed out that the absence of clear response procedures within organizations regarding the use of these technologies plays a role, with direct implications for regulatory exposure, the company's reputational risk, and the continuity of the processes and services these systems support.

ORGANIZATIONAL ABILITY TO INVESTIGATE AN AI INCIDENT

These difficulties in stopping AI systems in the event of a security incident are further intensified by "the significant gaps in organizations' capacity to understand and explain what happened when a system has failed," as ISACA stressed.

According to the study results, less than half of respondents (42 percent) are confident in their organization’s ability to investigate and explain a serious AI incident to management.

In this context, ISACA highlighted the importance of being able to explain what has happened, given the arrival of regulations such as the European Union's AI Act, which is currently in its implementation phase and establishes explicit requirements on explainability and accountability.

That is, it is legislation that requires not only the implementation of technical controls, but also "governance structures, traceability and professionals with the necessary skills to interpret and communicate the behavior of AI systems."

Taking all this into account, the ISACA study's results show that these capabilities are not yet implemented at scale within organizations: only 11 percent of respondents are completely confident in their company's ability to investigate and explain an AI incident.

The problem escalates further because not all organizations require employees to report when they have used AI tools in their work. According to the data collected in the report, in a third of companies (33 percent) employees do not have to disclose whether they have used AI, and in 15 percent they do not know whether they have to do so.

Only 17 percent of respondents say their company requires reporting on the use of AI tools. This creates significant visibility gaps about where and how the technology is being used in the company, making it considerably harder to identify where security incidents originate.

Ballarín therefore stated that, in terms of governance, there should be an inventory of all existing AI tools, which ones employees use, and whether they are being run in a controlled environment, that is, knowing where the information comes from and ensuring that it does not leave the company.

GOVERNANCE: WHO WOULD BE ULTIMATELY RESPONSIBLE?

All these conclusions from the ISACA study point to a deeper structural problem: 20 percent of those surveyed do not know who would be ultimately responsible if an AI system caused damage, while 38 percent identify the Board of Directors or a manager responsible for this area.

According to Ballarín, defining responsibility in these areas is complex. He referred to the current regulatory trend, which places responsibility at the higher levels of the organization, since they are the ones who make the final decisions and set the strategies.

Regarding supervision by managers, the study reveals "some optimism," given that 40 percent of those surveyed affirm that AI-generated actions in their organization are approved by humans before execution.

With all this, ISACA frames AI risk as a problem that must also be seen as "a governance challenge that cuts across the entire company," especially as AI increasingly influences decisions, which must be overseen by a "governance infrastructure" that supports them.

"The fact that AI is capable of predicting and performing human actions has impacts that ordinary ICT does not. Cybersecurity, along with other aspects related to unexpected uses, is what must be controlled," Ballarín stated.

“The gap between deployment and governance is not closing, but widening. Organizations must define responsibilities, develop incident response capabilities, and create visibility into AI use through audits that foster a culture of effective supervision,” said ISACA’s Director of Global Strategy, Chris Dimitriadis.

By Editor
