The artificial intelligence law opens a gap between large, well-prepared companies and those that turn to open source

Artificial intelligence (AI) is no longer a lawless territory. The newly approved European regulation (the AI Act) will be applied gradually over the next two years to any system used in the EU or affecting its citizens, and it will be binding on providers, deployers and importers. The law thus opens a gap between large companies, which have already anticipated limits on their developments, and smaller entities that want to deploy their own models built on the growing supply of open source tools. The latter, if they lack the capacity to audit their systems, will have access to regulatory sandboxes to develop and train innovative AI before bringing it to market. But the availability of open source raises questions about the ability to control its uses, which are already widespread in the creation of non-consensual pornography and in fraud campaigns.

“We ensure that the AI technology we create is developed responsibly and ethically from the beginning. IBM has been working with government agencies around the world to promote smart and effective regulation, as well as to provide guardrails for society,” says Christina Montgomery, vice president and chief privacy and trust officer at the multinational.

Pilar Manchón, director of AI research strategy at Google and advisor to the Spanish Government’s advisory committee, agreed with this assessment during a meeting at the University of Seville. “We need regulation because AI is too important not to have it. Artificial intelligence must be developed, but it must be done well.” The researcher summarizes the premises of Google’s AI Principles: “It is very easy to sum up: don’t do bad things, do good things, and if you are going to do something, make sure it will have a positive impact on the community, on society, on the scientific community. And if it could potentially do something other than what you used it for or designed it for, be sure to take all the necessary precautions and mitigate the risks. Do good, innovate, be bold, but responsibly.”

The Google executive says other multinationals share this vision. Along these lines, Microsoft’s president, Brad Smith, stated in a recent interview with EL PAÍS: “We need a level of regulation that guarantees security.” Smith argues that the European law does this by “examining safety standards and imposing a basis for these models.”

Jean-Marc Leclerc, head of EU Government and Regulatory Affairs at IBM, takes the same view: he endorses the European law, asks for it to be extended, and highlights the creation of official bodies to ensure the ethical implementation of these systems, as the regulation foresees. But he warns: “Many of the organizations that may fall within the scope of the AI law have not established governance in their infrastructures to support compliance with the standard in their processes.”

IBM’s caution responds to the proliferation of open source tools, which are cheaper, if not free, and also effective. Despite the limitations of how they are trained, they are beginning to approach the developments of large companies and are freely available. Last May, a memo by a Google engineer, which the company considers the researcher’s private opinion rather than an official position, warned that AI was slipping out of the control of large companies.

Startup Hugging Face launched an open source alternative to ChatGPT, OpenAI’s popular conversational app, a year ago. “We will never give up the fight for open source AI,” tweeted Julien Chaumond, co-founder of this company. At the same time, Stability AI launched its own model and even Stanford University joined in with its Alpaca system.

“It is a global community effort to bring the power of conversational artificial intelligence to everyone, to get it out of the hands of a few large corporations,” says AI researcher and YouTuber Yannic Kilcher in a video presenting Open Assistant, one of these platforms.

Joelle Pineau, director of Meta AI and a professor at McGill University, defends open source systems in MIT Technology Review: “It is very much a free market, move-fast-and-build-things kind of approach. It really diversifies the number of people who can contribute to the development of the technology, and that means it is not only researchers or entrepreneurs who can access these models.”

But Pineau herself admits the risk that these systems, if they escape the ethical and regulatory criteria established by law, could foster disinformation, prejudice and hate speech, or be used to build malicious programs. “We have to strike a balance between transparency and security,” she reflects.

“I’m not an open source evangelist,” Margaret Mitchell, an ethics scientist at Hugging Face, tells the same publication. “I see reasons why being closed makes a lot of sense.” Mitchell points to non-consensual pornography (“It’s one of the main uses of AI to create images,” she admits) as an example of the downside of making powerful models widely accessible.

Regarding the use of these systems by cyberattackers, Bobby Ford, chief security officer at Hewlett Packard Enterprise, warned during the CPX conference in Vienna: “My biggest concern when it comes to generative AI is that the enemy is adopting the technology faster than we are. The adversary has plenty of time to take advantage of artificial intelligence. If we do not do the same to defend ourselves from their attacks, the war is asymmetric. Anyone with internet access and a keyboard can be a hacker.”

Maya Horowitz, vice president of research at the cybersecurity company Check Point, is more optimistic: “Defenders are using artificial intelligence better than threat actors. We have dozens of AI-powered security engines, while attackers are still experimenting, trying to understand how they can use it. There are some things. The most popular is writing phishing emails. They also experiment with fake voice calls. But they are not yet creating malicious code with this technology. You still can’t ask AI to write code and simply use it; there has to be a coder who knows what he is doing. I think our side is winning this time.”
