From stopping deceptive apps to a "lie detector" for migrants: here is the European Regulation on AI

A stop to deceptive artificial intelligence is coming, and the use of AI to subject migrants seeking asylum to a lie detector, or to set priorities in dispatching emergency health services or firefighters, is in the pipeline. It is all part of the new, and so far only, European AI Regulation.

In December, the systems defined as "unacceptable" will be prohibited in Europe: those capable of "behavioral manipulation" and of "classifying people on the basis of race, religion and sexual orientation" through the analysis of their biometric data. The new law was published in the Official Journal of the European Union on 12 July and received its first green light 20 days later, but not for all of its parts. After next month, the next milestone for the implementation of the general rules will be June 2025, while the entry into force of the constraints on high-risk systems, those considered potentially dangerous for citizens' rights and health, such as the lie detector, has been postponed from 2026 to 2027. Last November the EU tried to get ahead of the curve by making a pact with volunteer "interested parties", organizing a webinar to inform and "share experiences" on AI.

On balance, therefore, the rules that will have to scrutinize high-risk intelligent systems are not yet in place. When they are, the Union "will evaluate the impact on fundamental rights" and guarantee "greater transparency" for the devices registered in the database; where AI is used for the recognition of emotions, the document establishes that the "people exposed" to the electronic eye "must be informed". Furthermore, the law will require intelligent machines to meet three requirements considered fundamental: "accuracy, robustness and cybersecurity". It adds a warning: "Companies that do not comply with the rules," the act states, "will be sanctioned up to 7% of global annual turnover for violations of prohibited AI applications, up to 3% for violations of other obligations and up to 1.5% for providing inaccurate information."

 

It seems like science fiction, but it is already reality

In the meantime, the sophisticated devices continue to "run" as always and, with all the necessary precautions, they will keep doing so afterwards. It already seems like science fiction. In fact, the Regulation summarizes, artificial intelligence carries out checks at work, at school, in emergency switchboards and in hospitals, and manages road traffic, gas networks, electricity and home heating. The lie detector, the polygraph, has also been enlisted. On page 127 of the Regulation, Annex III lists the sectors and roles in which the AI systems to be monitored are at work. It starts with "biometrics, for the recognition of emotions".

It continues with AI in the safety components for the "management of digital infrastructures, road traffic, and the supply of water, gas, heating or electricity". Algorithms are then used in the "education and vocational training" sector: "to determine access, admission or assignment of people to education and training institutions; to evaluate learning outcomes, educational level and the behavior of students during tests in institutes". And then in the "employment, management of workers and access to self-employment" sector, that is, "for hiring or selecting people, advertising vacancies, screening applications, and evaluating individuals during interviews or tests".

The list goes on. These special machines offer humans useful information for "decisions regarding the promotion and termination of contractual employment relationships, the assignment of tasks, and the monitoring and evaluation of the performance and behavior of people within these employment relationships". The fifth point involves algorithms in "access to essential public and private benefits and services", which means "evaluating the eligibility (or ineligibility, ed.) of people for public assistance benefits and services" or establishing personal "creditworthiness". The adoption of AI is even permitted "to dispatch first aid emergency services or to establish priorities in the dispatch of such services, including firefighters and medical assistance".

 

Here comes the lie detector

In the fight against crime, the supersystems are an additional weapon to "determine the risk of a person becoming a victim of crime". Here too, when necessary, the Regulation provides for the adoption of a lie detector. Its use is also indicated in the "management of migration, asylum and border control, to detect the emotional state of a person and the risk posed by entry into an EU member state"; for the moment, the Italian Ministry of the Interior has given no advance word on the use of the polygraph. The last point of the annex concerns high-risk systems in service in the "administration of justice and democratic processes".

Specifically, but not exclusively, "to assist a judicial authority in researching and interpreting facts and the law, and in applying the law to a concrete set of facts, or to be used in a similar manner in alternative dispute resolution". On this field of AI action, the reasoning of judges, magistrates and lawyers shows lights and shadows: the robes fear interference in their work.

By Editor
