Artificial intelligence in war: Thales’ revolutionary project

How do you train algorithms when human lives are at stake? At Thales, more than 600 experts are developing artificial intelligence for the defense and security sectors, under constraints far stricter than those of consumer AI.

“This particular environment imposes duties on us that do not exist in a conventional setting. The limits are set at the design phase of the artificial intelligence algorithm, which cannot operate as a black box and in which the human being is absolutely essential,” Philippe Keryer, Thales’ director of strategy, research and technology, told AFP. Thales is Europe’s leading holder of artificial intelligence patents for defense systems.

Ahead of the AI Action Summit, to be held in Paris on 10 and 11 February, Thales this week opened its research workshops in Palaiseau, in the Paris region, showcasing projects including an AI-powered aerial counter-drone system.

The impact of these innovations is “enormous for the challenges of security, sovereignty and energy efficiency,” said Patrice Caine, CEO of Thales, which equips 50 armies around the world and whose systems manage 40% of the world’s airspace.

Sword and shield

“We have a responsibility to rethink in depth the way AI and learning models work,” says Philippe Keryer. Since data is scarce in these sensitive areas, the group generates synthetic data based on its experience to train its algorithms. It uses “ethical hackers” to anticipate threats, invent the most sophisticated attacks and put the software through a “resilience crash test” before it is validated.

This “sword and shield” principle is already applied to weapon systems (drones and anti-drone systems). “It is by thinking about evil, with the most insidious attacks, that we create good,” says Philippe Keryer. Another challenge: on a battlefield, “we are limited in terms of size, weight and power, but also by the type of network we are connected to,” says Fabien Flacher, head of cybersecurity at Thales. On a frigate, a Rafale fighter jet or a tank, there is no “server farm” like Google’s, he adds.

And while artificial intelligence is generally trained on data that has been “frozen for a long time,” that cannot work for modern conflicts. “We immediately teach the AI to be more relevant” after each mission, for example aboard a reconnaissance aircraft in which it is embedded.

Humans prevail

“AIs are judged more harshly than human beings,” says Christophe Meyer, technical director of cortAIx Labs, Thales’ AI hub. But the crucial decision always rests with the human being. “If there were drones with response capability, there would be a human decision saying: ‘I confirm this suggestion you are making, according to my criteria, which are human criteria,’” he observes.

The solutions offered by this type of AI also come with a rational explanation.

The calculations it provides lighten the operator’s cognitive load and can sometimes reduce the time spent in an area where their life is in danger.

For example, an intelligent radar “can classify hundreds of targets in a few tens of seconds, where it previously took tens of minutes,” explains Nicolas Léger, Thales’ radar expert.

The same goes for mine countermeasures: the antennas that detect suspicious devices are increasingly powerful, but they produce an amount of data impossible for a human being to digest.

Algorithms help “speed up classification and assess the relevance of identification and neutralization operations,” explains Benoît Drier de Loforte, a mine countermeasures consultant. This technique produces only “1% to 2%” false alarms, whereas “the Americans were satisfied with a 20% margin of error on certain operations” of this type, he says.

However, algorithms are not yet ready to replace the human “golden ears,” the navy’s elite sonar operators. “If the algorithm has not been trained to face a new threat, it may not work as well as it should,” the expert stresses.
