OpenAI technology tested for commanding UAV swarms

Two US defense companies have tested OpenAI’s technology to convert an operator’s voice into digital instructions for UAV swarms in combat.

According to Bloomberg, OpenAI is cooperating with two defense technology companies that are among the teams selected by the US Department of Defense to compete in a 100 million USD program to develop technology for automatically commanding UAV swarms to perform tasks.

OpenAI’s technology is used to convert the commander’s voice commands on the battlefield into digital instructions for the drones, but it is not used to directly operate the UAVs, integrate with weapons, or make targeting decisions.


Illustration of UAVs flying in swarms next to the OpenAI logo. Image: ChatGPT

The competition was launched by the US Department of Defense in January and runs for six months in multiple phases. The first phase focuses on software development before real-world testing; later stages aim to develop shared awareness and common goals, followed by the launch cycle that concludes the mission. Participating teams must demonstrate that their technology can translate a commander’s voice commands into action, enabling drones to perform simultaneous tasks in combat.

An OpenAI spokesperson said the company’s participation was limited: its two partners integrated OpenAI’s open-source model into their bid, where it acts as an intermediary between the operator and the drones.
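To make the reported “intermediary” role concrete, the sketch below is a purely hypothetical illustration, not the actual competition software or any OpenAI interface: a language-model step (stubbed here with simple keyword matching) turns a transcribed voice command into a structured instruction, and a separate, human-supervised control layer would decide whether to dispatch it. All class names, fields, and logic are assumptions for illustration only.

```python
# Hypothetical illustration of a voice-to-instruction intermediary.
# Nothing here reflects the actual systems described in the article.
from dataclasses import dataclass, field


@dataclass
class SwarmInstruction:
    """Structured instruction a drone-control layer could consume."""
    action: str                          # e.g. "survey", "hold", "return"
    area: str | None = None              # named area from the operator's command
    drone_ids: list[str] = field(default_factory=list)
    requires_confirmation: bool = True   # a human approves before dispatch


def parse_voice_command(transcript: str) -> SwarmInstruction:
    """Rough keyword-based stand-in for the language-model parsing step."""
    text = transcript.lower()
    if "survey" in text or "scan" in text:
        action = "survey"
    elif "return" in text:
        action = "return"
    else:
        action = "hold"
    # Per the article, the model only produces the instruction; it does not
    # operate the UAVs or make targeting decisions itself.
    return SwarmInstruction(action=action)


if __name__ == "__main__":
    print(parse_voice_command("All units, survey the northern sector"))
```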

OpenAI’s appearance in the competition shows that its defense cooperation is expanding. The Pentagon previously announced plans to provide ChatGPT to about three million US Department of Defense personnel.

CEO Sam Altman once downplayed the possibility that OpenAI would support the development of AI-integrated weapons. “I don’t think most of the world wants AI to make decisions about weapons,” he said at a conference on modern conflict last year. However, he also said: “I will never say never.”

Although swarm UAVs have been researched for many years, building software that coordinates large numbers of UAVs in the air and at sea as a unified entity, able to move autonomously and pursue targets, remains a major challenge. The prospect of integrating chatbots and voice-to-text command capabilities into weapons also worries many people. Some have raised concerns about the risks of AI being used to translate voice into combat decisions without human supervision.

This move comes as many employees at major AI laboratories have left their companies over ethical concerns. In 2018, Google faced a wave of internal protests over Project Maven, which used AI to analyze drone imagery.

By Editor