An artificial intelligence system helps blind people orient themselves

A scientific team has developed a wearable technology designed to help blind and visually impaired people navigate: a system that uses artificial intelligence (AI) algorithms to analyze the environment and alert the user when an obstacle is approaching.

The details of the device, which provides guidance through voice prompts, are published in Nature Machine Intelligence, in an article led by researchers from Shanghai Jiao Tong University, China.

“We present a wearable, user-centered multimodal system that improves usability through combined software and hardware innovations,” the authors write.

Wearable electronic visual-assistance systems offer a promising alternative to medical treatments and prostheses. These devices convert visual information about the environment into other sensory signals to assist with everyday tasks.

However, current systems are difficult to use, which has hindered their widespread adoption, an accompanying note in the journal summarizes.

In this work, Leilei Gu and his team present a wearable visual-assistance technology that can provide guidance through voice prompts.

The scientists developed an AI algorithm that processes video from a camera mounted on glasses worn by the user to determine an obstacle-free route. Through bone-conduction headphones, auditory cues and instructions about the environment ahead can be sent to the user.

These headphones transmit sound through the bones of the skull, leaving the ears free to hear other important sounds in the surrounding environment.

The researchers also created stretchable artificial skins for the wrists, which send vibration signals to the user to guide the direction of movement and help avoid lateral obstacles.
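The division of labor described above (spoken cues for what lies ahead, wrist vibrations for obstacles to the side) can be illustrated with a minimal sketch. This is not the authors' actual pipeline; the angle threshold, alert radius, and the `Obstacle` structure are illustrative assumptions standing in for the output of the real AI detection stage.

```python
# Illustrative sketch (NOT the paper's algorithm): routing obstacle
# detections to an audio channel or a haptic channel based on bearing.
from dataclasses import dataclass

@dataclass
class Obstacle:
    bearing_deg: float  # angle relative to heading; 0 = straight ahead, negative = left
    distance_m: float   # distance from the user

def route_feedback(obstacles, frontal_fov_deg=30.0, alert_radius_m=2.0):
    """Split nearby obstacles into spoken cues and wrist-vibration cues.

    Frontal obstacles (within the assumed field of view) become voice
    prompts, mirroring the bone-conduction audio channel; lateral ones
    become (side, intensity) vibration cues for the wrist skins.
    """
    audio, haptic = [], []
    for ob in obstacles:
        if ob.distance_m > alert_radius_m:
            continue  # too far away to warn about
        if abs(ob.bearing_deg) <= frontal_fov_deg / 2:
            audio.append(f"obstacle ahead at {ob.distance_m:.1f} m")
        else:
            side = "left" if ob.bearing_deg < 0 else "right"
            # vibration grows stronger as the obstacle gets closer
            intensity = 1.0 - ob.distance_m / alert_radius_m
            haptic.append((side, round(intensity, 2)))
    return audio, haptic
```

For example, an obstacle straight ahead at 1 m yields a spoken cue, one at 60° and 1 m yields a right-wrist vibration at half intensity, and one beyond the 2 m radius is ignored.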

The device was tested with humanoid robots and with blind and low-vision participants in both virtual and real environments, and significant improvements were observed in navigation tasks such as avoiding obstacles while crossing a maze and reaching for and grasping an object.

The results suggest that integrating the visual, auditory and haptic senses can improve the usability and functionality of visual-assistance systems.

“This research paves the way for easy-to-use visual assistance systems and offers alternative ways to improve the quality of life of people with visual disabilities,” the authors stress in their article.

However, future research should focus on further refining the system.

System improvement

Eduardo Fernández, director of the Bioengineering Institute at Miguel Hernández University of Elche (Spain), points out that Leilei Gu's group is conducting “cutting-edge” research in the development of intelligent sensors and technologies related to visual prostheses and artificial vision.

While the study suggests that a combination of non-invasive technologies can help achieve more efficient navigation and give these people a greater degree of independence and confidence, it should be noted that it was carried out with very few subjects and in limited, highly controlled environments, he told Science Media Center, a platform of scientific resources for journalists.

In addition, most object-recognition systems based on deep learning techniques, such as those the authors use, can be significantly affected by lighting conditions.

On the other hand, blind or visually impaired people may be reluctant to use multiple assistance devices for guidance and mobility.

“It is necessary to improve the precision and reliability of these mobility aid systems, develop more intuitive interfaces and reduce the size of the devices,” summarizes the researcher, who was not involved in the study.

In addition, “not only must we consider the functionality of the devices, but also their usability and social acceptability.”

By Editor