Figure 01, the robot closest to the humanoid that science fiction anticipated

Figure 01 is the closest prototype yet to the humanoid that science fiction anticipated. The robot, which this March received investment and technological support from the artificial intelligence company OpenAI, the processor giant Nvidia and Jeff Bezos, founder of Amazon, is capable of discerning objects not only by their shape but by their function, carrying out varied tasks while adjusting its movements to the resistance of whatever it manipulates, interacting with its environment and even evaluating its own performance. Figure is close in appearance to the machines of I, Robot and still far from RoboCop, but it is an example of a dazzling technological race: embodiment, a term that could be translated as incarnation or personification and that, according to Luis Merino, professor and director of the Service Robotics Lab at Pablo de Olavide University, means breaking through the “passivity of machine learning” to get closer to humans, for whom interaction with the environment is the key.

The commitment of large companies to this technology is clear. Nvidia, in addition to its financial support for Figure, has announced GR00T, a platform specifically for humanoid robots, a field whose development has become an accelerated race involving, among others, companies such as 1X Technologies, Agility Robotics, Apptronik, Boston Dynamics, Figure AI, Fourier Intelligence, Sanctuary AI, Unitree Robotics and XPENG Robotics.

Dennis Hong is the founder of RoMeLa and the creator of Artemis, an android robot that plays soccer as a demonstration of the versatility achieved in its movement capabilities. Hong explains the qualitative leap of the new developments: “99.9% of the robots that exist today use servomotors and are very rigid. They are great for factory automation or one-off household tasks [such as autonomous vacuum cleaners], but this robot [Artemis] imitates biological muscle, which allows it to be agile, fast, robust and quite intelligent.”

This intelligence, he explains, allows it to recognize, plan and make decisions autonomously. “The future,” he concludes, “is that it can do anything a human can do.” To demonstrate this, Hong grabs Artemis from behind and pushes it, forcing it to react to an unforeseen event, a test the robot passes.

It is a very significant step compared to models such as those of Deep Robotics, which develops quadrupeds for industrial and rescue work. Vera Huang highlights “motor advances, such as the ability to jump or climb stairs,” but admits that they are not equipped with the latest generation of intelligence.

Cassie, from Agility Robotics, has been trained to traverse different surfaces and execute large jumps without prior knowledge of the terrain. It does this through a technique called reinforcement learning. “The goal is to teach the robot to learn to make all kinds of dynamic movements the way a human does. Cassie uses the history of what it has observed and adapts quickly to the real world,” Zhongyu Li, of the University of California and a participant in the development, explains to MIT Technology Review.

The researchers used an AI technique called reinforcement learning to train the two-legged robot. Reinforcement learning works by rewarding or penalizing an AI as it attempts to accomplish a goal. In this case, the approach taught the robot to generalize and respond to new scenarios, rather than freeze as its predecessors would have done.
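The reward-and-penalty loop described above can be sketched in a few lines of code. What follows is a minimal, hypothetical Q-learning example on an invented toy task (reach the end of a track, where a risky jump moves faster but sometimes causes a fall); the states, actions and reward values are illustrative assumptions and have nothing to do with Cassie’s actual training pipeline, which runs in physics simulation at far larger scale.

```python
import random

# Toy task: an agent on a 1-D track of 6 cells must reach cell 5.
# Action 0 = careful step (always advances one cell); action 1 = big
# jump (advances two cells but falls 30% of the time). All numbers
# here are invented for illustration.
N_STATES, GOAL = 6, 5
ACTIONS = [0, 1]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # estimated value of each action

def step(state, action):
    """Environment: returns (next_state, reward, episode_over)."""
    if action == 1 and random.random() < 0.3:
        return state, -10.0, True           # penalty: the robot "falls"
    nxt = min(state + (2 if action == 1 else 1), GOAL)
    if nxt == GOAL:
        return nxt, 10.0, True              # reward: goal reached
    return nxt, -1.0, False                 # small cost for every move

for episode in range(2000):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: usually exploit what was learned, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        # Core of reinforcement learning: nudge the value of (state, action)
        # toward the reward plus the discounted value of what follows.
        target = reward + (0.0 if done else GAMMA * max(Q[nxt]))
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = nxt

print("Preferred action per state:",
      [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)])
```

After enough episodes, the table settles on whichever policy best trades the jump’s speed against its risk of falling: the same reward-versus-penalty logic the researchers describe, at toy scale.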

“The next big step is for humanoid robots to do real work, plan activities and interact with the physical world in ways that go beyond contact between their feet and the ground,” says Alan Fern, professor of computer science at Oregon State University.

This is where Figure comes in: a robot 1.70 meters tall and weighing 60 kilos, able to carry a third of its own weight, electrically powered, with five hours of autonomy and a speed of 1.2 meters per second. But what makes it different is its ability to carry out varied tasks, discern people and objects, act autonomously and, above all, learn. The company argues that its human form is necessary because “the world is designed for it.”

Figure is an example of embodiment. “We cannot separate mind and body; learning brings them together. Most robots process images and data: you train them and they have no interaction. Humans, however, learn by interacting with our environment, because we have a body and we have senses,” explains Merino.

His team has already developed assistance robots that, when acting as tour guides, adapt their explanations to people’s reactions, respond to the mood of an elderly person they are helping, or respect the personal space of the humans they work with.

But in most current robots, even those with artificial intelligence, “learning is passive,” according to the UPO professor. Cassie, beyond its artificial neural network, developed its dexterity through reinforcement learning, a technique similar to the one used to train pets.

Merino elaborates: “We do not give the robot an explicit description of what it has to do. Instead, we provide a signal when it misbehaves and, from then on, it will avoid that behavior. And the reverse: if it does well, we give it a reward.” For pets the reward can be a toy, a caress or a treat. For robots, it is a numerical signal that their behavior will try to maximize as often as possible.
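In code, the “toy, caress or treat” becomes a single number. Below is a hypothetical sketch of what such a reward signal might look like for a walking robot; every weight and threshold is an invented illustration, not a value from any real controller described in this article.

```python
def locomotion_reward(forward_velocity: float,
                      torso_height: float,
                      energy_used: float) -> float:
    """Hypothetical reward signal for one control step of a walking robot.

    The learning algorithm never receives an explicit description of how
    to walk; it only sees this number and adjusts its behavior to make
    the cumulative total as large as possible. All weights are invented.
    """
    reward = 2.0 * forward_velocity     # the "treat": progress forward
    reward -= 0.1 * energy_used         # mild penalty for wasted effort
    if torso_height < 0.4:              # torso too low: the robot has fallen
        reward -= 100.0                 # strong penalty, like a scolding
    return reward
```

Designing such a function is most of the work: reward only forward speed and the robot may learn to lunge and fall, which is why penalty terms accompany the reward.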

The researcher adds that this system represents not only an advance in robotic capabilities but also a formula for making robots more efficient, since they need less energy than they would to process millions of data points covering every possible variable. “It is very difficult to program a robot for all the circumstances it may face,” says Merino.

“We have had robots in factories for decades doing things algorithmically and repetitively. But if we want them to be more general, we have to go one step further,” he concludes. The robotics race is going in this direction.

And, as with any digital advance, security will be a determining factor. Any system, even a simple appliance connected to the cloud, can fall victim to attacks. To this end, Nvidia, present in the most advanced robotics developments, has signed a collaboration agreement with Check Point to improve the security of artificial intelligence infrastructure in the cloud.

Amazon Web Services (AWS) has also announced a collaboration with Nvidia to use the latter’s Blackwell platform, presented this year at Nvidia’s GTC 2024 developer conference. The agreement includes the joint use of both companies’ infrastructure in developments that include robotics.

