Artificial intelligence | An AI studied human gestures at Stanford University for a thousand hours. The virtual gesturer could be useful in animations and as an assistant in videos.
This summary was generated by artificial intelligence and checked by a human.
At Stanford University, virtual characters were trained with the help of artificial intelligence to gesture like humans.
The AI listened to audiobooks and watched videos of people gesturing as they spoke. In tests, it produced more natural gestures than older models did.
The technology could make characters in animations and videos appear natural.
Computer-generated human figures can already speak quite fluently. Their voices often sound realistic, and their mouth movements follow what is being said.
But gestures are another matter: there is still work to do. Speech and body language often contradict each other.
When a character's gestures do look genuine and sit naturally with the speech, there is usually a reason for it.
In the popular films Avatar and Avatar: The Way of Water, for example, an actor first moved and spoke at the same time in a studio; only then was an artificial character created on top of that performance.
Now researchers at Stanford University in California have used artificial intelligence to train virtual characters to gesture in a human-like and “correct” way.
The artificial intelligence was given audio samples to hear and video samples to see. The aim was to teach it how people’s language, speech and emotions interact.
Changan Chen, a researcher of deep learning and 3D environments, pretrained the AI on a thousand hours of material.
The artificial intelligence listened to audiobooks, for example. The training material also included more than 60 hours of video depicting movement: in the videos, people gestured as they spoke to an audience.
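The article does not describe how such training material is organized. As a minimal sketch, assuming paired clips of transcript, audio and motion, one sample of this kind of multimodal data could be represented as follows; all names, shapes and values here are illustrative, not details from the Stanford work:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GestureSample:
    """One hypothetical paired training example: what the speaker said,
    how it sounded, and how the body moved while saying it."""
    transcript: str    # text of the spoken passage
    audio: np.ndarray  # raw waveform, shape (n_samples,)
    sample_rate: int   # e.g. 16 kHz
    poses: np.ndarray  # skeleton keypoints per video frame, shape (n_frames, n_joints, 3)
    fps: float         # video frame rate, e.g. 30.0

# A dummy 2-second clip: silent audio and a motionless 25-joint skeleton.
sample = GestureSample(
    transcript="Hello, everyone.",
    audio=np.zeros(32000, dtype=np.float32),
    sample_rate=16000,
    poses=np.zeros((60, 25, 3), dtype=np.float32),
    fps=30.0,
)
print(sample.transcript, sample.poses.shape)
```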
The examples helped the artificial intelligence to encode and interpret the relationships between text, sound and video, says Ehsan Adeli, a researcher of psychiatry and behavioral sciences, also at Stanford.
For example, the AI learned when people might tilt their head and how they usually gesture with their hands. The machine also learned how a certain tone of voice can be associated with certain emotions. Eventually it learned to generate gestures that correspond to the speech, reports New Scientist.
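This description suggests a model that turns text and audio into a sequence of body poses. The paper’s actual architecture is not detailed in the article; the following is only a minimal sketch, assuming a simple encoder-decoder that fuses per-frame text and audio features and decodes joint positions. The module names, dimensions and the GRU choice are all assumptions:

```python
import torch
import torch.nn as nn

class GestureGenerator(nn.Module):
    """Illustrative sketch only, not the Stanford model: fuse audio
    features with text embeddings, then decode one body pose per frame."""
    def __init__(self, vocab_size=10000, n_mels=80, hidden=256, n_joints=25):
        super().__init__()
        self.text_emb = nn.Embedding(vocab_size, hidden)   # word ids -> vectors
        self.audio_proj = nn.Linear(n_mels, hidden)        # mel features -> vectors
        self.fuse = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.to_pose = nn.Linear(hidden, n_joints * 3)     # x, y, z per joint

    def forward(self, tokens, mel):
        # tokens: (batch, frames) word ids aligned to video frames
        # mel:    (batch, frames, n_mels) audio features per frame
        t = self.text_emb(tokens)
        a = self.audio_proj(mel)
        h, _ = self.fuse(torch.cat([t, a], dim=-1))        # joint text+audio context
        poses = self.to_pose(h)                            # (batch, frames, joints*3)
        return poses.view(*poses.shape[:2], -1, 3)

model = GestureGenerator()
tokens = torch.zeros(1, 60, dtype=torch.long)  # dummy 60-frame clip
mel = torch.zeros(1, 60, 80)
print(model(tokens, mel).shape)  # torch.Size([1, 60, 25, 3])
```

In a real system the predicted pose sequence would drive a skeleton rig; the point of the sketch is only how two modalities can be fused into one motion sequence.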
AI models have, of course, predicted body movements before, given either written text or spoken audio as a source. The model developed at Stanford, however, also learned to recognize emotions from visual material: for example, what “happiness” can look like.
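To illustrate the idea of recognizing emotions from visual material, a toy frame-level classifier might look like the sketch below; the emotion labels, layers and sizes are assumptions rather than details from the paper:

```python
import torch
import torch.nn as nn

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "neutral"]  # assumed label set

class FrameEmotionClassifier(nn.Module):
    """Illustrative only: extract features from each video frame,
    average them over time, and predict an emotion label."""
    def __init__(self, n_classes=len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # one 32-dim vector per frame
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) video clip
        b, t = frames.shape[:2]
        f = self.features(frames.flatten(0, 1)).flatten(1)  # (b*t, 32)
        clip = f.view(b, t, -1).mean(dim=1)                 # pool over time
        return self.head(clip)                              # emotion logits

clip = torch.zeros(1, 8, 3, 64, 64)  # dummy 8-frame clip
logits = FrameEmotionClassifier()(clip)
print(EMOTIONS[logits.argmax(dim=-1).item()])
```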
According to the researchers, in tests the new artificial intelligence conjured up human gestures and movements that were more expressive and nuanced than those of older models.
Artificial intelligence can help create, for example, video assistants that behave realistically when they interact with a person. This is what Adeli, who helped develop the model, promises.
Artificial intelligence could also make characters in video games or animations move naturally.
One use could be a virtual newsreader that feels natural. So far, artificial newsreaders have mostly drawn criticism around the world.
The gesturing AI is not yet able to produce body language in real time: it must first familiarize itself with the audio data.
The researchers presented their experiments with the gesturing AI in the arXiv preprint service.