Recreating the complex movements of the human body dressed in baggy garments with the help of AI

A new approach by researchers at Adobe Research and University College London addresses a long-standing problem in recreating realistic movement of human figures in virtual environments: large or loose clothes that hide part of the body's motion.

Recreating the movement of the human body when it is covered by loose garments with complex patterns or textures is difficult because the garments move as well, and their motion depends, in turn, on parts of the body that those same clothes hide.

The difficulty lies above all in generating dynamic animations for virtual environments that realistically simulate the natural movement of the human body. To tackle this, the researchers at Adobe Research and University College London propose a video-based appearance synthesis method that adopts the StyleGAN generative adversarial network (GAN) architecture.
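
As a rough illustration of what an adversarial setup of this kind involves, here is a minimal, self-contained PyTorch sketch: a toy generator maps motion-conditioned features to garment frames while a discriminator judges their realism. All module names, tensor shapes, and losses below are illustrative assumptions, not the researchers' actual architecture.

```python
# Minimal adversarial training sketch (assumed setup, not the paper's code).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy stand-in for a StyleGAN-style generator: motion features in, garment frame out."""
    def __init__(self, feat_dim=64, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, img_channels, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, motion_feats):
        return self.net(motion_feats)

class Discriminator(nn.Module):
    """Toy discriminator scoring whether a garment frame looks real."""
    def __init__(self, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_channels, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, 2, 1),
        )
    def forward(self, img):
        return self.net(img).mean(dim=(1, 2, 3))  # one realism logit per image

gen, disc = Generator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One adversarial step on dummy data: 8 frames of 16x16 motion features / 64x64 images.
motion_feats = torch.randn(8, 64, 16, 16)
real_frames = torch.rand(8, 3, 64, 64) * 2 - 1  # scaled to [-1, 1] to match Tanh output

# Discriminator step: push real frames toward 1, generated frames toward 0.
fake_frames = gen(motion_feats).detach()
d_loss = bce(disc(real_frames), torch.ones(8)) + bce(disc(fake_frames), torch.zeros(8))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: try to fool the discriminator into scoring its frames as real.
g_loss = bce(disc(gen(motion_feats)), torch.ones(8))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```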

What they have done is record a person to model their specific appearance, and then retarget motion. "Given a 3D motion sequence of the human body, we first synthesize a representation of the coarsely deformed garment. We learn deep dynamic features over the coarse garment geometry, and the dynamic neural renderer then synthesizes the target garment," they explain in a demo video.
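
Read as pseudocode, the quoted pipeline has three stages: body motion to a coarse garment proxy, proxy to dynamic features, and features to a rendered frame. The sketch below only illustrates that flow; the function bodies are placeholder stubs, and none of the names come from the paper's code.

```python
# Hypothetical three-stage pipeline matching the quote (stubs for illustration only).
import numpy as np

def simulate_coarse_garment(body_pose: np.ndarray) -> np.ndarray:
    """Stage 1: drape a coarse garment proxy over the posed body (stub: outward offset)."""
    return body_pose + 0.05

def extract_dynamic_features(proxy_seq: list) -> np.ndarray:
    """Stage 2: summarize how the proxy deforms over time (stub: mean frame-to-frame change)."""
    stacked = np.stack(proxy_seq)                 # (frames, joints, 3)
    return np.diff(stacked, axis=0).mean(axis=0)  # average per-joint displacement

def neural_render(proxy: np.ndarray, dyn_feats: np.ndarray) -> np.ndarray:
    """Stage 3: stand-in for the dynamic neural renderer producing a garment frame."""
    intensity = float(np.tanh(proxy.mean() + dyn_feats.mean()))
    return np.full((64, 64, 3), intensity)        # placeholder RGB frame

# Drive the pipeline with a dummy 10-frame motion sequence of 24 body joints.
motion = [np.random.rand(24, 3) for _ in range(10)]
proxies = [simulate_coarse_garment(pose) for pose in motion]
features = extract_dynamic_features(proxies)
frame = neural_render(proxies[-1], features)
print(frame.shape)  # (64, 64, 3)
```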

"We present a simple retargeting and alignment process for motion transfer," they write in the paper, where they note that "improving motion generation is a promising direction." Their goal is to extend the work "to videos captured with moving cameras and to incorporate camera motion in the generation process."

