Image-based Representation of Clothes for Realistic Virtual Try-On

Facts

Run time
08/2014 – 07/2015
DFG subject areas

Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing

Sponsors

DFG Individual Research Grant

Description

The project addresses the photo-realistic rendering of faces. Like clothing, faces exhibit very complex appearance properties, and rendering human faces photorealistically is very difficult. Moreover, because the human eye is highly sensitive to human faces, computer-generated faces often appear uncanny to the observer. In recent years, tremendous progress has been made towards realistic facial rendering. However, current methods either require very sophisticated capturing setups, e.g. high-resolution camera rigs and light stages, to capture and model the reflectance field of the human face, or rely on computationally demanding simulation of facial characteristics during rendering. Extending the developed pose-space image-based rendering methods to the visualization of faces will allow photorealistic rendering and synthesis of facial expressions directly from a database of example expression images, without complex and computationally demanding simulation of skin bulging, wrinkling effects, and reflection properties, since these details are already captured by the images. One possible application of the proposed methods is performance-driven facial animation, i.e. the transfer of the facial expressions of one person to another.
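The following minimal sketch illustrates the general idea behind such example-based synthesis: a new expression image is blended from database images whose expression parameters lie close to the query expression. The function name, the Gaussian weighting, and the omission of image warping and alignment are simplifying assumptions made here for illustration; they are not taken from the project's actual method.

```python
import numpy as np

def synthesize_expression(query_pose, example_poses, example_images, sigma=0.5):
    """Blend example images according to proximity in expression (pose) space.

    Hypothetical sketch of pose-space image-based synthesis: each example
    image carries an expression parameter vector; the query expression is
    synthesized as a normalized, Gaussian-weighted blend of the examples.
    A full system would additionally warp and align the images.
    """
    query_pose = np.asarray(query_pose, dtype=float)          # shape (D,)
    example_poses = np.asarray(example_poses, dtype=float)    # shape (N, D)
    example_images = np.asarray(example_images, dtype=float)  # shape (N, H, W, C)

    # Distance of the query expression to every example expression.
    dists = np.linalg.norm(example_poses - query_pose, axis=1)

    # Radial-basis (Gaussian) weights, normalized to sum to one.
    weights = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))
    weights /= weights.sum()

    # Weighted blend of the example images yields the synthesized expression.
    return np.tensordot(weights, example_images, axes=(0, 0))
```

In this simplified view, appearance details such as wrinkles and skin reflectance come for free because they are already present in the photographs being blended, which is the central motivation stated above.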