Research Mission: |
A central problem in computer graphics is synthesizing realistic images that are indistinguishable from real photographs. The basic theory behind rendering such images has been known for some time and has given rise to a broad range of rendering algorithms, from slow but physically accurate frameworks to hardware-accelerated, real-time techniques that make substantial simplifications. One fundamental building block of these algorithms is the simulation of the interaction between incident illumination and the reflective properties of the scene. The limiting factor in photo-realistic image synthesis today is not the rendering per se but rather modeling the input to the algorithms: the realism of the outcome depends largely on the quality of the scene description passed to the rendering algorithm, and accurate input is required for geometry, illumination, and reflective properties. An efficient way to obtain realistic models is to measure scene attributes from real-world objects by inverse rendering, i.e., estimating the attributes from real photographs by inverting the rendering process.

The digitization of real-world objects is of increasing importance not only to image synthesis applications such as film production or computer games, but also to a number of other applications, such as e-commerce, education, digital libraries, cultural heritage, and so forth. In the context of cultural heritage, for example, the captured 3D models can serve to digitally preserve an artifact, to document and guide the restoration process, and to present art to a wide audience via the Internet.
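The inversion step can be illustrated with a toy example: for a Lambertian surface patch observed under known directional lights, the forward rendering model is linear in the unknown albedo, so inverting it reduces to least squares. The setup below (the normal, light directions, and noise level) is entirely hypothetical; it is a minimal sketch of the idea, not an actual acquisition pipeline.

```python
import numpy as np

# Toy inverse rendering: recover the diffuse albedo of a Lambertian
# patch from noisy "photographs" under known directional lights.
# Forward model: I = albedo * max(n . l, 0)   (hypothetical setup)
rng = np.random.default_rng(0)

n = np.array([0.0, 0.0, 1.0])                            # known surface normal
lights = rng.normal(size=(20, 3))
lights /= np.linalg.norm(lights, axis=1, keepdims=True)  # unit light directions

true_albedo = 0.75
shading = np.maximum(lights @ n, 0.0)                    # forward rendering term
observations = true_albedo * shading + rng.normal(0.0, 0.01, size=20)

# Inverting the (linear) rendering process reduces to least squares.
albedo_est = (shading @ observations) / (shading @ shading)
print(albedo_est)  # estimate close to true_albedo
```

Real inverse rendering replaces this scalar fit with a large nonlinear optimization over spatially varying reflectance, geometry, and illumination, but the principle, matching a forward rendering model to photographs, is the same.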
One focus of this research group is on developing photographic techniques for measuring the scene’s reflection properties. Here, the goal is to capture the appearance of the entire scene, including all local and global illumination effects such as highlights, shadows, interreflections, or caustics, such that the scene can later be reconstructed and relit in a virtual environment to produce photorealistic images. The envisioned techniques should be general enough to cope with arbitrary materials, with scenes of high depth complexity such as trees, and with scenes in arbitrary environments, i.e., outside a measurement laboratory.
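Relighting a captured scene rests on the linearity of light transport: a photograph under any lighting configuration is a weighted sum of basis images, each captured with a single light source switched on. A minimal sketch, with small random arrays standing in for captured basis images (all names and sizes are hypothetical):

```python
import numpy as np

# Image-based relighting sketch: because light transport is linear,
# new lighting = weighted sum of per-light basis images.
rng = np.random.default_rng(1)

# Hypothetical stand-ins for captured basis images: 3 lights, 4x4 pixels.
basis = rng.uniform(0.0, 1.0, size=(3, 4, 4))

def relight(basis_images, weights):
    """Combine per-light basis images with novel light intensities."""
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return (w * basis_images).sum(axis=0)

# Dim light 0, switch off light 1, boost light 2.
novel = relight(basis, [0.2, 0.0, 1.5])
assert novel.shape == (4, 4)
```

In practice the basis images are photographs (or a measured reflectance field), and global effects such as interreflections and caustics are relit correctly for free, since they are baked into each basis image.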
A second thread of research followed by this group is computational photography, with the goal of developing optical systems augmented by computational procedures: the capturing apparatus, i.e., the optical layout of active or passive devices such as cameras, projectors, beam splitters, etc., is designed jointly with the capturing algorithm and appropriate post-processing. Such combined systems could be used to increase image quality, e.g., by removing image noise or camera shake, to accentuate or extract scene features such as edges or silhouettes by optical means, or to support 3D volume reconstruction from images. We plan to devise computational photography techniques for advanced optical microscopy, large-scale scene acquisition, and even astronomical imaging.
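As one concrete instance of pairing optics with post-processing, camera shake with a known point-spread function (PSF) can be undone computationally by Wiener deconvolution in the Fourier domain. The sketch below is a simplified, noiseless illustration on synthetic data, assuming the PSF is known; it is not the group’s actual method.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=1e-3):
    """Invert a known blur kernel with a Wiener filter (frequency domain)."""
    H = np.fft.fft2(psf, s=blurred.shape)     # transfer function of the blur
    G = np.fft.fft2(blurred)
    # Wiener filter: H* / (|H|^2 + NSR); the NSR term regularizes
    # frequencies where H is close to zero.
    F = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(F * G))

# Toy demonstration: blur a synthetic image with a horizontal "shake" PSF.
rng = np.random.default_rng(2)
sharp = rng.uniform(size=(32, 32))
psf = np.zeros((32, 32))
psf[0, :5] = 1.0 / 5.0                        # 5-pixel motion streak
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
```

In a jointly designed system, the optics could be engineered so that the PSF is easy to invert (few near-zero frequencies), which is exactly where the co-design of hardware and algorithm pays off.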