First Name | Last Name | Position |
---|---|---|
Mykhaylo | Andriluka | People Detection and Tracking |
Roland | Angst | Vision, Geometry, and Computational Perception |
Tamay | Aykut | |
Vahid | Babaei | |
Pierpaolo | Baccichet | Distributed Media Systems |
Volker | Blanz | Learning-Based Modeling of Objects |
Martin | Bokeloh | Inverse Procedural Modeling |
Adrian | Butscher | Geometry Processing and Discrete Differential Geometry |
Renjie | Chen | Images and Geometry |
Researcher | Dr. Michael Zollhöfer |
---|---|
Name of Research Group: | Visual Computing, Deep Learning and Optimization |
Homepage Research Group: | web.stanford.edu/~zollhoef |
Personal Homepage: | zollhoefer.com |
Mentor Saarbrücken: | Hans-Peter Seidel |
Mentor Stanford: | Pat Hanrahan |
Research Mission: The primary focus of my research is to teach computers to reconstruct and analyze our world at frame rate from visual input. The extracted knowledge is the foundation for a broad range of applications not only in visual effects, computer animation, autonomous driving, and man-machine interaction, but is also essential in related fields such as medicine and biomechanics. In particular, with the increasing popularity of virtual, augmented, and mixed reality, there is a rising demand for real-time, low-latency solutions to the underlying core problems. My research tackles these challenges with novel mathematical models and algorithms that enable computers to first reconstruct and subsequently analyze our world. The main focus is on fast and robust algorithms that address the underlying reconstruction and machine learning problems for static as well as dynamic scenes. To this end, I develop key technology to invert the image formation models of computer graphics based on data-parallel optimization and state-of-the-art deep learning techniques.

The extraction of 3D and 4D information from visual data is highly challenging and under-constrained, since image formation convolves multiple physical dimensions into flat color measurements. Reconstruction in 3D and 4D at real-time rates poses additional difficulties, since it requires solving problems at the intersection of several research fields: computer graphics, computer vision, machine learning, optimization, and high-performance computing. A solution to these problems, however, provides strong cues for the extraction of higher-order semantic knowledge. Solving these underlying core problems is important, since it will have high impact across multiple research fields and provide key technological insights with the potential to transform the visual computing industry. In summer 2019, Michael Zollhöfer joined Facebook.
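To make the idea of inverting an image formation model with data-parallel optimization a little more concrete, here is a minimal, hypothetical NumPy sketch: a toy Lambertian forward model is fitted to a synthetic image by minimizing a photometric error with gradient descent. The forward model, parameter choices, and solver are illustrative assumptions only, not the group's actual pipeline, which targets far richer models at real-time rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "geometry" input: per-pixel surface normals of a sphere patch.
H, W = 64, 64
ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
zs = np.sqrt(np.clip(1.0 - xs**2 - ys**2, 0.0, None))
normals = np.stack([xs, ys, zs], axis=-1)           # (H, W, 3)

def render(albedo, light):
    """Toy forward model: Lambertian shading I = albedo * max(n . l, 0)."""
    l = light / np.linalg.norm(light)
    return albedo * np.clip(normals @ l, 0.0, None)

# Ground truth is used here only to synthesize the "observed" image.
observed = render(0.8, np.array([0.3, 0.5, 0.8])) + 0.01 * rng.standard_normal((H, W))

# Inverse rendering: minimize the photometric error over (albedo, light direction).
params = np.array([0.5, 0.0, 0.0, 1.0])             # initial guess: albedo, light xyz

def loss(p):
    return np.mean((render(p[0], p[1:]) - observed) ** 2)

lr, eps = 0.05, 1e-4
for _ in range(500):
    # Finite-difference gradients keep the sketch short; a real-time system would use
    # analytic derivatives and a data-parallel Gauss-Newton solver on the GPU.
    grad = np.array([(loss(params + eps * e) - loss(params - eps * e)) / (2 * eps)
                     for e in np.eye(params.size)])
    params = params - lr * grad

print("estimated albedo:", params[0])
print("estimated light :", params[1:] / np.linalg.norm(params[1:]))
```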
Researcher | Hendrik Lensch |
---|---|
Position: | General Appearance Acquisition |
Research Mission: One central problem in computer graphics is synthesizing realistic images that are indistinguishable from real photographs. The basic theory behind rendering such images has been known for some time and has been turned into a broad range of rendering algorithms, from slow but physically accurate frameworks to hardware-accelerated, real-time applications that make many simplifying assumptions. One fundamental building block of these algorithms is the simulation of the interaction between incident illumination and the reflective properties of the scene. The limiting factor in photorealistic image synthesis today is not the rendering per se but rather modeling the input to the algorithms: the realism of the outcome depends largely on the quality of the scene description passed to the renderer, which requires accurate geometry, illumination, and reflective properties. An efficient way to obtain realistic models is to measure scene attributes of real-world objects by inverse rendering, estimating the attributes from real photographs by inverting the rendering process. The digitization of real-world objects is of increasing importance not only for image synthesis applications such as film production and computer games, but also for e-commerce, education, digital libraries, cultural heritage, and so forth. In the context of cultural heritage, for example, captured 3D models can serve to digitally preserve an artifact, to document and guide its restoration, and to present art to a wide audience via the Internet.

One focus of this research group is developing photographic techniques for measuring a scene's reflection properties. The goal is to capture the appearance of the entire scene, including all local and global illumination effects such as highlights, shadows, interreflections, and caustics, so that the scene can later be reconstructed and relit in a virtual environment to produce photorealistic images. The envisioned techniques should be general enough to cope with arbitrary materials, with scenes of high depth complexity such as trees, and with scenes in arbitrary environments, i.e. outside a measurement laboratory.

A second thread of research followed by this group is computational photography, with the goal of developing optical systems augmented by computational procedures: the capturing apparatus, i.e. the optical layout of active or passive devices such as cameras, projectors, and beam splitters, is designed jointly with the capture algorithm and appropriate post-processing. Such combined systems could be used to increase image quality, e.g. by removing image noise or camera shake, to emphasize or extract scene features such as edges or silhouettes by optical means, or to reconstruct 3D volumes from images. We plan to devise computational photography techniques for advanced optical microscopy, large-scale scene acquisition, and even astronomical imaging.
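As a small, hypothetical illustration of estimating reflection properties by inverting the rendering process, the sketch below solves the classic Lambertian photometric-stereo problem: per-pixel albedo and normals are recovered by least squares from several images taken under known directional lights. The setup, sizes, and names are illustrative assumptions; the group's techniques target far more general materials, global illumination effects, and uncontrolled environments.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, K = 32, 32, 6                        # image resolution and number of light directions

# Calibrated directional lights (assumed known from the measurement setup).
lights = rng.standard_normal((K, 3))
lights[:, 2] = np.abs(lights[:, 2]) + 1.0  # keep every light in front of the surface
lights /= np.linalg.norm(lights, axis=1, keepdims=True)

# Synthetic ground truth standing in for the captured photographs.
gt_albedo = np.linspace(0.2, 0.9, H * W).reshape(H, W)
gt_normals = np.dstack([np.zeros((H, W)), np.zeros((H, W)), np.ones((H, W))])
images = np.stack([gt_albedo * np.clip(gt_normals @ l, 0.0, None) for l in lights])

# Inverse step: per pixel, solve lights @ b = I in the least-squares sense, with b = albedo * n.
I = images.reshape(K, -1)                        # (K, H*W) stacked measurements
b, *_ = np.linalg.lstsq(lights, I, rcond=None)   # (3, H*W)
albedo = np.linalg.norm(b, axis=0)
normals = (b / np.maximum(albedo, 1e-8)).T.reshape(H, W, 3)

print("max albedo error:", np.abs(albedo.reshape(H, W) - gt_albedo).max())
```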