First Name | Last Name | Position |
---|---|---|
Christian | Theobalt | Graphics, Vision, Video |
Thorsten | Thormählen | Image-based 3D Scene Analysis |
Peter | Vajda | Personalized TeleVision News |
Michael | Wand | Statistical Geometry Processing |
Tino | Weinkauf | Feature-Based Data Analysis for Computer Graphics and Visualization |
Martin | Wicke | Methods for Large-Scale Physical Modeling and Animation |
Thomas | Wiegand | Image Processing |
Stefanie | Wuhrer | Non-Rigid Shape Analysis |
Michael | Zollhöfer | Visual Computing, Deep Learning and Optimization |
Researcher
Dr. Michael Zollhöfer | Visual Computing, Deep Learning and Optimization |
Name of Research Group: | Visual Computing, Deep Learning and Optimization |
Homepage Research Group: | web.stanford.edu/~zollhoef |
Personal Homepage: | zollhoefer.com |
Mentor Saarbrücken: | Hans-Peter Seidel |
Mentor Stanford: | Pat Hanrahan |
Research Mission: | The primary focus of my research is to teach computers to reconstruct and analyze our world at frame rate from visual input. The extracted knowledge is the foundation for a broad range of applications in visual effects, computer animation, autonomous driving and human-machine interaction, and is also essential in related fields such as medicine and biomechanics. Especially with the increasing popularity of virtual, augmented and mixed reality, there is a rising demand for real-time, low-latency solutions to the underlying core problems. My research tackles these challenges with novel mathematical models and algorithms that enable computers to first reconstruct and subsequently analyze our world. The main focus is on fast and robust algorithms that approach the underlying reconstruction and machine learning problems for static as well as dynamic scenes. To this end, I develop key technology to invert the image formation models of computer graphics based on data-parallel optimization and state-of-the-art deep learning techniques (a minimal sketch of this inversion idea follows this table). The extraction of 3D and 4D information from visual data is highly challenging and under-constrained, since image formation conflates multiple physical dimensions into flat color measurements. 3D and 4D reconstruction at real-time rates poses additional difficulties, since it sits at the intersection of multiple research fields, namely computer graphics, computer vision, machine learning, optimization, and high-performance computing. However, a solution to these problems provides strong cues for the extraction of higher-order semantic knowledge. Solving the underlying core problems is therefore highly important: it will have impact across multiple research fields and provide key technological insights with the potential to transform the visual computing industry. In summer 2019, Michael Zollhöfer joined Facebook. |
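The inversion of image formation described in the mission can be illustrated, in a deliberately reduced form, as an analysis-by-synthesis loop: render an image from a few scene parameters, compare it to the observation, and optimize the parameters until the rendering matches. The sketch below is a hypothetical toy example built around a 2D Gaussian "blob" renderer and an off-the-shelf derivative-free optimizer; it stands in for, but does not reproduce, the data-parallel GPU solvers and learned components the mission refers to.

```python
# Toy analysis-by-synthesis sketch (hypothetical example, not the actual pipeline):
# invert a simple image formation model by minimizing the photometric error
# between a rendered image and an observed image.
import numpy as np
from scipy.optimize import minimize

H = W = 64
ys, xs = np.mgrid[0:H, 0:W]

def render(params):
    """Forward model: render a 2D Gaussian blob with center (cx, cy) and radius r."""
    cx, cy, r = params
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * r ** 2))

def photometric_loss(params, observed):
    """Sum of squared differences between the rendering and the observation."""
    return np.sum((render(params) - observed) ** 2)

# "Observation": an image produced by the forward model with unknown parameters.
true_params = np.array([40.0, 25.0, 6.0])
observed = render(true_params)

# Reconstruction: start from a rough initialization and minimize the photometric loss.
init = np.array([32.0, 32.0, 10.0])
result = minimize(photometric_loss, init, args=(observed,), method="Nelder-Mead")

print("recovered (cx, cy, r):", np.round(result.x, 2))  # should approach [40, 25, 6]
```

In a real system, the forward model covers geometry, reflectance, illumination and camera parameters, and the generic optimizer is replaced by specialized data-parallel solvers and deep networks.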
Researcher
- Name of Researcher: Antti Oulasvirta
- First Name: Antti
- Last Name: Oulasvirta
- Homepage: users.comnet.aalto.fi/oulasvir/
- Position: Adaptive Multimodal User Interfaces
- Mentor in Saarbruecken: Hans-Peter Seidel
- Research Mission: The limits of human performance in computer use are determined jointly by the properties of the user interface (UI) and of the human perceptual, motor, and cognitive systems. Recent technological advances have vastly expanded the means for constructing UIs, yet progress in surpassing traditional interfaces in user performance remains limited. The group's mission is to identify the optima of interactive performance. The scientific approach is based on 1) information-theoretic measurement of skilled motor performance to identify candidates for highest throughput, 2) formal analysis of UI design spaces, 3) predictive modeling of user performance, and 4) computational search for UIs that maximize user performance (steps 3 and 4 are illustrated by the sketch after this list). Whereas previous work in human-computer interaction (HCI) has largely been based on trial and error, this approach allows aggressive exploration of UI design spaces. The outcomes are demonstrated as novel user interfaces targeted at two domains: 1) classic interactive tasks, such as target acquisition, text entry, information retrieval, and visual search, and 2) "embodied" tasks where the environment mediates interaction, as in mixed reality applications.
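The combination of predictive modeling and computational search in the mission can be illustrated with a deliberately small, hypothetical example: score candidate toolbar layouts with a Fitts' law model of pointing time and pick the layout with the lowest predicted cost. All constants, command names, and frequencies below are assumed for the sketch and do not come from the group's work.

```python
# Toy illustration of model-based UI optimization (hypothetical values throughout):
# a predictive model of user performance (Fitts' law) scores candidate layouts,
# and a search procedure picks the best one.
import itertools
import math

# Design space: 4 toolbar slots at increasing distance from the cursor's home position.
SLOT_DISTANCE_MM = [20.0, 40.0, 60.0, 80.0]
TARGET_WIDTH_MM = 10.0

# Assumed usage frequencies of four commands.
COMMAND_FREQ = {"copy": 0.40, "paste": 0.35, "cut": 0.15, "undo": 0.10}

def fitts_time(distance_mm, width_mm, a=0.2, b=0.1):
    """Predicted pointing time in seconds (Fitts' law, Shannon formulation)."""
    return a + b * math.log2(distance_mm / width_mm + 1.0)

def expected_selection_time(layout):
    """Frequency-weighted average selection time for one command-to-slot assignment."""
    return sum(
        COMMAND_FREQ[cmd] * fitts_time(SLOT_DISTANCE_MM[slot], TARGET_WIDTH_MM)
        for slot, cmd in enumerate(layout)
    )

# Computational search: the design space is tiny, so enumerate all assignments.
best_layout = min(itertools.permutations(COMMAND_FREQ), key=expected_selection_time)
print("best layout:", best_layout)
print("predicted selection time: %.3f s" % expected_selection_time(best_layout))
```

For realistic design spaces, exhaustive enumeration gives way to combinatorial optimization, but the structure of the loop stays the same: a predictive model scores candidates proposed by a search procedure.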