First Name | Last Name | Position |
---|---|---|
Mike | Sips | Visual Exploration of Space-Time Pattern in Multi-Dimensional and Heterogeneous Data Spaces |
Michael | Stark | Visual Object Recognition and Scene Interpretation |
Jürgen | Steimle | Embodied Interaction |
Markus | Steinberger | GPU Scheduling and Parallel Computing |
Carsten | Stoll | Optical Performance Capture |
Robert | Strzodka | Integrative Scientific Computing |
Holger | Theisel | Topological Methods for Vector Field Processing |
Researcher

Dr. Michael Zollhöfer
Visual Computing, Deep Learning and Optimization

Name of Research Group: Visual Computing, Deep Learning and Optimization
Homepage Research Group: web.stanford.edu/~zollhoef
Personal Homepage: zollhoefer.com
Mentor Saarbrücken: Hans-Peter Seidel
Mentor Stanford: Pat Hanrahan
Research Mission: The primary focus of my research is to teach computers to reconstruct and analyze our world at frame rate based on visual input. The extracted knowledge is the foundation for a broad range of applications, not only in visual effects, computer animation, autonomous driving and man-machine interaction, but also in related fields such as medicine and biomechanics. In particular, with the increasing popularity of virtual, augmented and mixed reality comes a rising demand for real-time, low-latency solutions to the underlying core problems. My research tackles these challenges with novel mathematical models and algorithms that enable computers to first reconstruct and subsequently analyze our world. The main focus is on fast and robust algorithms that approach the underlying reconstruction and machine learning problems for static as well as dynamic scenes. To this end, I develop key technology to invert the image formation models of computer graphics based on data-parallel optimization and state-of-the-art deep learning techniques. The extraction of 3D and 4D information from visual data is highly challenging and under-constrained, since image formation convolves multiple physical dimensions into flat color measurements. 3D and 4D reconstruction at real-time rates poses additional challenges, since it requires solving problems at the intersection of multiple important research fields, namely computer graphics, computer vision, machine learning, optimization, and high-performance computing. However, a solution to these problems provides strong cues for the extraction of higher-order semantic knowledge. Solving the underlying core problems is therefore important, since it will have high impact in multiple research fields and provide key technological insights with the potential to transform the visual computing industry. In summer 2019, Michael Zollhöfer joined Facebook.
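As a rough illustration of the analysis-by-synthesis idea described above (not code from this research group), the following minimal sketch renders a toy "image formation model", a Gaussian blob controlled by three parameters, and then inverts it: starting from an observed image, Gauss-Newton optimization on the photometric error recovers the parameters. All names, image sizes and values are assumptions chosen for the example; real systems solve much larger non-linear least-squares problems of this structure with data-parallel solvers.

```python
import numpy as np

H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W]
SIGMA = 6.0  # hypothetical blob size

def render(params):
    """Toy image formation model: a Gaussian blob with parameters (cx, cy, brightness)."""
    cx, cy, b = params
    return b * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * SIGMA ** 2))

def residuals(params, observed):
    """Per-pixel photometric residual between the synthesized and the observed image."""
    return (render(params) - observed).ravel()

def numerical_jacobian(params, observed, eps=1e-4):
    """Central finite differences; a stand-in for analytic or autodiff derivatives."""
    J = np.zeros((H * W, len(params)))
    for i in range(len(params)):
        p_plus, p_minus = params.copy(), params.copy()
        p_plus[i] += eps
        p_minus[i] -= eps
        J[:, i] = (residuals(p_plus, observed) - residuals(p_minus, observed)) / (2.0 * eps)
    return J

# "Observation": an image produced by the same model with unknown ground-truth parameters.
true_params = np.array([40.0, 22.0, 1.5])
observed = render(true_params)

# Invert the image formation model: Gauss-Newton on the photometric error.
params = np.array([37.0, 26.0, 1.0])  # rough initialization
for _ in range(20):
    r = residuals(params, observed)
    J = numerical_jacobian(params, observed)
    # Solve the normal equations J^T J * delta = -J^T r (tiny damping for conditioning).
    delta = np.linalg.solve(J.T @ J + 1e-6 * np.eye(len(params)), -J.T @ r)
    params = params + delta

print("recovered parameters:", params)  # should approach [40, 22, 1.5]
```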
Researcher

Roland Angst
Vision, Geometry, and Computational Perception

Name of Research Group: Vision, Geometry, and Computational Perception
Homepage: rangst.github.io/#/home
Mentor Saarbrücken: Hans-Peter Seidel
Mentor Stanford: Bernd Girod and Leonidas Guibas
Research Mission: Teaching a machine what it 'sees' has been a long-standing goal in computer vision, which is not surprising, since such 3D scene understanding algorithms would be of tremendous value in applications. For example, robots (including vehicles such as cars) could interact autonomously and intelligently with their environment, images and videos could be indexed automatically based on their spatial arrangement and on semantic tags, and missing parts of 3D reconstructions could be completed based on what a plausible scene looks like. Even though this seems easy for us humans, computers still struggle with the task. However, 3D computer vision (multiple-view geometry, visual SLAM, structure-from-motion, …) has matured and is nowadays a well-established technique for metric 3D reconstruction. Moreover, we have seen large progress in 2D image and video content analysis (segmentation, object and activity recognition, …). 3D reconstruction and 2D scene understanding have, however, mostly evolved independently. It is clear that the two problems intertwine and that a joint approach would be mutually beneficial. The major goal of our research is precisely the development of mathematical formulations and algorithms that combine scene understanding and 3D reconstruction in a joint framework. Since this is a very ambitious task, our research will initially focus on low-level geometric concepts before ultimately tackling the higher-level 3D scene understanding problem. Starting from known concepts in 3D computer vision, we follow an interdisciplinary approach, drawing mostly upon geometric computing and data-driven methods. Roland Angst joined ASUS Corp., Taipei, Taiwan, in December 2015.
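The toy sketch below (hypothetical, not the group's actual method) illustrates the kind of coupling between geometry and semantics described in the mission. Synthetic 3D points carry a noisy 2D "appearance" score for a 'ground' class; the loop alternates between fitting a ground plane to the points currently labeled ground (geometry from semantics) and relabeling the points using both appearance and distance to the fitted plane (semantics from geometry). All data, class names and thresholds are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: a gently sloped ground plane plus elevated "object" points.
n_ground, n_object = 300, 100
xy_g = rng.uniform(-5, 5, size=(n_ground, 2))
ground_pts = np.c_[xy_g, 0.1 * xy_g[:, 0] + 0.05 * xy_g[:, 1] + rng.normal(0, 0.02, n_ground)]
xy_o = rng.uniform(-5, 5, size=(n_object, 2))
object_pts = np.c_[xy_o, rng.uniform(0.5, 2.0, n_object)]
points = np.vstack([ground_pts, object_pts])

# Noisy per-point appearance score in [0, 1]: 'ground' probability from a 2D classifier.
appearance = np.concatenate([
    np.clip(rng.normal(0.7, 0.2, n_ground), 0, 1),
    np.clip(rng.normal(0.3, 0.2, n_object), 0, 1),
])

labels = appearance > 0.5  # initial labeling from appearance alone

def fit_plane(pts):
    """Least-squares plane z = a*x + b*y + c fitted to the given points."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # (a, b, c)

for _ in range(5):
    a, b, c = fit_plane(points[labels])                    # geometry from current semantics
    dist = np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c))
    geometric = np.exp(-dist / 0.1)                        # proximity to the plane supports 'ground'
    labels = 0.5 * appearance + 0.5 * geometric > 0.5      # semantics from appearance + geometry

print("plane coefficients:", fit_plane(points[labels]))
print("ground recovered:", labels[:n_ground].mean(), "objects mislabeled:", labels[n_ground:].mean())
```

Real systems replace each piece with far richer models (learned classifiers, full structure-from-motion or SLAM geometry, joint energies over many classes), but the mutual benefit of geometric and semantic cues is already visible in this toy setting.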