First Name | Last Name | Research Group |
---|---|---|
Mike | Sips | Visual Exploration of Space-Time Pattern in Multi-Dimensional and Heterogeneous Data Spaces |
Michael | Stark | Visual Object Recognition and Scene Interpretation |
Jürgen | Steimle | Embodied Interaction |
Markus | Steinberger | GPU Scheduling and Parallel Computing |
Carsten | Stoll | Optical Performance Capture |
Robert | Strzodka | Integrative Scientific Computing |
Holger | Theisel | Topological Methods for Vector Field Processing |
Researcher

Dr. Michael Zollhöfer

Name of Research Group: Visual Computing, Deep Learning and Optimization
Homepage Research Group: web.stanford.edu/~zollhoef
Personal Homepage: zollhoefer.com
Mentor Saarbrücken: Hans-Peter Seidel
Mentor Stanford: Pat Hanrahan

Research Mission:

The primary focus of my research is to teach computers to reconstruct and analyze our world at frame rate from visual input. The extracted knowledge is the foundation for a broad range of applications, not only in visual effects, computer animation, autonomous driving, and human-machine interaction, but also in related fields such as medicine and biomechanics. In particular, the growing popularity of virtual, augmented, and mixed reality creates a rising demand for real-time, low-latency solutions to the underlying core problems.

My research tackles these challenges with novel mathematical models and algorithms that enable computers to first reconstruct and then analyze our world. The main focus is on fast and robust algorithms that address the underlying reconstruction and machine learning problems for both static and dynamic scenes. To this end, I develop key technology to invert the image formation models of computer graphics using data-parallel optimization and state-of-the-art deep learning techniques.

Extracting 3D and 4D information from visual data is highly challenging and under-constrained, since image formation convolves multiple physical dimensions into flat color measurements. Reconstruction in 3D and 4D at real-time rates poses additional challenges, since it sits at the intersection of several research fields: computer graphics, computer vision, machine learning, optimization, and high-performance computing. A solution to these problems, however, provides strong cues for extracting higher-order semantic knowledge. Solving the underlying core problems will have high impact across these fields and yield key technological insights with the potential to transform the visual computing industry. In summer 2019, Michael Zollhöfer joined Facebook.
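The analysis-by-synthesis approach described above, inverting a forward image formation model by minimizing a photometric residual with a data-parallel solver, can be illustrated with a deliberately tiny sketch. This is not Zollhöfer's actual pipeline: the toy "renderer" (a single 2D Gaussian blob), the parameters (cx, cy, sigma), and the Gauss-Newton loop are hypothetical stand-ins chosen only to show the principle of fitting model parameters to an observed image.

```python
# Minimal sketch of analysis-by-synthesis / inverse rendering (illustrative only).
# A toy image formation model renders a 2D Gaussian blob; Gauss-Newton optimization
# recovers its position and size from an observed image by minimizing the
# per-pixel photometric residual.
import numpy as np

H, W = 64, 64
ys, xs = np.meshgrid(np.arange(H, dtype=float), np.arange(W, dtype=float), indexing="ij")

def render(cx, cy, sigma):
    """Toy image formation model: an isotropic 2D Gaussian blob."""
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def residual_and_jacobian(params, observed):
    """Photometric residual and its Jacobian with respect to (cx, cy, sigma)."""
    cx, cy, sigma = params
    img = render(cx, cy, sigma)
    r = (img - observed).ravel()                                   # per-pixel residual
    d_cx = (img * (xs - cx) / sigma ** 2).ravel()                  # d img / d cx
    d_cy = (img * (ys - cy) / sigma ** 2).ravel()                  # d img / d cy
    d_sg = (img * ((xs - cx) ** 2 + (ys - cy) ** 2) / sigma ** 3).ravel()  # d img / d sigma
    return r, np.stack([d_cx, d_cy, d_sg], axis=1)                 # (H*W) x 3 Jacobian

observed = render(40.0, 25.0, 6.0)      # synthetic "observation" with known parameters
params = np.array([36.0, 28.0, 8.0])    # coarse initial guess
for _ in range(15):
    r, J = residual_and_jacobian(params, observed)
    # Gauss-Newton step: solve the small 3x3 normal equations (J^T J) delta = -J^T r
    delta = np.linalg.solve(J.T @ J + 1e-9 * np.eye(3), -(J.T @ r))
    params = params + delta
print("recovered (cx, cy, sigma):", params)   # should approach (40, 25, 6)
```

In a real reconstruction system the unknowns number in the thousands or millions and the normal equations are solved with data-parallel iterative solvers on the GPU, but the structure of the loop (render, compare, linearize, update) stays the same.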
Researcher

Meinard Müller

Name of Research Group: Multimedia Retrieval
Mentor Saarbrücken: Hans-Peter Seidel
Mentor Stanford: Bernd Girod

Research Mission:

Modern information society is experiencing an explosion of digital content comprising text, audio, video, and graphics. The challenge is to organize, understand, and search this multimodal information in a robust, efficient, and intelligent manner. One difficulty arises from the fact that multimedia objects that are similar from a structural or semantic viewpoint often exhibit significant spatial or temporal differences. This makes content-based multimedia retrieval a challenging research field with many unsolved problems.

In my habilitation project, conducted at Bonn University, we studied fundamental algorithms and concepts for the analysis, classification, indexing, and retrieval of time-dependent data streams using two types of multimedia data: waveform-based music data and human motion data. In the music domain, we developed techniques for automatic music alignment, synchronization, and matching; the common goal of these tasks is to automatically link several types of music representations, thus coordinating the multiple information sources related to a given musical work. In the motion domain, we introduced a general and unified framework for motion analysis, retrieval, and classification using binary features to represent poses. By handling spatio-temporal motion deformations already on the feature level, we were able to adopt efficient indexing methods that allow flexible and efficient content-based retrieval on large motion capture data sets.
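The binary pose features and deformation-tolerant indexing mentioned in the mission statement can be sketched in a few lines. This is a minimal illustration, not Müller's actual feature set or index structure: the joint names, the three Boolean geometric tests, and the dictionary-based index below are hypothetical placeholders. The idea is that collapsing each motion into its sequence of distinct Boolean feature vectors absorbs temporal deformations at the feature level, so structurally similar motions map to the same key and can be retrieved by exact matching.

```python
# Illustrative sketch of binary relational pose features for motion retrieval.
# Joint names, thresholds, and the tiny index are hypothetical placeholders.
import numpy as np

def pose_features(joints):
    """Map one pose (dict of joint name -> 3D position) to a tuple of Boolean features."""
    f1 = joints["left_hand"][2] > joints["head"][2]     # left hand raised above head
    f2 = joints["right_hand"][2] > joints["head"][2]    # right hand raised above head
    f3 = np.linalg.norm(joints["left_foot"] - joints["right_foot"]) > 0.6  # wide stance
    return (int(f1), int(f2), int(f3))

def feature_sequence(motion):
    """Collapse a motion (list of poses) into its sequence of distinct feature vectors.
    Slower or faster executions of the same movement yield the same sequence."""
    seq = []
    for pose in motion:
        f = pose_features(pose)
        if not seq or seq[-1] != f:
            seq.append(f)
    return tuple(seq)

def build_index(database):
    """Inverted index from feature sequences to motion identifiers."""
    index = {}
    for name, motion in database.items():
        index.setdefault(feature_sequence(motion), []).append(name)
    return index

def retrieve(index, query_motion):
    """Return all motions whose feature sequence matches the query exactly."""
    return index.get(feature_sequence(query_motion), [])

if __name__ == "__main__":
    def pose(lh, rh, lf, rf, head=(0.0, 0.0, 1.7)):
        return {"left_hand": np.array(lh), "right_hand": np.array(rh),
                "left_foot": np.array(lf), "right_foot": np.array(rf),
                "head": np.array(head)}
    # a toy "arm raise" performed slowly and quickly: same feature sequence
    raise_arm = [pose((0.3, 0, 1.2), (0.3, 0, 1.0), (0, 0, 0), (0.3, 0, 0))] * 5 + \
                [pose((0.3, 0, 1.9), (0.3, 0, 1.0), (0, 0, 0), (0.3, 0, 0))] * 5
    index = build_index({"raise_slow": raise_arm, "raise_fast": raise_arm[::2]})
    print(retrieve(index, raise_arm[::3]))   # both versions are retrieved
```

Because temporal variation is removed before indexing, retrieval reduces to a hash lookup rather than a costly frame-by-frame alignment, which is what makes such feature-level invariance attractive for large motion capture collections.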