I am a Lecturer (=Assistant Professor) in the Visual Computing Group and MARVEL Lab at the University of Bath. My research interests cover the fields of image processing, computer graphics and computer vision, with a focus on video processing for 360° videos, light fields, and user-centric applications.

I am also interested in stereoscopic vision and graphics, computational photography and non-photorealistic rendering (NPR). In my work, I often build on insights from human visual perception and use techniques from multi-view geometry. I also enjoy teaching, mentoring and supervising student projects – many of which have received project prizes at Cambridge.

I was previously a postdoctoral researcher working on user-centric video processing and motion capture with Christian Theobalt at the Intel Visual Computing Institute at Saarland University, and in the Graphics, Vision and Video group at the Max-Planck-Institut für Informatik in Saarbrücken, Germany. Before that, I was a postdoc in the REVES team at Inria Sophia Antipolis, France, working with George Drettakis and Adrien Bousseau, and I interned with Alexander Sorkine-Hornung at Disney Research Zurich, where I worked on Megastereo panoramas.

I graduated with a PhD and BA from the University of Cambridge in 2012 and 2007, respectively. My PhD in the Computer Laboratory’s Rainbow Group was supervised by Neil Dodgson. My doctoral research investigated the full life cycle of videos with depth (RGBZ videos): from their acquisition, via filtering and processing, to the evaluation of stereoscopic display.


I am looking for students!

Are you interested in visual computing, computer graphics, computational photography or video processing? I am looking for motivated students at all levels, from Bachelor's and Master's to PhD level. Please contact me if you are interested.


News

October 2017
Abhi and Gereon will be presenting our work on live user-guided intrinsic video decomposition at ISMAR 2017.

August 2017
I co-organised a course at SIGGRAPH 2017 on Video for Virtual Reality, which covers the technical foundations, current systems in practice, and the potential for future systems of VR video.

July 2017
Our work on live user-guided intrinsic video decomposition has been accepted at ISMAR 2017 and in IEEE TVCG, and our work on combining predictors at test time has been accepted at ICCV 2017.

March 2017
I visited my former group at MPI Informatik for two weeks to join them for the ICCV deadline. Fingers crossed!

February 2017
I am looking for a PhD student to work on creating light fields from existing images and videos (competition-based studentship for UK/EU applicants). Please contact me if you are interested.

December 2016
We presented our new egocentric motion-capture approach “EgoCap” at SIGGRAPH Asia 2016 in Macao.

October 2016
We presented three papers at 3DV: video depth-from-defocus, handheld wide-baseline scene flow (oral) and real-time halfway-domain scene flow.

Our work on motion capture with volumetric contour cues was presented at ECCV 2016.

September 2016
We will present our egocentric motion-capture approach “EgoCap” at SIGGRAPH Asia 2016 and already gave a sneak peek at EPIC@ECCV2016.

Three papers accepted at 3DV 2016: one oral (arXiv:1609.05115) and two posters.

August 2016
I started a new position as Lecturer (=Assistant Professor) at University of Bath.

July 2016
We presented our paper on Live Intrinsic Video at SIGGRAPH 2016 in Anaheim.

One paper accepted at ECCV 2016 (arXiv:1607.08659) and one at SIGGRAPH Asia 2016 (arXiv:1609.07306).

May 2016
Our paper on Live Intrinsic Video was accepted at SIGGRAPH 2016.


Selected publications

Live User-Guided Intrinsic Video for Static Scenes
Abhimitra Meka*, Gereon Fox*, Michael Zollhöfer, Christian Richardt and Christian Theobalt
IEEE Transactions on Visualization and Computer Graphics (ISMAR 2017)

Video for Virtual Reality
Christian Richardt, James Tompkin, Jordan Halsey, Aaron Hertzmann, Jonathan Starck and Oliver Wang
Course at SIGGRAPH 2017 (Thursday, 3 August, 9:00–12:15, Room 403AB)

EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras
Helge Rhodin, Christian Richardt, Dan Casas, Eldar Insafutdinov, Mohammad Shafiei, Hans-Peter Seidel, Bernt Schiele and Christian Theobalt
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2016)

Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras
Christian Richardt, Hyeongwoo Kim, Levi Valgaerts and Christian Theobalt
International Conference on 3D Vision 2016 (oral presentation, 9.8% acceptance rate)

Video Depth-From-Defocus
Hyeongwoo Kim, Christian Richardt and Christian Theobalt
International Conference on 3D Vision 2016

General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues
Helge Rhodin, Nadia Robertini, Dan Casas, Christian Richardt, Hans-Peter Seidel and Christian Theobalt
European Conference on Computer Vision 2016 (spotlight presentation, 2.9% acceptance rate)

Live Intrinsic Video
Abhimitra Meka, Michael Zollhöfer, Christian Richardt and Christian Theobalt
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2016)

Megastereo: Constructing High-Resolution Stereo Panoramas
Christian Richardt, Yael Pritch, Henning Zimmer and Alexander Sorkine-Hornung
Proceedings of CVPR 2013 (oral presentation, 3.3% acceptance rate)

Real-time Spatiotemporal Stereo Matching Using the Dual-Cross-Bilateral Grid
Christian Richardt, Douglas Orr, Ian Davies, Antonio Criminisi and Neil A. Dodgson
European Conference on Computer Vision 2010 (Poster + Demo)