I am a Lecturer (=Assistant Professor) in the Visual Computing Group at the University of Bath. My research interests cover image processing, computer graphics and computer vision, with a focus on video processing for 360° videos, light fields, and user-centric applications.

I am also interested in stereoscopic vision and graphics, computational photography and non-photorealistic rendering (NPR). My work often builds on insights from human visual perception and draws on techniques from multi-view geometry. I also enjoy teaching, mentoring and supervising student projects, many of which have received project prizes at Cambridge.

I was previously a postdoctoral researcher working on user-centric video processing and motion capture with Christian Theobalt at the Intel Visual Computing Institute at Saarland University, and in the Graphics, Vision and Video group at the Max-Planck-Institut für Informatik in Saarbrücken, Germany. Before that, I was a postdoc in the REVES team at Inria Sophia Antipolis, France, working with George Drettakis and Adrien Bousseau, and I interned with Alexander Sorkine-Hornung at Disney Research Zurich, where I worked on Megastereo panoramas.

I graduated with a PhD and BA from the University of Cambridge in 2012 and 2007, respectively. My PhD in the Computer Laboratory’s Rainbow Group was supervised by Neil Dodgson. My doctoral research investigated the full life cycle of videos with depth (RGBZ videos): from their acquisition, via filtering and processing, to the evaluation of their stereoscopic display.


I am looking for motivated students at all levels who are interested in visual computing, computational photography and video processing. Please contact me with your project idea(s).


October 2016
The project websites for our three 3DV papers are now online: video depth-from-defocus, handheld wide-baseline scene flow (oral) and real-time halfway-domain scene flow.

September 2016
We will present our egocentric motion-capture approach “EgoCap” at SIGGRAPH Asia 2016 and at EPIC@ECCV2016.

Our work on motion capture with volumetric contour cues will be presented at ECCV 2016.

Three papers accepted at 3DV 2016: one oral (arXiv:1609.05115) and two posters. More soon.

August 2016
I started a new position as Lecturer (=Assistant Professor) at the University of Bath.

July 2016
We presented our paper on Live Intrinsic Video at SIGGRAPH 2016 in Anaheim.

One paper accepted at ECCV 2016 (arXiv:1607.08659) and one at SIGGRAPH Asia 2016 (arXiv:1609.07306).

May 2016
Our paper on Live Intrinsic Video was accepted at SIGGRAPH 2016.

March 2016
I am looking to fill a fully-funded PhD position on light field synthesis, to start in October 2016 (position filled).

February 2016
I will be joining the University of Bath as a Lecturer in August 2016.

December 2015
We presented our paper on Differentiable Visibility at ICCV 2015.

October 2015
We posted the project website for our upcoming ICCV 2015 paper on Differentiable Visibility.

August 2015
Our paper on 4D Model Flow was presented at Pacific Graphics 2015.

We presented a course on User-Centric Computational Videography at SIGGRAPH 2015.