
Christian Richardt is a Research Scientist at the Codec Avatars Lab at Meta Reality Labs in Pittsburgh, PA, USA. He was previously a Reader (equivalent to Associate Professor) and EPSRC-UKRI Innovation Fellow in the Visual Computing Group, the CAMERA Centre and REVEAL at the University of Bath. His research interests span image processing, computer graphics and computer vision; his work combines insights from vision, graphics and perception to reconstruct visual information from images and videos, and to create high-quality visual experiences, with a focus on novel-view synthesis.

Christian was previously a postdoctoral researcher working on user-centric video processing and motion capture with Christian Theobalt at the Intel Visual Computing Institute at Saarland University, and in the Graphics, Vision and Video group at the Max-Planck-Institut für Informatik in Saarbrücken, Germany. Before that, he was a postdoc in the REVES team at Inria Sophia Antipolis, France, working with George Drettakis and Adrien Bousseau, and he interned with Alexander Sorkine-Hornung at Disney Research Zurich, where he worked on Megastereo panoramas.

Christian graduated with a PhD and BA from the University of Cambridge in 2012 and 2007, respectively. His PhD in the Computer Laboratory’s Rainbow Group was supervised by Neil Dodgson. His doctoral research investigated the full life cycle of videos with depth (RGBZ videos): from their acquisition, via filtering and processing, to the evaluation of stereoscopic display.


News
June 2024
We will present five papers by our outstanding interns at CVPR 2024 in Seattle:
PlatoNeRF, HybridNeRF, SpecNeRF, Real Acoustic Fields, and ViewDiff.

My PhD student Jundan Luo has two papers accepted, including IntrinsicDiffusion (SIGGRAPH 2024).

November 2023
Three papers accepted:
1. Neural Feature Filtering for Faster Structure-from-Motion Localisation (BMVC 2023 in Aberdeen),
2. PyNeRF: Pyramidal Neural Radiance Fields (NeurIPS 2023 in New Orleans), and
3. VR-NeRF: High-Fidelity Virtualized Walkable Spaces (SIGGRAPH Asia 2023 in Sydney).

Check out our new Eyeful Tower dataset, the highest-resolution, highest-quality, high-dynamic range, multi-view indoor scene dataset powering VR-NeRF.

October 2023
We presented one paper on Neural Fields for Structured Lighting at ICCV 2023 in Paris.
June 2023
We held DynaVis, the Fourth International Workshop on Dynamic Scene Reconstruction at CVPR 2023 in Vancouver (afternoon of Monday 19 June).

We presented two papers at CVPR 2023 in Vancouver: HyperReel and Neural Duplex Radiance Fields for high-fidelity and real-time view synthesis, respectively.

March 2023
I will be serving as an Area Chair for ICCV 2023.
June 2022
We presented our work on high-resolution 360° monocular depth estimation (360MonoDepth) at CVPR 2022 in New Orleans.
April 2022
I’m excited to join Reality Labs Research in Pittsburgh as a Research Scientist Lead.

Selected publications

VR-NeRF: High-Fidelity Virtualized Walkable Spaces
Linning Xu, Vasu Agrawal, William Laney, Tony Garcia, Aayush Bansal, Changil Kim, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Aljaž Božič, Dahua Lin, Michael Zollhöfer and Christian Richardt
SIGGRAPH Asia 2023

360MonoDepth: High-Resolution 360° Monocular Depth Estimation
Manuel Rey-Area*, Mingze Yuan* and Christian Richardt
Conference on Computer Vision and Pattern Recognition (CVPR) 2022

OmniPhotos: Casual 360° VR Photography
Tobias Bertel, Mingze Yuan, Reuben Lindroos and Christian Richardt
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2020)

HoloGAN: Unsupervised learning of 3D representations from natural images
Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt and Yong-Liang Yang
International Conference on Computer Vision (ICCV) 2019

Megastereo: Constructing High-Resolution Stereo Panoramas
Christian Richardt, Yael Pritch, Henning Zimmer and Alexander Sorkine-Hornung
Conference on Computer Vision and Pattern Recognition (CVPR) 2013 (oral presentation, 3.3% acceptance rate)