Tobias Bertel1    Yusuke Tomoto2    Srinivas Rao2    Rodrigo Ortiz-Cayon2    Stefan Holzer2    Christian Richardt1

1 University of Bath       2 Fyusion Inc.

Poster at SIGGRAPH Asia 2020


Image-based rendering methods that support visually pleasing specular surface reflections require accurate surface geometry and a large number of input images. Recent advances in neural scene representations achieve excellent visual quality with only imperfect mesh proxies, or with no surface-based proxies at all, but the inference time of these learned models is usually too slow for interactive applications. Using a casually captured circular video sweep as input, we extend Deferred Neural Rendering to smoothly extrapolate viewpoints around specular objects such as a car.
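The deferred step at the heart of Deferred Neural Rendering is to rasterize the mesh proxy into a per-pixel UV map and sample a learned neural texture, whose feature channels a neural renderer then converts to RGB. A minimal NumPy sketch of that sampling step is below; this is an illustration under simplifying assumptions, not the authors' implementation (the original uses a mipmapped texture hierarchy and a learned U-Net renderer), and the function name and sizes are hypothetical:

```python
import numpy as np

def sample_neural_texture(texture, uv_map):
    """Bilinearly sample a neural texture at per-pixel UV coordinates.

    texture: (T, T, C) learned feature texture (C channels, not just RGB).
    uv_map:  (H, W, 2) rasterized UV coordinates in [0, 1] from the mesh proxy.
    Returns an (H, W, C) feature image, the input to the neural renderer.
    """
    T = texture.shape[0]
    # Continuous texel coordinates.
    x = uv_map[..., 0] * (T - 1)
    y = uv_map[..., 1] * (T - 1)
    # Integer corners, clamped to the texture bounds.
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, T - 1), np.minimum(y0 + 1, T - 1)
    # Fractional weights, broadcast over the feature channels.
    wx, wy = (x - x0)[..., None], (y - y0)[..., None]
    return ((1 - wx) * (1 - wy) * texture[y0, x0]
            + wx * (1 - wy) * texture[y0, x1]
            + (1 - wx) * wy * texture[y1, x0]
            + wx * wy * texture[y1, x1])

# Toy usage: a 256x256 texture with 16 feature channels, 64x64 output image.
rng = np.random.default_rng(0)
texture = rng.standard_normal((256, 256, 16))
uv = rng.uniform(0.0, 1.0, size=(64, 64, 2))
features = sample_neural_texture(texture, uv)  # (64, 64, 16) feature image
```

During training, gradients from the rendering loss flow through this sampling into the texture itself, which is how the neural texture learns view-dependent appearance such as specular reflections.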



We thank Justus Thies for fruitful discussions and for sharing the DNR code. This work was supported by EU Horizon 2020 MSCA grant FIRE (665992), the EPSRC Centre for Doctoral Training in Digital Entertainment (EP/L016540/1), RCUK grant CAMERA (EP/M023281/1), an EPSRC-UKRI Innovation Fellowship (EP/S001050/1), and a Rabin Ezra Scholarship.


@inproceedings{Bertel2020DNR,
  author    = {Tobias Bertel and Yusuke Tomoto and Srinivas Rao and Rodrigo Ortiz-Cayon and Stefan Holzer and Christian Richardt},
  title     = {Deferred Neural Rendering for View Extrapolation},
  booktitle = {SIGGRAPH Asia Posters},
  year      = {2020},
  doi       = {10.1145/3415264.3425441},
  url       = {},
}