Inverse Global Illumination
using a Neural Radiometric Prior

University of Maryland, College Park
NVIDIA
SIGGRAPH 2023 Conference Proceedings

Equal training time comparison at 1.5 hours

Abstract

Inverse rendering methods that account for global illumination are becoming more popular, but current methods require evaluating and automatically differentiating millions of path integrals by tracing multiple light bounces, which remains expensive and prone to noise. Instead, this paper proposes a radiometric prior as a simple alternative to building complete path integrals in a traditional differentiable path tracer, while still correctly accounting for global illumination. Inspired by the Neural Radiosity technique, we use a neural network as a radiance function, and we introduce a prior consisting of the norm of the residual of the rendering equation in the inverse rendering loss. We train our radiance network and optimize scene parameters simultaneously using a loss consisting of both a photometric term between renderings and the multi-view input images, and our radiometric prior (the residual term). This residual term enforces a physical constraint on the optimization that ensures that the radiance field accounts for global illumination. We compare our method to a vanilla differentiable path tracer, and more advanced techniques such as Path Replay Backpropagation. Despite the simplicity of our approach, we can recover scene parameters with comparable and in some cases better quality, at considerably lower computation times.
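To make the objective concrete, one way to write it, in the notation of Neural Radiosity, is the following sketch (the choice of squared norms and the weight \lambda are assumptions for illustration, not necessarily the paper's exact formulation):

\[
\mathcal{L}(\theta, \pi) \;=\; \underbrace{\textstyle\sum_j \big\| I_j(\pi, L_\theta) - \tilde{I}_j \big\|_2^2}_{\text{photometric term}} \;+\; \lambda\, \underbrace{\big\| r_{\theta,\pi} \big\|_2^2}_{\text{radiometric prior}},
\]
\[
r_{\theta,\pi}(x, \omega_o) \;=\; L_\theta(x, \omega_o) \;-\; E(x, \omega_o) \;-\; \int_{\mathcal{H}^2} f_\pi(x, \omega_i, \omega_o)\, L_\theta\big(x'(x, \omega_i), -\omega_i\big)\, |\cos\theta_i| \,\mathrm{d}\omega_i,
\]

where L_\theta is the neural radiance function, \pi are the scene (BRDF) parameters being recovered, E is emitted radiance, x'(x, \omega_i) is the closest surface point seen from x in direction \omega_i, I_j(\pi, L_\theta) is a rendering of view j, \tilde{I}_j the corresponding input image, and \lambda a weighting hyperparameter. The photometric term pulls the renderings toward the observations, while the residual term forces L_\theta to satisfy the rendering equation, i.e., to account for global illumination.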

Teaser Image

Teaser Figure. We propose an inverse rendering method that uses a radiometric prior to account for global illumination as opposed to building and differentiating path integrals. Our method uses standard automatic differentiation (AD) to compute gradients with respect to the scene parameters, while satisfying the rendering equation using our radiometric prior, which is represented by a neural network. Here we compare a traditional auto-differentiable path tracer (AD-PT), an advanced technique (Path Replay Backpropagation, or PRB), and our method (AD-Ours) for recovering non-diffuse spatially varying BRDF properties (also represented as neural networks) under known illumination and geometry from 26 views of the Staircase scene. Despite its simplicity, our approach takes into account global illumination, and more faithfully recovers albedo and roughness compared to differentiable path tracing and PRB. Each method used a total of 16384 × 16 × 18000 (batch size × spp × steps) = 4.7B samples, i.e., 690 training samples per pixel (26 views × 512 × 512 pixels). We conducted all experiments with a single RTX3090 GPU, and the total runtimes for AD-PT, PRB, and our method were 760, 970, and 260 minutes, respectively.
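The caption above describes optimizing the radiance network and the neural BRDF simultaneously with both loss terms. The following PyTorch-style sketch illustrates that structure only; all sampled quantities (fake_batch, pixel_gt, etc.) are random placeholders standing in for what a differentiable ray tracer would supply, the network sizes are arbitrary, and the equal weighting of the two terms is an assumption, not the paper's setting.

# Minimal, self-contained sketch (PyTorch assumed) of jointly optimizing a radiance
# network and a neural BRDF with a photometric term plus the rendering-equation
# residual. This is an illustration, not the released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Small MLP used for both the radiance field L_theta and the neural BRDF f_pi."""
    def __init__(self, d_in, d_out, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, d_out), nn.Softplus())   # keep outputs non-negative
    def forward(self, x):
        return self.net(x)

radiance_net = MLP(6, 3)   # L_theta(x, omega_o) -> RGB radiance
brdf_net     = MLP(9, 3)   # f_pi(x, omega_i, omega_o) -> RGB reflectance

optimizer = torch.optim.Adam(
    list(radiance_net.parameters()) + list(brdf_net.parameters()), lr=1e-3)

def fake_batch(n=4096):
    """Random placeholders for per-sample quantities a ray tracer would provide."""
    unit = lambda t: F.normalize(t, dim=-1)
    return {
        "x":        torch.rand(n, 3),                          # shading points
        "wo":       unit(torch.randn(n, 3)),                   # outgoing directions
        "wi":       unit(torch.randn(n, 3)),                   # sampled incident directions
        "pdf":      torch.full((n, 1), 1.0 / (2 * torch.pi)),  # uniform hemisphere pdf
        "cos_i":    torch.rand(n, 1),                          # |cos(theta_i)|
        "x_next":   torch.rand(n, 3),                          # next intersection x'(x, wi)
        "emitted":  torch.zeros(n, 3),                         # E(x, wo)
        "pixel_gt": torch.rand(n, 3),                          # input-image radiance for these rays
    }

def two_term_loss(b):
    # Neural radiance field at the shading point (left-hand side of the rendering equation).
    lhs = radiance_net(torch.cat([b["x"], b["wo"]], dim=-1))
    # One-sample Monte Carlo estimate of emission + reflected radiance (right-hand side),
    # with the incoming radiance queried from the radiance cache at the next hit.
    f = brdf_net(torch.cat([b["x"], b["wi"], b["wo"]], dim=-1))
    incoming = radiance_net(torch.cat([b["x_next"], -b["wi"]], dim=-1))
    rhs = b["emitted"] + f * incoming * b["cos_i"] / b["pdf"]
    residual = (lhs - rhs).pow(2).mean()                 # radiometric prior
    photometric = (rhs - b["pixel_gt"]).pow(2).mean()    # renderings vs. input images
    return photometric, residual

for step in range(1000):
    photometric, residual = two_term_loss(fake_batch())
    loss = photometric + residual                        # equal weighting is an assumption
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()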




Analysis of biased gradients. We visualize gradients with respect to the roughness parameter of the wooden material on the staircase and the picture frames, using different methods. For PRB, we cap the maximum path length at each specified number. For our approach, we perform a number k of 'differentiable' bounces before querying our radiance cache. The heatmap visualizes the magnitude of the gradient of the L2 loss with respect to the roughness parameters over the entire image, comparing the current state with the target state. For unbiased gradients, cells on the diagonal have gradients equal to zero, and non-zero values indicate bias. As shown, direct illumination, PRB with a low number of bounces, and our method provide biased gradients. Increasing the number k of differentiable bounces reduces the bias in our gradients. All results in this paper use 'Ours' with k=1 differentiable bounce, but we sample the residual at an extra bounce. In all visualizations, red indicates negative and blue indicates positive values.
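To make the role of k concrete, the sketch below traces k differentiable bounces and then terminates the path by querying the radiance cache; routing more bounces through automatic differentiation is what reduces the bias shown in the figure. It follows the same conventions as the earlier snippet: toy_intersect is a hypothetical stand-in for ray tracing, and radiance_net / brdf_net can be any differentiable callables (e.g., the MLPs above).

# Sketch of a radiance estimate with k differentiable bounces followed by a
# radiance-cache query (illustrative only; toy_intersect replaces real ray tracing).
import torch
import torch.nn.functional as F

def toy_intersect(x, d):
    """Hypothetical stand-in for ray tracing: returns the next hit point, its
    shading normal, and the radiance it emits (all toy values here)."""
    x_next = x + d                                        # pretend unit travel distance
    n_next = F.normalize(torch.randn_like(x), dim=-1)
    emit_next = torch.zeros_like(x)
    return x_next, n_next, emit_next

def estimate_radiance(x, n, wo, emitted, k, radiance_net, brdf_net):
    """Estimate L(x, wo) with k >= 1 differentiable bounces, then one cache query."""
    throughput = torch.ones_like(x)
    radiance = emitted.clone()                            # E(x, wo) at the first vertex
    for bounce in range(k):
        wi = F.normalize(torch.randn_like(x), dim=-1)     # uniform direction sample
        pdf = 1.0 / (2 * torch.pi)                        # hemisphere pdf
        f = brdf_net(torch.cat([x, wi, wo], dim=-1))      # differentiable BRDF evaluation
        cos_i = (wi * n).sum(-1, keepdim=True).abs()
        throughput = throughput * f * cos_i / pdf
        x, n, emit_next = toy_intersect(x, wi)
        wo = -wi
        if bounce < k - 1:
            # Emission at intermediate vertices; the last vertex's emission is
            # already included in the cache query below.
            radiance = radiance + throughput * emit_next
    # Terminate the path: the radiance cache stands in for all remaining bounces.
    radiance = radiance + throughput * radiance_net(torch.cat([x, wo], dim=-1))
    return radiance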

Video Presentation

BibTeX


        @misc{hadadan2023inverse,
          title={Inverse Global Illumination using a Neural Radiometric Prior}, 
          author={Saeed Hadadan and Geng Lin and Jan Novák and Fabrice Rousselle and Matthias Zwicker},
          year={2023},
          eprint={2305.02192},
          archivePrefix={arXiv},
          primaryClass={cs.CV}
        }