Mip-NeRF: A Multiscale Representation
for Anti-Aliasing Neural Radiance Fields
ICCV 2021 (Oral, Best Paper Honorable Mention)

Abstract


The rendering procedure used by neural radiance fields (NeRF) samples a scene with a single ray per pixel and may therefore produce renderings that are excessively blurred or aliased when training or testing images observe scene content at different resolutions. The straightforward solution of supersampling by rendering with multiple rays per pixel is impractical for NeRF, because rendering each ray requires querying a multilayer perceptron hundreds of times. Our solution, which we call "mip-NeRF" (à la "mipmap"), extends NeRF to represent the scene at a continuously-valued scale. By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts and significantly improves NeRF's ability to represent fine details, while also being 7% faster than NeRF and half the size. Compared to NeRF, mip-NeRF reduces average error rates by 17% on the dataset presented with NeRF and by 60% on a challenging multiscale variant of that dataset that we present. mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22x faster.

Video

Integrated Positional Encoding

Typical positional encoding (as used in Transformer networks and Neural Radiance Fields) maps a single point in space to a feature vector, where each element is generated by a sinusoid with an exponentially increasing frequency:
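For a scalar coordinate x, this encoding can be written as

    \gamma(x) = \big[ \sin(2^0 x), \cos(2^0 x), \ldots, \sin(2^{L-1} x), \cos(2^{L-1} x) \big],

where L controls the highest frequency (some formulations, including the original NeRF paper, include an extra factor of \pi inside the sinusoids).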

Here, we show how these feature vectors change as a function of a point moving in 1D space.

Our integrated positional encoding considers Gaussian regions of space, rather than infinitesimal points. This provides a natural way to input a "region" of space as query to a coordinate-based neural network, allowing the network to reason about sampling and aliasing. The expected value of each positional encoding component has a simple closed form:
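For a 1D coordinate x \sim \mathcal{N}(\mu, \sigma^2), each sinusoidal component satisfies

    \mathbb{E}[\sin(2^k x)] = \sin(2^k \mu) \, \exp\big(-\tfrac{1}{2} \, 4^k \sigma^2\big),

and analogously for cosine: it is the encoding of the mean, attenuated by a factor that decays with both frequency and region width.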

We can see that when considering a wider region, the higher frequency features automatically shrink toward zero, providing the network with lower-frequency inputs. As the region narrows, these features converge to the original positional encoding.
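As a concrete sketch, the expression above can be implemented in a few lines of JAX for a Gaussian with diagonal covariance (the function and argument names here are ours, not those of the official codebase):

    import jax.numpy as jnp

    def integrated_pos_enc(mean, var, num_freqs):
        # mean, var: [..., D] arrays holding the mean and diagonal variance
        # of the Gaussian region being encoded.
        scales = 2.0 ** jnp.arange(num_freqs)  # frequencies 2^0 ... 2^(L-1)
        out_shape = mean.shape[:-1] + (-1,)
        scaled_mean = (mean[..., None, :] * scales[:, None]).reshape(out_shape)
        scaled_var = (var[..., None, :] * scales[:, None] ** 2).reshape(out_shape)
        # E[sin(x)] = sin(mu) * exp(-sigma^2 / 2), and likewise for cos.
        damping = jnp.exp(-0.5 * scaled_var)
        return jnp.concatenate(
            [jnp.sin(scaled_mean) * damping, jnp.cos(scaled_mean) * damping],
            axis=-1)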

Mip-NeRF

We use integrated positional encoding to train NeRF to generate anti-aliased renderings. Rather than casting an infinitesimal ray through each pixel, we instead cast a full 3D cone. For each queried point along a ray, we consider its associated 3D conical frustum. Two different cameras viewing the same point in space may result in vastly different conical frustums, as illustrated here in 2D:

(Figure: two cameras observing the same point at different scales.)

In order to pass this information through the NeRF network, we fit a multivariate Gaussian to the conical frustum and use the integrated positional encoding described above to create the input feature vector to the network.
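A minimal JAX sketch of this step, following the closed-form frustum moments given in the paper (function and argument names are ours):

    import jax.numpy as jnp

    def conical_frustum_to_gaussian(origin, direction, t0, t1, base_radius):
        # origin, direction: [..., 3] ray parameters; t0, t1: [...] distances
        # bounding the frustum along the ray; base_radius: cone radius at
        # unit distance along the ray.
        mu = (t0 + t1) / 2
        hw = (t1 - t0) / 2
        denom = 3 * mu**2 + hw**2
        t_mean = mu + 2 * mu * hw**2 / denom
        t_var = hw**2 / 3 - (4 / 15) * hw**4 * (12 * mu**2 - hw**2) / denom**2
        r_var = base_radius**2 * (mu**2 / 4 + 5 * hw**2 / 12
                                  - (4 / 15) * hw**4 / denom)
        # Lift the 1D moments to 3D: variance t_var along the ray and
        # variance r_var in the two directions perpendicular to it.
        d_norm_sq = jnp.sum(direction**2, axis=-1, keepdims=True)
        d_outer = direction[..., :, None] * direction[..., None, :]
        null_outer = (jnp.eye(3)
                      - direction[..., :, None] * (direction / d_norm_sq)[..., None, :])
        mean = origin + direction * t_mean[..., None]
        cov = t_var[..., None, None] * d_outer + r_var[..., None, None] * null_outer
        return mean, cov

Only the mean and the diagonal of this covariance are needed by the integrated positional encoding sketched above.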

Results

We train NeRF and mip-NeRF on a dataset containing images at four different resolutions. Standard NeRF (left) cannot represent the same scene at multiple levels of detail: it blurs close-up shots and aliases low-resolution views. Mip-NeRF (right) both preserves sharp detail in close-ups and correctly renders the zoomed-out images.




We can also manipulate the integrated positional encoding by using a larger or smaller radius than the true pixel footprint, exposing the continuous level of detail learned within a single network:
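In terms of the sketch above, this amounts to scaling the Gaussian's variance before encoding; the scale factor below is purely illustrative:

    # lod_scale > 1 mimics a wider pixel footprint (coarser detail);
    # lod_scale < 1 mimics a narrower one (finer detail).
    lod_scale = 2.0
    features = integrated_pos_enc(mean, var * lod_scale**2, num_freqs=16)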

Related links

Wikipedia provides an excellent introduction to spatial anti-aliasing techniques.

Mipmaps were introduced by Lance Williams in his paper "Pyramidal Parametrics" (Williams, 1983).

Amanatides (1984) first proposed the idea of replacing rays with cones in computer graphics rendering.

The closely related concept of ray differentials (Igehy, 1999) is used in most modern renderers to anti-alias textures and other material buffers during ray tracing.

Crassin et al. (2011) used cone tracing with prefiltered voxel-based representations of scene geometry to accelerate indirect illumination calculations.

Mip-NeRF was implemented on top of the JAXNeRF codebase.

Citation
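
Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P. Srinivasan. Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields. ICCV 2021.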

Acknowledgements

We thank Janne Kontkanen and David Salesin for their comments on the text, Paul Debevec for constructive discussions, and Boyang Deng for JAXNeRF.
MT is funded by an NSF Graduate Fellowship.
The website template was borrowed from Michaël Gharbi.