Overview
This document covers a variety of topics related to working with pbrt-v4, the rendering system described in the forthcoming fourth edition of Physically Based Rendering: From Theory to Implementation, by Matt Pharr, Wenzel Jakob, and Greg Humphreys. Because most users of pbrt are also developers who work with the system’s source code, this guide also covers a number of topics related to the system’s structure and organization.
If you find errors in this text or have ideas for topics that should be discussed, please either submit a bug in the pbrt-v4 issue tracker, or send an email to [email protected].
Changes from pbrt-v3
The system has seen many changes since the third edition. To learn how to use the new features, you may want to look at the example scene files and read through the source code for the details of the following.
Major changes include:
- Spectral rendering: rendering computations are always performed using point-sampled spectra; the use of RGB color is limited to the scene description (e.g., image texture maps) and final image output.
- Modernized volumetric scattering
- An all-new VolPathIntegrator based on the null-scattering path integral formulation of Miller et al. 2019 has been added.
- Tighter majorants are used for null-scattering with the GridDensityMedium via a separate low-resolution grid of majorants.
- Both emissive volumes and volumes with RGB-valued absorption and scattering coefficients are now supported.
- Support for rendering on GPUs is available on systems that have CUDA and OptiX.
- The GPU path provides all of the functionality of the CPU-based VolPathIntegrator, including volumetric scattering, subsurface scattering, all of pbrt's cameras, samplers, shapes, lights, materials and BxDFs, etc.
- Performance is substantially better than rendering on the CPU.
- New BxDFs and Materials
- The provided BxDFs and Materials have been redesigned to be more closely tied to physical scattering processes, along the lines of Mitsuba's materials. (Among other things, the kitchen-sink UberMaterial is now gone.)
- Measured BRDFs are now represented using Dupuy and Jakob's approach.
- Scattering from layered materials is accurately simulated using Monte Carlo random walks (after Guo et al. 2018).
- A variety of light sampling improvements have been implemented.
- "Many-light" sampling is available via light BVHs (Conty and Kulla 2018).
- Solid angle sampling is used for triangle (Arvo 1995) and quadrilateral (Ureña et al. 2013) light sources.
- A single ray is now traced for both indirect lighting and BSDF-sampled direct-lighting.
- Warp product sampling is used for approximate cosine-weighted solid angle sampling (Hart et al. 2019).
- An implementation of Bitterli et al.'s environment light portal sampling technique is included.
- Rendering can now be performed in absolute physical units with modeling of real cameras as per Langlands and Fascione 2020.
- And also...
- Various improvements have been made to the Sampler classes, including better randomization and a new sampler that implements Ahmed and Wonka's blue noise Sobol' sampler.
- A new GBufferFilm that provides position, normal, albedo, etc., at each pixel is now available. (This is particularly useful for denoising and ML training.)
- Optional path regularization.
- A bilinear patch primitive has been added (Reshetov 2019).
- Various improvements to ray–shape intersection precision.
- Most of the low-level sampling code has been factored out into stand-alone functions for easier reuse. Also, functions that invert many sampling techniques are provided.
- Unit test coverage has been substantially increased.
File format and scene description
We have tried to keep the scene description file format as close to pbrt-v3's as possible. However, progress in other parts of the system required changes to the scene description format. pbrt now provides an --upgrade command-line option that can usually automatically update pbrt-v3 scene files for use with pbrt-v4. See the pbrt-v4 File Format documentation for more information.
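For example, a pbrt-v3 scene file might be upgraded with an invocation along these lines, with the updated scene written to standard output (the filenames here are placeholders):
$ pbrt --upgrade scene-v3.pbrt > scene-v4.pbrt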
Images that encode directional distributions (such as environment maps) should now be represented using Clarberg's equal-area mapping. pbrt's imgtool utility provides a makeequiarea operation that converts equirectangular environment maps (as used in pbrt-v3) to this parameterization.
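As a sketch, converting an equirectangular environment map might look like the following; the --outfile option name is an assumption here, and imgtool help makeequiarea gives the authoritative usage:
$ imgtool makeequiarea sky-latlong.exr --outfile sky-equalarea.exr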
Working with the code
Building pbrt
Please see the README.md file in the pbrt-v4 distribution for general information about how to check out and compile the system.
Porting to different targets
pbrt should compile out of the box for reasonably recent versions of Linux, FreeBSD, OpenBSD, OS X, and Windows. A C++ compiler with support for C++17 is required.
The CMakeLists.txt file does its best to automatically determine the capabilities and limitations of the system's compiler and to determine which header files are available. If pbrt doesn't build out of the box on your system and you're able to figure out the changes needed in the CMakeLists.txt file, we'd be delighted to receive a github pull request. Alternatively, open a bug in the issue tracker that includes the compiler output from your failed build and we'll try to help get it running.
Note that if extensive changes to pbrt are required to build it on a new target, we may not accept the pull request, as it’s also important that the source code on github be as close as possible to the source code in the physical book.
Debugging
Debugging a ray tracer can be its own special kind of fun. When the system crashes, it may take hours of computation to reach the point where the crash occurs. When an incorrect image is generated, chasing down why the image is usually, but not always, correct can be a very tricky exercise.
When trouble strikes, it’s usually best to start by rendering the scene again using a debug build of pbrt. Debug builds not only include debugging symbols and forgo aggressive optimization (so that the program can be effectively debugged in a debugger), but also enable more runtime assertions, which may help narrow down the issue. We find that debug builds are generally three to five times slower than optimized builds; thus, if you’re not debugging, make sure you’re not using a debug build! (See the pbrt-v4 README.md file for information about how to create a debug build.)
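With a CMake-based build, a debug build can typically be configured along these lines (a minimal sketch; the build directory name is arbitrary, and the README.md describes the project's recommended invocation):
$ cmake -S . -B build-debug -DCMAKE_BUILD_TYPE=Debug
$ cmake --build build-debug --parallel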
One of the best cases for debugging is a failing assertion. This at least gives an initial clue to the problem: assuming that the assertion is not itself buggy, the debugging task is just to figure out the sequence of events that led to it failing. In general, we’ve found that taking the time to add carefully-considered assertions to the system more than pays off by reducing the time spent working backward when things go wrong.
There are a number of improvements in pbrt-v4 that make debugging easier than it was in previous versions:
- The renderer is now deterministic, which means that if one renders a crop window of an image (or even a single pixel), a bug that manifested itself when rendering the full image should still appear when a small part of the image is rendered. The --pixel and --pixelbounds command-line options can be used to isolate small regions of images for debugging (see the example after this list).
- If pbrt crashes or an assertion fails during rendering, the error message will often include directions to re-run the renderer with a specified --debugstart command-line option. Doing so will cause pbrt to trace just the single ray path that failed, which often makes it possible to quickly reproduce a bug or test a fix.
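For example, a single pixel or a small region of the image might be rendered with invocations along the following lines (the exact coordinate syntax is an assumption here; pbrt --help lists the authoritative forms):
$ pbrt --pixel 120,200 scene-v4.pbrt
$ pbrt --pixelbounds 100,140,180,220 scene-v4.pbrt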
If the system crashes outright (e.g., with a segmentation violation), then the issue is likely corruption in the memory heap or another problem related to dynamic memory management. For these sorts of bugs, we've found valgrind and address sanitizer to be effective.
If the crash happens intermittently and especially if it doesn't present itself when running with a single thread (--nthreads=1), the issue is likely due to a race condition or another issue related to parallel execution. We have found the helgrind tool to be very useful for debugging threading-related bugs. (See also Thread Sanitizer.)
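A sketch of such a run (expect a substantial slowdown, so use a small crop window or a low sampling rate; the scene filename is a placeholder):
$ valgrind --tool=helgrind pbrt --nthreads=4 scene-v4.pbrt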
For help with chasing down more subtle errors, most of the classes in the system provide a ToString() method that returns a std::string describing the values stored in an object. This can be useful when printing out debugging information. If you find it useful to add these methods to a class that doesn’t currently have them implemented, please send us a pull request with the corresponding changes so that they can be made available to other pbrt users.
Debugging on the GPU can be more difficult than on the CPU due to the massive parallelism of GPUs and more limited printing and logging capabilities. In this case, it can be useful to try rendering the scene on the CPU using the --wavefront command-line option; this causes the CPU to run the same code as the GPU. The program's execution may not follow exactly the same path and compute the same results, however, due to slight differences in intersection points returned by the GPU ray tracing hardware compared to pbrt's CPU-based ray intersection code.
Unit tests
We have written unit tests for some parts of the system (primarily for new functionality added with pbrt-v4, but some to test preexisting functionality). Running the pbrt_test executable, which is built as part of the regular build process, causes all tests to be executed. Unit tests are written using the Google C++ Testing Framework, which is included with the pbrt distribution. See the Google Test Primer for more information about how to write new tests.
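For example, all tests can be run directly, and a subset can be selected with Google Test's standard --gtest_filter option (the filter pattern below is hypothetical):
$ ./pbrt_test
$ ./pbrt_test --gtest_filter='Sampling*'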
We have found these tests to be quite useful when developing new features, testing the system after code changes, and when porting the system to new targets. We are always happy to receive pull requests with additions to the system’s tests.
Pull requests
We’re always happy to get pull requests or patches that improve pbrt. However, we are unlikely to accept pull requests that significantly change the system’s structure, as we don’t want the “master” branch to diverge too far from the contents of the book. (In such cases, however, we certainly encourage you to maintain a separate fork of the system on github and to let people know about it on the pbrt mailing list.)
Pull requests that fix bugs in the system or improve portability are always welcome.
Rendering with pbrt
Example scenes
See the resources page for links to a variety of sources of interesting scenes to render with pbrt-v4.
Choosing an integrator
pbrt-v4 provides approximately ten choices for the "integrator" that is used to compute solutions to the rendering equation. In most cases, the default "volpath" integrator is the most effective choice; it handles complex direct and indirect lighting and offers state-of-the-art algorithms for rendering participating media. (When rendering on the GPU or using the "wavefront" integrator on the CPU, there is no choice of integrator and the integrator used has equivalent functionality to the "volpath" integrator).
In scenes with focused indirect light due to specular reflection (caustics) or with other forms of difficult-to-sample indirect lighting, the "bdpt" integrator, which uses bidirectional path tracing, may be a better choice. However, its sampling algorithms for volumetric media (and especially chromatic volumetric media) are not as good as those of the "volpath" integrator. Further, it has the disadvantage that its performance doesn't scale well as the maximum depth parameter is increased since the number of shadow rays traced to connect the camera and light paths grows quadratically with path length.
For scenes with especially challenging indirect lighting or caustics, the "mlt" integrator may be more effective than the "bdpt" integrator. It applies Metropolis sampling algorithms, which allow the reuse of light-carrying paths; this can improve results when such paths are challenging to sample. However, this benefit comes at the cost of increased correlation between paths, which can manifest itself as low-frequency noise in images. Further, the "mlt" integrator is built on top of the "bdpt" integrator and so inherits its shortcomings with respect to chromatic media.
The "sppm" integrator is an alternative to the "bdpt" integrator for scenes with tricky indirect lighting and caustics; it is based on the stochastic progressive photon mapping algorithm. When it is used, error in images is manifested with low-frequency noise at low sampling rates. This integrator does not support volumetric scattering.
The remaining integrators are primarily useful for pedagogical purposes or as baselines for evaluating other integration algorithms.
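Whichever integrator is chosen, it is selected via the Integrator directive in the scene description file; a minimal example (the maximum path depth value here is arbitrary):
Integrator "volpath" "integer maxdepth" [ 8 ]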
Choosing a sampler
pbrt-v4 also provides a variety of samplers that are used by integrators to generate sample points for Monte Carlo integration. When rendering complex scenes, most of them give similar results, especially at higher sampling rates.
The default "zsobol" sampler is especially effective at low sampling rates; it decorrelates sample values at nearby pixels which tends to cause error in the image to have a "blue noise" (i.e., high frequency) distribution. This tends to be more visually pleasing to human observers than lower-frequency noise and is generally more friendly input to provide to denoising algorithms.
If higher sampling rates are to be used, the "halton" or "sobol" sampler is likely to give slightly better results. Note that the "independent" and "stratified" samplers should not generally be used except as a baseline for comparing the performance of more sophisticated samplers.
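Samplers are likewise selected in the scene description; for example:
Sampler "zsobol" "integer pixelsamples" [ 64 ]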
GPU rendering
If your system has a supported GPU and if pbrt was compiled with GPU support, the GPU rendering path can be selected using the --gpu command-line option. When used, any integrator specified in the scene description file is ignored, and an integrator that matches the functionality of the VolPathIntegrator on the CPU is used: unidirectional path tracing with state-of-the-art support for volumetric scattering.
The GPU rendering path can also execute on the CPU: specify the --wavefront command-line option to use it. This capability can be especially useful for debugging. We note, however, that execution on the CPU and GPU will not give precisely the same results, largely due to ray-triangle intersections being computed using different algorithms, leading to minor floating-point differences.
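For example (the scene filename is a placeholder):
$ pbrt --gpu scene-v4.pbrt
$ pbrt --wavefront scene-v4.pbrt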
Interactive rendering
pbrt-v4 provides an interactive rendering mode that is enabled using the --interactive command-line option. When it is used, pbrt opens a window and displays the image as it is rendered. Rendering restarts from the first sample whenever the camera moves. pbrt exits as usual, writing out the final image, once the specified number of pixel samples have been taken. (Thus, you may want to specify a very large number of pixel samples if you do not want pbrt to exit.)
The following keyboard controls are provided:
- w, a, s, d: move the camera forward and back, left and right.
- q, e: move the camera down and up, respectively.
- Arrow keys: adjust the camera orientation.
- B, b: respectively increase and decrease the exposure ("brightness").
- c: print the transformation matrix for the current camera position.
- -, =: respectively decrease and increase the rate of camera movement.
Note that using many CPU cores, using pbrt's GPU rendering path, or rendering low resolution images will be necessary for interactive performance in practice.
Working with images
pbrt is able to write images in a variety of formats, including OpenEXR, PNG, and PFM. We recommend the use of OpenEXR if possible, as it is able to represent high dynamic range images and has the flexibility to encode images with additional geometric information at each pixel as well as spectral images (see below).
While many image viewers can display RGB OpenEXR images, it is worthwhile to use a capable viewer that allows close inspection of pixel values, makes it easy to determine the coordinates of a pixel, and supports display of additional image channels. We have found Thomas Müller's tev image viewer to be an excellent tool for working with images rendered by pbrt; it runs on Windows, Linux, and OSX.
The pbrt distribution also includes a command-line program for working with images, imgtool. It provides a range of useful functionality, including conversion between image formats and various ways of computing the error of images with respect to a reference. Run imgtool to see a list of the commands it offers. Further documentation about a particular command can be found by running, e.g., imgtool help convert.
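As a sketch, computing image error might look like the following; the command and option names shown are assumptions here, so check imgtool help diff for the authoritative usage:
$ imgtool diff --reference reference.exr --metric MSE render.exr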
"Deep" images with auxiliary information
When the "gbuffer" film is used, pbrt will write out EXR images that include the following image channels. The auxiliary information they include is especially useful for denoising rendered images and for the use of rendered images for ML training:
- {R,G,B}: pixel color (as is also output by the regular "rgb" film).
- Albedo.{R,G,B}: red, green, and blue albedo of the first visible surface
- P{x,y,z}: x, y, and z components of the position.
- u, v: u and v coordinates of the surface parameterization.
- dzd{x,y}: partial derivatives of camera-space depth z with respect to raster-space x and y.
- N{x,y,z}: x, y, and z components of the geometric surface normal.
- Ns{x,y,z}: x, y, and z components of the shading normal, including the effect of interpolated per-vertex normals and bump and/or normal mapping, if present.
- Variance.{R,G,B}: sample variance of the red, green, and blue pixel color.
- RelativeVariance.{R,G,B}: relative sample variance of red, green, and blue.
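A minimal sketch of selecting this film in the scene description (the output filename is arbitrary):
Film "gbuffer" "string filename" [ "render.exr" ]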
Spectral output
The "spectral" film records images where a specified range of wavelengths is discretized into buckets the radiance in each wavelength range is recorded separately. The images are then stored using OpenEXR based on an encoding proposed by Fichet et al. (It also stores "R", "G", and "B" color channels so that the images it generates can be viewed in non-spectral-aware image viewers.)
Converting scenes to pbrt’s format
For scenes in a format that can be read by assimp, we have found that converting them to pbrt's format using assimp's pbrt-v4 exporter usually works well. It is the preferred approach for formats like FBX. See also the scene exporters section in the resources page for information about exporters from specific modeling and animation systems.
However, a scene exported using assimp or one of the other exporters is unlikely to render beautifully at first. Here are some suggestions for how to take an initial export and turn it into something that looks great.
First, you may find it useful to run
$ pbrt --toply scene.pbrt > newscene.pbrt
This will convert triangle meshes into more compact binary PLY files, giving you a much smaller pbrt scene file to edit and a scene that will be faster to parse, leading to shorter start-up times.
The Camera
Next, if the exporter doesn’t include camera information, find a good view for the camera. If your computer has sufficient performance, the --interactive option to pbrt allows you to interactively position the camera and then print the transformation matrix that positions it (see above).
Lacking interactive rendering, the “spherical” camera (which renders an image in all directions) can be useful for orienting yourself and for finding a good initial position for the camera. (Setting the “string mapping” parameter to “equirectangular” gives a spherical image that is easier to reason about than the default equal-area mapping.) Keep rendering images and adjusting the camera position to taste. (For efficiency, you may want to use as few pixel samples as you can tolerate and learn to squint and interpret noisy renderings!) Then, you can use the camera position you’ve chosen as the basis for specifying a LookAt transformation for a more conventional camera model.
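As a sketch, the exploratory spherical camera and a conventional camera based on a chosen viewpoint might look like the following (the numeric values are placeholders; use one or the other, not both):
# exploratory camera for finding a viewpoint
Camera "spherical" "string mapping" [ "equirectangular" ]

# conventional camera once a viewpoint has been chosen
LookAt 3 4 1.5    # eye position
       0.5 0.5 0  # point looked at
       0 0 1      # up vector
Camera "perspective" "float fov" [ 45 ]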
Lighting
If the lighting hasn't been set up, it can be helpful to have a point light source at the camera’s position while you're placing the camera. Adding a light source like the following to your scene file does this in a way that ensures that the light moves appropriately to wherever the camera has been placed. (You may need to scale the intensity up or down for good results; remember the inverse-square falloff!)
AttributeBegin
    CoordSysTransform "camera"
    LightSource "point" "rgb I" [ 10 10 10 ]
AttributeEnd
Once the camera is placed, we have found that the next useful step is to set up approximate light sources. For outdoor scenes, a good HDR environment map is often all that is needed for lighting. (You may want to consider using imgtool makesky to make a realistic HDR sky environment map, or see polyhaven, which has a wide variety of high-quality and freely-available HDR environment maps.)
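A sketch of generating such a sky map; the --elevation and --outfile option names are assumptions here, so run imgtool help makesky for the authoritative usage:
$ imgtool makesky --elevation 20 --outfile sky.exr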
For indoor scenes, you may want a combination of an environment map for the outside and point and/or area light sources for interior lights. You may find it useful to examine the scene in the modeling system that it came from to determine which geometry corresponds to area light sources and to try adding AreaLightSource properties to that geometry. (Note that in pbrt, area light sources only emit light on the side that the surface normal points toward; you may need a ReverseOrientation directive to make the light come out in the right direction.)
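A sketch of turning a simple quad into an area light (the geometry and radiance values are placeholders):
AttributeBegin
    AreaLightSource "diffuse" "rgb L" [ 10 10 10 ]
    # ReverseOrientation   # uncomment if the light faces the wrong way
    Shape "trianglemesh"
        "integer indices" [ 0 1 2  0 2 3 ]
        "point3 P" [ -1 -1 2   1 -1 2   1 1 2   -1 1 2 ]
AttributeEnd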
Materials
Given good lighting, the next step is to tune the materials (or set them from scratch). It can be helpful to pick a material and set it to an extreme value (such as a “diffuse” material that is pure red) and render the scene; this quickly shows which geometric models have that material associated with them.
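For example, a pure red diffuse material along these lines makes the corresponding geometry easy to spot:
Material "diffuse" "rgb reflectance" [ 1 0 0 ]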
To find out which material is visible in a given pixel, the --pixelmaterial command-line option can be useful: it traces a ray through the center of the specified pixel and prints geometric information about each intersection along it, together with its material, from front to back. For a material specified using Material in the scene description file, the output will be of the form:
Intersection depth 1
World-space p: [ 386.153, 381.153, 560 ]
World-space n: [ 0, 0, 1 ]
World-space ns: [ 0, 0, 1 ]
Distance from camera: 1070.9781

[ DiffuseMaterial displacement: (nullptr) normalMap: (nullptr)
  reflectance: [ SpectrumImageTexture filename: ../textures/grid.exr
[...further output elided...]
In this case we can see that the “diffuse” material was used with no bump or normal map, and ../textures/grid.exr as a texture map for the reflectance.
If a material has been defined using pbrt's MakeNamedMaterial directive, the output will be along the lines of:
Intersection depth 1
World-space p: [ 0.95191336, 1.1569455, 0.50672007 ]
World-space n: [ 0.032488506, -0.851931, -0.5226453 ]
World-space ns: [ 0.02753538, -0.8475513, -0.5299988 ]
Distance from camera: 6.3228507

Named material: MeshBlack_phong_SG
As you figure out which material names correspond to which geometry, watch for objects that are missing texture maps, add Texture specifications for them, and use those textures in their materials. (The good news is that such objects generally do have correct texture coordinates, so this usually goes smoothly. In some cases it may be necessary to specify a custom UV mapping like "planar", "spherical", or "cylindrical".)
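A sketch of hooking an image texture up to a material (the texture name and filename are placeholders):
Texture "wood-diffuse" "spectrum" "imagemap" "string filename" [ "textures/wood.png" ]
Material "diffuse" "texture reflectance" [ "wood-diffuse" ]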