
Exploiting coherence at GDC 2009

A few months back, I wrote a blog post discussing techniques which exploit coherence, either spatial (like multiresolution rendering) or temporal (like reprojection caching).

Both of these were represented at GDC this year. Jeremy Shopf presented a talk on Mixed Resolution Rendering, and the ambient occlusion technique described in Rendering Techniques in Gears of War 2 (available on the GDC Vault site) made use of both methods: the ambient occlusion factors were rendered at a downsampled resolution, and reprojection caching was used to reduce temporal aliasing. This is the first use of reprojection caching I have seen in a shipping game.

In my previous blog post, I was skeptical of reprojection approaches, since it seemed to me that as an optimization method they did not address the worst case (where the camera angle changes abruptly).  Using such approaches to improve quality instead (as Epic did) makes more sense.

Exploiting temporal and spatial coherence

Exploiting temporal and spatial coherence is one of the most powerful tools available to a graphics programmer. Several recent papers explore this area. Accelerating Real-Time Shading with Reverse Reprojection Caching (GH 2007, available here) uses reverse reprojection to reuse values cached from previous frames. An Improved Shading Cache for Modern GPUs (GH 2008, available here) analyzes the performance characteristics of this technique and proposes some efficiency improvements.
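To make the basic idea concrete, here is a minimal CPU-style sketch of a reverse reprojection cache lookup. It assumes a column-vector convention, a [0,1] cache coordinate space, and a simple depth tolerance for the disocclusion test; the cache and shading functions are hypothetical hooks of mine, not APIs from the papers.

#include <cmath>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };   // row-major storage, column-vector convention

// Transform a point (w should be 1 on input) by a 4x4 matrix.
Vec4 transform(const Mat4& m, const Vec4& p) {
    return { m.m[0][0]*p.x + m.m[0][1]*p.y + m.m[0][2]*p.z + m.m[0][3]*p.w,
             m.m[1][0]*p.x + m.m[1][1]*p.y + m.m[1][2]*p.z + m.m[1][3]*p.w,
             m.m[2][0]*p.x + m.m[2][1]*p.y + m.m[2][2]*p.z + m.m[2][3]*p.w,
             m.m[3][0]*p.x + m.m[3][1]*p.y + m.m[3][2]*p.z + m.m[3][3]*p.w };
}

struct CacheSample { float value; float depth; };

// Hypothetical hooks: the value and depth stored last frame at normalized
// screen coordinates (u, v), and the full shading computation.
CacheSample sampleCache(float u, float v);
float computeExpensiveShading(const Vec4& worldPos);

// Reverse reprojection: find where this surface point fell on screen last
// frame; if the cached depth agrees, reuse the cached value, otherwise
// treat it as a cache miss and recompute.
float shadeWithReprojectionCache(const Vec4& worldPos, const Mat4& prevViewProj)
{
    Vec4 clip = transform(prevViewProj, worldPos);
    if (clip.w <= 0.0f)                                // behind the previous camera
        return computeExpensiveShading(worldPos);

    float u = 0.5f * (clip.x / clip.w) + 0.5f;         // NDC -> [0,1]
    float v = 0.5f * (clip.y / clip.w) + 0.5f;
    float depth = clip.z / clip.w;

    if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f)  // off screen last frame
        return computeExpensiveShading(worldPos);

    CacheSample s = sampleCache(u, v);
    const float kDepthTolerance = 1e-3f;               // assumed disocclusion threshold
    if (std::fabs(s.depth - depth) < kDepthTolerance)
        return s.value;                                // cache hit: reuse
    return computeExpensiveShading(worldPos);          // disoccluded: recompute
}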

Such caching schemes involve analyzing each pixel shader to find appropriate values to cache. Care must be taken to use values which are expensive to compute but have low directional dependence. Automated Reprojection-Based Pixel Shader Optimization (to be published at SIGGRAPH Asia 2008, available here) proposes a method to automate this process. Another option is to apply reprojection caching to a specific, well-defined case like shadow mapping. This is discussed in Pixel-Correct Shadow Maps with Temporal Reprojection and Shadow-Test Confidence (EGSR 2007, paper web page). This paper was also mentioned in our book.
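As a rough illustration of that split (my own example, not taken from the papers), view-independent terms such as diffuse lighting or ambient occlusion are natural candidates for caching, while view-dependent terms such as specular should be recomputed every frame:

// Illustrative split of a shading computation into a cacheable,
// view-independent part and a view-dependent part that is recomputed
// every frame. The functions are hypothetical placeholders.

struct SurfacePoint { /* position, normal, material, ... */ };

float expensiveDiffuseAndAO(const SurfacePoint& p);   // slow, view-independent
float cheapSpecular(const SurfacePoint& p,            // fast, view-dependent
                    const float viewDir[3]);

float shade(const SurfacePoint& p, const float viewDir[3],
            bool cacheHit, float cachedDiffuse)
{
    // Reuse the expensive, slowly varying term when the reprojection
    // cache has a valid entry; otherwise recompute it (and, elsewhere,
    // write it back into the cache for the next frame).
    float diffuse = cacheHit ? cachedDiffuse : expensiveDiffuseAndAO(p);

    // The specular term depends strongly on the view direction, so
    // reusing it across frames would cause visible errors; always
    // recompute it.
    return diffuse + cheapSpecular(p, viewDir);
}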

Personally I’m a bit skeptical of reprojection caching techniques, since whenever the view changes abruptly the cache is completely invalidated, resulting in performance dips. Many applications can’t use acceleration techniques that don’t help worst-case performance. Applications with restricted camera motion may certainly benefit. Enhancing these techniques with fallbacks which degrade quality (instead of performance) in cases of abrupt camera motion may make them more generally applicable.
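One way such a fallback might look, purely as a sketch of my own: on a cache miss, substitute a cheap approximation rather than paying for the full recomputation, so an abrupt camera change costs image quality for a frame or two instead of frame rate. The functions below are hypothetical placeholders.

// Quality-degrading fallback for cache misses (illustrative only).
// On a miss we fall back to a cheap approximation instead of the full
// evaluation, so per-pixel cost stays bounded even when the whole
// cache is invalidated at once.

float fullAmbientOcclusion(int x, int y);   // expensive, hypothetical
float cheapAmbientTerm(int x, int y);       // e.g. a constant or low-res term

float ambientOcclusionWithFallback(int x, int y, bool cacheHit, float cachedAO)
{
    if (cacheHit)
        return cachedAO;            // coherent case: reuse last frame's value
    return cheapAmbientTerm(x, y);  // incoherent case: degrade quality, not
                                    // performance; refine over the next few
                                    // frames as the cache refills
}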

A different approach is discussed in Geometry-Aware Framebuffer Level of Detail (EGSR 2008, available here). Here the idea is to render certain quantities into a lower-resolution frame buffer and use a joint bilateral filter to resample them during final rendering. As with the previous technique, care must be taken in selecting intermediate values; they should both be expensive to compute and vary slowly over the screen. This powerful acceleration technique was also used in the paper Image-Based Proxy Accumulation for Real-Time Soft Global Illumination (PG 2007, available here). Variations of this technique have been used in games, with perhaps the most common case being particle rendering, as the Level of Detail blog points out in this interesting post on the subject. The same blog also has insightful posts on many of the papers mentioned here, as well as another related paper.
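For reference, here is a minimal sketch of a joint bilateral upsample of a low-resolution buffer guided by full-resolution depth; the buffer layout, the 2x2 neighborhood, and the depth falloff constant are assumptions of mine rather than details from the paper.

#include <cmath>
#include <vector>

// Joint bilateral upsample: resample a low-resolution value buffer at full
// resolution, weighting each coarse sample by how well its depth matches
// the full-resolution depth at the target pixel. This keeps the cheap
// low-res values from bleeding across geometric edges.

struct Buffer2D {
    int width, height;
    std::vector<float> data;
    float at(int x, int y) const { return data[y * width + x]; }
};

float jointBilateralUpsample(const Buffer2D& lowResValue,
                             const Buffer2D& lowResDepth,
                             float fullResDepth,
                             float u, float v)        // [0,1] screen coordinates
{
    // Nearest low-res texel and its 2x2 neighborhood.
    float fx = u * lowResValue.width  - 0.5f;
    float fy = v * lowResValue.height - 0.5f;
    int x0 = (int)std::floor(fx), y0 = (int)std::floor(fy);

    const float kDepthFalloff = 100.0f;   // assumed edge-stopping constant
    float sum = 0.0f, weightSum = 0.0f;

    for (int dy = 0; dy <= 1; ++dy) {
        for (int dx = 0; dx <= 1; ++dx) {
            int x = x0 + dx, y = y0 + dy;
            if (x < 0 || y < 0 || x >= lowResValue.width || y >= lowResValue.height)
                continue;

            // Bilinear weight (spatial term).
            float wx = (dx == 0) ? 1.0f - (fx - x0) : (fx - x0);
            float wy = (dy == 0) ? 1.0f - (fy - y0) : (fy - y0);

            // Depth similarity weight (range term): samples whose depth
            // differs from the full-resolution depth contribute little.
            float dz = lowResDepth.at(x, y) - fullResDepth;
            float wDepth = 1.0f / (1.0f + kDepthFalloff * dz * dz);

            float w = wx * wy * wDepth;
            sum += w * lowResValue.at(x, y);
            weightSum += w;
        }
    }
    return (weightSum > 0.0f) ? sum / weightSum : 0.0f;
}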