Monthly Archives: November 2008

Exploiting temporal and spatial coherence

Exploiting temporal and spatial coherence is among the most powerful tools available to a graphics programmer. Several recent papers explore this area. Accelerating Real-Time Shading with Reverse Reprojection Caching (GH 2007, available here) uses reverse reprojection to reuse values cached from previous frames. An Improved Shading Cache for Modern GPUs (GH 2008, available here) analyzes the performance characteristics of this technique and proposes some efficiency improvements.

Such caching schemes involve analyzing each pixel shader to find appropriate values to cache. Care must be taken to use values which are expensive to compute but have low directional dependence. Automated Reprojection-Based Pixel Shader Optimization (to be published at SIGGRAPH Asia 2008, available here) proposes a method to automate this process. Another option is to apply reprojection caching to a specific, well-defined case like shadow mapping. This is discussed in Pixel-Correct Shadow Maps with Temporal Reprojection and Shadow-Test Confidence (EGSR 2007, paper web page). This paper was also mentioned in our book.
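As a rough illustration of the reverse-reprojection idea (not any of these papers' actual implementations; the row-major matrix layout, the cache structure, and the depth-comparison test below are all simplifying assumptions of mine), a cache lookup might be sketched like this:

```python
# Sketch of a reverse-reprojection cache lookup: transform the current
# world-space position by LAST frame's view-projection matrix, and if it
# landed on screen at a matching depth, reuse the cached shading value.

def transform(m, p):
    """Apply a 4x4 row-major matrix to point (x, y, z, 1); return
    normalized device coordinates (x, y, depth) after perspective divide."""
    x, y, z = p
    v = [m[r][0]*x + m[r][1]*y + m[r][2]*z + m[r][3] for r in range(4)]
    w = v[3]
    return (v[0]/w, v[1]/w, v[2]/w)

def reproject_lookup(world_pos, prev_view_proj, cache, depth_eps=1e-3):
    """Return the cached shading value for world_pos if it was visible
    last frame (depth agrees within depth_eps), else None (cache miss)."""
    sx, sy, sz = transform(prev_view_proj, world_pos)
    if not (-1.0 <= sx <= 1.0 and -1.0 <= sy <= 1.0):
        return None                       # was off-screen last frame
    w, h = cache['size']                  # map NDC to an integer texel
    tx = min(int((sx * 0.5 + 0.5) * w), w - 1)
    ty = min(int((sy * 0.5 + 0.5) * h), h - 1)
    cached_depth, value = cache['texels'][ty][tx]
    if abs(cached_depth - sz) > depth_eps:
        return None                       # occluded or disoccluded: miss
    return value
```

On a miss (disocclusion, off-screen history, or depth mismatch) the shader falls back to computing the value from scratch, which is exactly why worst-case performance is the concern raised below.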

Personally I’m a bit skeptical of reprojection caching techniques: whenever the view changes abruptly, the cache is completely invalidated, resulting in performance dips. Many applications can’t use acceleration techniques that don’t help worst-case performance. Applications with restricted camera motion may certainly benefit. Enhancing these techniques with fallbacks that degrade quality (instead of performance) during abrupt camera motion may make them more generally applicable.

A different approach is discussed in Geometry-Aware Framebuffer Level of Detail (EGSR 2008, available here). Here the idea is to render certain quantities into a lower-resolution frame buffer, using a joint bilateral filter to resample them during final rendering. As with the previous technique, care must be taken in selecting intermediate values; they should be both expensive to compute and slowly varying across the screen. This powerful acceleration technique was also used in the paper Image-Based Proxy Accumulation for Real-Time Soft Global Illumination (PG 2007, available here). Variations of this technique have been used in games, with perhaps the most common case being particle rendering, as the Level of Detail blog points out in this interesting post on the subject. The same blog also has insightful posts on many of the papers mentioned here, as well as another related paper.
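To give a flavor of the joint bilateral resampling step, here is a minimal 1D sketch (the Gaussian weights and the sigma parameters are illustrative assumptions of mine, not the paper's exact filter): each full-resolution pixel blends nearby low-resolution samples, weighted by both spatial proximity and similarity of the full-resolution depth, so cheap values don't bleed across geometric edges.

```python
import math

def joint_bilateral_upsample_1d(low_vals, low_depth, full_depth,
                                sigma_s=1.0, sigma_d=0.1):
    """Upsample low_vals (computed at half resolution, with depths
    low_depth) to the length of full_depth. Each output pixel is a
    weighted average of low-res samples; the weight combines spatial
    distance with depth similarity, so values respect depth edges."""
    out = []
    for i, fd in enumerate(full_depth):
        x = i / 2.0                       # position in low-res coordinates
        wsum = vsum = 0.0
        for j, (v, d) in enumerate(zip(low_vals, low_depth)):
            ws = math.exp(-((x - j) ** 2) / (2 * sigma_s ** 2))   # spatial
            wd = math.exp(-((fd - d) ** 2) / (2 * sigma_d ** 2))  # depth
            w = ws * wd
            wsum += w
            vsum += w * v
        out.append(vsum / wsum if wsum > 0 else 0.0)
    return out
```

With a depth discontinuity in the scene, a full-resolution pixel on the near side of the edge ignores low-resolution samples from the far side, which is what distinguishes this from a plain bilinear upsample.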

This and That

I’ll someday run out of titles for these occasional summaries of new(ish) resources, but in the meantime, this one’s “This and That”.

Christer Ericson’s article on grouping and sorting objects for rendering is excellent. It focuses on packing rendering state into sort keys, with concepts that can be applied in immediate mode as well.

One element that continually renews the field of computer graphics is that the rules change. This article is about taking Quake 2 (from 1997) and porting it to a modern GPU.

If you haven’t seen it yet, Farbrausch’s demo “debris” is truly impressive. It’s only 183,462 bytes, and is absolutely packed with procedural content. Download here (last link works). Or be lazy and watch on YouTube.

NVIDIA’s pulled together its resources for shadow generation and ambient occlusion all onto one handy page (plus ray tracing – just one entry so far, but it’s a good one).

How to deal with various rendering paradigms on multiple platforms? GRAMPS looks intriguing.

Gamasutra put a useful Game Developer article online, all about commercial middleware game engines currently available.

OpenGL will always exist, since Macs and Linux need it. It’s easier to use in college courses because of its clarity and readability. But otherwise the pendulum’s swung far towards DirectX. Phil Taylor comments on and gives some historical context to the controversy around the latest release, OpenGL 3.0.

A nice trend for OpenGL is that people continue to write useful bits, such as GLee, which manages extensions.

New info on older effects: blur and glow, volumetric clouds, and particle systems.

The glorious teapot. I like “a wireframe view”. Yes, the real thing is taller than the synthetic model, as the model makers were compensating for non-square pixels.

“What’s the future hold?” is always a fun topic, one we’ve used to end each edition of our book. I liked this presentation on SlideShare for its sheer “here are a hundred things that hurtle us towards the Singularity” feel, though I don’t buy it for a minute. SlideShare, where it is hosted, is a pleasant medium-attention-span kind of place, with all sorts of random and fun slidesets.

Finally, I am pleased to find that LittleBIGPlanet is just as gorgeous as it looked like it would be. I’ve played it myself for only a bit, but walking by while my kids are playing, I find I have to stop and stare.

Ray Tracing News v. 21 n. 1 is out

I’ve put out the Ray Tracing News for more than 20 years now. New issues come maybe once a year, but there you have it. There’s a little overlap with this blog, but not that much. Find the latest issue here. Now that I’m finally done with this issue I can imagine blogging again (it wasn’t just I3D that was holding me back).

Best Conference Ever

I’ve been busy cochairing the papers program for I3D 2009 (he said casually, knowing he’ll probably never get the opportunity to do so again, being a working stiff and not an academic), but hope to get back to blogging soon. In the meantime, here’s the best conference ever: “Foundations of Digital Games“. I3D’s at the end of February in Boston, vs. April on a cruise ship between Florida and the Bahamas. Why don’t I get invited to help out at conferences like this?