Monthly Archives: August 2008

Drawing Silhouette Edges

With SIGGRAPH, the free release of ShaderX², and the publication of our own 3rd edition, there was much to report, but now things have settled down a bit. The bread-and-butter content of this blog is any new or noteworthy article or demo related to the field, on the assumption that not everyone is tracking all sources of information all the time.

So, if you don’t subscribe to Gamasutra’s free email newsletters, you wouldn’t know of this article: Inking the Cube: Edge Detection with Direct3D 10. It walks through the details of creating geometry for silhouette and crease edges using the geometry shader. To its credit, it also shows the problem with the basic approach: separate silhouette edges can have noticeable join and endcap gaps. One article that addresses this problem:

McGuire, Morgan, and John F. Hughes, “Hardware-Determined Feature Edges,” The 3rd International Symposium on Non-Photorealistic Animation and Rendering (NPAR 2004), pp. 35–47, June 2004.

One minor flaw in the Gamasutra article: the URL to Sarah Tariq’s presentation is broken (I’m writing Gamasutra to ask them to correct it); that presentation is here.
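
If you haven’t seen the basic approach before, the underlying test is simple: an edge of a closed mesh is a silhouette edge when one of its two adjacent triangles faces the viewer and the other faces away. Here is a minimal CPU-side sketch of that test (my own illustration, not the article’s geometry shader code; the vertex layout is hypothetical):

```cpp
struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Unnormalized face normal; normalization is unnecessary for a sign test.
static Vec3 faceNormal(const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    return cross(sub(v1, v0), sub(v2, v0));
}

// An edge is a silhouette edge if its two adjacent triangles
// face opposite ways relative to the eye position.
bool isSilhouetteEdge(const Vec3& eye,
                      const Vec3& v0, const Vec3& v1,  // the shared edge
                      const Vec3& vA, const Vec3& vB)  // the two opposite vertices
{
    Vec3 n1 = faceNormal(v0, v1, vA);
    Vec3 n2 = faceNormal(v1, v0, vB);  // winding keeps both normals outward
    float d1 = dot(n1, sub(eye, v0));  // > 0: triangle 1 faces the eye
    float d2 = dot(n2, sub(eye, v0));  // v0 lies in both planes, so reuse it
    return (d1 > 0.0f) != (d2 > 0.0f);
}
```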

Disk-Based Global Illumination in RenderMan

In Section 9.1 of our book we discuss Bunnell’s disk-based approximation for computing dynamic ambient occlusion and indirect lighting, and mention that this technique was used by ILM when performing renders for the Pirates of the Caribbean films.

Recently, more details on this technique have appeared in a RenderMan Technical Memo called Point-Based Approximate Color Bleeding, available on Pixar’s publication page. Pixar has implemented an interesting global illumination algorithm based in part on Bunnell’s disk approximation, which is used for transfer over intermediate distances. Spherical harmonics are used to approximate distant transfer, and ray tracing is used for transfer between nearby points. This technique is now built into Pixar’s RenderMan and has been used in over 12 films to date, including Pixar’s own WALL-E. It is interesting to see a technique originating in real-time rendering used in film production; the opposite is much more usual. The paper is worth a close read; perhaps someone will close the loop by adapting some of Pixar’s enhancements into new real-time techniques.
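
As a refresher on the real-time ancestor of this technique: Bunnell approximates each surface element by an oriented disk, because a disk has a cheap closed-form approximate form factor. A sketch of the commonly quoted approximation, F ≈ A·cosθE·cosθR / (πd² + A), follows; this is my paraphrase of the GPU Gems 2 formulation, not Pixar’s code, and the Disk struct is made up for illustration:

```cpp
#include <algorithm>
#include <cmath>

struct Disk {
    float nx, ny, nz;  // unit normal
    float px, py, pz;  // center position
    float area;        // disk area
};

// Approximate form factor from emitter disk e to receiver disk r:
// F ~= A * cos(thetaE) * cos(thetaR) / (pi * d^2 + A).
// Adding A in the denominator keeps the estimate bounded as d -> 0.
float diskFormFactor(const Disk& e, const Disk& r)
{
    float dx = r.px - e.px, dy = r.py - e.py, dz = r.pz - e.pz;
    float d2 = dx*dx + dy*dy + dz*dz;
    float d  = std::sqrt(d2);
    if (d == 0.0f) return 0.0f;
    // Cosines of the angles each normal makes with the connecting line,
    // clamped so back-facing disks contribute nothing.
    float cosE = std::max(0.0f,  (e.nx*dx + e.ny*dy + e.nz*dz) / d);
    float cosR = std::max(0.0f, -(r.nx*dx + r.ny*dy + r.nz*dz) / d);
    return e.area * cosE * cosR / (3.14159265f * d2 + e.area);
}
```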

Pixar’s publication page is a valuable resource. The papers span a quarter century, and most of them have been very influential in the field. The first seven papers gave us the Cook-Torrance BRDF, programmable shaders, distributed ray tracing, image compositing, stochastic sampling, percentage-closer filtering, and the REYES rendering architecture (upon which almost all film production renderers are based). The page includes many other important papers, as well as SIGGRAPH course notes and RenderMan Technical Memos.

RT’08 Presentation

The RT’08 symposium is a small conference (160 people, vs. 28,000+ at SIGGRAPH) focused on interactive ray tracing research. It was twice as large as last year’s, due to its co-location with SIGGRAPH. There were no big breakthroughs; rather, people were exploring how best to take advantage of new hardware. Of personal interest, there were also talks on optimizing various acceleration structures. I should be putting out an issue of the Ray Tracing News with a summary pretty soon.

One happy event for me was that my keynote went well, after losing sleep over it the night before. The subject was ray tracing, rasterization, and hardware, past and future. It was one of the best talks I’ve ever given, which is somewhat equivalent to “one of the best films Rob Schneider’s ever made”. Well, maybe better than that. It was fun to dig up provocative quotes and engaging images for the talk; that said, the slideset doesn’t include most of my jokes (though it does include a photo of an object that luckily did not burn down my office’s building). I would have liked to go even further in the direction of more images and less text; it’s a time-consuming task to find good images! I did sometimes break my personal rule of a maximum of six lines per slide, but usually for effect, to overwhelm the viewer with text; my favorite is my “buffers slide”, which lists all the named buffers I could find up to the year 2000.

After the talk Larry Gritz and Dan Wexler pointed me at the new art of Pecha Kucha, where a presentation consists mostly of images and runs at a constant rate. This sounds pretty ideal for most talks. I sometimes find myself distracted between looking at the text on a slide while also trying to listen to the speaker. At least I didn’t show any equations, which are absolute death most of the time. An equation is highly dense information, suitable for a talk only if (a) you are noting what the equation looks like, so people will recognize it later when they read your paper, (b) you actually spend the time to slowly explain each term in the equation and let it sink in, or (c) everyone in the audience already knows the equation (in which case it would be better to just say the equation’s name). Even then, you risk losing much of your audience when you put up an equation. Same rule applies to long shaders or pseudocode samples.

I think this phenomenon of too much information occurs more frequently now than in the past because slidesets often take the place of white papers. This happens quite frequently with GDC, XNA Gamefest, SIGGRAPH class talks, and many other venues. It’s nice that people who can’t attend the talk can at least see the slides, but it leads to slidesets having two purposes: one is to enhance the verbal part of the talk, the other is to reiterate the verbal part of the talk. Enhancement favors succinct bullet items, which can be hard to understand when downloaded. Reiteration helps the downloader, but is either overwhelming (read, or listen?) or boring (OMG he’s reading his slides, line by line) during the talk itself. OK, end of rant.

I’m as guilty as the rest (and, now that this phenomenon has dawned on me, wish I had trimmed back the text in my last talk), but part of the solution is to add at least some further notes (not seen on the screen) with the slides, which I did do with my slideset (the non-PDF version). Better still is to write a blog entry, webpage, or paper about the subject, as PowerPoint is not really a word processor. And of course I probably won’t do so myself for my own talk, as that would take too much time and I don’t think I’d add much to the presentation. But my excuse is that my presentation was more a high-level “soft” talk than anything with a lot of technical chew.

SIGGRAPH 2008: Bilateral Filters

The class “A Gentle Introduction to Bilateral Filtering and its Applications” was very well-presented. Bilateral filters are edge-preserving smoothing filters that have many applications in rendering and computational photography. The basic concept was clearly explained, and then various variants, related techniques, optimized implementations and applications were discussed. The full slides as well as detailed course notes are available here. Currently they are from the SIGGRAPH 2007 course; I assume the 2008 slides will replace them soon.
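
For those who can’t wait for the notes, the core definition is compact enough to give here (this is the standard formulation, in roughly the course’s notation). The filtered value at pixel $p$ is a normalized, doubly weighted average over a neighborhood $S$:

$$BF[I]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\lVert p - q \rVert)\, G_{\sigma_r}(\lvert I_p - I_q \rvert)\, I_q, \qquad W_p = \sum_{q \in S} G_{\sigma_s}(\lVert p - q \rVert)\, G_{\sigma_r}(\lvert I_p - I_q \rvert).$$

The spatial Gaussian $G_{\sigma_s}$ weights by distance, as in an ordinary Gaussian blur, while the range Gaussian $G_{\sigma_r}$ weights by intensity difference, so pixels on the far side of an edge contribute almost nothing; that is what makes the filter edge-preserving.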

A related technique, which appears to have some interesting advantages over the bilateral filter, was presented in a paper this year, titled “Edge-Preserving Decompositions for Multi-Scale Tone and Detail Manipulation”. It presents a novel edge-preserving smoothing operator based on weighted least squares optimization. The paper and various supplementary materials are available here.
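
If I’m reading the paper right, the operator solves a weighted least squares problem of roughly this form (notation mine): given an input image $g$, find the smoothed image $u$ minimizing

$$\sum_p \Big( (u_p - g_p)^2 + \lambda \big( a_{x,p}(g)\, (\partial_x u)_p^2 + a_{y,p}(g)\, (\partial_y u)_p^2 \big) \Big),$$

where the smoothness weights $a_x$, $a_y$ are small where $g$ has large gradients (so edges are preserved) and $\lambda$ controls the overall amount of smoothing. Since the objective is quadratic in $u$, the minimizer is found by solving a single sparse linear system.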

SIGGRAPH 2008: Beyond Programmable Shading Class

This class was about non-traditional processing performed on GPUs, similar to GPGPU but for graphics. As we discuss in the “Futures” chapter at the end of our book, this is a particularly interesting direction of research and may well represent the future of rendering. The recent disclosures on Direct3D 11 Compute Shaders and Larrabee make this a particularly hot topic.

The full course notes are available at the course web site.

The talk by Jon Olick from id Software was perhaps the most interesting. He discussed a sparse voxel octree data structure that is rendered directly using CUDA. This extends id’s MegaTexture idea to geometry, and may very well find its way into id’s next engine in some form.
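
Olick’s slides have the real details; to give a flavor of why sparse voxel octrees are attractive, a node can be made very small by storing a child-existence mask and packing the existing children contiguously. A hypothetical node layout (not id’s actual format):

```cpp
#include <bitset>
#include <cstdint>

// A hypothetical sparse voxel octree node: 8 bits flag which of the
// eight children exist, and one index points at the first child. The
// existing children are stored contiguously in a node pool, so sibling
// k is found by counting set bits below it in the mask.
struct SvoNode {
    uint32_t firstChild;  // index of first existing child in the node pool
    uint8_t  childMask;   // bit i set => child i exists
    uint8_t  r, g, b;     // filtered voxel color, handy for LOD
};

// Pool index of child `octant` (0-7), or -1 if that subtree is empty.
int32_t childIndex(const SvoNode& n, int octant)
{
    if (!(n.childMask & (1u << octant))) return -1;
    // Count existing children that precede this octant.
    uint32_t before = n.childMask & ((1u << octant) - 1);
    int offset = static_cast<int>(std::bitset<8>(before).count());
    return static_cast<int32_t>(n.firstChild + offset);
}
```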

SIGGRAPH 2008: The Authors Meet

All the work on the book was done remotely, via email and CVS. In fact, I had never met Tomas until this morning at SIGGRAPH. Here you can see all three of us, pleased as punch that the book is finally done. Left to right, Eric, Naty, Tomas:

(Eric here. I guess this is a tradition: Tomas and I didn’t meet until after the first edition was published.)

SIGGRAPH 2008: Advances in Real-Time Rendering in 3D Graphics and Games

I attended the “Advances in Real-Time Rendering in 3D Graphics and Games” class today at SIGGRAPH. This is the third year in a row that Natalya Tatarchuk from AMD has organized this class. Each year, different game developers as well as people from the AMD demo team are brought in to talk about graphics, and some of the best real-time stuff at SIGGRAPH in the last two years has been in this course.

Unfortunately, due to LittleBigPlanet crunch, Alex Evans from Media Molecule was unable to give his planned talk, and a different speaker was brought in instead. This was a bit of a bummer, since Alex’s SIGGRAPH 2006 talk was very good and I was hoping to hear more about his unorthodox take on real-time rendering.

The remaining talks were of high quality, including talks by the developers of games such as Halo 3, StarCraft II, and Crysis. Unlike previous years, when it took many weeks for the course notes to become available online, the full course notes are already up on AMD’s Technical Publications page; check them out!

Direct3D 11 Details Part V: Other Features

This grab-bag of a post summarizes the various other features of Direct3D 11 which Microsoft described at Gamefest.

Dynamic shader linkage is supported (similar to the interfaces feature of Cg). This allows for separate light and material shaders to be written and compiled. These are later linked when the shader is set. This offers a solution to the combinatorial explosion resulting from a variety of lights and materials (this explosion, and some other solutions to it, are discussed in section 7.9 of our book).
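
No HLSL syntax was shown, but conceptually this is interface-based polymorphism resolved at bind time rather than at compile time. A C++ analogy of how N lights and M materials stay N + M pieces of code instead of N × M compiled combinations (the names here are mine, not the D3D11 API):

```cpp
#include <cstdio>

struct Vec3 { float r, g, b; };

// Written and "compiled" once each, like separate shader fragments.
struct ILight    { virtual Vec3 radiance() const = 0; virtual ~ILight() {} };
struct IMaterial { virtual Vec3 albedo()   const = 0; virtual ~IMaterial() {} };

struct PointLight : ILight    { Vec3 radiance() const override { return {1.0f, 1.0f, 1.0f}; } };
struct SpotLight  : ILight    { Vec3 radiance() const override { return {1.0f, 1.0f, 0.8f}; } };
struct Plastic    : IMaterial { Vec3 albedo()   const override { return {0.8f, 0.1f, 0.1f}; } };
struct Metal      : IMaterial { Vec3 albedo()   const override { return {0.9f, 0.9f, 0.9f}; } };

// One "shader body" works with any light/material pair; the concrete
// implementations are linked in when the shader is set, not when built.
Vec3 shade(const ILight& l, const IMaterial& m)
{
    Vec3 L = l.radiance(), a = m.albedo();
    return {L.r * a.r, L.g * a.g, L.b * a.b};
}

int main() {
    SpotLight light; Metal metal;
    Vec3 c = shade(light, metal);  // any of N x M pairs from N + M pieces
    std::printf("%.2f %.2f %.2f\n", c.r, c.g, c.b);
}
```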

Two new compressed texture formats have been added. BC6 supports high dynamic range RGB textures, using one byte per texel (instead of six bytes for an RGB 16-bit float texture). BC7 supports low dynamic range RGB or RGBA textures. It also uses one byte per texel (like DXT5/BC3), but offers significantly higher quality than the texture formats available in D3D10. Both formats offer multiple block types; the compression tool selects the appropriate block type for each block based on its content.

The block compression formats in D3D9 and D3D10 are based on the idea that each 4×4 texel block has all its values arranged along a single line, and the bits for each texel encode where on the line it is placed. For example, in DXT1/BC1, a line in RGB space is represented by two RGB endpoints, and each texel gets two bits to select one of four points along the line.
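
To make the “points along a line” idea concrete, here is a sketch of BC1 decoding for the common opaque mode (c0 > c1); a full decoder also handles the c0 ≤ c1 mode, which has a transparent palette entry:

```cpp
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// Expand a 5:6:5 endpoint to 8-bit-per-channel RGB.
static Rgb decode565(uint16_t c) {
    uint8_t r = (c >> 11) & 31, g = (c >> 5) & 63, b = c & 31;
    return { uint8_t((r << 3) | (r >> 2)),
             uint8_t((g << 2) | (g >> 4)),
             uint8_t((b << 3) | (b >> 2)) };
}

static uint8_t lerp3(uint8_t a, uint8_t b, int wb) {  // (a*(3-wb) + b*wb) / 3
    return uint8_t((a * (3 - wb) + b * wb) / 3);
}

// Decode one 8-byte BC1 block into 16 texels (opaque c0 > c1 mode):
// two RGB565 endpoints define a line in RGB space, and each texel's
// 2-bit index selects one of 4 points along that line.
void decodeBc1Block(const uint8_t block[8], Rgb out[16])
{
    uint16_t c0 = uint16_t(block[0] | (block[1] << 8));
    uint16_t c1 = uint16_t(block[2] | (block[3] << 8));
    Rgb e0 = decode565(c0), e1 = decode565(c1);
    Rgb palette[4] = {
        e0, e1,
        { lerp3(e0.r, e1.r, 1), lerp3(e0.g, e1.g, 1), lerp3(e0.b, e1.b, 1) },
        { lerp3(e0.r, e1.r, 2), lerp3(e0.g, e1.g, 2), lerp3(e0.b, e1.b, 2) },
    };
    uint32_t indices = uint32_t(block[4]) | (uint32_t(block[5]) << 8) |
                       (uint32_t(block[6]) << 16) | (uint32_t(block[7]) << 24);
    for (int i = 0; i < 16; ++i)
        out[i] = palette[(indices >> (2 * i)) & 3];
}
```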

The new D3D11 formats support block types with one, two or even three (in the case of BC7) color lines. There is a tradeoff between the number of lines and the number of points along each line, since each block takes up the same amount of memory.

In principle, a 4×4 block with two color lines would need 16 additional bits per block to indicate which line each texel is associated with (and even more bits for three color lines). To reduce storage requirements, only a subset of the possible line-association patterns is supported; the compression tool selects the best pattern from this subset for each block.

Direct3D 11 also tightens up the texture specifications. Decompression results must be bit-accurate, and subtexel/submip filtering precision is required to be at least 8 bits.

Direct3D 11 increases the maximum texture size from 8K texels to 16K texels per side. Note that a 16K × 16K DXT1/BC1 texture takes up 128 MB; not many games will have textures this large! In general, D3D11 allows resources as large as 2 GB.
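
(Checking that number: BC1 stores each 4×4 texel block in 8 bytes, or half a byte per texel, so 16384 × 16384 × 0.5 = 134,217,728 bytes, which is exactly 128 MB.)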

Hardware can optionally support double-precision floats. This was the only optional feature of D3D11 mentioned at Gamefest.

There was a slide listing a bunch of other features without further explanation. Most are a bit mysterious, but I list them here in case someone else is able to puzzle out what they mean:

  • Addressable Stream Out
  • Draw Indirect
  • Pull-model attribute eval
  • Improved Gather4
  • Min-LOD texture clamps
  • Conservative oDepth
  • Geometry shader instance programming model
  • Read-only depth or stencil views

This completes my report on Direct3D 11 from Gamefest. Check the XNA Presentations Page for the slides and audio; they are not up there yet, but hopefully will be soon.