Portal adds

No, not that Portal (which if you haven’t played, you should, even if you have no time; it’s short! For NVIDIA card owners the first slice is free). I’ve updated our portal page with a few additions.

New blogs added: Pandemonium, C0DE517E, Gates 381, GameDevKicks, Chris Hecker’s, and Beyond3D. Being a trailing-edge adopter kind of guy (I’ve kept my Tivo 1 alive by replacing the disk drives three times so far; my cell phone’s $90 from Indonesia via eBay), I mostly ignored blogs until last year, when I finally learned how simple it was to use an RSS reader (I like Google’s). My philosophy since then: if a blog has any articles relevant to interactive rendering techniques, I subscribe. Since most graphics blogs don’t post daily, traffic is low, so checking new postings takes a minute or two a day. That said, if I had to pick just one, it would probably be GameDevKicks, since it’s an aggregator, similar to Digg (though the low counts on the digs, excuse me, kicks, mean that some things may fall through the cracks). This service also means I’m off the hook for noting new Gamasutra articles here, since they usually get listed there.

Ogre Forums has been added to the list of developer sites. Ogre is a popular free game development platform. I can’t say I frequent the forum, but on the strength of this article on using the pixel shader to create the illusion of geometry, good things are obviously happening there.

The Unity Web Player Hardware Statistics page is similar to the well-known Steam survey, but for machines used by casual gamers.

A site that’s been around a long while and should have been on the portal from the start is the Virtual Terrain Project, a constantly-expanding repository of algorithms about and models of terrain, vegetation, natural phenomena, etc.

… and that’s it for now.

Drawing Silhouette Edges

With SIGGRAPH, the free release of ShaderX², and the publication of our own 3rd edition, there was much to report, but now things have settled down a bit. The bread-and-butter content of this blog is any new or noteworthy article or demo related to the field, on the assumption that not everyone is tracking all sources of information all the time.

So, if you don’t subscribe to Gamasutra’s free email newsletters, you wouldn’t know of this article: Inking the Cube: Edge Detection with Direct3D 10. It walks through the details of creating geometry for silhouette and crease edges using the geometry shader. To its credit, it also shows the problem with the basic approach: separate silhouette edges can have noticeable join and endcap gaps. One article that addresses this problem:

McGuire, Morgan, and John F. Hughes, “Hardware-Determined Feature Edges,” The 3rd International Symposium on Non-Photorealistic Animation and Rendering (NPAR 2004), pp. 35–47, June 2004.

One minor flaw in the Gamasutra article: the URL to Sarah Tariq’s presentation is broken (I’m writing Gamasutra to ask them to correct it); that presentation is here.
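If you want to play with the basic idea outside a shader, the core test is simple: a silhouette edge is one whose two adjacent triangles face in opposite directions relative to the eye. Here’s a CPU-side sketch in Python (my own illustration of the general idea, not code from the article; the geometry shader version performs the same test per triangle using adjacency information, and assumes consistently wound faces, as this sketch does):

```python
import numpy as np

def silhouette_edges(vertices, faces, eye):
    """Return mesh edges where one adjacent triangle faces the eye
    and the other faces away. Assumes consistent outward winding.

    vertices: (V, 3) float array; faces: (F, 3) int array; eye: (3,) point.
    """
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces)
    # Unnormalized face normals and centroids.
    n = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    c = v[f].mean(axis=1)
    # True where the face fronts the eye point.
    front = np.einsum('ij,ij->i', n, eye - c) > 0.0

    # Map each undirected edge to the faces sharing it.
    edge_faces = {}
    for fi, (a, b, d) in enumerate(f):
        for e in ((a, b), (b, d), (d, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)

    # Silhouette: exactly two adjacent faces with opposite facing.
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and front[fs[0]] != front[fs[1]]]
```

For a cube viewed head-on, this returns the four edges ringing the face toward the viewer, exactly the edges you’d want to ink.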

Disk-Based Global Illumination in RenderMan

In Section 9.1 of our book we discuss Bunnell’s disk-based approximation for computing dynamic ambient occlusion and indirect lighting, and mention that this technique was used by ILM when performing renders for the Pirates of the Caribbean films.

Recently, more details on this technique have appeared in a RenderMan Technical Memo called Point-Based Approximate Color Bleeding, available on Pixar’s publication page. Pixar has implemented an interesting global illumination algorithm based in part on Bunnell’s disk approximation, which is used for transfer over intermediate distances. Spherical harmonics are used to approximate distant transfer, and ray tracing is used for transfer between nearby points. This technique is now built into Pixar’s RenderMan and has been used in over 12 films to date, including Pixar’s own Wall-E. It is interesting to see a technique originating in real-time rendering used in film production; the opposite is much more usual. The paper is worth a close read – perhaps someone will close the loop by adapting some of Pixar’s enhancements into new real-time techniques.
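For readers who haven’t seen the disk approximation, its heart is a cheap analytic form factor between a small oriented disk and a receiver point, summed over all disks instead of tracing rays. Here’s a sketch of the commonly used approximation (my own illustration of the general idea; Bunnell’s and Pixar’s actual formulas differ in their details):

```python
import math

def disk_form_factor(r_pos, r_normal, e_pos, e_normal, e_area):
    """Approximate form factor from an emitter disk to a receiver point:
    F ~ A * cos(theta_r) * cos(theta_e) / (pi * d^2 + A),
    where d is the distance between the points, A the emitter disk area,
    and the cosines measure mutual orientation (clamped to zero when the
    disk and receiver face away from each other). Normals must be unit length.
    """
    dx = [e - r for e, r in zip(e_pos, r_pos)]
    d2 = sum(c * c for c in dx)
    d = math.sqrt(d2)
    dirn = [c / d for c in dx]                     # receiver -> emitter
    cos_r = max(0.0, sum(a * b for a, b in zip(r_normal, dirn)))
    cos_e = max(0.0, -sum(a * b for a, b in zip(e_normal, dirn)))
    return e_area * cos_r * cos_e / (math.pi * d2 + e_area)
```

The appeal is that this is a few multiplies per disk pair, with the `+ A` in the denominator keeping the result bounded as disks get close – which is why it maps so well to both GPUs and batch renderers.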

Pixar’s publication page is a valuable resource. The papers span a quarter century, and most of them have been very influential in the field. The first seven papers gave us the Cook-Torrance BRDF, programmable shaders, distributed ray tracing, image compositing, stochastic sampling, percentage-closer filtering, and the REYES rendering architecture (upon which almost all film production renderers are based). The page includes many other important papers, as well as SIGGRAPH course notes and RenderMan Technical Memos.

RT’08 Presentation

The RT’08 symposium is a small conference (160 people, vs. 28,000+ at SIGGRAPH) focused on interactive ray tracing research. It was twice as large as last year’s due to its co-location with SIGGRAPH. There were no big breakthroughs; rather, people were exploring how best to take advantage of new hardware. Of personal interest, there were also talks on optimizing various acceleration structures. I should be putting out an issue of the Ray Tracing News with a summary pretty soon.

One happy event for me was that my keynote went well, after losing sleep over it the night before. The subject was ray tracing, rasterization, and hardware, past and future. It was one of the best talks I’ve ever given, which is somewhat equivalent to “one of the best films Rob Schneider’s ever made”. Well, maybe better than that. It was fun to dig up provocative quotes and engaging images for the talk; that said, the slideset doesn’t include most of my jokes (though it does include a photo of an object that luckily did not burn down my office’s building). I would have liked to go even further in the direction of more images and less text; it’s a time-consuming task to find good images! I did sometimes break my personal rule of a maximum of 6 lines per slide, but usually for effect, to overwhelm the viewer with text; my favorite is my “buffers slide”, which lists all the named buffers I could find up to the year 2000.

After the talk Larry Gritz and Dan Wexler pointed me at the new art of Pecha Kucha, where a presentation consists mostly of images and runs at a constant rate. This sounds pretty ideal for most talks. I sometimes find myself distracted between looking at the text on a slide while also trying to listen to the speaker. At least I didn’t show any equations, which are absolute death most of the time. An equation is highly dense information, suitable for a talk only if (a) you are noting what the equation looks like, so people will recognize it later when they read your paper, (b) you actually spend the time to slowly explain each term in the equation and let it sink in, or (c) everyone in the audience already knows the equation (in which case it would be better to just say the equation’s name). Even then, you risk losing much of your audience when you put up an equation. Same rule applies to long shaders or pseudocode samples.

I think this phenomenon of too much information occurs more frequently now than in the past because slidesets often take the place of white papers. This happens quite frequently with GDC, XNA Gamefest, SIGGRAPH class talks, and many other venues. It’s nice that people who can’t attend the talk can at least see the slides, but it leads to slidesets having two purposes: one is to enhance the verbal part of the talk, the other is to reiterate the verbal part of the talk. Enhancement favors succinct bullet items, which can be hard to understand when downloaded. Reiteration helps the downloader, but is either overwhelming (read, or listen?) or boring (OMG he’s reading his slides, line by line) during the talk itself. OK, end of rant.

I’m as guilty as the rest (and wish I had trimmed back the text in my last talk, now that this phenomenon has dawned on me), but part of the solution is to add at least some further notes (not seen on the screen) with the slides, which I did do with my slideset (the non-PDF version). Better still is to write a blog entry, webpage, or paper about the subject, as PowerPoint is not really a word processor. And of course I probably won’t do so myself for my own talk, as that’s too much time and I don’t think I’d add much to the presentation. But then, my excuse is that my presentation was more of a high-level “soft” talk than anything with a lot of technical chew.

SIGGRAPH 2008: Bilateral Filters

The class “A Gentle Introduction to Bilateral Filtering and its Applications” was very well-presented. Bilateral filters are edge-preserving smoothing filters that have many applications in rendering and computational photography. The basic concept was clearly explained, and then various variants, related techniques, optimized implementations and applications were discussed. The full slides as well as detailed course notes are available here. Currently they are from the SIGGRAPH 2007 course; I assume the 2008 slides will replace them soon.
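In case you haven’t run into it, the filter itself is simple: each output pixel is a weighted average of its neighbors, where the weight is the product of a spatial Gaussian on pixel distance and a range Gaussian on the difference in pixel values; large value differences get near-zero weight, so edges survive. A brute-force sketch in Python/NumPy (my own illustration; see the course notes for the clever fast implementations):

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=None):
    """Brute-force bilateral filter of a 2D grayscale image.
    sigma_s: spatial Gaussian width (pixels); sigma_r: range Gaussian
    width (intensity units); radius defaults to 2 * sigma_s.
    """
    if radius is None:
        radius = int(2 * sigma_s)
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    # Spatial weights are fixed; range weights depend on the center pixel.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    pad = np.pad(img.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 2*radius + 1, x:x + 2*radius + 1]
            rng = np.exp(-(window - img[y, x])**2 / (2.0 * sigma_r**2))
            weight = spatial * rng
            out[y, x] = (weight * window).sum() / weight.sum()
    return out
```

With a small sigma_r a step edge passes through untouched; crank sigma_r way up and the range term goes flat, leaving you with an ordinary Gaussian blur – the two limits that make the filter easy to reason about.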

A related technique, which appears to have some interesting advantages over the bilateral filter, was presented in a paper this year, titled “Edge-Preserving Decompositions for Multi-Scale Tone and Detail Manipulation”. It presents a novel edge-preserving smoothing operator based on weighted least squares optimization. The paper and various supplementary materials are available here.

SIGGRAPH 2008: Beyond Programmable Shading Class

This class was about non-traditional processing performed on GPUs, similar to GPGPU but for graphics. As we discuss in the “Futures” chapter at the end of our book, this is a particularly interesting direction of research and may well represent the future of rendering. The recent disclosures on Direct3D 11 Compute Shaders and Larrabee make this a particularly hot topic.

The full course notes are available at the course web site.

The talk by Jon Olick from id Software was perhaps the most interesting. He discussed a sparse voxel octree data structure which is rendered directly using CUDA. This extends id’s MegaTexture idea to geometry and may very well find its way into id’s next engine in some form.

SIGGRAPH 2008: The Authors Meet

All the work on the book was done remotely, via email and CVS. In fact, I had never met Tomas until this morning at SIGGRAPH. Here you can see all three of us, pleased as punch that the book is finally done. Left to right, Eric, Naty, Tomas:

(Eric here. I guess this is a tradition: Tomas and I didn’t meet until after the first edition was published.)

SIGGRAPH 2008: Advances in Real-Time Rendering in 3D Graphics and Games

I attended the “Advances in Real-Time Rendering in 3D Graphics and Games” class today at SIGGRAPH. This is the third year in a row Natalya Tatarchuk from AMD has organized this class. Each year different game developers, as well as people from the AMD demo team, are brought in to talk about graphics, and some of the best real-time stuff at SIGGRAPH in the last two years has been in this course.

Unfortunately, due to Little Big Planet crunch, Alex Evans from Media Molecule was unable to give his planned talk and a different speaker was brought in instead. This was a bit of a bummer since Alex’s SIGGRAPH 2006 talk was very good and I was hoping to hear more about his unorthodox take on real-time rendering.

The remaining talks were of high quality, including talks by the developers of games such as Halo 3, StarCraft II, and Crysis. Unlike previous years, when it took many weeks for the course notes to become available online, the full course notes are already up on AMD’s Technical Publications page – check them out!