Category Archives: Reports

I3D 2011 Report – Part I: Keynote

Today was the first day of I3D 2011. I3D (full name: ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games) is a great little conference – it’s small enough that you can talk to most of the people attending, and the percentage of useful papers is relatively high.

I’ll split my I3D report into several smaller blog posts to ensure timely updates; this one will focus on the opening keynote.

The keynote was titled “Image-Based Rendering: A 15-Year Retrospective”, presented by Richard Szeliski (Microsoft Research), who’s been involved with a lot of the important research in this area. He started with a retrospective of the research, and then followed with a discussion of a specific application: panoramic image stitching. Early research in this area led to QuickTime VR, and panoramic stitching is now supported in many cameras as a basic feature. Gigapixel panoramas are common – see a great example of one from Obama’s inaugural address by Gigapan (360 cities is another good panorama site). Street-level views such as Google Street View and Bing’s upcoming Street Slide feature use sequences of panoramas.

Richard then discussed the mathematical basis of image-based rendering: a 4D space of light rays (this relies on the fact that radiance is constant along a ray – in participating media this no longer holds, and you have to go to a full 5D plenoptic field). Having some geometric scene information is important; it is hard to get good results with just a 4D collection of rays (this was the main difference between the first two implementations – lumigraphs and light fields). Several variants of these were developed over the years (e.g. unstructured lumigraphs and surface light fields).
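As an illustration of that 4D space: the classic light field / lumigraph setup encodes each ray by where it crosses two parallel planes. Here’s a minimal sketch (the function name and plane positions are my own illustration, not anything from the talk):

```python
def ray_to_two_plane(origin, direction, z_uv=0.0, z_st=1.0):
    """Encode a ray as 4D two-plane light field coordinates (u, v, s, t).

    The ray is intersected with the planes z = z_uv and z = z_st; the
    (x, y) coordinates of the two hit points identify the ray. Assumes
    the ray is not parallel to the planes (direction z != 0).
    """
    (ox, oy, oz), (dx, dy, dz) = origin, direction
    t_uv = (z_uv - oz) / dz
    t_st = (z_st - oz) / dz
    return (ox + t_uv * dx, oy + t_uv * dy,
            ox + t_st * dx, oy + t_st * dy)

# A ray through (0, 0, -1) along +z crosses both planes at x = y = 0:
print(ray_to_two_plane((0, 0, -1), (0, 0, 1)))  # (0.0, 0.0, 0.0, 0.0)
```

Since radiance is constant along a ray (outside participating media), a radiance sample stored at (u, v, s, t) is valid for any viewpoint along that ray – which is what makes the 4D representation sufficient.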

Several successful developments used lower-dimensional “2.5D” representations such as layered depth images (a “depth image” is an image with a depth value as well as a color associated with each pixel). Richard remarked that Microsoft’s Kinect has revolutionized this area by making depth cameras (which used to cost many thousands of dollars) into $150 commodities; a lot of academic research is now being done using Kinect cameras.

Image-based modeling started primarily with Paul Debevec’s “Facade” work in 1996. The idea was to augment a very simple geometric model (which back then was created manually) with “view dependent textures” that combine images from several directions to remove occluded pixels and provide additional parallax. Now this can be done automatically at a large scale – Richard showed an aerial model of the city of Graz which was automatically generated in 2009 from several airplane flyovers – it allowed for photorealistic renders with free camera motion in any direction.

Richard also discussed environment matting, as well as video-based techniques such as video rewrite, video matting, and video textures. This last paper in particular seems to me like it should be revisited for possible game applications – it’s a powerful extension of the common practice of looping a short sequence of images on a sprite or particle.

Richard next talked about cases where showing the results of image-based rendering in a less realistic or lower-fidelity way can actually be better – similar to the “uncanny valley” in human animation.

The keynote ended by summarizing the areas where image-based rendering currently works well, and the areas where more work is needed. Automatic 3D camera pose estimation from images is pretty robust, as is automatic aerial modeling and manually augmented ground-level modeling. However, some challenges remain: accurate boundaries and matting, reflections & transparency, integration with recognition algorithms, and user-generated content.

For anyone interested in more in-depth research into these topics, Richard has written a computer vision book, which is also available freely online.

How to Make an Ebook

Here’s a short guide on creating decent ebooks from scans using Adobe Acrobat. This will not be of interest to 98% of you, but I want to record it somewhere for those of you who may do this in the future. It is written by Iliyan Georgiev, who made the recent PoDIS ebook. Comments are welcome, as usual.

The one piece of software you’ll need that can’t be downloaded for free is Adobe Acrobat, though even this application has a 30-day free trial.

1. Scan the pages of the book using a scanner (a digital camera is a good alternative).

2. Crop the scanned images (and split the pages, if you scanned two pages at once). It’s better for an ebook to have smaller page margins. Also, cropping removes black areas and other artifacts resulting from scanning. An excellent (JPEG-only) batch cropping tool for Windows is JPEGCrops. It has some disadvantages, however, so in practice it’s best to use JPEGCrops to estimate approximate cropping parameters (width, height, x-offset, y-offset) and XnView’s batch processing mode for the actual cropping. Both applications are free and have portable versions.

3. Assemble all images into a PDF file. Adobe Acrobat has an option to combine multiple files into a single PDF. Use the highest quality settings for the creation.

4. (OPTIONAL) Rearrange/merge/delete pages. Acrobat has excellent tools for all of these. This can be useful for books that are published in two volumes or for extending the book with additional information, such as errata listings, images, high quality cover pages, etc.

5. Manage blank pages. It might be tempting to delete blank pages inside the book. Such pages are always intentionally left blank by the publishers, as they are important for the printing order. This is particularly important for the first few pages, as well as for the chapters. Many books are created in such a way that all chapters start on an even/odd page, and the large majority have the inner pages typeset for being printed on a specific side (left/right). If you want to optimize the page count anyway, keep in mind how the book would appear when printed out (also using “2 pages per sheet” printing).

6. Number the pages. This is an often-overlooked, but very useful, option. Apart from the default page numbering, the PDF format supports logical page numbering. This can be used to synchronize the PDF page numbers with the actual book page numbers. This is very easy to do in Acrobat and should always be done. To do this, select the necessary pages, right click on them and choose “Number Pages…”.

7. Run OCR (optical character recognition) on the PDF. This is an extremely easy way to make your scanned pages searchable and the text copy/paste-able. Acrobat has a good, easy-to-use built-in OCR tool. You will find it in the Document menu (Tools pane in Acrobat X). Be sure to disable image resampling: by default OCR will resample the images, which can easily increase the file size by a huge amount! Keep in mind that OCR is a compute-intensive process and can easily take a couple of hours for a larger book.

8. Optimize document. Acrobat has an option to optimize scanned documents. This runs some image-processing algorithms on the scanned images and compresses them aggressively when it detects text. This is a vital step to keep the size of the document low. It can reduce the file size by a factor of 20! It will also make the antialiasing look better when pages are minified, if the resolution of the original scans is high enough. This process is also compute-intensive and can easily take an hour for a larger book.

9. (OPTIONAL) Reduce the file size further by using Acrobat’s other optimization options, of which image downsampling is the most important.

At this point the most important steps are done and you can end here and go to sleep if you see the sunrise through the window. Go on if it’s only 4 AM.

10. (OPTIONAL) Setting the initial view. Open the document properties and go to the Initial View tab. Here, you can set the initial page, zoom level and which panes (e.g. the bookmarks pane, see below) should be active when the document is opened.

11. (OPTIONAL) Create a PDF table of contents (TOC). The PDF format has a useful (hierarchical) bookmarking feature with a dedicated Bookmarks pane which exists also in Adobe Reader. This feature can be used to reconstruct the book’s TOC for easy document navigation. One simple way to achieve this is the following:
11.a Go to the book’s Contents page, select the chapter title’s text and hit CTRL+B (or right click and choose to add a bookmark from the context menu). Repeat this for each chapter.
11.b Structure the created bookmarks. Rearrange the bookmarks to follow the order and structure of the book’s TOC.
11.c Link the bookmarks to pages. To do this, go over all pages of the book sequentially and every time a new chapter starts, right click on the corresponding bookmark and set the destination to the current page.

12. (OPTIONAL) Create hyperlinks inside the document. The PDF format also supports hyperlinks which can perform actions (e.g. jump to a page or a web site) when clicked. Links can be either rectangles (drawn with a corresponding tool) or text. To create text links, select the text, right click on it and choose to create a link. There are options to set the link’s appearance and behavior.

You’re done! You have the perfect ebook and you’re late for work!

Gran Turismo on Playstation, PSP, PS2, and PS3

This video was published by Eurogamer’s Digital Foundry department about two weeks ago; it shows footage captured from various games in the Gran Turismo series. What is remarkable about this video is that the same cars and tracks are shown on the original Playstation, the PSP, the Playstation 2 and Playstation 3. Since the developer (Polyphony Digital) has a reputation for squeezing the best visuals out of Sony’s platforms, this promises a rare “apples-to-apples” comparison across multiple hardware generations.

To my eyes, the display resolution changes drown out the more subtle differences in modeling, shading and lighting; it is also apparent to me that Polyphony no longer sits on the graphics throne in this generation. Other first-party PS3 titles such as Uncharted 2 and God of War III look better, in my opinion. The shadows are a particular weak spot: in places their resolution seems no higher than on the original Playstation!

More information on how the video was captured (as well as high-quality download links) can be found in Digital Foundry’s blog post.

I3D 2011

The website for I3D 2011 is now up, including the time/place and CFP. I3D will be in San Francisco next year, from February 18-20th. I3D probably has a higher percentage of graphics papers relevant to games than any other conference; this year five of the papers described techniques already in use in games (including high-profile titles like Batman: Arkham Asylum and Civilization 5), and many of the other papers were also highly relevant. Unfortunately, very few game developers attend; I hope next year’s location (San Francisco is home to a large number of developers) will help.

I3D is a great small conference to publish real-time rendering papers. One advantage it has for authors over Eurographics conferences like EGSR, and co-sponsored conferences like HPG and SCA (in “Europe” years) is that it is not subject to Eurographics’ monumentally stupid “authors can’t post copies of their papers for a year after the conference” policy. This policy, of course, hurts the chance of your paper being cited by making it harder for people to read it – brilliant! Hopefully EG will see the error of its ways soon – until then, you are better off sending your papers to non-EG conferences like I3D.

SIGGRAPH 2010 Game Content Roundup

With less than two weeks until the conference, here’s my final pre-SIGGRAPH roundup of all the game development and real-time rendering content. This is either to help convince people who are still on the fence about attending (unlikely at this late date), or to help people who are trying to decide which sessions to go to (more likely). If you won’t be able to attend SIGGRAPH this year, this might at least help you figure out which slides, videos, and papers to hunt for after the conference.

First of all, the SIGGRAPH online scheduler is invaluable for helping to sort out all the overlapping sessions (even if you just “download” the results into Eric’s lower-tech version). The iPhone app may show up before the conference, but given the vagaries of iTunes app store approval, I wouldn’t hold my breath.

The second resource is the Games Focus page, which summarizes the relevant content for game developers in one handy place. It makes a good starting point for building your schedule; the rest of this post goes into additional detail.

My previous posts about the panels and the talks, and several posts about the courses go into more detail on the content available in these programs.

Exhibitor Tech Talks are sponsored talks by various vendors, and are often quite good. Although the Games Focus page links to the Exhibitor Tech Talk page, for some reason that page has no information about the AMD and NVIDIA tech talks (the Intel talk on Inspecting Complex Graphics Scenes in a Direct X Pipeline, about their Graphics Performance Analyzer tool, could be interesting). NVIDIA does have all the details on their tech talks at their SIGGRAPH 2010 page; the ones on OpenGL 4.0 for 2010, Parallel Nsight: GPU Computing and Graphics Development in Visual Studio, and Rapid GPU Ray Tracing Development with NVIDIA OptiX look particularly relevant. AMD has no such information available anywhere: FAIL.

One program not mentioned in the Games Focus page is a new one for this year: SIGGRAPH Dailies! where artists show a specific piece of artwork (animation, cutscene sequence, model, lighting setup, etc.) and discuss it for two minutes. This is a great program, giving artists a unique place to showcase the many bits of excellence that go into any good film or game. Although no game pieces got in this year, the show order includes great work from films such as Toy Story 3, Tangled, Percy Jackson, A Christmas Carol, The Princess and The Frog, Ratatouille, and Up. The show is repeated on Tuesday and Wednesday overlapping the Electronic Theater (which also should not be missed; note that it is shown on Monday evening as well).

One of my favorite things about SIGGRAPH is the opportunity for film and game people to talk to each other. As the Game-Film Synergy Chair, my primary responsibility was to promote content of interest to both. This year there are four such courses (two of which I am organizing and speaking in myself): Global Illumination Across Industries, Color Enhancement and Rendering in Film and Game Production, Physically Based Shading Models in Film and Game Production, and Beyond Programmable Shading I & II.

Besides the content specifically designed to appeal to both industries, a lot of the “pure film” content is also interesting to game developers. The Games Focus page describes one example (the precomputed SH occlusion used in Avatar), and hints at a lot more. But which?

My picks for “film production content most likely to be relevant to game developers”: the course Importance Sampling for Production Rendering, the talk sessions Avatar in Depth, Rendering Intangibles, All About Avatar, and Pipelines and Asset Management, the CAF production sessions Alice in Wonderland: Down the Rabbit Hole, Animation Blockbuster Breakdown, Iron Man 2: Bringing in the “Big Gun”, Making “Avatar”, The Making of TRON: LEGACY, and The Visual Style of How To Train Your Dragon, and the technical papers PantaRay: Fast Ray-Traced Occlusion Caching, An Artist-Friendly Hair Shading System, and Smoothed Local Histogram Filters. (Unlike much of the other film production content, paper presentation videos are always recorded, so if a paper presentation conflicts with something else you can safely skip it.)

Interesting, but more forward-looking film production stuff (volumetric effects and simulations that aren’t feasible for games now but might be in future): the course Volumetric Methods in Visual Effects, the talk sessions Elemental Training 101, Volumes and Precipitation, Simulation in Production, and Blowing $h!t Up, and the CAF production session The Last Airbender: Harnessing the Elements: Earth, Air, Water, and Fire.

Speaking of forward-looking content, SIGGRAPH papers written by academics (as opposed to film professionals) tend to fall in this category (in the best case; many of them are dead ends). I haven’t had time to look at the huge list of research papers in detail; I highly recommend attending the Technical Papers Fast-Forward to see which papers are worth paying closer attention to (it’s also pretty entertaining).

Some other random SIGGRAPH bits:

  • Posters are of very mixed quality (they have the lowest acceptance bar of any SIGGRAPH content) but quickly skimming them doesn’t take much time, and there is sometimes good stuff there. During lunchtime on Tuesday and Wednesday, the poster authors are available to discuss their work, so if you see anything interesting you might want to come back then and ask some questions.
  • The Studio includes several workshops and presentations of interest, particularly for artists.
  • The Research Challenge has an interesting interactive haunted house concept (Virtual Flashlight for Real-Time Scene Illumination and Discovery) presented by the Square Enix Research and Development Division.
  • The Geek Bar is a good place to relax and watch streaming video of the various SIGGRAPH programs.
  • The SIGGRAPH Reception, the Chapters Party, and various other social events throughout the week are great opportunities to meet, network, and talk graphics with lots of interesting and talented people from outside your regular circle of colleagues.

I will conclude with the list of game studios presenting at SIGGRAPH this year: Activision Studio Central, Avalanche Software, Bizarre Creations, Black Rock Studio, Bungie, Crytek, DICE, Disney Interactive Research, EDEN GAMES, Fantasy Lab, Gearbox, LucasArts, Naughty Dog, Quel Solaar, tri-Ace, SCE Santa Monica Studio, Square Enix R&D, Uber Entertainment, Ubisoft Montreal, United Front Games, Valve, and Volition. I hope for an even longer list in 2011!

More SIGGRAPH Course Updates

After my last SIGGRAPH post, I spent a little more time digging around in the SIGGRAPH online scheduler, and found some more interesting details:

Global Illumination Across Industries

This is another film-game crossover course. It starts with a 15-minute introduction to global illumination by Jaroslav Křivánek, a leading researcher in efficient GI algorithms. It continues with six 25-30 minute talks:

  • Ray Tracing Solution for Film Production Rendering, by Marcos Fajardo, Solid Angle. Marcos created the Arnold raytracer which was adopted by Sony Pictures Imageworks for all of their production rendering (including CG animation features like Cloudy with a Chance of Meatballs and VFX for films like 2012 and Alice in Wonderland). This is unusual in film production; most VFX and animation houses use rasterization renderers like Renderman.
  • Point-Based Global Illumination for Film Production, by Per Christensen, Pixar. Per won a Sci-Tech Oscar for this technique, which is widely used in film production.
  • Ray Tracing vs. Point-Based GI for Animated Films, by Eric Tabellion, PDI/Dreamworks. Eric worked on the global illumination (GI) solution which Dreamworks used in Shrek 2; it will be interesting to hear what he has to say on the differences between the two leading film production GI techniques.
  • Adding Real-Time Point-based GI to a Video Game, by Michael Bunnell, Fantasy Lab. Mike was also awarded the Oscar for the point-based technique (Christophe Hery was the third winner). He actually originated it as a real-time technique while working at NVIDIA; while Per and Christophe developed it for film rendering, Mike founded Fantasy Lab to further develop the technique for use in games.
  • Pre-computing Lighting in Games, by David Larsson, Illuminate Labs. Illuminate Labs make very good prelighting tools for games; I used their Turtle plugin for Maya when working on God of War III and was impressed with its speed, quality and robustness.
  • Dynamic Global Illumination for Games: From Idea to Production, by Anton Kaplanyan, Crytek. Anton developed the cascaded light propagation volume technique used in CryEngine 3 for dynamic GI; the I3D 2010 paper describing the technique can be found on Crytek’s publication page.

The course concludes with a 5 minute Q&A session with all speakers.

An Introduction to 3D Spatial Interaction With Videogame Motion Controllers

This course is presented by Joseph LaViola (director of the University of Central Florida Interactive Systems and User Experience Lab) and Richard Marks from Sony Computer Entertainment (principal inventor of the Eyetoy, Playstation Eye, and Playstation Move). Richard Marks gives two 45-minute talks, one on 3D Interfaces With 2D and 3D Cameras and one on 3D Spatial Interaction with the PlayStation Move. Prof. LaViola discusses Common Tasks in 3D User Interfaces, Working With the Nintendo Wiimote, and 3D Gesture Recognition Techniques.

Recent Advances in Real-Time Collision and Proximity Computations for Games and Simulations

After an introduction to the topic of collision detection and proximity queries, this course goes over recent research in collision detection for games including articulated, deformable and fracturing models. It concludes with optimization-oriented talks such as GPU-Based Proximity Computations (presented by Dinesh Manocha, University of North Carolina at Chapel Hill, one of the most prominent researchers in the area of collision detection), Optimizing Proximity Queries for CPU, SPU and GPU (presented by Erwin Coumans, Sony Computer Entertainment US R&D, primary author of the Bullet physics library, which is widely used for both games and feature films), and PhysX and Proximity Queries (presented by Richard Tonge, NVIDIA, one of the architects of the AGEIA physics processing unit – the company was bought by NVIDIA and their software library formed the basis of the GPU-accelerated PhysX library).

Advanced Techniques in Real-Time Hair Rendering and Simulation

This course is presented by Cem Yuksel (Texas A&M University) and Sarah Tariq (NVIDIA). Between them, they have done a lot of the recent research on efficient rendering and simulation of hair. The course covers all aspects of real-time hair rendering: data management, the rendering pipeline, transparency, antialiasing, shading, shadows, and multiple scattering. It concludes with a discussion of real-time dynamic simulation of hair.

For reference, here is the schedule for the Global Illumination Across Industries course:

  • Ray Tracing Solution for Film Production Rendering – Fajardo
  • 2:40 pm: Point-Based Global Illumination for Film Production – Christensen
  • 3:05 pm: Ray Tracing vs. Point-Based GI for Animated Films – Tabellion
  • 3:30 pm: Break
  • 3:45 pm: Adding Real-Time Point-based GI to a Video Game – Bunnell
  • 4:15 pm: Pre-computing Lighting in Games – Larsson
  • 4:45 pm: Dynamic Global Illumination for Games: From Idea to Production – Kaplanyan
  • 5:10 pm: Conclusions, Q & A – All

Update on Splinter Cell: Conviction Rendering

In my recent post about Gamefest 2010, I discussed Stephen Hill’s great presentation on the rendering techniques used in Splinter Cell: Conviction.

Since then, Stephen contacted me – it turns out I got some details wrong, and he also provided me with some additional details about the techniques in his talk. I will give the corrections and additional details here.

  1. What I described in the post as a “software hierarchical Z-Buffer occlusion system” actually runs completely on the GPU. It was directly inspired by the GPU occlusion system used in ATI’s “March of the Froblins” demo (described here), and indirectly by the original (1993) hierarchical z-buffer paper. Stephen describes his original contribution as “mostly scaling it up to lots of objects on DX9 hardware, piggy-backing other work and the 2-pass shadow culling”. Stephen promises more details on this “in a book chapter and possibly… a blog post or two” – I look forward to it.
  2. The rigid body AO volumes were initially inspired by the Ambient Occlusion Fields paper, but the closest research is an INRIA tech report that was developed in parallel with Stephen’s work (though he did borrow some ideas from it afterwards).
  3. The character occlusion was not performed using capsules, but via nonuniformly-scaled spheres. I’ll let Stephen speak to the details: “we transform the receiver point into ‘ellipsoid’-local space, scale the axes and lookup into a 1D texture (using distance to centre) to get the zonal harmonics for a unit sphere, which are then used to scale the direction vector. This works very well in practice due to the softness of the occlusion. It’s also pretty similar to Hardware Accelerated Ambient Occlusion Techniques on GPUs although they work purely with spheres, which may simplify some things. I checked the P4 history, and our implementation was before their publication, so I’m not sure if there was any direct inspiration. I’m pretty sure our initial version also predated Real-time Soft Shadows in Dynamic Scenes using Spherical Harmonic Exponentiation since I remember attending SIGGRAPH that year and teasing a friend about the fact that we had something really simple.”
  4. My statement that the downsampled AO buffer is applied to the frame using cross-bilateral upsampling was incorrect. Stephen just takes the most representative sample by comparing the full-resolution depth and object IDs against the surrounding down-sampled values. This is a kind of “bilateral point-sampling” which apparently works surprisingly well in practice, and is significantly cheaper than a full bilateral upsample. Interestingly, Stephen did try a more complex filter at one point: “Near the end I did try performing a bilinearly-interpolated lookup for pixels with a matching ID and nearby depth but there were failure cases, so I dropped it due to lack of time. I will certainly be looking at performing more sophisticated upsampling or simply increasing the resolution (as some optimisations near the end paid off) next time around.”
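As a toy 1D illustration of this “bilateral point-sampling” – my reading of Stephen’s description, with hypothetical names, not his actual code – each full-resolution pixel picks the candidate low-resolution sample whose object ID matches and whose depth is closest, rather than blending:

```python
def upsample_ao(full_depth, full_id, low_depth, low_id, low_ao, scale=2):
    """Pick the most representative low-res AO sample per full-res pixel.

    For each full-resolution pixel, the candidates are the neighboring
    low-res samples; a matching object ID wins first, then the closest
    depth. No blending - hence "point-sampling" rather than bilateral
    filtering.
    """
    result = []
    n_low = len(low_ao)
    for i, (d, oid) in enumerate(zip(full_depth, full_id)):
        base = i // scale
        candidates = [j for j in (base, base + 1) if j < n_low]
        best = min(candidates,
                   key=lambda j: (low_id[j] != oid, abs(low_depth[j] - d)))
        result.append(low_ao[best])
    return result

# A silhouette edge: the pixel at depth 5.0 with ID 1 snaps to the
# matching low-res sample instead of blending across the edge.
full_depth = [1.0, 5.0, 5.0, 5.0]
full_id = [0, 1, 1, 1]
print(upsample_ao(full_depth, full_id, [1.0, 5.0], [0, 1], [0.2, 0.8]))
# [0.2, 0.8, 0.8, 0.8]
```

The appeal is that this costs one sample per pixel instead of the four taps plus weight normalization of a full cross-bilateral upsample.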

A recent blog post on Jeremy Shopf’s excellent Level of Detail blog mentions similarities between the sphere technique and one used for AMD’s ping-pong demo (the technique is described in the article Deferred Occlusion from Analytic Surfaces in ShaderX7). To me, the basic technique is reminiscent of Inigo Quilez’s article on analytical sphere ambient occlusion; an HPG 2010 paper by Morgan McGuire does something similar with triangles instead of spheres.
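For flavor, here is a sketch of the kind of analytic sphere occlusion Quilez describes: roughly the sphere’s projected solid angle times the clamped cosine toward its center. This is my own simplified version (it assumes the receiver point is outside the sphere), not code from any of the cited sources:

```python
import math

def sphere_occlusion(pos, normal, center, radius):
    """Approximate cosine-weighted occlusion of a sphere at a receiver
    point: the solid-angle factor r^2/d^2 times the clamped cosine of the
    direction to the sphere center. Valid when the distance d > radius."""
    to_c = [c - p for c, p in zip(center, pos)]
    d2 = sum(x * x for x in to_c)
    d = math.sqrt(d2)
    cos_theta = max(sum(n * x for n, x in zip(normal, to_c)) / d, 0.0)
    return cos_theta * radius * radius / d2

# A unit sphere 2 units directly above a point whose normal points up:
print(sphere_occlusion((0, 0, 0), (0, 0, 1), (0, 0, 2), 1.0))  # 0.25
```

Because the falloff is smooth and the occlusion is soft, approximations like this (or the zonal-harmonics lookup Stephen describes) hold up well in practice.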

Although the technique builds upon previous ones, it does add several new elements, and works well in the game. The technique does suffer from multiple-occlusion; I wonder if a technique similar to the 1D “compensation map” used by Morgan McGuire might help.

Gamefest 2010 Presentations

I attended this year’s Gamefest back in February. Gamefest is a conference run by Microsoft, focusing on games development for Microsoft platforms (Xbox 360 and Windows). This year (unusually, due to the presence of prerelease information on Kinect, at the time still known as “Project Natal”) the conference was only open to registered platform developers. For this reason, I didn’t blog about it at the time (no sense in telling people about stuff they can’t see).

Recently (thanks to the Legalize Adulthood! blog) I became aware that the Gamefest 2010 presentations are online on the conference website, and available for anyone (not just registered Xbox 360 and Windows Live developers). I’ll briefly discuss which presentations I think are of most interest. First, the ones I attended and found interesting:

Lighting Volumes

This was a very nice talk about baking lighting into volumes by John O’Rorke, Director of Technology at Monolith Productions. Monolith were trying to light a large city at night, where the character could traverse the city pretty freely both horizontally and vertically. Lots of instances and geometry Levels-of-Detail (LODs), lots of dynamic lights. A standard lightmap + light probe solution took up too much memory given the large surface area, and Monolith didn’t like the slow baking workflow involved, as well as the inconsistencies between static and dynamic objects.

Instead, Monolith stored light probes in volume textures. They tried spherical harmonics (SH) and didn’t like it (too much memory, too blurry to use for specular). F.E.A.R. 2 shipped with an approach similar to Valve’s “Ambient Cube” (6 RGB coefficients), which has the advantage of cheap shader evaluation. For their new game they went with a stripped-down version of this, which had a single RGB color and 6 luminance coefficients; this reduces the footprint from 18 to 9 scalars, and it was hard to tell the difference. Besides memory, this also sped up the shaders (fewer cache misses) and gave them better precision (since the luminance and color can be combined in a way that increases precision). For HDR they used a scale value for each volume (the game had multiple volumes in it) – this also gave them good precision in dark areas. Evaluating the “luminance cube” is extremely cheap (details in the slides). John also described some implementation details to do with stenciling out areas of the screen, using MIP maps, and getting around 360 alignment issues with DXT1 textures (all volumes were stored as DXT1).
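A sketch of how such a “luminance cube” might be evaluated – this is my reconstruction from the talk description (Valve-style squared-normal weights, one RGB color, six per-face luminances), not Monolith’s actual shader:

```python
def eval_luminance_cube(normal, color, lum):
    """Evaluate a 'luminance cube' probe: one RGB color plus six per-face
    luminance values (+x, -x, +y, -y, +z, -z), combined with ambient-cube
    style weights (squared components of the unit normal).

    Reconstruction of the idea from the talk description, not actual
    shipping shader code.
    """
    nx, ny, nz = normal
    weights = (nx * nx, ny * ny, nz * nz)
    faces = (lum[0] if nx >= 0 else lum[1],
             lum[2] if ny >= 0 else lum[3],
             lum[4] if nz >= 0 else lum[5])
    l = sum(w * f for w, f in zip(weights, faces))
    return tuple(c * l for c in color)

# A normal pointing straight up picks only the +y luminance:
print(eval_luminance_cube((0, 1, 0), (1.0, 0.5, 0.25), (0, 0, 2.0, 0, 0, 0)))
# (2.0, 1.0, 0.5)
```

Note the squared-normal weights sum to one for a unit normal, so no normalization pass is needed – part of why the evaluation is so cheap.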

Generation: the artists place lights (including area lights) and all the lights are baked (direct only, no global illumination (GI) bounces) during level packing. The math is simple – the tools just evaluated diffuse lighting for 6 normal directions at the center of each volume texel. Once the number of lights added by the artists started getting large this slowed down a bit so they added a caching system for the baked volumes. They eventually added GI support by rendering cube map probes in the game.
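The bake itself can be sketched like this – a hypothetical helper that evaluates diffuse point lighting for the six axis directions at a texel center (no visibility, attenuation curves, or area lights, unlike the real tool):

```python
import math

def bake_texel(probe_pos, lights):
    """Bake direct diffuse lighting for the six axis directions at one
    volume texel center. `lights` is a list of (position, intensity)
    point lights; simple inverse-square falloff, no shadowing.
    A hypothetical illustration, not Monolith's tool code.
    """
    axes = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    result = []
    for n in axes:
        total = 0.0
        for lpos, intensity in lights:
            to_l = [a - b for a, b in zip(lpos, probe_pos)]
            d = math.sqrt(sum(x * x for x in to_l))
            ndotl = max(sum(a * b / d for a, b in zip(n, to_l)), 0.0)
            total += intensity * ndotl / (d * d)
        result.append(total)
    return result

# One unit-intensity light directly above only lights the +z face:
print(bake_texel((0, 0, 0), [((0, 0, 1), 1.0)]))
# [0.0, 0.0, 0.0, 0.0, 1.0, 0.0]
```

Since the per-texel math is this trivial, the cost of the bake is dominated by the texel count times the light count – which is why a cache for already-baked volumes became worthwhile once artists placed lots of lights.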

Downsides: low resolution, bad for high contrast shadows, can get light or shadow bleeding through thin geometry. They use dynamic lights for high contrast / shadow casting lighting.

For the future they plan to cascade the volumes and stream them. They also tried raymarching against the volume to get atmospheric effects; this was fast enough on high-end PCs but not on consoles.

Rendering with Conviction: The Graphics of Splinter Cell

This great talk (by Stephen Hill from Ubisoft) went into detail on two rendering systems used in the game Splinter Cell: Conviction. The first was a software hierarchical Z-Buffer occlusion system. They used this in various ways to cull draw calls from shadows as well as primary rendering. The system could handle over 20,000 occlusion queries in around half a millisecond. Results looked pretty good.
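The core hierarchical-Z test can be sketched as follows. This is a toy CPU version of the general idea (names and conventions are mine; it assumes larger depth values are farther away), not the shipped implementation:

```python
import math

def build_hzb(depth):
    """Build a max-depth mip pyramid from a square 2^k depth buffer.
    Each mip texel stores the farthest depth of its 2x2 children, so a
    test against one coarse texel is conservative."""
    mips = [depth]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        n = len(prev) // 2
        mips.append([[max(prev[2 * y][2 * x], prev[2 * y][2 * x + 1],
                          prev[2 * y + 1][2 * x], prev[2 * y + 1][2 * x + 1])
                      for x in range(n)] for y in range(n)])
    return mips

def occluded(mips, x0, y0, x1, y1, min_depth):
    """Is a screen-space rect whose nearest depth is `min_depth` fully
    behind the depth buffer? Pick the mip where the rect covers roughly
    one texel, then test the small footprint there."""
    size = max(x1 - x0, y1 - y0, 1)
    level = min(int(math.ceil(math.log2(size))), len(mips) - 1)
    scale = 2 ** level
    for y in range(y0 // scale, y1 // scale + 1):
        for x in range(x0 // scale, x1 // scale + 1):
            row = mips[level][min(y, len(mips[level]) - 1)]
            if min_depth <= row[min(x, len(row) - 1)]:
                return False  # object may be in front somewhere: visible
    return True
```

The win is that one object tests only a handful of coarse texels instead of its whole screen footprint, which is what makes tens of thousands of queries per frame feasible.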

Next, Stephen discussed the game’s ambient occlusion (AO) system. The game developers didn’t use screen-space ambient occlusion (SSAO), since they didn’t like the inaccuracy, cost, and lack of artist control. Instead they went for a hybrid baked system. Over background surfaces (buildings, etc.) they bake precomputed AO maps. The precomputation is GPU-accelerated, based on the GPU Gems 2 article “High-Quality Global Illumination Rendering Using Rasterization” (available here: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter38.html). For dynamic rigid objects like tables, chairs, vehicles, etc. they precompute AO volumes (16x16x16 or so). Finally for characters, they analytically compute AO from an articulating model of “capsules” (two half-spheres connected by a cylinder). Ubisoft combine all of these (not trying to address double-occlusion, so results are slightly too dark) into a downsampled offscreen buffer. Rather than simple scalar AO, all this stuff uses a directional 4-number AO representation (essentially linear SH) so that they can later apply high-res normal maps to it when the offscreen buffer is applied. They figured out a clever way to map the math so that they can use blending hardware to combine these directional AOs into the offscreen buffer in a way that makes sense. The AO buffer is later applied using cross-bilateral upscaling. For the future Ubisoft would like to add streaming support for the AO maps and volumes to allow for higher resolution.

Stephen showed the end result, and it looked pretty good with a character running through a crowded scene, vaulting over tables, knocking down chairs, with nice ambient occlusion effects whenever any two objects were close. A system like this is definitely worth considering as an alternative to SSAO.

Stripped Down Direct3D: Xbox 360 Command Buffer and Resource Management

This excellent talk (by Wade Brainerd, who like me works in Activision’s Studio Central group) dove deep into a low-level description of Xbox 360 internals and the modified version of DirectX that it uses. It was a rare opportunity for people without registered console developer accounts to look at this material, which is relevant to PC developers as well, since it shows what happens under the driver’s hood.

Fluid Simulation Driven Effects in Dark Void

This talk by NVIDIA contained basically the same stuff as the I3D paper Interactive Fluid-Particle Simulation using Translating Eulerian Grids, which can be found here: http://www.jcohen.name/. It was interesting to hear about such a high-end CUDA fluid sim system being integrated into a shipping game (even if only on the PC version) – they got some cool particle effects out of it with turbulence etc. These kinds of effects will probably become more common once a new generation of console hardware arrives.

Advanced Rendering Techniques with DirectX 11

This talk covered various ways to use DX11 Compute Shaders in graphics, including fast computation of summed-area tables for fast anisotropic blurring of environment maps and for depth of field. The speakers also showed an A-buffer-like technique for order-independent transparency, and a tile-based deferred rendering system that was more efficient than a pixel-shader implementation. Like the previous talk, this seemed like the kind of stuff that could become mainstream in the next console generation.
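For readers unfamiliar with summed-area tables: once the table is built, any axis-aligned box average costs four lookups regardless of filter width, which is what makes SATs attractive for variable-width blurs like depth of field. A minimal CPU sketch (a compute-shader version would build the table with parallel prefix sums):

```python
import numpy as np

def build_sat(img):
    """Summed-area table: sat[y, x] = sum of img[0..y, 0..x] (inclusive)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_average(sat, y0, x0, y1, x1):
    """Mean over the inclusive rect [y0..y1] x [x0..x1] using at most four
    table lookups -- constant cost, independent of the filter size."""
    total = sat[y1, x1]
    if y0 > 0:
        total -= sat[y0 - 1, x1]
    if x0 > 0:
        total -= sat[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += sat[y0 - 1, x0 - 1]
    return total / ((y1 - y0 + 1) * (x1 - x0 + 1))
```

Anisotropic blurs fall out naturally by choosing rectangles that are wider in one axis than the other.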

Realistic Rendering with Spatially-Varying Reflectance

This presentation discussed research published in the SIGGRAPH Asia 2009 paper “All-Frequency Rendering of Dynamic, Spatially-Varying Reflectance” (available here: http://research.microsoft.com/en-us/um/people/johnsny/). The presentation was by John Snyder, one of the paper authors. It’s similar to some other recent papers that represent normal distribution functions as a sum of Gaussians and filter them, but this paper does some interesting things with regard to supporting environment maps and transforming from half-angle to view space. Worth a read for people looking at specular shader stuff.

Xbox 360 Shaders and Performance: How Not to Upset the GPU

This talk was probably old hat to anyone with significant 360 experience but should be interesting to anyone who does not fit that description – it was a rare public discussion of low-level console details.

Bringing Characters to Life: Using Physics to Enhance Animation

This talk was about combining physics with canned animation (similar to some of NaturalMotion‘s tools). It looked pretty good. The basic idea is straightforward: an artist paints the tightness of the springs connecting the character’s physics joints to the skeleton playing the animation, and a state machine varies these tightness values based on animation and gameplay events.
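As a toy illustration of the idea (my own sketch, not the speakers’ actual code): each simulated joint is pulled toward its animated target by a damped spring, stepped with semi-implicit Euler, where the artist-painted tightness controls how closely physics tracks the animation:

```python
def spring_track(pos, vel, target, tightness, damping, dt):
    """One integration step of a damped spring pulling a simulated joint
    toward its animated target position. A tight spring follows the
    animation closely; a loose one lets physics (impacts, gravity, ragdoll)
    dominate. 'tightness' and 'damping' would be artist-authored per joint."""
    accel = tightness * (target - pos) - damping * vel
    vel += accel * dt   # semi-implicit Euler: update velocity first
    pos += vel * dt
    return pos, vel
```

A state machine would then ramp `tightness` down on a hit reaction and back up as the character recovers, blending smoothly between ragdoll and animation.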

The Dark Art of Shadow Mapping

This was a good, basic introduction to the current state of the art in shadow mapping.

The Devil is in the Details: Nuances of Light Mapping

Illuminate Labs (the makers of Beast and Turtle) gave this talk about baked lighting. It was pretty basic for anyone who’s done work in this area, but might be a good refresher for people who aren’t familiar with the latest practice.

Other Talks

There were a bunch of talks I didn’t attend (too many overlapping sessions!) but which look promising based on title, speaker list, or both: Case Studies in VMX128 Optimization, Best Practices for DirectX 11 Development, DirectX 11 DirectCompute: A Teraflop for Everyone, DirectX 11 Technology Update, and Think DirectX 11 Tessellation! – What Are Your Options?

SIGGRAPH 2010 Panels

I don’t often go to SIGGRAPH panels, but this year’s list includes three that look very tempting. Here they are, sorted by date:

Future Directions in Graphics Research

Sunday, 25 July, 3:45 PM – 5:15 PM

The SIGGRAPH website description says, “This panel presents the results of an NSF-funded workshop on defining broader, fundamental long-term research areas for potential funding opportunities in medical imaging and device design, manufacturing, computational photography, scientific visualization, and many other emerging areas in graphics research.” It’s important to know where computer graphics research funding is going, and what researchers think the most promising future directions are. The panelists include some of the most prominent and influential computer graphics professors: Jessica Hodgins from Carnegie Mellon, James Foley (first author of “Computer Graphics: Principles and Practice”) from Georgia Tech, Pat Hanrahan (who probably has his name on more SIGGRAPH papers than anyone in the world) from Stanford University, and Donald P. Greenberg (whose list of former students would make a great first draft for a “who’s who” of computer graphics) from Cornell.

CS 292: The Lost Lectures; Computer Graphics People and Pixels in the Past 30 Years

Monday, 26 July, 3:45 PM – 5:15 PM

This is a unique idea for a panel – in the 1980s, Ed Catmull and Jim Blinn taught a hugely influential course on computer graphics. Among many others, it inspired Richard Chuang, who went on to found PDI. While teaching the course, Ed Catmull was building Lucasfilm’s computer graphics group, which later became Pixar. The panelists are Ed Catmull and Richard Chuang, who according to the website description “use video from the course to reflect on the evolution of computer graphics – from the genesis of Pixar and PDI to where we are today.” Catmull in particular is an amazing speaker – this looks well worth attending.

Large Steps Toward Open Source

Thursday, 29 July, 9:00 AM – 10:30 AM

Several influential film industry groups have open-sourced major pieces of internal technology recently. This panel discusses why they did it, what the benefits were, and where the challenges lay. This is definitely relevant to the game industry – would it make sense for us to do the same? (Insomniac is already leading the way – I wish they had a representative on this panel.) Panelists include Rob Bredow (CTO of Sony Pictures Imageworks, which has recently launched several important open source initiatives), Andy Hendrickson (CTO of Walt Disney Animation Studios, which has recently done the same, most notably with the Ptex texture-mapping system), Florian Kainz (Principal R&D Engineer at Industrial Light & Magic and the key individual behind OpenEXR, which ILM open-sourced in 2003), and Bill Polson (Lead of Production Engineering at Pixar Animation Studios). Pixar doesn’t currently have any open-source initiatives that I know of – does Bill’s participation mean they are about to announce one?

SIGGRAPH 2010 Talks

After the courses, the next best source of good SIGGRAPH material for games and real-time graphics professionals is the Talks (formerly called Sketches), and this year is no exception. The final list of Talks can be found on the SIGGRAPH Talks webpage, as well as in the Advance Program PDF. I will summarize the most relevant sessions here, sorted by date:

Avatar for Nerds

Sunday, 25 July, 2-3:30 pm

  • A Physically Based Approach to Virtual Character Deformations (Simon Clutterbuck and James Jacobs from Weta Digital Ltd.) – I saw an early version of this presentation at Digital Domain a few weeks ago – although they use an expensive physical muscle simulation, they bake the results into a pose-space deformation-like representation; this kind of approach could work for games as well (pose-space deformation approaches in general offer a useful way to “bake” expensive deformations; their use in games should be further explored).
  • Rendering “Avatar”: Spherical Harmonics in Production (Nick McKenzie, Martin Hill and Jon Allitt from Weta Digital Ltd.) – The website says “Application of spherical harmonics in a production rendering environment for accelerated final-frame rendering of complex scenes and materials.” This sounds very similar to uses of spherical harmonics in games, making this talk likely to yield applicable ideas.
  • PantaRay: Directional Occlusion for Fast Cinematic Lighting of Massive Scenes (Jacopo Pantaleoni, Timo Aila, and David Luebke from NVIDIA Research; Luca Fascione, Martin Hill and Sebastian Sylwan from Weta Digital Ltd.) – the website mentions “…a novel system for precomputation of ray-traced sparse, directional occlusion caches used as a primary lighting technology during the making of Avatar.” Like the previous talk, this sounds very game-like; these are interesting examples of the convergence between graphics techniques in film and games going in the less common direction, from games to film rather than vice-versa. Note that several of the authors of this talk are speaking at the “Beyond Programmable Shading” course, and there is also a paper about PantaRay (called “A System for Directional Occlusion for Fast Cinematic Lighting of Massive Scenes”).

Split Second Screen Space

Monday, 26 July, 2-3:30 pm

  • Screen Space Classification for Efficient Deferred Shading (Neil Hutchinson, Jeremy Moore, Balor Knight, Matthew Ritchie and George Parrish from Black Rock Studio) – the website says, “This talk introduces a general, extendible method for screen classification and demonstrates how its use accelerated shadowing, lighting, and post processing in Disney’s Split/Second video game.” This sounds like a useful extension to SPU-based screen tile classification methods; I wonder if it is cross-platform.
  • How to Get From 30 to 60 Frames Per Second in Video Games for “Free” (Dmitry Andreev from LucasArts) – well, this title is promising a lot! The website description doesn’t say much more than the title, but if LucasArts actually uses it in production this might be useful.
  • Split-Second Motion Blur (Kenny Mitchell, Matt Ritchie and Greg Modern from Black Rock Studio) – the description mentions “image and texture-space sampling techniques”, so this is probably a combination of blurring road textures in the direction of motion with screen-space techniques. Split-Second looks good; an overall description of their motion blur system should be interesting to hear.
  • A Deferred-Shading Pipeline for Real-Time Indirect Illumination (Cyril Soler and Olivier Hoel from INRIA Rhone-Alpes; Frank Rochet from EDEN GAMES) – there have been screen-space indirect illumination (approximation) techniques published before, but none used in games that I know of; there could be some useful ideas here.

APIs for Rendering

Wednesday, 28 July, 2-3:30 pm

  • Open Shading Language (Larry Gritz, Clifford Stein, Chris Kulla and Alejandro Conty from Sony Pictures Imageworks) – this Open-Source project from Sony Pictures Imageworks is interesting in that it is a shading language designed from the ground up for ray-tracing renderers. Probably not of immediate relevance to games, but some day…
  • REYES using DirectX 11 (Andrei Tatarinov from NVIDIA Corporation) – the website summary claims that this REYES implementation uses “not only the compute power of GPU, but also the fixed-function stages of the graphics pipeline.” This is something I have wanted to see someone try for a long time; the typical pure-Compute approaches to GPU-accelerated REYES seem wasteful, given the similarities between the existing fixed function units and some of the operations in the REYES algorithm. It will be interesting to see how efficient this implementation ends up being.
  • WebGLot: High-Performance Visualization in the Browser (Dan Lecocq, Markus Hadwiger, and Alyn Rockwood from King Abdullah University of Science and Technology) – although anything that makes it easier for browser-based games to use the GPU is interesting, I’m not familiar enough with the existing approaches to judge how new this stuff is.

Games & Real Time

Thursday, 29 July, 10:45 am-12:15 pm

  • User-Generated Terrain in ModNation Racers (James Grieve, Clint Hanson, John Zhang, Lucas Granito and Cody Snyder from United Front Games) – from all accounts, the system for user-generated tracks and terrain in ModNation Racers is impressive; a description of this system by its developers is well worth attending.
  • Irradiance Rigs (Hong Yuan from University of Massachusetts Amherst; Derek Nowrouzezahrai from University of Toronto; Peter-Pike Sloan from Disney Interactive Studios) – this looks like an extension of light-probe lighting techniques; it promises better results for large objects and / or near lighting. These techniques are very common in games, and this talk looks likely to be useful.
  • Practical Morphological Anti-Aliasing on the GPU (Venceslas Biri and Adrien Herubel from Université Paris-Est; Stephane Deverly from Duran Duboi Studio) – since God of War III produced great visuals with an SPU implementation of Morphological Antialiasing, there has been much interest in the games industry in a more GPU-friendly version of the algorithm, for use on Xbox 360 or high-end PCs. It’s hard to tell from the short description on the website whether the version in this talk is any good, but it might well be worth attending the talk to find out.
  • Curvature-Dependent Reflectance Function for Rendering Translucent Materials (Hiroyuki Kubo from Waseda University; Yoshinori Dobashi from Hokkaido University; Shigeo Morishima from Waseda University) – this sounds similar to the paper Curvature-Based Shading of Translucent Materials, such as Human Skin by Konstantin Kolchin (we discuss it in the section on “Wrap Lighting” in RTR3, since it is essentially an attempt to put wrap lighting on a physically sound footing). Since in most cases curvature can be precomputed, this could be a cheap way to get more accurate subsurface scattering effects.
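To make the wrap-lighting connection concrete, here is the standard wrap term alongside a hedged guess at how a curvature-dependent variant might look. The `scatter_dist` knob and the linear curvature-to-wrap mapping below are my own invention for illustration, not taken from either paper:

```python
def wrap_diffuse(n_dot_l, wrap):
    """Classic wrap lighting: shifts the Lambert term so light 'wraps'
    past the terminator -- a cheap stand-in for subsurface scattering.
    wrap = 0 gives plain Lambert; wrap = 1 lights the whole hemisphere."""
    return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

def curvature_wrap(n_dot_l, curvature, scatter_dist=1.0):
    """Hypothetical curvature-dependent variant: thin, highly curved
    features (ears, fingers) get more wrap; flat areas stay close to
    plain Lambert. scatter_dist is a made-up scale parameter."""
    wrap = min(curvature * scatter_dist, 1.0)
    return wrap_diffuse(n_dot_l, wrap)
```

Since curvature can usually be precomputed per vertex or baked into a texture, the runtime cost over plain Lambert shading is negligible.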

A lot of the film production talk sessions also look interesting, even without an explicit game or real-time connection; I have often found useful information at such talks in previous years. These sessions include “Elemental Training 101”, “All About Avatar”, “Rendering Intangibles”, “Volumes and Precipitation”, “Simulation in Production”, “Blowing $h!t Up”, “Pipelines and Asset Management” and “Fur, Feathers and Trees”.