
SIGGRAPH 2010 resource links

Naty and I (mostly Naty!) collected the links for most courses and a few talks given at SIGGRAPH 2010; see our page here. Enjoy! If you have links to any other courses and talks, please do send them on to me or post them as a comment.

Personally, I particularly liked the “Practical Morphological Anti-Aliasing on the GPU” talk. It’s good to see the technique take around 3.5 ms on an NVIDIA GeForce GTX 295, and the author’s site has a lot of information (including code).

SIGGRAPH 2010 Talks

After the courses, the next best source of good SIGGRAPH material for games and real-time graphics professionals is the Talks (formerly called Sketches), and this year is no exception. The final list of Talks can be found on the SIGGRAPH Talks webpage, as well as in the Advance Program PDF. I will summarize the most relevant sessions here, sorted by date:

Avatar for Nerds

Sunday, 25 July, 2-3:30 pm

  • A Physically Based Approach to Virtual Character Deformations (Simon Clutterbuck and James Jacobs from Weta Digital Ltd.) – I saw an early version of this presentation at Digital Domain a few weeks ago. Although they use an expensive physical muscle simulation, they bake the results into a pose-space deformation-like representation, so this kind of approach could work for games as well. Pose-space deformation in general offers a useful way to “bake” expensive deformations, and its use in games should be further explored (see the sketch after this list).
  • Rendering “Avatar”: Spherical Harmonics in Production (Nick McKenzie, Martin Hill and Jon Allitt from Weta Digital Ltd.) – The website says “Application of spherical harmonics in a production rendering environment for accelerated final-frame rendering of complex scenes and materials.” This sounds very similar to how spherical harmonics are used in games, making this talk likely to yield applicable ideas.
  • PantaRay: Directional Occlusion for Fast Cinematic Lighting of Massive Scenes (Jacopo Pantaleoni, Timo Aila, and David Luebke from NVIDIA Research; Luca Fascione, Martin Hill and Sebastian Sylwan from Weta Digital Ltd.) – the website mentions “…a novel system for precomputation of ray-traced sparse, directional occlusion caches used as a primary lighting technology during the making of Avatar.” Like the previous talk, this sounds very game-like; it is an interesting example of film-game convergence going in the less common direction, from games to film rather than vice versa. Note that several of the authors of this talk are speaking at the “Beyond Programmable Shading” course, and there is also a paper about PantaRay (called “A System for Directional Occlusion for Fast Cinematic Lighting of Massive Scenes”).
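
As a side note on the pose-space deformation idea above, here is a rough sketch of what the runtime side of such a system can look like, assuming corrective vertex offsets have already been baked at a handful of example poses (for instance from an offline muscle simulation). The Gaussian-kernel blend below is a simplified, Shepard-style stand-in for true PSD interpolation, and all names are illustrative, not taken from the Weta talk:

    // Minimal pose-space deformation (PSD) sketch: corrective offsets baked at
    // example poses are blended at runtime based on distance in pose space.
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct ExamplePose {
        std::vector<float> jointAngles;   // pose-space coordinates
        std::vector<Vec3>  vertexOffsets; // baked corrective offsets, one per vertex
    };

    static float PoseDistance(const std::vector<float>& a, const std::vector<float>& b) {
        float d2 = 0.0f;
        for (size_t i = 0; i < a.size(); ++i) {
            float d = a[i] - b[i];
            d2 += d * d;
        }
        return std::sqrt(d2);
    }

    // Blend the baked offsets with normalized Gaussian weights and add them to
    // the linearly skinned vertex positions.
    void ApplyPoseSpaceDeformation(const std::vector<ExamplePose>& examples,
                                   const std::vector<float>& currentPose,
                                   float falloff,
                                   std::vector<Vec3>& skinnedPositions) {
        std::vector<float> weights(examples.size());
        float weightSum = 0.0f;
        for (size_t i = 0; i < examples.size(); ++i) {
            float d = PoseDistance(currentPose, examples[i].jointAngles);
            weights[i] = std::exp(-(d * d) / (falloff * falloff));
            weightSum += weights[i];
        }
        if (weightSum <= 0.0f) return;
        for (size_t i = 0; i < examples.size(); ++i) {
            float w = weights[i] / weightSum;
            for (size_t v = 0; v < skinnedPositions.size(); ++v) {
                skinnedPositions[v].x += w * examples[i].vertexOffsets[v].x;
                skinnedPositions[v].y += w * examples[i].vertexOffsets[v].y;
                skinnedPositions[v].z += w * examples[i].vertexOffsets[v].z;
            }
        }
    }

The appeal for games is that the expensive simulation happens offline; at runtime the cost is just a weighted sum of offsets on top of ordinary skinning.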

Split Second Screen Space

Monday, 26 July, 2-3:30 pm

  • Screen Space Classification for Efficient Deferred Shading (Neil Hutchinson, Jeremy Moore, Balor Knight, Matthew Ritchie and George Parrish from Black Rock Studio) – website sez, “This talk introduces a general, extendible method for screen classification and demonstrates how its use accelerated shadowing, lighting, and post processing in Disney’s Split/Second video game.” This sounds like a useful extension to SPU-based screen tile classification methods; I wonder if it is cross-platform (a sketch of the general tile-classification idea follows this list).
  • How to Get From 30 to 60 Frames Per Second in Video Games for “Free” (Dmitry Andreev from LucasArts) – well, this title is promising a lot! The website description doesn’t say much more than the title, but if LucasArts actually uses it in production this might be useful.
  • Split-Second Motion Blur (Kenny Mitchell, Matt Ritchie and Greg Modern from Black Rock Studio) – the description mentions “image and texture-space sampling techniques”, so this is probably a combination of blurring road textures in the direction of motion with screen-space techniques. Split-Second looks good; an overall description of their motion blur system should be interesting to hear.
  • A Deferred-Shading Pipeline for Real-Time Indirect Illumination (Cyril Soler and Olivier Hoel from INRIA Rhone-Alpes; Frank Rochet from EDEN GAMES) – there have been screen-space indirect illumination (approximation) techniques published before, but none used in games that I know of; there could be some useful ideas here.
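
On the screen-space classification item above, here is a minimal CPU-side sketch of the general tile-classification idea; it is not Black Rock’s system, and the flags and G-buffer fields are invented for illustration. Each screen tile gets a bitmask of the features its pixels need, and the mask then selects a shader permutation so that simple tiles run cheap shaders:

    // Classify a screen tile by OR-ing together per-pixel feature flags.
    // A later pass draws each distinct flag combination with its own
    // (pre-built) shader variant, so only tiles that need the expensive
    // paths pay for them.
    #include <cstdint>

    enum TileFlags : uint32_t {
        TILE_SKY       = 1u << 0,  // tile is entirely sky; skip lighting
        TILE_SHADOWED  = 1u << 1,  // at least one pixel needs the shadow term
        TILE_MSAA_EDGE = 1u << 2,  // tile contains geometric edges; shade per sample
    };

    struct GBufferSample { float depth; bool inShadow; bool isEdge; };

    uint32_t ClassifyTile(const GBufferSample* pixels, int count, float farPlane) {
        uint32_t flags = TILE_SKY;
        for (int i = 0; i < count; ++i) {
            if (pixels[i].depth < farPlane) flags &= ~uint32_t(TILE_SKY); // geometry present
            if (pixels[i].inShadow)         flags |= TILE_SHADOWED;
            if (pixels[i].isEdge)           flags |= TILE_MSAA_EDGE;
        }
        return flags;
    }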

APIs for Rendering

Wednesday, 28 July, 2-3:30 pm

  • Open Shading Language (Larry Gritz, Clifford Stein, Chris Kulla and Alejandro Conty from Sony Pictures Imageworks) – this open-source project is interesting in that it is a shading language designed from the ground up for ray-tracing renderers. Probably not of immediate relevance to games, but some day…
  • REYES using DirectX 11 (Andrei Tatarinov from NVIDIA Corporation) – the website summary claims that this REYES implementation uses “not only the compute power of GPU, but also the fixed-function stages of the graphics pipeline.” This is something I have wanted to see someone try for a long time; the typical pure-compute approaches to GPU-accelerated REYES seem wasteful, given the similarities between the existing fixed-function units and some of the operations in the REYES algorithm (outlined after this list). It will be interesting to see how efficient this implementation ends up being.
  • WebGLot: High-Performance Visualization in the Browser (Dan Lecocq, Markus Hadwiger, and Alyn Rockwood from King Abdullah University of Science and Technology) – although anything that makes it easier for browser-based games to use the GPU is interesting, I’m not familiar enough with the existing approaches to judge how new this stuff is.
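
As an aside on the REYES item above, here is a highly simplified outline of the classic REYES control flow (bound and cull, split, dice into a micropolygon grid, shade, sample); the types and stubs are placeholders, not anything from the NVIDIA implementation. The interesting question the talk raises is which of these stages can be mapped onto the fixed-function tessellation and rasterization hardware rather than run purely in compute:

    // Skeleton of the REYES bound/split/dice/shade/sample loop.
    #include <memory>
    #include <queue>
    #include <vector>

    struct Bounds { float minX, minY, maxX, maxY; };

    struct Primitive {                     // placeholder surface type
        virtual ~Primitive() = default;
        virtual Bounds ScreenBounds() const = 0;
        virtual bool SmallEnoughToDice(float maxGridSize) const = 0;
        virtual std::vector<std::unique_ptr<Primitive>> Split() const = 0;
    };

    struct Grid {};  // micropolygon grid: positions, shaded colors, ...

    // Stubs standing in for the real per-stage work.
    Grid Dice(const Primitive&) { return Grid{}; }  // tessellate into micropolygons
    void Shade(Grid&) {}                            // run the surface shader on the grid
    void SampleIntoFramebuffer(const Grid&) {}      // rasterize/filter the micropolygons

    bool OnScreen(const Bounds& b, float screenW, float screenH) {
        return b.maxX >= 0.0f && b.maxY >= 0.0f && b.minX < screenW && b.minY < screenH;
    }

    void RenderReyes(std::vector<std::unique_ptr<Primitive>> scene,
                     float screenW, float screenH, float maxGridSize) {
        std::queue<std::unique_ptr<Primitive>> work;
        for (auto& p : scene) work.push(std::move(p));

        while (!work.empty()) {
            std::unique_ptr<Primitive> p = std::move(work.front());
            work.pop();
            if (!OnScreen(p->ScreenBounds(), screenW, screenH)) continue; // bound & cull
            if (p->SmallEnoughToDice(maxGridSize)) {
                Grid g = Dice(*p);          // a natural fit for the hardware tessellator
                Shade(g);                   // compute (or pixel-shader) work
                SampleIntoFramebuffer(g);   // a natural fit for the hardware rasterizer
            } else {
                for (auto& child : p->Split()) work.push(std::move(child)); // split & requeue
            }
        }
    }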

Games & Real Time

Thursday, 29 July, 10:45 am-12:15 pm

  • User-Generated Terrain in ModNation Racers (James Grieve, Clint Hanson, John Zhang, Lucas Granito and Cody Snyder from United Front Games) – from all accounts, the system for user-generated tracks and terrain in ModNation Racers is impressive; a description of this system by its developers is well worth attending.
  • Irradiance Rigs (Hong Yuan from University of Massachusetts Amherst; Derek Nowrouzezahrai from University of Toronto; Peter-Pike Sloan from Disney Interactive Studios) – this looks like an extension of light-probe lighting techniques; it promises better results for large objects and/or near lighting. These techniques are very common in games, and this talk looks likely to be useful.
  • Practical Morphological Anti-Aliasing on the GPU (Venceslas Biri and Adrien Herubel from Université Paris-Est; Stephane Deverly from Duran Duboi Studio) – since God of War III produced great visuals from an SPU implementation of Morphological Antialiasing, the games industry has shown much interest in a more GPU-friendly version of the algorithm, for use on Xbox 360 or high-end PCs. It’s hard to tell from the short description on the website whether the version in this talk is any good, but it might well be worth attending the talk to find out.
  • Curvature-Dependent Reflectance Function for Rendering Translucent Materials (Hiroyuki Kubo from Waseda University; Yoshinori Dobashi from Hokkaido University; Shigeo Morishima from Waseda University) – this sounds similar to the paper Curvature-Based Shading of Translucent Materials, such as Human Skin by Konstantin Kolchin (we discuss it in the section on “Wrap Lighting” in RTR3, since it is essentially an attempt to put wrap lighting on a physically sound footing). Since in most cases curvature can be precomputed, this could be a cheap way to get more accurate subsurface scattering effects (see the small wrap-lighting sketch after this list).
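
On the curvature point above, here is a tiny sketch of the familiar wrap-lighting diffuse term with the wrap amount driven by a precomputed curvature value. The curvature-to-wrap mapping below is a placeholder for illustration only, not the model from this talk or from Kolchin’s paper:

    #include <algorithm>

    // Classic wrap lighting: (N.L + w) / (1 + w), with the wrap amount w driven
    // by precomputed curvature (e.g. 1/radius, baked per vertex or in a texture),
    // so thin, highly curved regions such as ears and nostrils scatter more.
    float CurvatureWrapDiffuse(float nDotL, float curvature, float scatterScale) {
        float w = std::min(1.0f, curvature * scatterScale); // placeholder mapping
        float wrapped = (nDotL + w) / (1.0f + w);
        return std::max(0.0f, wrapped);
    }

Since the curvature term is baked, the runtime cost over a plain Lambertian term is only a few extra shader instructions.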

A lot of the film production talk sessions also look interesting, even without an explicit game or real-time connection; I have often found useful information at such talks in previous years. These sessions include “Elemental Training 101”, “All About Avatar”, “Rendering Intangibles”, “Volumes and Precipitation”, “Simulation in Production”, “Blowing $h!t Up”, “Pipelines and Asset Management” and “Fur, Feathers and Trees”.