SIGGRAPH Scheduler & Course Update

For anyone still working on their SIGGRAPH 2010 schedule, SIGGRAPH now has an online scheduler available. They are also promising an iPhone app, but this has not yet materialized. Most courses (sadly, only one of mine) now have detailed schedules, which reveal more about two of the most interesting courses for game and real-time rendering developers:

Advances in Real-Time Rendering in 3D Graphics and Games

The first half, Advances in Real-Time Rendering in 3D Graphics and Games I (Wednesday, 28 July, 9:00 AM – 12:15 PM, Room 515 AB) starts with a short introduction by Natalya Tatarchuk (Bungie), and continues with four 45 to 50-minute talks:

  • Rendering techniques in Toy Story 3, by John Ownby, Christopher Hall and Robert Hall (Disney).
  • A Real-Time Radiosity Architecture for Video Games, by Per Einarsson (DICE) and Sam Martin (Geomerics)
  • Real-Time Order Independent Transparency and Indirect Illumination using Direct3D 11, by Jason Yang and Jay McKee (AMD)
  • CryENGINE 3: Reaching the Speed of Light, by Anton Kaplanyan (Crytek)

The second half, Advances in Real-Time Rendering in 3D Graphics and Games II (Wednesday, 28 July, 2:00 PM – 5:15 PM, Room 515 AB) continues with five more talks (these are more variable in length, ranging from 25 to 50 minutes):

  • Sample Distribution Shadow Maps, by Andrew Lauritzen (Intel)
  • Adaptive Volumetric Shadow Maps, by Marco Salvi (Intel)
  • Uncharted 2: Character Lighting and Shading, by John Hable (Naughty Dog)
  • Destruction Masking in Frostbite 2 using Volume Distance Fields, by Robert Kihl (DICE)
  • Water Flow in Portal 2, by Alex Vlachos (Valve)

The course concludes with a short panel (Open Challenges for Rendering in Games and Future Directions) and a Q&A session with all the course speakers.

Beyond Programmable Shading

The first half,  Beyond Programmable Shading I (Thursday, 29 July, 9:00 AM – 12:15 PM, Room 515 AB) includes seven 20-30 minute talks:

  • Looking Back, Looking Forward, Why and How is Interactive Rendering Changing, by Mike Houston (AMD)
  • Five Major Challenges in Interactive Rendering, by Johan Andersson (DICE)
  • Running Code at a Teraflop: How a GPU Shader Core Works, by Kayvon Fatahalian (Stanford)
  • Parallel Programming for Real-Time Graphics, by Aaron Lefohn (Intel)
  • DirectCompute Use in Real-Time Rendering Products, by Chas. Boyd (Microsoft)
  • Surveying Real-Time Beyond Programmable Shading Rendering Algorithms, by David Luebke (NVIDIA)
  • Bending the Graphics Pipeline, by Johan Andersson (DICE)

The second half, Beyond Programmable Shading II (Thursday, 29 July, 2:00 PM – 5:15 PM, Room 515 AB) starts with a short “re-introduction” by Aaron Lefohn (Intel) and continues with five 20-35 minute talks:

  • Keeping Many Cores Busy: Scheduling the Graphics Pipeline, by Jonathan Ragan-Kelley (MIT)
  • Evolving the Direct3D Pipeline for Real-Time Micropolygon Rendering, by Kayvon Fatahalian (Stanford)
  • Decoupled Sampling for Real-Time Graphics Pipelines, by Jonathan Ragan-Kelley (MIT)
  • Deferred Rendering for Current and Future Rendering Pipelines, by Andrew Lauritzen (Intel)
  • PantaRay: A Case Study in GPU Ray-Tracing for Movies, by Luca Fascione (Weta) and Jacopo Pantaleoni (NVIDIA)

and closes with a 15-minute wrap-up (What’s Next for Interactive Rendering Research?) by Mike Houston (AMD), followed by a 45-minute panel (What Role Will Fixed-Function Hardware Play in Future Graphics Architectures?) with course speakers Mike Houston, Kayvon Fatahalian, and Johan Andersson, joined by Steve Molnar (NVIDIA) and David Blythe (Intel) (thanks to Aaron Lefohn for the update).

Both of these courses look extremely strong, and I recommend them to any SIGGRAPH attendee interested in real-time rendering (I definitely plan to attend them!)

Four presentations by DICE is an unusually large number for a single game developer, but that isn’t the whole story; they are actually doing two additional presentations in the Stylized Rendering in Games course, for a total of six!

Gamefest 2010 Presentations

I attended this year’s Gamefest back in February. Gamefest is a conference run by Microsoft, focusing on games development for Microsoft platforms (Xbox 360 and Windows). This year (unusually, due to the presence of prerelease information on Kinect, at the time still known as “Project Natal”) the conference was only open to registered platform developers. For this reason, I didn’t blog about it at the time (no sense in telling people about stuff they can’t see).

Recently (thanks to the Legalize Adulthood! blog) I became aware that the Gamefest 2010 presentations are online on the conference website, and available for anyone (not just registered Xbox 360 and Windows Live developers). I’ll briefly discuss which presentations I think are of most interest. First, the ones I attended and found interesting:

Lighting Volumes

This was a very nice talk about baking lighting into volumes by John O’Rorke, Director of Technology at Monolith Productions. Monolith were trying to light a large city at night, where the character could traverse the city pretty freely both horizontally and vertically. Lots of instances and geometry Levels-of-Detail (LODs), lots of dynamic lights. A standard lightmap + light probe solution took up too much memory given the large surface area, and Monolith didn’t like the slow baking workflow involved, as well as the inconsistencies between static and dynamic objects.

Instead, Monolith stored light probes in volume textures. They tried spherical harmonics (SH) and didn’t like it (too much memory, too blurry to use for specular). F.E.A.R. 2 shipped with an approach similar to Valve’s “Ambient Cube” (6 RGB coefficients), which has the advantage of cheap shader evaluation. For their new game they went with a stripped-down version of this, which had a single RGB color and 6 luminance coefficients; this reduces the data from 18 to 9 scalars, and it was hard to tell the difference. Besides memory, this also sped up the shaders (fewer cache misses) and gave them better precision (since the luminance and color can be combined in a way that increases precision). For HDR they used a scale value for each volume (the game had multiple volumes in it) – this also gave them good precision in dark areas. Evaluating the “luminance cube” is extremely cheap (details in the slides). John also described some implementation details to do with stenciling out areas of the screen, using MIP maps, and getting around 360 alignment issues with DXT1 textures (all volumes were stored as DXT1).
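
To make the representation concrete, here is a minimal C++ sketch of how such a probe (one shared RGB color plus six luminance values) might be evaluated, in the spirit of Valve’s ambient cube. The struct layout and names are my own assumptions, not Monolith’s code, and a real engine would do this in a shader against a volume texture.

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// Hypothetical probe layout: one shared chromaticity plus six luminance
// values, one per axis direction (+X, -X, +Y, -Y, +Z, -Z).
struct LuminanceCube {
    Vec3 color;                      // shared RGB color
    std::array<float, 6> luminance;  // per-face scalar intensity
};

// Ambient-cube-style evaluation: weight each axis by the squared normal
// component and pick the face matching the normal's sign on that axis.
// For a unit normal the three weights sum to one.
Vec3 EvaluateLuminanceCube(const LuminanceCube& probe, const Vec3& n)
{
    const float w[3] = { n.x * n.x, n.y * n.y, n.z * n.z };
    const float lum =
        w[0] * probe.luminance[n.x >= 0.0f ? 0 : 1] +
        w[1] * probe.luminance[n.y >= 0.0f ? 2 : 3] +
        w[2] * probe.luminance[n.z >= 0.0f ? 4 : 5];
    return { probe.color.x * lum, probe.color.y * lum, probe.color.z * lum };
}
```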

Generation: the artists place lights (including area lights) and all the lights are baked (direct only, no global illumination (GI) bounces) during level packing. The math is simple – the tools just evaluated diffuse lighting for 6 normal directions at the center of each volume texel. Once the number of lights added by the artists started getting large, this slowed down a bit, so they added a caching system for the baked volumes. They eventually added GI support by rendering cube map probes in the game.
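
A bake-time sketch along the same lines (again my own reconstruction of the general idea, not Monolith’s tool code; the pre-attenuated BakedLight input is a simplification): for each volume texel, accumulate clamped N·L diffuse lighting along the six axis directions, which can then be split into the shared color and six luminances of the runtime format.

```cpp
#include <algorithm>
#include <array>
#include <vector>

struct Float3 { float x, y, z; };
static float Dot(Float3 a, Float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Per-texel view of a light: direction from the texel toward the light,
// and the light color with distance attenuation already applied.
struct BakedLight { Float3 direction; Float3 color; };

// Evaluate direct diffuse lighting for the six axis-aligned normal directions.
std::array<Float3, 6> BakeTexel(const std::vector<BakedLight>& lights)
{
    static const Float3 kAxes[6] = {
        { 1, 0, 0 }, { -1, 0, 0 }, { 0, 1, 0 }, { 0, -1, 0 }, { 0, 0, 1 }, { 0, 0, -1 }
    };
    std::array<Float3, 6> faces = {};  // one RGB sum per face
    for (const BakedLight& l : lights) {
        for (int f = 0; f < 6; ++f) {
            float nDotL = std::max(0.0f, Dot(kAxes[f], l.direction));
            faces[f].x += l.color.x * nDotL;
            faces[f].y += l.color.y * nDotL;
            faces[f].z += l.color.z * nDotL;
        }
    }
    return faces;  // then split into one shared color plus six luminances
}
```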

Downsides: low resolution, bad for high contrast shadows, can get light or shadow bleeding through thin geometry. They use dynamic lights for high contrast / shadow casting lighting.

For the future they plan to cascade the volumes and stream them. They also tried raymarching against the volume to get atmospheric effects; this was fast enough on high-end PCs but not on consoles.

Rendering with Conviction: The Graphics of Splinter Cell

This great talk (by Stephen Hill from Ubisoft) went into detail on two rendering systems used in the game Splinter Cell: Conviction. The first was a software hierarchical Z-Buffer occlusion system. They used it in various ways to cull draw calls for both shadow and primary rendering. The system could handle over 20,000 occlusion queries in around half a millisecond. Results looked pretty good.
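
For readers unfamiliar with the technique, here is a rough CPU-side illustration of the kind of test involved. It assumes a depth mip chain in which each texel stores the farthest depth of its children; this is the standard hierarchical-Z idea, not Ubisoft’s actual implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One level of the depth mip chain; each texel holds the FARTHEST depth
// of the pixels it covers (1.0 = far plane).
struct DepthMip {
    int width, height;
    std::vector<float> depth;  // row-major
};

// screenRect (x0,y0)-(x1,y1) in mip-0 pixels; objectMinDepth is the nearest
// projected depth of the occludee's bounding volume.
bool IsVisibleHiZ(const std::vector<DepthMip>& mips,
                  int x0, int y0, int x1, int y1, float objectMinDepth)
{
    // Choose the mip level where the rect shrinks to roughly 2x2 texels.
    int longest = std::max(x1 - x0, y1 - y0);
    int level = std::max(0, (int)std::ceil(std::log2((double)std::max(longest, 1))) - 1);
    level = std::min(level, (int)mips.size() - 1);

    const DepthMip& m = mips[level];
    int mx0 = std::clamp(x0 >> level, 0, m.width  - 1);
    int my0 = std::clamp(y0 >> level, 0, m.height - 1);
    int mx1 = std::clamp(x1 >> level, 0, m.width  - 1);
    int my1 = std::clamp(y1 >> level, 0, m.height - 1);

    // Farthest occluder depth over the covered texels.
    float maxOccluderDepth = 0.0f;
    for (int y = my0; y <= my1; ++y)
        for (int x = mx0; x <= mx1; ++x)
            maxOccluderDepth = std::max(maxOccluderDepth, m.depth[y * m.width + x]);

    // Visible if the object could be nearer than everything already drawn there.
    return objectMinDepth <= maxOccluderDepth;
}
```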

Next, Stephen discussed the game’s ambient occlusion (AO) system. The game developers didn’t use screen-space ambient occlusion (SSAO), since they didn’t like the inaccuracy, cost, and lack of artist control. Instead they went for a hybrid baked system. For background surfaces (buildings, etc.) they bake AO maps. The precomputation is GPU-accelerated, based on the GPU Gems 2 article “High-Quality Global Illumination Rendering Using Rasterization” (available here: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter38.html). For dynamic rigid objects like tables, chairs, vehicles, etc. they precompute AO volumes (16x16x16 or so). Finally, for characters they analytically compute AO from an articulating model of “capsules” (two half-spheres connected by a cylinder). Ubisoft combine all of these (not trying to address double-occlusion, so results are slightly too dark) into a downsampled offscreen buffer. Rather than simple scalar AO, all of this uses a directional 4-number AO representation (essentially linear SH) so that they can later apply high-res normal maps to it when the offscreen buffer is applied. They figured out a clever way to map the math so that they can use blending hardware to combine these directional AOs into the offscreen buffer in a way that makes sense. The AO buffer is later applied using cross-bilateral upscaling. For the future, Ubisoft would like to add streaming support for the AO maps and volumes to allow for higher resolution.
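
As a sketch of the character part, here is one common way analytic capsule occlusion is done: approximate the capsule by the closest sphere on its core segment and use an r²/d² solid-angle-style falloff scaled by N·L. The exact math in the game may well differ; the names and constants here are mine.

```cpp
#include <algorithm>
#include <cmath>

struct V3 { float x, y, z; };
static V3    Sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A limb occluder: a segment from a to b, swept by a sphere of the given radius.
struct Capsule { V3 a, b; float radius; };

// Approximate ambient occlusion contribution of one capsule at a shaded point.
float CapsuleOcclusion(V3 point, V3 normal, const Capsule& c)
{
    // Closest point on the capsule's core segment to the shaded point.
    V3 ab = Sub(c.b, c.a);
    V3 ap = Sub(point, c.a);
    float t = std::clamp(Dot(ap, ab) / std::max(Dot(ab, ab), 1e-6f), 0.0f, 1.0f);
    V3 closest = { c.a.x + ab.x * t, c.a.y + ab.y * t, c.a.z + ab.z * t };

    V3 toSphere = Sub(closest, point);
    float distSq = std::max(Dot(toSphere, toSphere), 1e-6f);
    float dist = std::sqrt(distSq);
    V3 dir = { toSphere.x / dist, toSphere.y / dist, toSphere.z / dist };

    // Solid-angle-style approximation: (r^2 / d^2) scaled by N.L.
    float nDotL = std::max(0.0f, Dot(normal, dir));
    return std::min(1.0f, nDotL * (c.radius * c.radius) / distSq);
}

// Multiple occluders are typically combined multiplicatively:
//   visibility *= 1.0f - CapsuleOcclusion(point, normal, capsule);
```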

Stephen showed the end result, and it looked pretty good with a character running through a crowded scene, vaulting over tables, knocking down chairs, with nice ambient occlusion effects whenever any two objects were close. A system like this is definitely worth considering as an alternative to SSAO.

Stripped Down Direct3D: Xbox 360 Command Buffer and Resource Management

This excellent talk (by Wade Brainerd, who like me works in Activision’s Studio Central group) dives deep into a low-level description of Xbox 360 internals and the modified version of DirectX that it uses. A rare opportunity for people without registered console developer accounts to look at this stuff, which is relevant to PC developers as well since it shows you what happens under the driver’s hood.

Fluid Simulation Driven Effects in Dark Void

This talk by NVIDIA contained basically the same stuff as the I3D paper Interactive Fluid-Particle Simulation using Translating Eulerian Grids, which can be found here: http://www.jcohen.name/. It was interesting to hear about such a high-end CUDA fluid sim system being integrated into a shipping game (even if only on the PC version) – they got some cool particle effects out of it with turbulence etc. These kinds of effects will probably become more common once a new generation of console hardware arrives.

Advanced Rendering Techniques with DirectX 11

This talk covered various ways to use DX11 Compute Shaders in graphics, including fast computation of summed-area tables for anisotropic blurring of environment maps and depth of field. The speakers also showed an A-buffer-like technique for order-independent transparency, and a tile-based deferred rendering system that was more efficient than using pixel shaders. Like the previous talk, this seemed like the kind of stuff that could become mainstream in the next console generation.
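
To illustrate why summed-area tables are attractive here, a small plain-C++ sketch (the talk itself used compute shaders): once the table is built, a box filter of any width costs four fetches, which is what makes variable-width blurs such as depth of field or glossy environment-map filtering cheap.

```cpp
#include <vector>

// sum[y*width + x] holds the sum of all pixels in the rectangle [0..x] x [0..y].
struct SAT {
    int width, height;
    std::vector<double> sum;
};

SAT BuildSAT(const std::vector<float>& image, int width, int height)
{
    SAT sat{ width, height, std::vector<double>(width * height, 0.0) };
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            double left = (x > 0) ? sat.sum[y * width + x - 1] : 0.0;
            double up   = (y > 0) ? sat.sum[(y - 1) * width + x] : 0.0;
            double diag = (x > 0 && y > 0) ? sat.sum[(y - 1) * width + x - 1] : 0.0;
            sat.sum[y * width + x] = image[y * width + x] + left + up - diag;
        }
    return sat;
}

// Average over the inclusive rectangle [x0..x1] x [y0..y1]: four fetches,
// regardless of the filter width.
float BoxAverage(const SAT& sat, int x0, int y0, int x1, int y1)
{
    auto S = [&](int x, int y) -> double {
        return (x < 0 || y < 0) ? 0.0 : sat.sum[y * sat.width + x];
    };
    double total = S(x1, y1) - S(x0 - 1, y1) - S(x1, y0 - 1) + S(x0 - 1, y0 - 1);
    double area  = double(x1 - x0 + 1) * double(y1 - y0 + 1);
    return float(total / area);
}
```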

Realistic Rendering with Spatially-Varying Reflectance

This presentation discussed research published in the SIGGRAPH Asia 2009 paper “All-Frequency Rendering of Dynamic, Spatially-Varying Reflectance” (available here: http://research.microsoft.com/en-us/um/people/johnsny/). The presentation was by John Snyder, one of the paper authors. It’s similar to some other recent papers which represent normal distribution functions as a sum of Gaussians and filter them, but this paper does some interesting things with regards to supporting environment maps and transforming from half-angle to view space. Worth a read for people looking at specular shader stuff.

Xbox 360 Shaders and Performance: How Not to Upset the GPU

This talk was probably old hat to anyone with significant 360 experience but should be interesting to anyone who does not fit that description – it was a rare public discussion of low-level console details.

Bringing Characters to Life: Using Physics to Enhance Animation

This talk was about combining physics with canned animation (similar to some of NaturalMotion’s tools). It looked pretty good. The basic idea is straightforward: artists paint the tightness of springs connecting the character’s simulated joints to the skeleton playing the animation, and a state machine varies these tightness values based on animation and gameplay events.
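
A toy sketch of that idea, reduced to a single scalar angle per joint for brevity (the names, the unit inertia, and the simple integration step are my simplifications; a real system works on full 3D joint transforms):

```cpp
// Each simulated joint is pulled toward the authored animation pose by a
// damped spring whose stiffness ("tightness") is artist-painted and can be
// changed at runtime by a state machine (e.g., loosened on impact events).
struct TrackedJoint {
    float simAngle    = 0.0f;  // physics-driven angle
    float simVelocity = 0.0f;
    float tightness   = 50.0f; // artist-painted spring stiffness
    float damping     = 5.0f;  // keeps the joint from oscillating forever
};

// Step the joint toward the current animation pose (semi-implicit Euler).
void StepJoint(TrackedJoint& j, float animAngle, float externalTorque, float dt)
{
    float springTorque = j.tightness * (animAngle - j.simAngle)
                       - j.damping   * j.simVelocity;
    j.simVelocity += (springTorque + externalTorque) * dt;  // unit inertia assumed
    j.simAngle    += j.simVelocity * dt;
}
```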

The Dark Art of Shadow Mapping

This was a good, basic introduction to the current state of the art in shadow mapping.

The Devil is in the Details: Nuances of Light Mapping

Illuminate Labs (the makers of Beast and Turtle) gave this talk about baked lighting. It was pretty basic for anyone who’s done work in this area but might be good to brush up with for people who aren’t familiar with the latest practice.

Other Talks

There were a bunch of talks I didn’t attend (too many overlapping sessions!) but which look promising based on title, speaker list, or both: Case Studies in VMX128 Optimization, Best Practices for DirectX 11 Development, DirectX 11 DirectCompute: A Teraflop for Everyone, DirectX 11 Technology Update, and Think DirectX 11 Tessellation! – What Are Your Options?

SIGGRAPH 2010 Panels

I don’t often go to SIGGRAPH panels, but this year’s list includes three that look very tempting. Here they are, sorted by date:

Future Directions in Graphics Research

Sunday, 25 July, 3:45 PM – 5:15 PM

The SIGGRAPH website description says, “This panel presents the results of an NSF-funded workshop on defining broader, fundamental long-term research areas for potential funding opportunities in medical imaging and device design, manufacturing, computational photography, scientific visualization, and many other emerging areas in graphics research.” It’s important to know where the funding is going into computer graphics research, and what the researchers think the most promising future directions are. The panelists include some of the most prominent and influential computer graphics professors: Jessica Hodgins from Carnegie Mellon,  James Foley (first author of “Computer Graphics: Principles and Practice”) from Georgia Tech,  Pat Hanrahan (who probably has his name on more SIGGRAPH papers than anyone in the world) from Stanford University, and  Donald P. Greenberg (whose list of former students would make a great first draft for a “who’s who” of computer graphics) from Cornell.

CS 292: The Lost Lectures; Computer Graphics People and Pixels in the Past 30 Years

Monday, 26 July, 3:45 PM – 5:15 PM

This is a unique idea for a panel – in the 1980s, Ed Catmull and Jim Blinn taught a hugely influential course on computer graphics. Among many others, it inspired Richard Chuang, who went on to found PDI. While teaching the course, Ed Catmull was building Lucasfilm’s computer graphics group, which later became Pixar. The panelists are Ed Catmull and Richard Chuang, who according to the website description “use video from the course to reflect on the evolution of computer graphics – from the genesis of Pixar and PDI to where we are today.” Catmull in particular is an amazing speaker – this looks well worth attending.

Large Steps Toward Open Source

Thursday, 29 July, 9:00 AM – 10:30 AM

Several influential film industry groups have open-sourced major bits of internal technology recently. This panel discusses why they did it, what the benefits were, and where the challenges lay. This is definitely relevant to the game industry – would it make sense for us to do the same? (Insomniac is already leading the way – I wish they had a representative on this panel). Panelists include Rob Bredow (CTO of Sony Pictures Imageworks, which has recently launched several important open source initiatives), Andy Hendrickson (CTO of Walt Disney Animation Studios, which has recently done the same, most notably including the Ptex texture mapping system), Florian Kainz (Principal R&D Engineer at Industrial Light & Magic and the key individual behind OpenEXR, which ILM open-sourced in 2003), and Bill Polson (Lead of Production Engineering at Pixar Animation Studios). Pixar doesn’t currently have any open-source initiatives that I know of – does Bill’s participation mean that they are about to announce one?

SIGGRAPH 2010 Talks

After the courses, the next best source of good SIGGRAPH material for games and real-time graphics professionals is the Talks (formerly called Sketches), and this year is no exception. The final list of Talks can be found on the SIGGRAPH Talks webpage, as well as in the Advance Program PDF. I will summarize the most relevant sessions here, sorted by date:

Avatar for Nerds

Sunday, 25 July, 2-3:30 pm

  • A Physically Based Approach to Virtual Character Deformations (Simon Clutterbuck and James Jacobs from Weta Digital Ltd.) – I saw an early version of this presentation at Digital Domain a few weeks ago – although they use an expensive physical muscle simulation, they bake the results into a pose-space deformation-like representation; this kind of approach could work for games as well (pose-space deformation approaches in general offer a useful way to “bake” expensive deformations; their use in games should be further explored).
  • Rendering “Avatar”: Spherical Harmonics in Production (Nick McKenzie, Martin Hill and Jon Allitt from Weta Digital Ltd.) – The website says “Application of spherical harmonics in a production rendering environment for accelerated final-frame rendering of complex scenes and materials.” This sounds very similar to uses of spherical harmonics in games, making this talk likely to yield applicable ideas.
  • PantaRay: Directional Occlusion for Fast Cinematic Lighting of Massive Scenes (Jacopo Pantaleoni, Timo Aila, and David Luebke from NVIDIA Research; Luca Fascione, Martin Hill and Sebastian Sylwan from Weta Digital Ltd.) – the website mentions “…a novel system for precomputation of ray-traced sparse, directional occlusion caches used as a primary lighting technology during the making of Avatar.” Like the previous talk, this sounds very game-like; these are interesting examples of the convergence between graphics techniques in film and games going in the less common direction, from games to film rather than vice-versa. Note that  several of the authors of this talk are speaking at the “Beyond Programmable Shading” course, and there is also a paper about PantaRay (called “A System for Directional Occlusion for Fast Cinematic Lighting of Massive Scenes”).

Split Second Screen Space

Monday, 26 July, 2-3:30 pm

  • Screen Space Classification for Efficient Deferred Shading (Neil Hutchinson, Jeremy Moore, Balor Knight, Matthew Ritchie and George Parrish from Black Rock Studio) – website sez, “This talk introduces a general, extendible method for screen classification and demonstrates how its use accelerated shadowing, lighting, and post processing in Disney’s Split/Second video game.” This sounds like a useful extension to SPU-based screen tile classification methods; I wonder if it is cross-platform.
  • How to Get From 30 to 60 Frames Per Second in Video Games for “Free” (Dmitry Andreev from LucasArts) – well, this title is promising a lot! The website description doesn’t say much more than the title, but if LucasArts actually uses it in production this might be useful.
  • Split-Second Motion Blur (Kenny Mitchell, Matt Ritchie and Greg Modern from Black Rock Studio) – the description mentions “image and texture-space sampling techniques”, so this is probably a combination of blurring road textures in the direction of motion with screen-space techniques. Split-Second looks good; an overall description of their motion blur system should be interesting to hear.
  • A Deferred-Shading Pipeline for Real-Time Indirect Illumination (Cyril Soler and Olivier Hoel from INRIA Rhone-Alpes; Frank Rochet from EDEN GAMES) – there have been screen-space indirect illumination (approximation) techniques published before, but none used in games that I know of; there could be some useful ideas here.

APIs for Rendering

Wednesday, 28 July, 2-3:30 pm

  • Open Shading Language (Larry Gritz, Clifford Stein, Chris Kulla and Alejandro Conty from Sony Pictures Imageworks) – this Open-Source project from Sony Pictures Imageworks is interesting in that it is a shading language designed from the ground up for ray-tracing renderers. Probably not of immediate relevance to games, but some day…
  • REYES using DirectX 11 (Andrei Tatarinov from NVIDIA Corporation) – the website summary claims that this REYES implementation uses “not only the compute power of GPU, but also the fixed-function stages of the graphics pipeline.” This is something I have wanted to see someone try for a long time; the typical pure-Compute approaches to GPU-accelerated REYES seem wasteful, given the similarities between the existing fixed function units and some of the operations in the REYES algorithm. It will be interesting to see how efficient this implementation ends up being.
  • WebGLot: High-Performance Visualization in the Browser (Dan Lecocq, Markus Hadwiger, and Alyn Rockwood from King Abdullah University of Science and Technology) – although anything that makes it easier for browser-based games to use the GPU is interesting, I’m not familiar enough with the existing approaches to judge how new this stuff is.

Games & Real Time

Thursday, 29 July, 10:45 am-12:15 pm

  • User-Generated Terrain in ModNation Racers (James Grieve, Clint Hanson, John Zhang, Lucas Granito and Cody Snyder from United Front Games) – from all accounts, the system for user-generated tracks and terrain in ModNation Racers is impressive; a description of this system by its developers is well worth hearing.
  • Irradiance Rigs (Hong Yuan from University of Massachusetts Amherst; Derek Nowrouzezahrai from University of Toronto; Peter-Pike Sloan from Disney Interactive Studios) – this looks like an extension of light-probe lighting techniques; it promises better results for large objects and / or near lighting. These techniques are very common in games, and this talk looks likely to be useful.
  • Practical Morphological Anti-Aliasing on the GPU (Venceslas Biri and Adrien Herubel from Université Paris-Est; Stephane Deverly from Duran Duboi Studio) – since God of War III produced great visuals from an SPU implementation of Morphological Antialiasing, there has been much interest in the games industry in a more GPU-friendly version of the algorithm, for use on Xbox 360 or high-end PCs. It’s hard to tell from the short description on the website whether the version in this talk is any good, but it might well be worth attending the talk to find out.
  • Curvature-Dependent Reflectance Function for Rendering Translucent Materials (Hiroyuki Kubo from Waseda University; Yoshinori Dobashi from Hokkaido University; Shigeo Morishima from Waseda University) – this sounds similar to the paper Curvature-Based Shading of Translucent Materials, such as Human Skin by Konstantin Kolchin (we discuss it in the section on “Wrap Lighting” in RTR3, since it is essentially an attempt to put wrap lighting on a physically sound footing). Since in most cases curvature can be precomputed, this could be a cheap way to get more accurate subsurface scattering effects (see the wrap-lighting sketch after this list).
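
For reference, here is a minimal sketch of classic wrap lighting with the wrap amount driven by a precomputed curvature value; this is my paraphrase of the general approach mentioned above, not the model proposed in the talk.

```cpp
#include <algorithm>

// Classic wrap lighting: wrap = 0 gives standard Lambert; larger values let
// light "wrap" around toward the unlit side, roughly mimicking subsurface
// scattering. (Dividing by (1 + w)^2 instead gives a more energy-conserving
// variant.)
float WrapDiffuse(float nDotL, float wrap)
{
    float w = std::clamp(wrap, 0.0f, 1.0f);
    return std::max(0.0f, (nDotL + w) / (1.0f + w));
}

// A curvature-dependent variant might simply derive the wrap amount from
// precomputed surface curvature: thin, highly curved regions (ears, nostrils)
// get more wrap. scatterScale is a hypothetical artist-tuned constant.
float CurvatureWrapDiffuse(float nDotL, float curvature, float scatterScale)
{
    return WrapDiffuse(nDotL, std::min(1.0f, curvature * scatterScale));
}
```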

A lot of the film production talk sessions also look interesting, even without an explicit game or real-time connection; I have often found useful information at such talks in previous years. These sessions include “Elemental Training 101”, “All About Avatar”, “Rendering Intangibles”, “Volumes and Precipitation”, “Simulation in Production”, “Blowing $h!t Up”, “Pipelines and Asset Management” and “Fur, Feathers and Trees”.

SIGGRAPH 2010 Courses Update

Since my original post about the SIGGRAPH 2010 courses, some of the courses now have updated speaker lists (including mine – regardless of what Eric may think, I’m not about to risk Hyper-Cerebral Electrosis by speaking for three hours straight). I’ll give the notable updates here:

Stylized Rendering in Games

Covered games will include:

  • Borderlands (presented by Gearbox cofounder and chief creative officer Brian Martel as well as VP of product development Aaron Thibault)
  • Brink (presented by lead programmer Dean Calver)
  • The 2008 Prince of Persia (presented by lead 3D programmer Jean-François St-Amour)
  • Battlefield Heroes (presented by graphics engineer Henrik Halén)
  • Mirror’s Edge (also presented by Henrik Halén).
  • Monday Night Combat (presented by art director Chandana Ekanayake) – thanks to Morgan for the update!

Physically Based Shading Models in Film and Game Production

  • I’ll be presenting the theoretical background, as well as technical, production, and creative lessons from the adoption of physically-based shaders at the Activision studios.
  • Also on the game side, Yoshiharu Gotanda (president, R&D manager, and co-founder of tri-Ace) will talk about some of the fascinating work he has been doing with physically based shaders.

On the film production side:

  • Adam Martinez is a computer graphics supervisor at Sony Pictures Imageworks whose film work includes the Matrix series and Superman Returns; his talk will focus on the use of physically based shaders in Alice in Wonderland.  Imageworks uses a ray-tracing renderer, unlike the micropolygon rasterization renderers used by most of the film industry; I look forward to hearing how this affects shading and lighting.
  • Ben Snow is a visual effects supervisor at Industrial Light and Magic who has done VFX work on numerous films (many of them as CG or VFX supervisor) including Star Trek: Generations, Twister, The Lost World: Jurassic Park, The Mummy, Star Wars: Episode II – Attack of the Clones, King Kong, and Iron Man. Ben has pioneered the use of physically based shaders in Terminator Salvation and Iron Man 2, which I hope to learn more about from his talk.

Color Enhancement and Rendering in Film and Game Production

The game side of the course has two speakers in common with the “physically-based shading” course:

  • Yoshiharu Gotanda will talk about his work on film and camera emulation at tri-Ace, which is every bit as interesting as his physical shading work.
  • I’ll discuss my experiences introducing filmic color grading techniques at the Activision studios.

And one additional speaker:

  • While working at Electronic Arts, Haarm-Pieter Duiker applied his experience from films such as the Matrix series and Fantastic Four to game development, pioneering the filmic tone-mapping technique recently made famous by John Hable. He then moved back into film production, working on Speed Racer and 2012 (for which he won a VES award). Haarm-Pieter also runs his own company which makes tools for film color management.

The theoretical background and film production side will be covered by a roster of speakers which (although I shouldn’t say this since I’m organizing the course) is nothing less than awe-inspiring:

  • Dominic Glynn is lead engineer of image mastering at Pixar Animation Studios. He has worked on films including Cars, The Wild, Ratatouille, Up and Toy Story 3. Dominic will talk about how color enhancement and rendering is done at different stages of the Pixar rendering pipeline.
  • Joseph Goldstone (Lilliputian Pictures LLC) is a prominent consulting color scientist; his film credits include Terminator 2: Judgment Day, Batman Returns, Apollo 13, The Fifth Element, Titanic, and Star Wars: Episode II – Attack of the Clones. He has contributed to industry standards committees such as the International Color Consortium (ICC) and the Academy of Motion Picture Arts and Sciences’ Image Interchange Framework.
  • Joshua Pines is vice president of color imaging R&D at Technicolor; between his work at Technicolor, ILM and other production companies he has over 50 films to his credit, including Star Wars: Return of the Jedi, The Abyss, Terminator 2: Judgment Day, Jurassic Park, Schindler’s List, Forrest Gump, Twister, Mission: Impossible, Titanic, Saving Private Ryan, The Mummy, Star Wars: The Phantom Menace, The Aviator, and many others. Joshua led the development of ILM’s film scanning system and has a Technical Achievement Award from the Academy of Motion Picture Arts and Sciences for his work on film archiving.
  • Jeremy Selan is the color pipeline lead at Sony Pictures Imageworks. He has worked on films including Spider-Man 2 and 3, Monster House, Surf’s Up, Beowulf, Hancock, and Cloudy with a Chance of Meatballs. Jeremy has contributed to industry standards committees such as the Digital Cinema Initiative (DCI), SMPTE, and the Academy of Motion Picture Arts and Sciences’ Image Interchange Framework. At the course, Jeremy will unveil an exciting new initiative he has been working on at Imageworks.
  • The creative aspects of color grading will be covered by Stefan Sonnenfeld, senior vice president at Ascent Media Group as well as president, managing director, and co-founder of Company 3. An industry-leading DI colorist, Stefan has worked on almost one hundred films including Being John Malkovich, the Pirates of the Caribbean series, War of the Worlds, Mission: Impossible III, X-Men: The Last Stand, 300, Dreamgirls, Transformers, Sweeney Todd, Cloverfield, The Hurt Locker, Body of Lies, The Taking of Pelham 1 2 3, Transformers: Revenge of the Fallen, Where the Wild Things Are, Alice in Wonderland, Prince of Persia: The Sands of Time, and many others, as well as numerous high-profile television projects.

ACM and SIGGRAPH Members – Vote for Open Access!

Long-time readers of this blog will be well aware of my position on ACM and Open Access. Although ACM is a non-profit organization which ostensibly has as its only mandate “the advancement of computing as a science and a profession”, the ACM Publications Board has been behaving like a rent-seeking publisher; bullying students working to provide valuable resources to the community, and lobbying the US Government against Open Access initiatives, all in the name of protecting their revenue streams.

I have witnessed an outpouring of anger from the computing community at these events, convincing me that I am not alone in believing that the ACM Publications Board (and by extension the ACM itself) has tragically lost its way, prioritizing its income over the good of computing as a science and a profession. In fighting Open Access they are on the wrong side of history; witness all the leading academic institutions who have come out in favor of the very government Open Access initiative which the ACM has opposed.

If you believe as I do, then now is the chance to make a difference! SIGGRAPH is holding elections now for three Director-at-Large positions, to be selected from five candidates; the deadline for the elections is June 4th. If you are an active SIGGRAPH member, you should have received instructions for voting by now; you can vote at this link. ACM is also holding general elections for various Council positions, including the President of the ACM; you can vote at this link. The deadline for the ACM General Council elections is May 24th.

For me, Open Access is the most important issue in these elections. But which candidates will fight for Open Access, and which for the status quo? Of all the SIGGRAPH candidates, only one has explicitly mentioned Open Access in their position statement (James O’Brien, as Eric pointed out in a recent post), and so far only one of the ACM Council candidates has: Salil Vadhan’s statement is here: http://www.acm.org/acmelections/candidate10

To arm voting ACM and SIGGRAPH members with information on the candidates’ positions, I have composed some questions regarding ACM’s copyright policy and Open Access, and put them up on a web page. I have sent these questions to all the candidates for these elections (except for the few which I have so far been unable to contact), and am posting the answers on the web page as they come in.

So far only two candidates for the SIGGRAPH election have answered: Mashhuda Glencross and James O’Brien. Both answers are Open-Access friendly. No candidates for the ACM Council have come forward yet. Keep following the questions web page, use the information there to select candidates, and vote in both elections. Nothing will ever change unless we make our voices heard!

SIGGRAPH 2010 Courses

This year, SIGGRAPH is making a very strong push to include more game and real-time content.  A lot of the programs are yet to be published, but the full list of courses is now up on the conference website, and many of them are of interest. The courses have always been the SIGGRAPH program with the most relevant material for film and game production; this year the game side is particularly strong. If you are doing game graphics, the courses by themselves are reason enough to attend the conference.

Full disclosure – I am organizing two of these courses, so my description of them may not be fully objective 🙂

The courses which are most directly relevant to game developers:

  1. Advances in Real-Time Rendering in 3D Graphics and Games – this full-day course, organized by Natasha Tatarchuk, has been a highlight of SIGGRAPH since it was first presented in 2006 (the name’s a bit clunky, though). Each year Natasha solicits top-notch game and real-time rendering content for her course. SSAO was first presented at this course, as were cascaded light volumes and many other important techniques. This year includes presentations from game powerhouses Bungie, Naughty Dog, Crytek, DICE, and Rockstar, among others.
  2. Beyond Programmable Shading – another very strong full-day course, now in its third year. Like Natasha’s course, this course includes brand-new material every year. Focusing on GPU compute APIs such as CUDA, DirectCompute and OpenCL, the presentations tend to skew towards GPU vendors but have also included some groundbreaking game developer talks on topics like sparse voxel octrees (by id software) and parallelism in graphics engines (DICE). This year, besides the usual suspects (NVIDIA, AMD, Intel, Microsoft), there will be a talk by Johan Andersson from DICE (he gave the parallelism talk last year and I can’t wait to hear what he’s been up to since), one from Kayvon Fatahalian from Stanford (who has been doing some fascinating research on GPU-accelerated micropolygon rendering), and finally one from Luca Fascione of Weta. Hopefully Luca will be talking about the GPU-accelerated PantaRay system he helped design to render the jungles in Avatar. PantaRay is used to precompute occlusion; a very game-like thing to do.
  3. Stylized Rendering in Games – in recent years, games have started to explore the universe of possible styles beyond photorealism. The course is organized by Morgan McGuire, who is also chairing this year’s NPAR conference, and includes presentations by the developers of some of the most prominent stylized games.
  4. Physically Based Shading Models in Film and Game Production – this is one of two courses I am organizing. This topic has fascinated me for years and was a major focus of my work on RTR3. Physically based shading is currently a hot topic in film production, making this a natural film-games crossover topic (my primary focus on the conference committee). I’ve been able to get speakers with really strong film production backgrounds, so I’m optimistic that this course will turn out well.
  5. Color Enhancement and Rendering in Film and Game Production – this is my other course. Most of my work in this area is more recent than the physical shader stuff so RTR3 doesn’t have as much material on it; perhaps I can remedy this in RTR4. Although this topic is well-established in film production (a field from which I’ve been able to get good speakers for this course as well), it is still an area of active development in games, as attested by the excellent GDC 2010 talk by John Hable.
  6. Global Illumination Across Industries – this is another film-games crossover course, with presentations by top people working on global illumination in both industries (the games side is represented by Illuminate Labs for precomputed GI and Crytek for dynamic GI).
  7. An Introduction to 3D Spatial Interaction With Videogame Motion Controllers – between Microsoft’s Project Natal, Sony’s Playstation Move, and the Wii MotionPlus, motion controllers are an extremely timely topic. The speakers include Richard Marks, the brains behind the Eyetoy, Playstation Eye and Playstation Move.
  8. Recent Advances in Real-Time Collision and Proximity Computations for Games and Simulations – this is an important area, and the speakers are leading researchers in the field. Among other topics, the course will cover the collision detection systems in the PhysX and Bullet libraries.
  9. Advanced Techniques in Real-Time Hair Rendering and Simulation – while this topic is a bit more of a niche, it is of interest for many games and the speakers have done some of the leading work in this area.
  10. Volumetric Methods in Visual Effects – one of the main differences between game and film graphics is the amount and quality of atmospheric effects. Film VFX houses have been actively developing their own systems for modeling and rendering clouds, fog, fire, ocean spray, etc. This course includes a stellar cast of speakers from Digital Domain, Sony Pictures Imageworks, Rhythm & Hues, Side Effects (developers of Houdini), PDI/DreamWorks and Double Negative; anything these people don’t know about volumetric effects isn’t worth knowing. This course is likely to have lots of good ideas for stuff that isn’t possible in real-time yet, but will be in the near future.
  11. Filtered Importance Sampling for Production Rendering – another film rendering course which is likely to yield good medium- and long-term real-time ideas. Importance sampling is crucial for efficient, high-quality reflections from arbitrary BRDFs and lighting; it can be used with environment maps as well as ray tracing. Filtered importance sampling is a more general, correct, and expensive version of the common game trick of prefiltering cubemaps for glossy reflections. It has recently found wide use in film production, a topic about which the speakers (from major visual effects houses such as ILM, Image Movers Digital and MPC) are well-qualified to speak.
  12. Perceptually Motivated Graphics, Visualization, and 3D Displays – Understanding human visual perception and how it relates to graphics is important for knowing which corners can be safely cut and which ones will yield distracting artifacts; 3D displays are a timely topic for game developers as well, now that TV and console manufacturers are getting into the act.
  13. Gazing at Games: Using Eye Tracking to Control Virtual Characters – I’m not aware of any commercial games that use gaze tracking as an input method (the course is presented by academic researchers). If existing cameras such as Playstation Eye and Project Natal can track eyes with sufficient precision, this may be an important trend going forward, but if new equipment is needed this might not be relevant for a long time (if ever).

Although not as directly relevant, some of the other courses appear to be informative and fun, such as Andrew Glassner’s course about the Processing graphics programming language, and the course on how to Build Your Own 3D Display.

Three ways to show off your game at SIGGRAPH 2010

I recently spent a weekend in downtown LA helping the SIGGRAPH 2010 committee put together the conference schedule.  Looking at the end result from a game developer’s perspective, this is going to be a great conference! More details will be published in early May, but you can see the emphasis on games already; of the current (partial) list of courses, over half have high relevance to games.

If you are a game developer, we need your participation to help make this the biggest game SIGGRAPH ever! A few months ago I posted about the February 18th deadline. That deadline is long gone, but several venues are still open. This is your chance to show off not just in front of your fellow game developers, but also before the leading film graphics professionals and researchers. The most relevant venues for game developers are:

  1. Live Real-Time Demos. The Electronic Theater, a nightly showcase of the best computer graphics clips of the year, has long been a SIGGRAPH highlight and the tentpole event of the Computer Animation Festival (which is an official qualifying festival for the Academy Awards). The Electronic Theater is shown on a giant screen in the largest convention center hall, before an audience packed with the world’s top computer graphics professionals and researchers. Last year SIGGRAPH introduced a new event before the Electronic Theater to showcase the best real-time graphics of the year. The submission deadline for Live Real-Time Demos is April 28th (a week and a half away), so time is short! Submitting your game to Live Real-Time Demos is as simple as uploading about 5 minutes of captured game footage (all submitted materials are held in strict confidentiality) and filling out a short online form. If you want your game submitted, please let your producer know about this ASAP; it will likely take some time to get approval.
  2. SIGGRAPH Dailies! (new for 2010) is where the artists get to shine; details here, including cool example presentations from Pixar. Other SIGGRAPH programs present graphics techniques; ‘SIGGRAPH Dailies!’ showcases the craft and artistry with which these techniques are applied. All excellent production art is welcome: characters, animations, level lighting, particle effects, etc. Each artist whose work is selected will get two minutes at SIGGRAPH to show a video clip of their work and tell an interesting story about creating it. The submission deadline for ‘SIGGRAPH Dailies!’ is May 6th. Submitting art to Dailies is just a matter of uploading 60-90 seconds of video and filling out an online form. If your studio is planning to submit more than one or two Dailies, you should use the batch submission process: designate a representative (like an art director or lead) to recruit presentations and get producer approval. Once the representative has a tentative list of submissions, they should contact SIGGRAPH (click this link and select ‘SIGGRAPH Dailies’ from the drop down menu) to give advance warning of the expected submission count. After all entries have video clips and backstory text files, the studio representative contacts SIGGRAPH again to coordinate a batch submission.
  3. Late-Breaking Talks. Although the initial talk deadline is past, there is one more chance to submit talks: the late-breaking deadline on May 6th. SIGGRAPH talks are 20-minute presentations, typically about practical, down-to-earth film or game production techniques. If you are a graphics programmer or technical artist, you must have developed several such techniques while working on your last game. If there is one you are especially proud of, consider submitting a Talk about it; this only requires a one-page abstract (if you happen to have video or additional documentation you can add them as supplementary material). To show the detail expected in the abstract and the variety of possible talks here are five abstracts from 2009: a game production technique, a game system/API, a game rendering technique, a film effects shot, and a film character.

Presenting at one of these forums is a professional opportunity well worth the small amount of work involved. Forward this post to other people on your team so they can get in on the fun!

More on God of War III Antialiasing

Since my recent post discussing the antialiasing method used in God of War III, Cedric Perthuis (a graphics programmer on the God of War III development team) was kind enough to email some additional details on how the technique was developed, which I will quote here:

“It was extremely expensive at first. The first not so naive SPU version, which was considered decent, was taking more than 120 ms, at which point, we had decided to pass on the technique. It quickly went down to 80 and then 60 ms when some kind of bottleneck was reached. Our worst scene remained at 60ms for a very long time, but simpler scenes got cheaper and cheaper. Finally, and after many breakthroughs and long hours from our technology teams, especially our technology team in Europe, we shipped with the cheapest scenes around 7 ms, the average Gow3 scene at 12 ms, and the most expensive scene at 20 ms.

In term of quality, the latest version is also significantly better than the initial 120+ ms version. It started with a quality way lower than your typical MSAA2x on more than half of the screen. It was equivalent on a good 25% and was already nicer on the rest. At that point we were only after speed, there could be a long post mortem, but it wasn’t immediately obvious that it would save us a lot of RSX time if any, so it would have been a no go if it hadn’t been optimized on the SPU. When it was clear that we were getting a nice RSX boost ( 2 to 3 ms at first, 6 or 7 ms in the shipped version ), we actually focused on evaluating if it was a valid option visually. Despite of any great performance gain, the team couldn’t compromise on quality, there was a pretty high level to reach to even consider the option. And as for the speed, the improvements on the quality front were dramatic. A few months before shipping, we finally reached a quality similar to MSAA2x on almost the entire screen, and a few weeks later, all the pixelated edges disappeared and the quality became significantly higher than MSAA2x or even MSAA4x on all our still shots, without any exception. In motion it became globally better too, few minor issues remained which just can’t be solved without sub-pixel sampling.

There would be a lot to say about the integration of the technique in the engine and what we did to avoid adding any latency. Contrarily to what I have read on few forums, we are not firing the SPUs at the end of the frame and then wait for the results the next frame. We couldn’t afford to add any significant latency. For this kind of game, gameplay is first, then quality, then framerate. We had the same issue with vsync, we had to come up with ways to use the existing latency. So instead of waiting for the results next frame, we are using the SPUs as parallel coprocessors of the RSX and we use the time we would have spent on the RSX to start the next frame. With 3 ms or 4 ms of SPU latency at most, we are faster than the original 6ms of RSX time we saved. In the end it’s probably a wash in term of latency due to some SPU scheduling consideration. We had to make sure we could kick off the jobs as soon as the RSX was done with the frame, and likewise, when the SPU are done, we need the RSX to pick up where it left and finish the frame. Integrating the technique without adding any latency was really a major task, it involved almost half of the team, and a lot of SPU optimization was required very late in the game.”

“For a long time we worked with a reference code, algorithm changes were made in the reference code and in parallel the optimized code was being optimized further. the optimized version never deviated from the reference code. I assume that doing any kind of cheap approximation would prevent any changes to the algorithm. There’s a point though, where the team got such a good grip of the optimized version that the slow reference code wasn’t useful anymore and got removed. We tweaked some values, made few major changes to the edge detection code and did a lot of testing. I can’t stress it enough. every iteration was carefully checked and evaluated.”

So it looks like my first impression of such techniques – that they are too expensive to be feasible on current consoles – was not that far off the mark; I just hadn’t accounted for what a truly heroic SPU optimization effort could achieve. I wonder what other graphics techniques could be made fast enough for games, given a similar effort?