A few days before the official release date of July 1, the book GPU Pro is out. Think “ShaderX, but now with color”. The example programs and source code are free to download. As mentioned before, more about it at Wolfgang Engel’s book blog.
Last month I noted some resources for finding out about graphics conference due dates and meeting dates. Naty pointed out that we, in fact, host one ourselves, by Ke-Sen Huang. Other people noted this nice one and this detailed one.
I’m posting today because Yamauchi Hitoshi has updated his own conference calendar (due to suggestions from readers of this blog), and also made the generator free software. He originally just made this page for himself, but the power of the web and all that… I like the layout a lot. The visual presentation of deadlines, notifications, and actual conference date is (I imagine) quite useful for deciding where to submit a paper and what alternatives there are if it is not immediately accepted.
I attended this year’s Gamefest back in February. Gamefest is a conference run by Microsoft, focusing on games development for Microsoft platforms (Xbox 360 and Windows). This year (unusually, due to the presence of prerelease information on Kinect, at the time still known as “Project Natal”) the conference was only open to registered platform developers. For this reason, I didn’t blog about it at the time (no sense in telling people about stuff they can’t see).
Recently (thanks to the Legalize Adulthood! blog) I became aware that the Gamefest 2010 presentations are online on the conference website, and available for anyone (not just registered Xbox 360 and Windows Live developers). I’ll briefly discuss which presentations I think are of most interest. First, the ones I attended and found interesting:
This was a very nice talk about baking lighting into volumes by John O’Rorke, Director of Technology at Monolith Productions. Monolith were trying to light a large city at night, where the character could traverse the city pretty freely both horizontally and vertically. Lots of instances and geometry Levels-of-Detail (LODs), lots of dynamic lights. A standard lightmap + light probe solution took up too much memory given the large surface area, and Monolith didn’t like the slow baking workflow involved, as well as the inconsistencies between static and dynamic objects.
Instead, Monolith stored light probes in volume textures. They tried spherical harmonics (SH) and didn’t like it (too much memory, too blurry to use for specular). F.E.A.R. 2 shipped with an approach similar to Valve’s “Ambient Cube” (6 RGB coefficients), which has the advantage of cheap shader evaluation. For their new game they went with a stripped-down version of this, which had a single RGB color and 6 luminance coefficients; this reduces from 18 to 9 scalars, and it was hard to tell the difference. Besides memory, this also sped up the shaders (fewer cache misses) and gave them better precision (since the luminance and color can be combined in a way that increases precision). For HDR they used a scale value for each volume (the game had multiple volumes in it) – this also gave them good precision in dark areas. Evaluating the “luminance cube” is extremely cheap (details in the slides). John also described some implementation details to do with stenciling out areas of the screen, using MIP maps, and getting around 360 alignment issues with DXT1 textures (all volumes were stored as DXT1).
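Evaluating such a probe really is cheap. Here is a rough CPU-side sketch of how a “luminance cube” might be evaluated, following Valve’s ambient-cube weighting – the function and data layout are my own guesses for illustration, not Monolith’s actual code:

```python
import numpy as np

def eval_luminance_cube(normal, lum6, rgb):
    """Evaluate a 6-coefficient 'luminance cube' probe for a surface normal.

    lum6: luminance for each of the +x, -x, +y, -y, +z, -z directions.
    rgb:  the probe's single color, modulated by the blended luminance.
    Weighting follows Valve's ambient cube: squared normal components
    (which sum to 1) select between each axis's positive/negative face.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    w = n * n  # per-axis weights, sum to 1
    lum = 0.0
    for axis in range(3):
        face = 2 * axis if n[axis] >= 0.0 else 2 * axis + 1
        lum += w[axis] * lum6[face]
    return np.asarray(rgb, dtype=float) * lum
```

Three multiply-adds plus one color multiply per sample, versus nine coefficients per color channel for quadratic SH – easy to see where the shader savings come from.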
Generation: the artists place lights (including area lights) and all the lights are baked (direct only, no global illumination (GI) bounces) during level packing. The math is simple – the tools just evaluated diffuse lighting for 6 normal directions at the center of each volume texel. Once the number of lights added by the artists started getting large this slowed down a bit so they added a caching system for the baked volumes. They eventually added GI support by rendering cube map probes in the game.
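The per-texel bake math really is that simple. Assuming point lights with inverse-square falloff (the talk doesn’t specify the falloff model), baking one texel might look like:

```python
import numpy as np

# The six axis-aligned normals of the cube basis: +x, -x, +y, -y, +z, -z.
FACES = np.array([[1, 0, 0], [-1, 0, 0],
                  [0, 1, 0], [0, -1, 0],
                  [0, 0, 1], [0, 0, -1]], dtype=float)

def bake_texel(texel_center, lights):
    """Direct diffuse lighting at a volume texel, for all 6 face normals.

    lights: iterable of (position, rgb_intensity) pairs.
    Returns a (6, 3) array of RGB values -- direct light only, no GI
    bounces, matching the bake described in the talk.
    """
    center = np.asarray(texel_center, dtype=float)
    result = np.zeros((6, 3))
    for pos, color in lights:
        to_light = np.asarray(pos, dtype=float) - center
        dist = np.linalg.norm(to_light)
        l = to_light / dist
        atten = 1.0 / (dist * dist)            # assumed inverse-square falloff
        ndotl = np.clip(FACES @ l, 0.0, None)  # cosine term for each face
        result += ndotl[:, None] * np.asarray(color, dtype=float) * atten
    return result
```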
Downsides: low resolution, bad for high contrast shadows, can get light or shadow bleeding through thin geometry. They use dynamic lights for high contrast / shadow casting lighting.
For the future they plan to cascade the volumes and stream them. They also tried raymarching against the volume to get atmospheric effects; this was fast enough on high-end PCs but not consoles.
This great talk (by Stephen Hill from Ubisoft) went into detail on two rendering systems used in the game Splinter Cell: Conviction. The first was a software hierarchical Z-Buffer occlusion system. They used this in various ways to cull draw calls from shadows as well as primary rendering. The system could handle over 20,000 occlusion queries in around 1/2 millisecond. Results looked pretty good.
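For those who haven’t seen a hierarchical Z-buffer before, the core idea is easy to sketch on the CPU (my own minimal illustration, not Ubisoft’s implementation): build a mip pyramid storing the farthest depth in each region, then test an occludee’s screen rectangle against a suitably coarse level.

```python
import numpy as np

def build_hiz(depth):
    """Build a hierarchical-Z pyramid: each level stores the *farthest*
    (maximum) depth of the 2x2 texels below it."""
    levels = [depth]
    while levels[-1].shape[0] > 1:
        d = levels[-1]
        h, w = d.shape[0] // 2, d.shape[1] // 2
        d = d[:2 * h, :2 * w]
        levels.append(np.max(d.reshape(h, 2, w, 2), axis=(1, 3)))
    return levels

def is_occluded(levels, x0, y0, x1, y1, nearest_z):
    """Conservatively test a screen rect: climb to a mip level where the
    rect covers only a few texels, then compare that tile's max depth
    against the occludee's nearest depth (larger z = farther, here)."""
    level = 0
    while level + 1 < len(levels) and (x1 - x0 > 2 or y1 - y0 > 2):
        x0, y0, x1, y1 = x0 // 2, y0 // 2, (x1 + 1) // 2, (y1 + 1) // 2
        level += 1
    tile = levels[level][y0:y1 + 1, x0:x1 + 1]
    return nearest_z > tile.max()  # rect lies behind everything in the tile
```

A real implementation rasterizes occluder depth first and batches thousands of these tests, but the pyramid-plus-coarse-lookup structure is the heart of why so many queries fit in half a millisecond.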
Next, Stephen discussed the game’s ambient occlusion (AO) system. The game developers didn’t use screen-space ambient occlusion (SSAO), since they didn’t like the inaccuracy, cost, and lack of artist control. Instead they went for a hybrid baked system. Over background surfaces (buildings, etc.) they bake precomputed AO maps. The precomputation is GPU-accelerated, based on the GPU Gems 2 article “High-Quality Global Illumination Rendering Using Rasterization” (available here: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter38.html). For dynamic rigid objects like tables, chairs, vehicles, etc. they precompute AO volumes (16x16x16 or so). Finally, for characters, they analytically compute AO from an articulating model of “capsules” (two half-spheres connected by a cylinder). Ubisoft combine all of these (not trying to address double-occlusion, so results are slightly too dark) into a downsampled offscreen buffer. Rather than simple scalar AO, all this stuff uses a directional 4-number AO representation (essentially linear SH) so that they can later apply high-res normal maps to it when the offscreen buffer is applied. They figured out a clever way to map the math so that they can use blending hardware to combine these directional AOs into the offscreen buffer in a way that makes sense. The AO buffer is later applied using cross-bilateral upscaling. For the future, Ubisoft would like to add streaming support for the AO maps and volumes to allow for higher resolution.
Stephen showed the end result, and it looked pretty good with a character running through a crowded scene, vaulting over tables, knocking down chairs, with nice ambient occlusion effects whenever any two objects were close. A system like this is definitely worth considering as an alternative to SSAO.
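The capsule part is simple enough to sketch. Below is a common analytic occluder approximation (an assumption on my part – the talk’s actual formula is directional, feeding the linear-SH representation, while this scalar version just shows the geometric idea):

```python
import numpy as np

def sphere_ao(p, n, center, radius):
    """Approximate occlusion of point p (with normal n) by a sphere.

    Uses the common cosine-weighted solid-angle approximation
    (radius/distance)^2, clamped to [0, 1].
    """
    p = np.asarray(p, dtype=float)
    d = np.asarray(center, dtype=float) - p
    dist = np.linalg.norm(d)
    ndotd = max(np.dot(np.asarray(n, dtype=float), d / dist), 0.0)
    return min(ndotd * (radius / dist) ** 2, 1.0)

def capsule_ao(p, n, a, b, radius):
    """Capsule occluder (two half-spheres joined by a cylinder):
    treat the closest point on segment ab as a sphere of equal radius."""
    p = np.asarray(p, dtype=float)
    a = np.asarray(a, dtype=float)
    ab = np.asarray(b, dtype=float) - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return sphere_ao(p, n, a + t * ab, radius)
```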
This excellent talk (by Wade Brainerd, who like me works in Activision‘s Studio Central group) dives deep into a low-level description of Xbox 360 internals and the modified version of DirectX that it uses. A rare opportunity for people without registered console developer accounts to look at this stuff, which is relevant to PC developers as well since it shows you what happens under the driver’s hood.
This talk by NVIDIA contained basically the same stuff as the I3D paper Interactive Fluid-Particle Simulation using Translating Eulerian Grids, which can be found here: http://www.jcohen.name/. It was interesting to hear about such a high-end CUDA fluid sim system being integrated into a shipping game (even if only on the PC version) – they got some cool particle effects out of it with turbulence etc. These kinds of effects will probably become more common once a new generation of console hardware arrives.
This talk was about various ways to use DX11 Compute Shaders in graphics. This talk included stuff like fast computation of summed area tables for fast anisotropic blurring of environment maps and depth of field. The speakers also showed an A-buffer-like technique for order-independent transparency, and a tile-based deferred rendering system that was more efficient than using pixel shaders. Like the previous talk, this seemed like the kind of stuff that could become mainstream in the next console generation.
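Summed-area tables are what make those variable-width blurs cheap: after one preprocessing pass, the average over any axis-aligned rectangle costs four lookups, regardless of its size. A minimal version (NumPy rather than compute shaders, purely for illustration):

```python
import numpy as np

def build_sat(img):
    """Summed-area table: sat[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_average(sat, x0, y0, x1, y1):
    """Mean over the inclusive rect [x0..x1] x [y0..y1] in O(1) via four
    SAT taps -- this constant cost per pixel, independent of filter
    width, is what makes anisotropic env-map blurs and DoF fast."""
    total = sat[y1, x1]
    if x0 > 0:
        total -= sat[y1, x0 - 1]
    if y0 > 0:
        total -= sat[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        total += sat[y0 - 1, x0 - 1]
    return total / ((x1 - x0 + 1) * (y1 - y0 + 1))
```

The compute-shader versions in the talk parallelize the two cumulative sums as prefix-scan passes, but the four-tap lookup is the same.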
This presentation discussed research published in the SIGGRAPH Asia 2009 paper “All-Frequency Rendering of Dynamic, Spatially-Varying Reflectance“ (available here: http://research.microsoft.com/en-us/um/people/johnsny/). The presentation was by John Snyder, one of the paper authors. It’s similar to some other recent papers which represent normal distribution functions as a sum of Gaussians and filter them, but this paper does some interesting things with regards to supporting environment maps and transforming from half-angle to view space. Worth a read for people looking at specular shader stuff.
This talk was probably old hat to anyone with significant 360 experience but should be interesting to anyone who does not fit that description – it was a rare public discussion of low-level console details.
This talk was about combining physics with canned animation (similar to some of NaturalMotion’s tools). It looked pretty good. The basic idea is straightforward – the artist paints the tightness of springs connecting the character’s joints to the skeleton playing the animation, and a state machine varies these tightness values based on animation and gameplay events.
This was a good, basic introduction to the current state of the art in shadow mapping.
Illuminate Labs (the makers of Beast and Turtle) gave this talk about baked lighting. It was pretty basic for anyone who’s done work in this area but might be good to brush up with for people who aren’t familiar with the latest practice.
There were a bunch of talks I didn’t attend (too many overlapping sessions!) but which look promising based on title, speaker list, or both: Case Studies in VMX128 Optimization, Best Practices for DirectX 11 Development, DirectX 11 DirectCompute: A Teraflop for Everyone, DirectX 11 Technology Update, and Think DirectX 11 Tessellation! – What Are Your Options?
Some computerish graphicsy photos from my trip to Shanghai. First, they’ve got the right definition of 3D:
But what about Cartesian coordinates? This pavilion certainly predates those, definitely 3D:
A cool sculpture from the World Expo (Robin Green comments “…hsync timing issues”):
Thomas, a happy customer of our 2nd edition:
Given the number of “name-brand” watches, purses, clothing, software, etc. in the (literally underground) market at the Shanghai Science and Technology Museum metro stop, name infringement like this is very small potatoes:
Magical technology from this market, no doubt shipped back from the future: USB flash drives with up to 880 GB!
Near as we can tell, they hack the driver on a small flash drive to make it look like 880 GB or whatever to your computer. Think of them as “write-only USBs”. Just as well: if you were to try to fill a real drive of this size at its 7 MB/sec transfer rate, it would take 35 hours.
I don’t often go to SIGGRAPH panels, but this year’s list includes three that look very tempting. Here they are, sorted by date:
Future Directions in Graphics Research
Sunday, 25 July, 3:45 PM – 5:15 PM
The SIGGRAPH website description says, “This panel presents the results of an NSF-funded workshop on defining broader, fundamental long-term research areas for potential funding opportunities in medical imaging and device design, manufacturing, computational photography, scientific visualization, and many other emerging areas in graphics research.” It’s important to know where funding in computer graphics research is going, and what researchers think the most promising future directions are. The panelists include some of the most prominent and influential computer graphics professors: Jessica Hodgins from Carnegie Mellon, James Foley (first author of “Computer Graphics: Principles and Practice”) from Georgia Tech, Pat Hanrahan (who probably has his name on more SIGGRAPH papers than anyone in the world) from Stanford University, and Donald P. Greenberg (whose list of former students would make a great first draft for a “who’s who” of computer graphics) from Cornell.
CS 292: The Lost Lectures; Computer Graphics People and Pixels in the Past 30 Years
Monday, 26 July, 3:45 PM – 5:15 PM
This is a unique idea for a panel – in the 1980s, Ed Catmull and Jim Blinn taught a hugely influential course on computer graphics. Among many others, it inspired Richard Chuang, who went on to co-found PDI. While teaching the course, Ed Catmull was building Lucasfilm’s computer graphics group, which later became Pixar. The panelists are Ed Catmull and Richard Chuang, who according to the website description “use video from the course to reflect on the evolution of computer graphics – from the genesis of Pixar and PDI to where we are today.” Catmull in particular is an amazing speaker – this looks well worth attending.
Large Steps Toward Open Source
Thursday, 29 July, 9:00 AM – 10:30 AM
Several influential film industry groups have open-sourced major bits of internal technology recently. This panel discusses why they did it, what the benefits were, and where the challenges lay. This is definitely relevant to the game industry – would it make sense for us to do the same? (Insomniac is already leading the way – I wish they had a representative on this panel). Panelists include Rob Bredow (CTO of Sony Pictures Imageworks, which has recently launched several important open source initiatives), Andy Hendrickson (CTO of Walt Disney Animation Studios, which has recently done the same, most notably including the Ptex texture mapping system), Florian Kainz (Principal R&D Engineer at Industrial Light & Magic and the key individual behind OpenEXR, which ILM open-sourced in 2003), and Bill Polson (Lead of Production Engineering at Pixar Animation Studios). Pixar doesn’t currently have any open-source initiatives that I know of – does Bill’s participation mean that they are about to announce one?
After the courses, the next best source of good SIGGRAPH material for games and real-time graphics professionals is the Talks (formerly called Sketches), and this year is no exception. The final list of Talks can be found on the SIGGRAPH Talks webpage, as well as in the Advance Program PDF. I will summarize the most relevant sessions here, sorted by date:
Avatar for Nerds
Sunday, 25 July, 2-3:30 pm
- A Physically Based Approach to Virtual Character Deformations (Simon Clutterbuck and James Jacobs from Weta Digital Ltd.) – I saw an early version of this presentation at Digital Domain a few weeks ago – although they use an expensive physical muscle simulation, they bake the results into a pose-space deformation-like representation; this kind of approach could work for games as well (pose-space deformation approaches in general offer a useful way to “bake” expensive deformations; their use in games should be further explored).
- Rendering “Avatar”: Spherical Harmonics in Production (Nick McKenzie, Martin Hill and Jon Allitt from Weta Digital Ltd.) – The website says “Application of spherical harmonics in a production rendering environment for accelerated final-frame rendering of complex scenes and materials.” This sounds very similar to uses of spherical harmonics in games, making this talk likely to yield applicable ideas.
- PantaRay: Directional Occlusion for Fast Cinematic Lighting of Massive Scenes (Jacopo Pantaleoni, Timo Aila, and David Luebke from NVIDIA Research; Luca Fascione, Martin Hill and Sebastian Sylwan from Weta Digital Ltd.) – the website mentions “…a novel system for precomputation of ray-traced sparse, directional occlusion caches used as a primary lighting technology during the making of Avatar.” Like the previous talk, this sounds very game-like; these are interesting examples of the convergence between graphics techniques in film and games going in the less common direction, from games to film rather than vice-versa. Note that several of the authors of this talk are speaking at the “Beyond Programmable Shading” course, and there is also a paper about PantaRay (called “A System for Directional Occlusion for Fast Cinematic Lighting of Massive Scenes”).
Split Second Screen Space
Monday, 26 July, 2-3:30 pm
- Screen Space Classification for Efficient Deferred Shading (Neil Hutchinson, Jeremy Moore, Balor Knight, Matthew Ritchie and George Parrish from Black Rock Studio) – website sez, “This talk introduces a general, extendible method for screen classification and demonstrates how its use accelerated shadowing, lighting, and post processing in Disney’s Split/Second video game.” This sounds like a useful extension to SPU-based screen tile classification methods; I wonder if it is cross-platform.
- How to Get From 30 to 60 Frames Per Second in Video Games for “Free” (Dmitry Andreev from LucasArts) – well, this title is promising a lot! The website description doesn’t say much more than the title, but if LucasArts actually uses it in production this might be useful.
- Split-Second Motion Blur (Kenny Mitchell, Matt Ritchie and Greg Modern from Black Rock Studio) – the description mentions “image and texture-space sampling techniques”, so this is probably a combination of blurring road textures in the direction of motion with screen-space techniques. Split-Second looks good; an overall description of their motion blur system should be interesting to hear.
- A Deferred-Shading Pipeline for Real-Time Indirect Illumination (Cyril Soler and Olivier Hoel from INRIA Rhone-Alpes; Frank Rochet from EDEN GAMES) – there have been screen-space indirect illumination (approximation) techniques published before, but none used in games that I know of; there could be some useful ideas here.
APIs for Rendering
Wednesday, 28 July, 2-3:30 pm
- Open Shading Language (Larry Gritz, Clifford Stein, Chris Kulla and Alejandro Conty from Sony Pictures Imageworks) – this Open-Source project from Sony Pictures Imageworks is interesting in that it is a shading language designed from the ground up for ray-tracing renderers. Probably not of immediate relevance to games, but some day…
- REYES using DirectX 11 (Andrei Tatarinov from NVIDIA Corporation) – the website summary claims that this REYES implementation uses “not only the compute power of GPU, but also the fixed-function stages of the graphics pipeline.” This is something I have wanted to see someone try for a long time; the typical pure-Compute approaches to GPU-accelerated REYES seem wasteful, given the similarities between the existing fixed function units and some of the operations in the REYES algorithm. It will be interesting to see how efficient this implementation ends up being.
- WebGLot: High-Performance Visualization in the Browser (Dan Lecocq, Markus Hadwiger, and Alyn Rockwood from King Abdullah University of Science and Technology) – although anything that makes it easier for browser-based games to use the GPU is interesting, I’m not familiar enough with the existing approaches to judge how new this stuff is.
Games & Real Time
Thursday, 29 July, 10:45 am-12:15 pm
- User-Generated Terrain in ModNation Racers (James Grieve, Clint Hanson, John Zhang, Lucas Granito and Cody Snyder from United Front Games) – from all accounts, the system for user-generated tracks and terrain in ModNation Racers is impressive; a description of this system by its developers is well worth attending.
- Irradiance Rigs (Hong Yuan from University of Massachusetts Amherst; Derek Nowrouzezahrai from University of Toronto; Peter-Pike Sloan from Disney Interactive Studios) – this looks like an extension of light-probe lighting techniques; it promises better results for large objects and / or near lighting. These techniques are very common in games, and this talk looks likely to be useful.
- Practical Morphological Anti-Aliasing on the GPU (Venceslas Biri and Adrien Herubel from Université Paris-Est; Stephane Deverly from Duran Duboi Studio) – since God of War III produced great visuals from an SPU implementation of Morphological Antialiasing, there has been much interest in the games industry in a more GPU-friendly version of the algorithm, for use on Xbox 360 or high-end PCs. It’s hard to tell from the short description on the website whether the version in this talk is any good, but it might well be worth attending the talk to find out.
- Curvature-Dependent Reflectance Function for Rendering Translucent Materials (Hiroyuki Kubo from Waseda University; Yoshinori Dobashi from Hokkaido University; Shigeo Morishima from Waseda University) – this sounds similar to the paper Curvature-Based Shading of Translucent Materials, such as Human Skin by Konstantin Kolchin (we discuss it in the section on “Wrap Lighting” in RTR3, since it is essentially an attempt to put wrap lighting on a physically sound footing). Since in most cases curvature can be precomputed, this could be a cheap way to get more accurate subsurface scattering effects.
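For reference, basic wrap lighting is a one-liner; the curvature-to-wrap mapping below is purely hypothetical, just to illustrate the kind of relationship that Kolchin’s paper (and presumably this talk) puts on firmer ground:

```python
import numpy as np

def wrap_diffuse(n_dot_l, wrap):
    """Classic wrap lighting: light 'wraps' past the terminator.
    wrap = 0 gives plain Lambert; wrap = 1 lights the whole sphere."""
    return np.clip((n_dot_l + wrap) / (1.0 + wrap), 0.0, 1.0)

def curvature_wrap(radius, scatter_dist=0.2):
    """Hypothetical curvature-to-wrap mapping: tighter curvature (small
    radius of curvature) relative to the material's scattering distance
    means more wrap. Illustrative only -- not the talk's formula."""
    return np.clip(scatter_dist / radius, 0.0, 1.0)
```

Since curvature varies slowly and can usually be baked per vertex or per texel, an approach like this adds almost nothing to shading cost.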
A lot of the film production talk sessions also look interesting, even without an explicit game or real-time connection; I have often found useful information at such talks in previous years. These sessions include “Elemental Training 101”, “All About Avatar”, “Rendering Intangibles”, “Volumes and Precipitation”, “Simulation in Production”, “Blowing $h!t Up”, “Pipelines and Asset Management” and “Fur, Feathers and Trees”.
Since my original post about the SIGGRAPH 2010 courses, some of the courses now have updated speaker lists (including mine – regardless of what Eric may think, I’m not about to risk Hyper-Cerebral Electrosis by speaking for three hours straight). I’ll give the notable updates here:
Stylized Rendering in Games
Covered games will include:
- Borderlands (presented by Gearbox cofounder and chief creative officer Brian Martel as well as VP of product development Aaron Thibault)
- Brink (presented by lead programmer Dean Calver)
- The 2008 Prince of Persia (presented by lead 3D programmer Jean-François St-Amour)
- Battlefield Heroes (presented by graphics engineer Henrik Halén)
- Mirror’s Edge (also presented by Henrik Halén).
- Monday Night Combat (presented by art director Chandana Ekanayake) – thanks to Morgan for the update!
Physically Based Shading Models in Film and Game Production
- I’ll be presenting the theoretical background, as well as technical, production, and creative lessons from the adoption of physically-based shaders at the Activision studios.
- Also on the game side, Yoshiharu Gotanda (president, R&D manager, and co-founder of tri-Ace) will talk about some of the fascinating work he has been doing with physically based shaders.
On the film production side:
- Adam Martinez is a computer graphics supervisor at Sony Pictures Imageworks whose film work includes the Matrix series and Superman Returns; his talk will focus on the use of physically based shaders in Alice in Wonderland. Imageworks uses a ray-tracing renderer, unlike the micropolygon rasterization renderers used by most of the film industry; I look forward to hearing how this affects shading and lighting.
- Ben Snow is a visual effects supervisor at Industrial Light and Magic who has done VFX work on numerous films (many of them as CG or VFX supervisor) including Star Trek: Generations, Twister, The Lost World: Jurassic Park, The Mummy, Star Wars: Episode II – Attack of the Clones, King Kong, and Iron Man. Ben has pioneered the use of physically based shaders in Terminator Salvation and Iron Man 2, which I hope to learn more about from his talk.
Color Enhancement and Rendering in Film and Game Production
The game side of the course has two speakers in common with the “physically-based shading” course:
- Yoshiharu Gotanda will talk about his work on film and camera emulation at tri-Ace, which is every bit as interesting as his physical shading work.
- I’ll discuss my experiences introducing filmic color grading techniques at the Activision studios.
And one additional speaker:
- While working at Electronic Arts, Haarm-Pieter Duiker applied his experience from films such as the Matrix series and Fantastic Four to game development, pioneering the filmic tone-mapping technique recently made famous by John Hable. He then moved back into film production, working on Speed Racer and 2012 (for which he won a VES award). Haarm-Pieter also runs his own company which makes tools for film color management.
The theoretical background and film production side will be covered by a roster of speakers which (although I shouldn’t say this since I’m organizing the course) is nothing less than awe-inspiring:
- Dominic Glynn is lead engineer of image mastering at Pixar Animation Studios. He has worked on films including Cars, The Wild, Ratatouille, Up and Toy Story 3. Dominic will talk about how color enhancement and rendering is done at different stages of the Pixar rendering pipeline.
- Joseph Goldstone (Lilliputian Pictures LLC) is a prominent consulting color scientist; his film credits include Terminator 2: Judgment Day, Batman Returns, Apollo 13, The Fifth Element, Titanic, and Star Wars: Episode II – Attack of the Clones. He has contributed to industry standards committees such as the International Color Consortium (ICC) and the Academy of Motion Picture Arts and Sciences’ Image Interchange Framework.
- Joshua Pines is vice president of color imaging R&D at Technicolor; between his work at Technicolor, ILM and other production companies he has over 50 films to his credit, including Star Wars: Return of the Jedi, The Abyss, Terminator 2: Judgment Day, Jurassic Park, Schindler’s List, Forrest Gump, Twister, Mission: Impossible, Titanic, Saving Private Ryan, The Mummy, Star Wars: The Phantom Menace, The Aviator, and many others. Joshua led the development of ILM’s film scanning system and has a Technical Achievement Award from the Academy of Motion Picture Arts and Sciences for his work on film archiving.
- Jeremy Selan is the color pipeline lead at Sony Pictures Imageworks. He has worked on films including Spider-Man 2 and 3, Monster House, Surf’s Up, Beowulf, Hancock, and Cloudy with a Chance of Meatballs. Jeremy has contributed to industry standards committees such as the Digital Cinema Initiative (DCI), SMPTE, and the Academy of Motion Picture Arts and Sciences’ Image Interchange Framework. At the course, Jeremy will unveil an exciting new initiative he has been working on at Imageworks.
- The creative aspects of color grading will be covered by Stefan Sonnenfeld, senior vice president at Ascent Media Group as well as president, managing director, and co-founder of Company 3. An industry-leading DI colorist, Stefan has worked on almost one hundred films including Being John Malkovich, the Pirates of the Caribbean series, War of the Worlds, Mission: Impossible III, X-Men: The Last Stand, 300, Dreamgirls, Transformers, Sweeney Todd, Cloverfield, The Hurt Locker, Body of Lies, The Taking of Pelham 1 2 3, Transformers: Revenge of the Fallen, Where the Wild Things Are, Alice in Wonderland, Prince of Persia: The Sands of Time, and many others, as well as numerous high-profile television projects.
Some have heard, some haven’t, so I’ll mention it here: Martin Gardner passed away a few days ago, age 95. If you’re saying “who?”, then you’re in for a treat, as there’s a great set of books and articles you haven’t yet discovered. He wrote about mathematical ideas and puzzles (he popularized Conway’s Game of Life, among many other things), debunked pseudoscience such as homeopathy and dianetics, explained magic tricks, annotated Lewis Carroll’s works and others, wrote about science and a little philosophy – what a great guy, and my #1 childhood hero. Need to know more? Check out, say, this NYT article (which includes some puzzles) and Wikipedia.
I just noticed on Amazon you can get all of his Scientific American “Mathematical Games” articles on CD-ROM – cool. Me, my favorite books are “Aha! Insight” and “Aha! Gotcha” because I could give them to my children and pass on the word.
I’ll get back to graphics soon, but for now: a toast to a life well lived, and may we all do at least half as well!
Oh, come to think of it, I do have something that’s somewhat graphical, or at least geometric. This is from the book “The Mathemagician and Pied Puzzler: a Collection in Tribute to Martin Gardner”: You have a cube and you select at random three (different) corners. What is the chance that the triangle formed by these corners is acute (all angles < 90 degrees)? What is the chance that it is a right triangle (has one angle of exactly 90 degrees)?
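If you’d rather check your answer by brute force than peek, enumerating all C(8,3) = 56 corner triples is easy (no spoilers printed here – run it yourself):

```python
from itertools import combinations, product

def classify_cube_triangles():
    """Count acute vs. right triangles over all 56 corner triples of a
    unit cube, classifying each angle via the dot product of the two
    edge vectors meeting at that vertex (exact integer arithmetic)."""
    corners = list(product((0, 1), repeat=3))
    acute = right = 0
    for tri in combinations(corners, 3):
        dots = []
        for i in range(3):
            p, q, r = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
            u = tuple(qk - pk for qk, pk in zip(q, p))
            v = tuple(rk - pk for rk, pk in zip(r, p))
            dots.append(sum(uk * vk for uk, vk in zip(u, v)))
        if any(d == 0 for d in dots):   # a zero dot product = 90-degree angle
            right += 1
        elif all(d > 0 for d in dots):  # all angles strictly acute
            acute += 1
    return acute, right
```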
Answers are here, along with another puzzle.
Bonus followup: I just noticed that the book I mentioned, “The Mathemagician…”, is available as a free PDF download.