Author Archives: Naty

SIGGRAPH 2011 Talks – Part 3

This is the third and last in a series of posts about the SIGGRAPH 2011 Talk program – see Part 1 and Part 2. If you found these useful you may also want to check out my previous series of posts about the SIGGRAPH 2011 Courses program (see Part 1, Part 2, Part 3, and Part 4). These posts are not intended as a general SIGGRAPH survey – they are focused on content related to real-time rendering and game development.

Show Me The Pixels

Three of the talks in this session have possibly relevant content:

  • Slow Art With a Trillion Frames Per Second Camera – I guess this one stretches the definition of “relevant” somewhat, but I just find it extremely cool and interesting. The talk describes some research done at MIT (in collaboration between the Media Lab and Department of Chemistry) in which a “trillion frames per second camera” captures how pulses of light travel within a scene, including bouncing off surfaces and scattering inside objects. Besides the general coolness factor, this may impart some insight into light behavior which could be useful when working on shading and lighting models.
  • Device-Independent Imaging System for High-Fidelity Colors – color management (including display calibration, color space management of data, etc.) is important for both game and film production. It turns out that getting good device-independent color reproduction is far from simple. This talk covers some advances in this field by SHARP Corporation and Shizuoka University.
  • Who Do You Think You Really Are? – augmented reality is becoming an important technology for handheld games (see examples on the Nintendo 3DS and iPhone); this talk discusses an interactive media installation at London’s Natural History Museum (in partnership with BBC Television) which includes augmented reality elements.

Hiding Complexity

This entire session consists of game industry talks:

  • Occlusion Culling in Alan Wake – occlusion culling is a key technology for many games, especially first-person shooters. This talk discusses the occlusion culling system (developed by Umbra Software) used in the game Alan Wake by Remedy Entertainment. Topics include visibility culling as well as shadow-caster culling for dynamic light sources.
  • Increasing Scene Complexity: Distributed Vectorized View Culling – another talk on visibility culling, this time focusing on the technical issues involved in parallelizing culling computations on current game platforms. The talk is given by Electronic Arts Blackbox.
  • Practical Occlusion Culling in Killzone 3 – the third occlusion culling talk of the session focuses on the implementation used by Guerrilla Games for the game Killzone 3. This implementation uses PlayStation 3 SPUs to rasterize a conservative depth buffer, against which occlusion queries are performed (a rough sketch of the general approach appears after this list).
  • High-Quality Previewing of Shading and Lighting for Killzone 3 – another Killzone 3 talk but unrelated to occlusion culling, this talk by Guerrilla Games covers a content creation framework which supports high-fidelity previews of assets in Autodesk Maya.
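
To make the software occlusion culling idea concrete, here is a minimal CPU sketch of a conservative depth buffer with occlusion queries. This illustrates the general approach only, not Guerrilla’s actual system – their implementation rasterizes real occluder triangles on SPUs, while this toy version just splats screen-space rectangles:

```cpp
// Minimal sketch of a software occlusion buffer. Occluders splat their
// *farthest* depth into a low-resolution grid; a query passes only if the
// object's *nearest* depth is behind every covered texel, in which case
// the object can be culled.
#include <algorithm>
#include <vector>
#include <cstdio>

struct Rect { int x0, y0, x1, y1; };  // screen-space bounds, inclusive

class OcclusionBuffer {
public:
    OcclusionBuffer(int w, int h) : w_(w), h_(h), depth_(w * h, 1.0f) {}

    // Rasterize an occluder conservatively: store its farthest depth,
    // keeping the nearest certain-occlusion value already present per texel.
    void AddOccluder(const Rect& r, float maxDepth) {
        for (int y = r.y0; y <= r.y1; ++y)
            for (int x = r.x0; x <= r.x1; ++x)
                depth_[y * w_ + x] = std::min(depth_[y * w_ + x], maxDepth);
    }

    // Occlusion query: hidden only if the object's nearest depth is behind
    // the stored occluder depth everywhere it might appear on screen.
    bool IsOccluded(const Rect& r, float minDepth) const {
        for (int y = r.y0; y <= r.y1; ++y)
            for (int x = r.x0; x <= r.x1; ++x)
                if (minDepth < depth_[y * w_ + x])
                    return false;  // object may be visible here
        return true;
    }

private:
    int w_, h_;
    std::vector<float> depth_;  // 1.0 = far plane, 0.0 = near plane
};

int main() {
    OcclusionBuffer buf(64, 64);
    buf.AddOccluder({0, 0, 63, 63}, 0.3f);  // a wall near the camera
    std::printf("%d\n", buf.IsOccluded({10, 10, 20, 20}, 0.7f));  // 1: culled
    std::printf("%d\n", buf.IsOccluded({10, 10, 20, 20}, 0.1f));  // 0: in front
}
```

The two conservative choices – storing each occluder’s farthest depth and testing each occludee’s nearest depth – guarantee that a visible object is never incorrectly culled.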

Smokin’ Fluids

The talks in this session (three from the film industry and one from the academic research community) cover topics related to smoke and fluid simulation. Such simulations are currently too costly to be feasible for most games, though games such as the LittleBigPlanet and PixelJunk Shooter series (both featured at SIGGRAPH this year) include two-dimensional versions. In VFX and CG animation work, smoke, fluid, and fire simulations are common, forming one of the key elements differentiating film and game visuals. I firmly believe that as game platforms increase in computational power, we will start seeing full 3D simulations of this kind in games.

  • DB+Grid: A Novel Dynamic Blocked Grid For Sparse High-Resolution Volumes and Level Sets – The author, Ken Museth, has a history of developing novel data structures for level set and volumetric data and applying them to VFX work, first at Digital Domain and now at DreamWorks Animation. His data structures have been constantly improving, from DT-Grid to DB-Grid and now DB+Grid, which is described in this talk.
  • Capturing Thin Features in Smoke Simulations – In production simulation work, there is a constant tension between the need to speed up simulation times for faster iteration (which implies reducing the resolution of the simulation grid) and the desire to simulate finer detail (which implies increasing the resolution). This talk covers a system developed by Sony Pictures Imageworks that allows thin smoke features to be captured even with low resolution simulation grids.
  • Implicit FEM and Fluid Coupling on GPU for Interactive Multiphysics Simulation – typically, distinct simulation methods are used for fluids, rigid objects, deformable objects, etc. This poses problems when different types of objects affect each other, which requires coupling the different simulation methods. This talk from INRIA and Université de Grenoble covers a GPU-based method for coupled simulation of deformable objects and fluids – interestingly, “screen-space collision” is mentioned as one of the techniques employed.
  • Correcting Low-Frequency Impulses in Distributed Simulations – production rendering is typically distributed over a large number of machines. It is desirable to do the same for simulations, but often this is difficult since the simulation domain is not easily separable – each part of the simulation affects all other parts. This talk from Side Effects Software (developers of Houdini) describes a method for distributing level-set fluid simulations while keeping them coupled via a shared low-resolution pressure projection.

Volumes and Rendering

All four talks in this session (three from the film industry and one from the academic research community) contain potentially relevant content:

  • Gaussian Quadrature for Photon Beams in “Tangled” – rendering lighting effects in participating media (often called “light beams” or “god rays”) is a common problem in games and film, typically solved with various hacks. A recent Transactions on Graphics (ToG) paper presented a comprehensive analysis of the problem as well as a new rendering approach called “photon beams” which is both physically correct and efficient – it appears potentially feasible for real-time implementation. This talk (with authors from the University of Central Florida, Disney Research Zürich, and Walt Disney Animation Studios – including the first author of the aforementioned ToG paper) presents an efficient implementation of the photon beams technique in RenderMan, extending it to artist-specified non-physical light attenuation curves. A broader overview of the artist-driven volumetric lighting in Tangled (of which this work is a part) is given in a Technical Paper.
  • Importance Sampling of Area Lights in Participating Media – in principle, ray tracers like the Arnold rendering engine (developed by Solid Angle SL and used by Sony Pictures Imageworks, among others) solve the participating media lighting problem in a straightforward manner by sampling the underlying integrals. In practice, achieving noise-free images in reasonable time requires a lot of engineering effort, mostly relying on various forms of importance sampling. This talk (with authors from both Solid Angle and Imageworks) presents an importance sampling method for single scattering of light from arbitrary area lights in homogeneous participating media.
  • Decoupled Ray Marching of Heterogeneous Participating Media – after two talks on the relatively easy problem of lighting homogeneous participating media, this talk (also from Sony Pictures Imageworks) covers heterogeneous media such as smoke. It covers a method for speeding up ray marching by decoupling lighting calculations from the sampling of volume properties. Ray marching is amenable to real-time implementation since it is easy to scale down (albeit with reduced visual quality) by reducing the number of samples – several companies have demonstrated real-time implementations (though I’m not sure if any shipping games yet use it). The technique presented in this talk can make ray marching for volumetric lighting even faster, so it is definitely of interest (a simplified single-scattering ray marcher appears after this list).
  • Demand-Driven Volume Rendering of Terascale EM Data – unlike the other talks in this session, which focus on volumetric lighting, this talk (from King Abdullah University of Science and Technology and Harvard University) focuses on a different issue – rendering volumetric datasets which are too large to fit in memory. Given a good solution to this problem, games should be able to precompute volumetric effects in certain situations and stream them from disk, so this looks interesting.
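
As a reference point for the ray-marching discussion above, here is a minimal single-scattering ray marcher in which the expensive lighting term is evaluated more coarsely than the density. This sketches the general decoupling idea under invented placeholder functions (Density, Lighting) and constants; it is not the method from the Imageworks talk:

```cpp
// A minimal single-scattering ray marcher: density is sampled at every fine
// step (cheap), while the expensive lighting term is only re-evaluated every
// kLightingInterval steps and reused in between.
#include <cmath>
#include <cstdio>

float Density(float t)  { return 0.5f + 0.5f * std::sin(t); }   // stand-in medium
float Lighting(float t) { return std::exp(-0.2f * t); }         // stand-in in-scatter

int main() {
    const int   kSteps = 256;
    const int   kLightingInterval = 8;   // lighting sampled 8x more coarsely
    const float kStepSize = 0.05f;
    const float kSigmaT = 1.2f;          // extinction coefficient

    float transmittance = 1.0f;
    float radiance = 0.0f;
    float cachedLight = 0.0f;

    for (int i = 0; i < kSteps; ++i) {
        float t = i * kStepSize;
        float sigma = kSigmaT * Density(t);
        if (i % kLightingInterval == 0)
            cachedLight = Lighting(t);   // expensive call, amortized
        radiance += transmittance * sigma * cachedLight * kStepSize;
        transmittance *= std::exp(-sigma * kStepSize);
    }
    std::printf("L = %f, T = %f\n", radiance, transmittance);
}
```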

Heads or Tails

Rigging game or movie characters for animation is a very tricky problem – the rig needs to be powerful enough to handle all needed motions and deformations, while also being easy to control either via hand-keying or motion capture. This session includes two CG feature animation talks and one research talk, all covering the rigging problem from different angles (note that the game industry talk Modular Rigging in Battlefield 3 has been cancelled). Character rigging is one of the areas where film and game production are quite similar – there are differences in scale and complexity, but even these are not so large as differences in, say, triangle count or shader instructions.

  • Building the Birds of “Rio” – this talk covers the process and technology used at Blue Sky Studios to build control systems for the bird characters in the movie Rio – using the main character “Blu” as a case study.
  • “Kung Fu Panda 2”: Rigging a Peacock Tail – this talk describes the approach DreamWorks Animation used to create the tail rig for the peacock character in the film Kung Fu Panda 2.
  • Optimized Local Blendshape Mapping for Facial-Motion Retargeting – this talk from the Graphics Lab at the USC Institute for Creative Technologies details an automatic facial-motion retargeting method for blendshapes.

Speed of Light

Three of the talks in this session contain potentially relevant content:

  • Run-Time Implementation of Modular Radiance Transfer – Precomputed Radiance Transfer is a powerful rendering technique which has spun off many variations. This talk from Disney Interactive Studios, Disney Research Zürich, the University of Utah and the University of North Carolina at Chapel Hill covers a modular variant which enables warping and combining precomputed transport from a small library of simple shapes. The technique was implemented for platforms from mobile devices to high-end GPUs – the talk discusses various implementation issues involved.
  • Next-Generation Image-Based Lighting Using HDR Video – image-based lighting is becoming a key rendering technique in both film and games. This talk from Linköping University and Spheron VR describes a system for high-dynamic-range video capture, reconstruction, and modeling of real-world scenes for use in image-based lighting of synthetic objects placed in the scene.
  • Triple Depth Culling – real-time rendering applications such as games rely heavily upon hardware features such as hierarchical Z-culling for performance. However, this has some drawbacks – it requires either depth sorting or a previous depth prepass, and it doesn’t work well with shaders that modify depth. This talk proposes a technique to avoid these drawbacks – the authors show a pixel shader implementation, though for best performance they suggest that the technique be implemented in hardware. The talk abstract and video are both available online. (For context, a sketch of the conventional hierarchical-Z test follows this list.)
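
For context, the sketch below shows the conventional hierarchical-Z mechanism that techniques like this build upon: a max-depth mip chain over the depth buffer lets a whole block of fragments be rejected with a single coarse lookup. This is the standard approach only, not the proposed Triple Depth Culling technique:

```cpp
// A sketch of a conventional hierarchical-Z test on a tiny depth buffer.
// Each coarse texel stores the max (farthest) depth of its footprint; a
// fragment block whose nearest depth is behind that value fails the depth
// test everywhere and can be skipped entirely.
#include <algorithm>
#include <vector>
#include <cstdio>

int main() {
    // Level 0: a tiny 4x4 depth buffer (0 = near, 1 = far).
    std::vector<float> mip0 = { 0.2f, 0.2f, 0.9f, 0.9f,
                                0.2f, 0.2f, 0.9f, 0.9f,
                                0.3f, 0.3f, 0.8f, 0.8f,
                                0.3f, 0.3f, 0.8f, 0.8f };
    // Level 1: 2x2, each texel = max depth of its 2x2 footprint.
    std::vector<float> mip1(4);
    for (int y = 0; y < 2; ++y)
        for (int x = 0; x < 2; ++x) {
            float m = 0.0f;
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx)
                    m = std::max(m, mip0[(2*y + dy) * 4 + (2*x + dx)]);
            mip1[y * 2 + x] = m;
        }
    // A fragment block in the top-left quadrant with nearest depth 0.5 is
    // behind everything already drawn there (max depth 0.2): cull it.
    float fragNearest = 0.5f;
    bool culled = fragNearest > mip1[0];
    std::printf("culled: %d\n", culled);
}
```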

Capture and Construction

This session has one film talk of relevance: Building and Animating Cobwebs for Antique Sets. It describes a workflow used at DreamWorks Animation to model and animate cobwebs, including a specialized modeling tool, a physics-based solver, and a procedural-modeling engine. These types of specialized asset workflows can be extremely effective for games or movies which require many examples of a given kind of asset.

Light My Fire

This session has one game talk, as well as three relevant film talks:

  • Simulating Massive Dust in “Megamind” – in film production, there is a constant push for fluid simulations to increase in size and complexity, but the need for fine control by artists implies fast turnaround times. For this reason, a lot of research and development is spent on making these simulations faster – research that I hope will eventually benefit real-time applications as well. This talk from DreamWorks Animation covers a fast fluid simulation framework used for the movie Megamind. The presentation covers the specific numerical methods used to ensure efficiency and quality, as well as the setup and control framework that allowed artists to work efficiently.
  • “Megamind”: Fire, Smoke, and Data – another Megamind talk, this time focusing on the specific case study of an especially large and involved explosion effect. I like attending such “war story” talks – the most interesting film and game work is done when trying to push boundaries, and the solutions are often a mixture of technical cleverness and artistic inspiration.
  • Volumetric Effects in a Snap – grid-based simulation and volumetric rendering frameworks have become a staple of VFX and CG feature animation work; every studio has its own system with different strengths. I suspect similar systems will start cropping up in game studios when the hardware becomes a bit faster and memory capacities increase a bit more. This talk describes the creation of the “Snap” system developed at Animal Logic and used in the films Legend of the Guardians: The Owls of Ga’Hoole and Sucker Punch.
  • Fluid Dynamics and Lighting Implementation in PixelJunk Shooter 2 – games rarely incorporate fluid simulations – even 2D games, though current platforms can run two-dimensional simulations quite quickly. LittleBigPlanet notably incorporated 2D fluid simulations in its fire and smoke effects, but these did not affect gameplay. The game PixelJunk Shooter incorporated some very nice fluid-simulation-driven gameplay, including several types of fluids and gases that affected each other in different ways. The recent sequel expanded this gameplay element, adding some novel light/darkness gameplay as well. This talk from independent developer Q-Games covers the technical aspects of these elements.

Now that I’ve finished the courses and talks, my next few blog posts will cover the remaining SIGGRAPH 2011 programs.

SIGGRAPH 2011 Talks – Part 2

This is the second in a series of posts on the SIGGRAPH 2011 Talk program – Part 1 can be found here. These posts focus on talks with relevant content for real-time rendering researchers and practitioners, including game developers.

Building Blocks

One of the talks in this session looks relevant – KinectFusion: Real-Time Dynamic 3D Surface Reconstruction and Interaction describes the use of a Kinect camera to acquire real-time dense 3D models of an entire room and its contents, enabling some interesting augmented reality and interaction possibilities. The reconstruction appears to require a high-end GPU to achieve real-time performance so this isn’t something for current generation consoles, but it definitely could be feasible on future platforms. It may also be interesting in the context of digitizing real-world objects as part of the film or game modeling process. The authors are from Microsoft Research Cambridge, except for one from Imperial College London.

Walk the Line

Two academic research talks in this session are potentially relevant for games and other real-time applications that use stylized rendering or deformations:

  • Parameterizing Animated Lines for Stylized Rendering – this talk describes a paper from the 2011 NPAR (Non-Photorealistic Animation and Rendering) conference (which is co-located with SIGGRAPH this year). It shows a way to have details along an outline track the geometry cleanly as the scene animates in 3D. Material from the NPAR paper can be found here. The authors are from École d’Ingénieurs Télécom ParisTech, except for one from Adobe.
  • Multiperspective Rendering for Anime-Like Exaggeration of Joint Models – this talk describes a more unusual type of stylization, where the model deforms in a stylized way as it animates, inspired by anime visual conventions. The authors are from Hitachi, except for one from The University of Tokyo.

1000 Points of Light

This session contains one game talk, as well as two relevant CG feature animation talks:

  • Lighting Tokyo for Pixar’s “Cars 2” – rendering cities at night is challenging (definitely for games, but even for CG feature animation) due to the extreme dynamic range and large number of lights. Tokyo, with its massive quantities of illuminated billboards and neon signs, is one of the most famous and extreme examples of this type of lighting situation. This talk covers the techniques used by Pixar Animation Studios to light a stylized version of nighttime Tokyo for the movie Cars 2 – note that the speaker will also present a Studio Talk on a similar topic.
  • “Megamind” – Lighting Metro City at Night – this talk covers a similar challenge as the previous one, but with a distinct set of solutions from a different company (DreamWorks Animation) for a different film (Megamind).
  • Deferred Shading Techniques Using Frostbite in Need for Speed The Run – this talk will cover the tile-based deferred lighting architecture used in the Frostbite 2 engine, with emphasis on the PS3 implementation as used in the Electronic Arts game Need for Speed The Run (the talk was originally intended to cover the Xbox 360 and Battlefield 3 as well, but has been refocused – the removed material will be covered in more depth in this course). It makes for an interesting combination with the previous two talks, since it will show how the current state of the art in game technology solves a similar problem as film (albeit at smaller scale) in real time. (A rough sketch of per-tile light culling follows this list.)
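
For readers unfamiliar with tile-based deferred lighting, the sketch below shows its core light-culling step on the CPU: each screen tile gathers the list of lights whose area of influence overlaps it, and shading then loops only over that short per-tile list. This is the generic technique with made-up light data – Frostbite’s actual implementation culls in view space with per-tile depth bounds and runs on SPUs and GPUs:

```cpp
// Tile-based light culling: for each 16x16 pixel tile, collect the lights
// whose screen-space circle of influence overlaps the tile rectangle.
#include <cmath>
#include <vector>
#include <cstdio>

struct Light { float x, y, radius; };  // screen-space position + influence radius

int main() {
    const int kWidth = 1280, kHeight = 720, kTile = 16;
    const int tilesX = (kWidth + kTile - 1) / kTile;
    const int tilesY = (kHeight + kTile - 1) / kTile;

    std::vector<Light> lights = { {100, 100, 50}, {640, 360, 200}, {1200, 700, 30} };
    std::vector<std::vector<int>> tileLights(tilesX * tilesY);

    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            float x0 = (float)(tx * kTile), y0 = (float)(ty * kTile);
            float x1 = x0 + kTile, y1 = y0 + kTile;
            for (int i = 0; i < (int)lights.size(); ++i) {
                // Distance from light center to tile rect (clamp to the rect).
                float cx = std::fmax(x0, std::fmin(lights[i].x, x1));
                float cy = std::fmax(y0, std::fmin(lights[i].y, y1));
                float dx = lights[i].x - cx, dy = lights[i].y - cy;
                if (dx * dx + dy * dy <= lights[i].radius * lights[i].radius)
                    tileLights[ty * tilesX + tx].push_back(i);
            }
        }
    }
    // Shading would loop over tileLights[tile] per pixel; here we just
    // report how many lights the center tile sees.
    int center = (tilesY / 2) * tilesX + tilesX / 2;
    std::printf("lights in center tile: %zu\n", tileLights[center].size());
}
```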

Fur and Feathers

The three CG feature animation talks in this session cover fur and feather techniques which are too computationally costly to be feasible for most real-time applications today. They also don’t seem amenable to “animation baking” precomputation approaches, since the resulting data would most likely be too heavy. However, these techniques should be able to run in real time on future hardware platforms, making these talks of interest to forward-looking real-time researchers:

  • Quill: Birds of a Feather Tool – this talk describes a specialized pipeline developed by Animal Logic to procedurally model, animate and simulate feathers while avoiding intersections and rendering at various levels of detail.
  • Dynamic, Penetration-Free Feathers in “Rango” – somewhat similar to the previous talk, but focusing more narrowly on interpenetration avoidance and from the perspective of a different company (Industrial Light and Magic).
  • Accurate Contact Resolution for Interpolated Hairs – another ILM / Rango talk, but focusing on a different problem – handling collision between hairs and other geometry. The solution needed to be very fast and cheap since it was intended for use on interpolated hairs (it is common in CG feature animation and VFX to fully simulate a relatively small number of “guide hairs” and then interpolate a much larger number of cheap “interpolated hairs” between them). (A toy example of guide-hair interpolation follows this list.)
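
Here is a toy illustration of the guide-hair scheme described above: each cheap render hair is a fixed weighted blend of a few fully simulated guide strands. The guide data, weights, and vertex counts are invented for the example:

```cpp
// Guide-hair interpolation: a few guide strands are fully simulated, and
// each render hair is a weighted blend of its nearby guides' vertices.
#include <vector>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 Blend3(const Vec3& a, const Vec3& b, const Vec3& c,
            float wa, float wb, float wc) {
    return { wa * a.x + wb * b.x + wc * c.x,
             wa * a.y + wb * b.y + wc * c.y,
             wa * a.z + wb * b.z + wc * c.z };
}

int main() {
    const int kVerts = 4;  // vertices per hair strand
    // Three simulated guide strands (would come from the solver).
    std::vector<Vec3> guideA(kVerts), guideB(kVerts), guideC(kVerts);
    for (int v = 0; v < kVerts; ++v) {
        guideA[v] = { 0.0f, (float)v, 0.0f };
        guideB[v] = { 1.0f, (float)v, 0.1f * v };
        guideC[v] = { 0.5f, (float)v, -0.1f * v };
    }
    // One cheap interpolated strand: barycentric blend of the three guides,
    // with weights fixed at groom time from the strand's root position.
    float wa = 0.5f, wb = 0.3f, wc = 0.2f;
    for (int v = 0; v < kVerts; ++v) {
        Vec3 p = Blend3(guideA[v], guideB[v], guideC[v], wa, wb, wc);
        std::printf("v%d: (%.2f, %.2f, %.2f)\n", v, p.x, p.y, p.z);
    }
}
```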

Mixed Grill

This session contains two film talks, one game talk, and one academic research talk; all four are relevant:

  • The Power of Atomic Assets: An Automated Approach to Pipeline on “Legend of the Guardians: The Owls of Ga’Hoole” – games and movies share the challenge of structuring a production pipeline (software tools as well as workflow practices) to handle large numbers of assets. This talk will describe the system used at Animal Logic to handle the assets for the film Legend of the Guardians: The Owls of Ga’Hoole.
  • Animation Workflow in Killzone 3: A Fast Facial Retargeting System for Game Characters – handling facial motion capture data is tricky, especially retargeting to (possibly multiple) in-game models. This talk describes a technique used by Guerrilla Games to animate a large number of different faces for the extensive cut-scenes in the game Killzone 3.
  • Adaptive Importance Sampling for Multi-Ray Gathering – importance sampling (basically sampling a function more densely in areas that are estimated to have higher impact on the result) has recently become a key technology for production rendering. There was a whole SIGGRAPH course about it last year, and Pixar has added native support to the latest version of RenderMan. Importance sampling is typically thought of as a ray tracing technique, but it is also important for image-based lighting (IBL) sources such as environment maps. Importance-sampled IBL is currently useful for game light baking tools, and is likely to be done in real time on future platforms. This talk describes importance sampling improvements developed at Rhythm & Hues. Talk materials including an abstract and movie are available here. (A minimal example of CDF-based importance sampling follows this list.)
  • High-Resolution Relightable Buildings From Photographs – efficient digitization of real-world scenes and objects is useful for both film and game development. Tools such as Crazybump are widely used in the game industry to infer relightable surface details from photographs, but do not always work as well as could be hoped. This research talk looks like it could offer some improvements in this area, making it of wide interest. The authors are from The University of Manchester, Loughborough University, and Dolby Canada.
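
To show the importance-sampling machinery in its simplest form, the sketch below importance-samples a 1D stand-in for an environment map by building a discrete CDF over texel luminance. A real IBL baker would build a 2D marginal/conditional CDF with solid-angle weighting; everything here (the luminance values, the stand-in cosine/BRDF term) is an invented toy:

```cpp
// CDF-based importance sampling of a 1D "environment map": texels are
// picked in proportion to their luminance, and each sample is weighted by
// 1/pdf so the Monte Carlo estimator stays unbiased.
#include <algorithm>
#include <vector>
#include <random>
#include <cstdio>

int main() {
    // Stand-in per-texel luminance, plus a varying cosine/BRDF factor that
    // makes the integrand non-trivial.
    std::vector<float> lum  = { 0.1f, 0.2f, 8.0f, 0.3f, 0.1f, 4.0f };
    std::vector<float> cosw = { 1.0f, 0.8f, 0.6f, 0.4f, 0.2f, 0.9f };
    const size_t n = lum.size();

    // Build the cumulative distribution over texel luminance.
    std::vector<float> cdf(n);
    float total = 0.0f;
    for (size_t i = 0; i < n; ++i) { total += lum[i]; cdf[i] = total; }

    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uniform(0.0f, total);

    const int kSamples = 10000;
    double sum = 0.0, exact = 0.0;
    for (size_t i = 0; i < n; ++i) exact += lum[i] * cosw[i] / n;
    for (int s = 0; s < kSamples; ++s) {
        float u = uniform(rng);
        size_t i = std::lower_bound(cdf.begin(), cdf.end(), u) - cdf.begin();
        float pdf = lum[i] / total;              // texel-pick probability
        sum += lum[i] * cosw[i] / (pdf * n);     // importance-weighted sample
    }
    std::printf("estimate %.4f vs exact %.4f\n", sum / kSamples, exact);
}
```

Sampling in proportion to luminance concentrates effort on the bright texels, while the 1/pdf weight keeps the estimator unbiased.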

From the Ground Up

All three CG feature animation talks in this session are relevant for game developers:

  • We Built This City: Big City Design and Implementation in “Kung Fu Panda 2” – games and movies sometimes contain large urban environments, which are very difficult to construct within reasonable time and staffing constraints. This talk will detail how DreamWorks Animation solved this problem for the film Kung Fu Panda 2.
  • The Visual Style of “Legend of the Guardians: The Owls of Ga’Hoole” – finding a good visual style is another difficult task shared by film and games; my feeling is that films tend to have more established processes for look and style development. This talk will detail the visual style established by Animal Logic for the movie Legend of the Guardians: The Owls of Ga’Hoole. I saw a similar presentation at FMX 2011, and it was full of interesting and relevant content.
  • Clouds in the Skies of Rio – in most games and films, clouds are off in the distance and can be handled with straightforward methods. But sometimes the camera needs to get up close and personal with the clouds, which can pose some interesting modeling and rendering challenges. Although cloud rendering techniques used in film can rarely run in real-time on current platforms, the way in which the clouds are art-directed and authored can be of interest. This talk discusses how Blue Sky Studios handled cloud authoring and rendering for the movie Rio.

Directing Destruction

Mixing simulation with manual control to create large, physically-believable and art-directed effects is a tough challenge which VFX and CG feature animation professionals have been focusing on for some time. The techniques used rarely lend themselves to real-time computation on current hardware. However, in many cases these effects can be pre-computed, and on future hardware they are likely to run in real time (perhaps with some reduction in scale). The four talks in this session discuss various case studies of this type:

  • End of Line: Character Destruction in “Tron: Legacy” – this talk discusses the tools developed by Digital Domain for the character destruction effects in Tron: Legacy.
  • Kali: High-Quality FEM Destruction in Zack Snyder’s “Sucker Punch” – in this talk, The Moving Picture Company discusses a finite-element simulation toolkit developed in partnership with Pixelux, with examples of its use in the film Sucker Punch. It is interesting to note that the tool is based on the same Digital Molecular Matter technology used in the games Star Wars: The Force Unleashed and Star Wars: The Force Unleashed II.
  • Directing Hair Motion on “Tangled” – this talk discusses the system developed by Walt Disney Animation Studios to animate the main character’s hair (almost a character in itself) in the movie Tangled.
  • Choreographing Destruction: Art Directing a Dam Break in “Tangled” – another Tangled talk from Walt Disney Animation, this one describes the way in which a complex water and rigid body simulation was art-directed for the “dam break” sequence.

Crowds

Scenes with large crowds are another differentiating factor between film and games. Sufficiently large crowds pose authoring and rendering challenges even for film; the solutions to these may be of interest to game developers working with smaller real-time crowds on next generation platforms. The three talks in this session discuss crowd case studies from three CG animated feature films:

  • Crowds on “Cars 2” – this talk discusses how Pixar Animation Studios improved their production pipeline to enable higher productivity when managing assets and controlling agent behaviors for Cars 2 crowd shots.
  • Synthesizing Complexity for Characters and Landscapes in “Rio” – this talk covers the systems used at Blue Sky Studios to procedurally generate large varied crowds of people and flocks of birds for the movie Rio, as well as the renderer enhancements done to efficiently ray-trace the resulting massive geometric detail.
  • Staging Carnival: Ray Tracing Crowds in “Rio” – another Blue Sky Studios talk about Rio, this time focusing on a specific case study (the carnival crowds).

There are eight more Talk sessions with relevant content, which I will cover in a subsequent blog post.

SIGGRAPH 2011 Talks – Part 1

After summarizing the course program, I’ll continue going over content in other SIGGRAPH 2011 programs which may be of interest to game developers or real-time rendering researchers. Next up is the Talks program; this post will also be a multi-parter, since there is a LOT of content to cover in this program.

Update July 16, 2011: Added link to “Coherent Out-of-Core Point-Based Global Illumination” EGSR 2011 paper.

Talks (which used to be called “Sketches” a few years ago) are short presentations – 20 minutes long (rare “long talks” are 40 minutes). Talks are a lot “leaner” than Technical Papers, which require detailed analysis, comprehensive citations of previous work, and comparisons to competing techniques. For this reason, SIGGRAPH Talks tend to be the venue of choice for industry practitioners, who often have limited time to spend on writing publications.

The SIGGRAPH Talk program has historically been dominated by talks from the fields of VFX and CG feature animation – many of these contain relevant information for game developers, but the game industry itself has been under-represented. SIGGRAPH 2011 has a record number of game industry Talks, but there is still a long way to go before we match the film people (I hope to get a lot closer in future years!).

I will now summarize relevant Talks regardless of speaker affiliation. Since Talks are scheduled in sessions of four, I will organize my summary along the same lines, skipping sessions without any relevant Talks and using the session order from the SIGGRAPH 2011 Talks page.

Pushing Production Data

This session contains four film talks, all of potential interest:

  • Coherent Out-of-Core Point-Based Global Illumination describes a system used at DreamWorks Animation for computing global illumination and ambient occlusion – the details may be of interest to game developers working on “baking” precomputation systems. There is also an EGSR 2011 paper by the authors on this topic.
  • Similarly, the information in Destroying Metro City: An Artist-Friendly and Efficient Demolition Pipeline for “Megamind” (also from DreamWorks Animation) could be relevant for precomputation of destroyed and fractured versions of game assets.
  • The efficient digital acquisition of real-world props is a problem facing games as well as film; PhotoSpace: A Vision-Based Approach for Digitizing Props describes an interesting system used at Weta Digital for this.
  • Games and film development are not “one size fits all” – individual games and films often require specific assets which can benefit from specialized authoring and rendering systems. Artistic Rendering of Feathers for Animated Films (yet another DreamWorks Animation talk) describes such a system.

Facing Hairy Production Problems

This session contains one game talk and three relevant film talks:

  • Extensive use of geometry instancing is important in both games and film to save on asset authoring time and memory. The talk Kami Geometry Instancer: Putting the “Smurfy” in Smurf Village describes an instancing pipeline developed by Sony Pictures Imageworks which allows for distinct deformation of individual instances.
  • The talk Making Faces: Eve Online’s New Portrait Rendering describes the impressive new avatar portrait system developed by CCP Games for the Eve Online space MMO.
  • SpeedFur: A GPU-Based Procedural Hair and Fur Modeling System describes a hair modeling system (developed by Fido). The procedural authoring system and the GPU-accelerated preview mode both appear relevant for hair and fur in games.
  • The talk GPU Fluids in Production: A Compiler Approach to Parallelism details a specialized CPU/GPU parallel compiler for fluid simulation developed by Double Negative Visual Effects. New parallelism approaches are always interesting, and I suspect fluid simulation will be a major differentiating feature for games on the next generation of platforms.

Eye on the Road

Two of the talks in this session (one by game developers, and one by academic researchers) appear relevant:

  • MotorStorm Apocalypse: Creating Urban Off-Road Racing – this talk by Evolution Studios presents rendering and tools advances which enabled adding large-scale dynamic events to MotorStorm Apocalypse, the latest entry in the MotorStorm racing game franchise (also showcased in The Sandbox).
  • Facial scanning has been a topic of heightened interest in the game industry since its highly publicized use in L.A. Noire. The talk Facial Cartography: Interactive High-Resolution Scan Correspondence (by Paul Debevec’s graphics lab at the USC Institute for Creative Technologies) covers some interesting advances in this area.

Tiles and Textures and Faces Oh My!

This session contains talks by game developers, CG feature animation professionals, and hardware vendors; all four are relevant:

  • Artist-guided procedural authoring systems can help with the asset creation issues faced by both game and film production. The talk Procedural Mosaic Arrangement In “Rio” details Blue Sky Studios’ art-directable procedural pipeline for sidewalk and street tile mosaics.
  • Programmable tessellation is one of the primary features of DirectX11, but authoring content for it can be challenging. NVIDIA‘s talk Generating Displacement From Normal Map for Use in 3D Games describes one possible solution to this problem.
  • The film industry has found the open-source Ptex (per-face texture mapping) technology developed by Walt Disney Animation extremely useful for getting rid of UV layout issues. The talk Per-Face Texture Mapping for Real-Time Rendering (jointly presented by an NVIDIA developer technology engineer and the first author of the original Ptex paper) presents a real-time implementation of this technology.
  • Skinning is one of the most fundamental technologies in game rendering and has not changed much in the last twenty years. The talk Spherical Skinning With Dual Quaternions and QTangents presents some skinning improvements achieved by Crytek during development of the Crysis franchise. (A sketch of dual-quaternion blending follows this list.)
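
For reference, the sketch below shows the core of generic dual-quaternion skinning – blend the weighted joint dual quaternions, renormalize, and transform the vertex – which avoids the volume-collapse artifacts of linear blend skinning. This is the textbook technique, not Crytek’s code; their talk adds QTangents and other production refinements on top:

```cpp
// Dual-quaternion skinning: each joint transform is a unit dual quaternion;
// per vertex, blend the weighted joint dual quaternions, normalize, and
// apply the result.
#include <cmath>
#include <cstdio>

struct Quat { float w, x, y, z; };

Quat Mul(const Quat& a, const Quat& b) {  // Hamilton product
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

struct DualQuat { Quat real, dual; };  // real = rotation, dual = 0.5*(0,t)*real

// Transform a point by a *unit* dual quaternion.
void Transform(const DualQuat& dq, float p[3]) {
    Quat q  = dq.real;
    Quat pq = { 0, p[0], p[1], p[2] };
    Quat cj = { q.w, -q.x, -q.y, -q.z };
    Quat r  = Mul(Mul(q, pq), cj);       // rotate: q * (0,p) * conj(q)
    Quat t  = Mul(dq.dual, cj);          // translation = 2 * dual * conj(real)
    p[0] = r.x + 2 * t.x;
    p[1] = r.y + 2 * t.y;
    p[2] = r.z + 2 * t.z;
}

int main() {
    // Two joints: identity, and a 90-degree rotation about Z (no translation).
    const float s = std::sqrt(0.5f);
    DualQuat j0 = { {1, 0, 0, 0}, {0, 0, 0, 0} };
    DualQuat j1 = { {s, 0, 0, s}, {0, 0, 0, 0} };
    float w0 = 0.5f, w1 = 0.5f;

    // Blend (a real implementation also flips signs for the shortest path),
    // then normalize by the real part's length.
    DualQuat b = { { w0*j0.real.w + w1*j1.real.w, w0*j0.real.x + w1*j1.real.x,
                     w0*j0.real.y + w1*j1.real.y, w0*j0.real.z + w1*j1.real.z },
                   { w0*j0.dual.w + w1*j1.dual.w, w0*j0.dual.x + w1*j1.dual.x,
                     w0*j0.dual.y + w1*j1.dual.y, w0*j0.dual.z + w1*j1.dual.z } };
    float len = std::sqrt(b.real.w*b.real.w + b.real.x*b.real.x +
                          b.real.y*b.real.y + b.real.z*b.real.z);
    b.real.w /= len; b.real.x /= len; b.real.y /= len; b.real.z /= len;
    b.dual.w /= len; b.dual.x /= len; b.dual.y /= len; b.dual.z /= len;

    float p[3] = { 1, 0, 0 };
    Transform(b, p);  // rotates halfway (45 degrees) instead of collapsing
    std::printf("(%.3f, %.3f, %.3f)\n", p[0], p[1], p[2]);
}
```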

Let There Be Light

This session contains three CG feature animation case studies, all with interesting information for game developers:

  • I find Rango to be an intriguing case of live-action and VFX methods being used by Industrial Light and Magic to make a CG animated feature with a unique photorealistic style. The talk “Rango”: A Case of Lighting and Compositing a CG-Animated Feature in an FX-Oriented Facility appears to have some interesting information on the methods used by the lighting and compositing artists.
  • Ocean Mission on “Cars 2” – this talk describes how Pixar addressed several multi-disciplinary challenges involving the ocean in the opening sequence of Cars 2.
  • Hair is another area where games lag noticeably behind film, so learning about film methods is valuable. The talk Untangling Hair Rendering at Disney details technology, tools and workflow advances adopted by Walt Disney Animation for the film Tangled.

Out Of Core

The four talks in this session are all relevant for game developers or real-time rendering researchers:

  • Google Body: 3D Human Anatomy in the Browser – this talk describes how Google used the WebGL API in an innovative way to create an impressive in-browser application. Browser-based games are a rapidly increasing market, making APIs such as WebGL important to many game developers. The Google Body application is also showcased in The Sandbox.
  • As a possible future alternative to the traditional rendering pipeline, ray tracing sparse voxel octrees has attracted some interesting research work, including GigaVoxels (several publications on which can be found on Cyril Crassin’s webpage). The talk Interactive Indirect Illumination Using Voxel Cone Tracing: An Insight builds on the GigaVoxels work to compute indirect lighting and ambient occlusion for complex scenes in real time. A preview was presented as an I3D 2011 poster, and various materials relating to the SIGGRAPH Talk can be found on a dedicated web page. (A schematic sketch of the cone-tracing loop follows this list.)
  • Rendering the Interactive Dynamic Natural World of the Game: From Dust – in this talk, Ubisoft Montpellier discusses the simulation and rendering techniques used for the dynamic world of the game From Dust.
  • Out-of-Core GPU Ray Tracing of Complex Scenes – this talk covers the CentiLeo GPU ray tracer (based on Kirill Garanzha’s PhD research at the Keldysh Institute of Applied Mathematics), which can render models composed of several hundred million polygons in real time. More information on CentiLeo (including a video of it in action) can be found here.
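
As background for the voxel cone tracing talk above, here is a schematic version of the cone-marching loop: the step size and the mip level of the prefiltered voxel data grow with the cone footprint, and samples are composited front to back. SampleVoxelMip() and all constants are placeholders for illustration, not the authors’ implementation:

```cpp
// Cone marching over a prefiltered voxel structure: sample at a mip level
// whose footprint matches the cone diameter, composite front-to-back, and
// stop once the accumulated opacity is (nearly) saturated.
#include <cmath>
#include <cstdio>

struct Sample { float radiance, alpha; };

// Placeholder: fetch prefiltered radiance/occlusion at distance t and a
// given mip level (coarser mips average larger regions of the scene).
Sample SampleVoxelMip(float t, float mip) {
    float occ = 0.05f * (1.0f + 0.2f * mip);  // stand-in data, ignores t
    return { 0.3f * occ, occ };
}

int main() {
    const float kConeRatio = 0.5f;  // cone diameter per unit distance
    const float kVoxelSize = 0.1f;

    float t = kVoxelSize;           // start just off the surface
    float radiance = 0.0f, alpha = 0.0f;
    while (alpha < 0.99f && t < 10.0f) {
        float diameter = std::fmax(kVoxelSize, kConeRatio * t);
        float mip = std::log2(diameter / kVoxelSize);  // match mip to footprint
        Sample s = SampleVoxelMip(t, mip);
        // Front-to-back compositing.
        radiance += (1.0f - alpha) * s.radiance;
        alpha    += (1.0f - alpha) * s.alpha;
        t += diameter * 0.5f;       // step proportional to cone width
    }
    std::printf("indirect radiance %.3f (alpha %.3f)\n", radiance, alpha);
}
```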

SIGGRAPH 2011 Courses – Part 4

This is the fourth and last in a series of posts on the SIGGRAPH 2011 course program, each describing several of the courses that will be presented at the conference. Links to previous posts in the series: Part 1, Part 2, Part 3.

Cinematography: The Visuals & the Story

Cinematography is the art of communicating a story via camera and lighting choices. As a game developer, I find it fascinating for several reasons. One is that it is such a well-established art; over a century old, and based upon many of the principles of still older arts such as photography and painting. The maturity of the field can be seen in the way that the practice is codified – there are clear roles in film production; everyone knows what a director of photography, first camera assistant, etc. do from film to film. The field’s most prominent professional organization, the American Society of Cinematographers, was created in 1919, and its magazine American Cinematographer has been discussing tips and tricks of the trade since 1920. It’s an interesting contrast to game development – an extremely young discipline where most of the fundamentals are still being figured out.

Another reason I’m interested in cinematography is its relevance to game visuals; the primary problem (turning three-dimensional scenes into compelling screen images that carry a narrative) is the same. While issues of camera placement may be less relevant for some game genres (e.g. first person shooters), lighting, color, and scene composition considerations are relevant for almost any game.

The third reason is that most game developers (including myself until fairly recently) are either unaware of this vast wealth of relevant knowledge, or are indifferent to it. CG animated features have made great strides by incorporating principles of live-action cinematography; not many videogames are doing the same.

For these reasons, I’m glad to see a SIGGRAPH course covering cinematographic fundamentals. The speaker, Bruce Block, has had a lot of experience working in film (albeit not in the camera department) and has written a well-regarded and influential book (The Visual Story) about how visual structure is used to present story in film.

Storytelling With Color

The way in which color choices are applied throughout production is another area where I think games have a lot to learn from film. In film, the colors of almost every costume and piece of set decoration are part of a conscious choice to drive the narrative, establish a mood, or support character development. This was brought home to me last year when I visited Pixar and saw the “color script” for Toy Story 3 – a wall covered by postcard sized sketches, one for each shot in the film. Each rough sketch blocked out the shapes and colors in the shot, and when they were put together, you could clearly see how the carefully chosen color palette helped drive the story and emotional tone of the movie. Two of the Toy Story 3 color script images can be seen here, and the entire color script for a different Pixar film (Up) can be seen here.

This course will cover exactly these kinds of color choices, and will be presented by Kathy Altieri (Production Designer, DreamWorks Animation) and Dave Walvoord (Digital FX Supervisor, DreamWorks Animation). Kathy’s career in TV and film spans three decades; after working on backgrounds for multiple animated TV shows as well as classic animated feature films such as The Little Mermaid, Aladdin and The Lion King, she moved to DreamWorks, where she was Art Director on The Prince of Egypt and Production Designer on Spirit: Stallion of the Cimarron, Over the Hedge, and How to Train Your Dragon. Dave has 15 years of experience in VFX and CG feature animation, working at Blue Sky on films such as Fight Club and Ice Age before joining DreamWorks, where he was CG Supervisor on Shark Tale, Over the Hedge and Kung Fu Panda and Digital FX Supervisor on Kung Fu Panda 2.

Applying Color Theory to Digital Media and Visualization

This is another course on color, but focused more on theory and on non-entertainment applications, such as scientific visualization. The course is presented by Theresa-Marie Rhyne, a prominent visualization expert with three decades of experience as a researcher, educator, designer and artist. She has taught several courses on this topic, most recently at IEEE Visualization 2010 (a video of her slides from that talk is available online), and has a blog on the topic as well. Interestingly, she has already put up a video of the slides from the upcoming SIGGRAPH 2011 course.

Liquid Simulation With Mesh-Based Surface Tracking

While most fluid rendering and simulation work over the years has focused on level-set approaches, an important recent trend in this area consists of tracking a mesh over the surface of the fluid, thus enabling more detailed surfaces. This advanced course (prior knowledge of fluid simulation techniques is assumed) covers the current state of the art in this important area, and is presented by Chris Wojtan (Assistant Professor, Institute of Science and Technology Austria), Matthias Müller-Fischer (Research Lead, NVIDIA), and Tyson Brochu (PhD Candidate, University of British Columbia). Having performed much of the leading research in this area, the speakers are uniquely qualified to speak about the topic.

Although complex fluid simulations are used extensively in film VFX and animated features, they are currently too computationally expensive for games. As game platforms become more powerful, I believe this will change. There are already some impressive real-time demonstrations, for example the Raging Rapids Rides demo which will be shown at the SIGGRAPH 2011 Real-Time Live! program and the SIGGRAPH 2011 paper Real-Time Eulerian Water Simulation Using a Restricted Tall-Cell Grid, which has an impressive video here (check out the lighthouse part at the end). Note that one of the course speakers (Matthias) was involved with both of these examples.

Introduction to Modern OpenGL Programming

Dave Shreiner (co-author of the famous OpenGL Red Book, which has a new edition coming out this November) has taught an introductory course on OpenGL (almost) every year at SIGGRAPH since 1998. He was accompanied by various co-lecturers – most often Edward Angel – and evolved the course content to keep up with changes in the OpenGL API. The only two years Dave didn’t do this course were 2003 (when he did a “performance OpenGL” course instead of an introductory course – in some other years he did both) and 2010 (when there was no OpenGL course for the first time since 1992). Dave and Edward are back this year with an updated course, which should be of great interest to beginning graphics programmers, OpenGL programmers who have been using older versions of the API, or experienced graphics programmers with plans to start working with OpenGL.

A course on this topic couldn’t hope for better speakers. Besides his highly influential books and courses, Dave Shreiner also had an important role in the development of OpenGL (and its spinoff OpenGL ES) in the 15 years he worked at SGI (where OpenGL evolved from the proprietary IRIS GL library) and since, as Technical Advisory Panel Chair for The Khronos Group and Director of Graphics Technology at ARM. Edward Angel has taught at the University of New Mexico for over 30 years; he holds the positions of Professor Emeritus of Computer Science and Founding Director of the Art, Research, Technology and Science Laboratory (ARTS Lab). Edward has written several influential books on computer graphics, most notably the OpenGL Primer and Interactive Computer Graphics.

Modeling 3D Urban Spaces Using Procedural and Simulation-Based Techniques

As scene complexity increases, the amount of artist work (and thus the expense) required to create these scenes increases commensurately, a problem that afflicts both film and game production. Audience expectations are always increasing, and budgets cannot keep pace – more efficient ways to model large, complex scenes must be found. While most natural scenes are very complex, techniques for procedurally modeling them have been used in production for some time; see off-the-shelf products such as Vue and SpeedTree, or in-house tools such as were used to model trees in Tangled. Urban scenes can be as complex, but tools for modeling them procedurally have been less widely used (the creation of 1930s New York City in the 2005 remake of King Kong is a notable example – more details here). The last few years have seen a flourishing of research into procedural modeling of buildings and cities, and the fruits of this research are finding their way into production. This course will cover procedural as well as image-based and simulation-based modeling techniques, and is targeted at applications including computer games, movies, architecture, and urban planning.
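
To give a flavor of what procedural building modeling looks like, here is a toy split-grammar sketch in the spirit of CGA-style facade rules, where a facade is recursively subdivided into floors and then into window tiles. The rules and dimensions are invented for illustration:

```cpp
// A toy split grammar: subdivide a facade region into floors, then each
// floor into window tiles, emitting one "window" per leaf region.
#include <cstdio>

struct Region { float x, y, w, h; };

void EmitWindow(const Region& r) {
    std::printf("window at (%.1f, %.1f) size %.1fx%.1f\n", r.x, r.y, r.w, r.h);
}

void SplitFloorIntoTiles(const Region& floor, float tileWidth) {
    int n = (int)(floor.w / tileWidth);
    float w = floor.w / n;                  // snap tiles to fill the floor
    for (int i = 0; i < n; ++i)
        EmitWindow({ floor.x + i * w, floor.y, w, floor.h });
}

void SplitFacadeIntoFloors(const Region& facade, float floorHeight) {
    int n = (int)(facade.h / floorHeight);
    float h = facade.h / n;                 // snap floors to fill the facade
    for (int i = 0; i < n; ++i)
        SplitFloorIntoTiles({ facade.x, facade.y + i * h, facade.w, h }, 3.0f);
}

int main() {
    SplitFacadeIntoFloors({ 0, 0, 12.0f, 9.0f }, 3.0f);  // 3 floors, 4 tiles each
}
```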

This course will have five speakers, each extremely well-suited to teach a course of this type: Peter Wonka (Associate Professor, Arizona State University), Daniel Aliaga (Associate Professor, Purdue University), Carlos Vanegas (Research Assistant, Purdue University), Pascal Mueller (Founder & CEO, Procedural Inc.), and Michael Frederickson (Technical Director, Pixar). The first four speakers have, between them, performed or led most of the notable academic research in this area. Pascal Mueller has founded a company (Procedural Inc.) based on his research, which sells a commercial software package (CityEngine) for procedural urban modeling (Peter Wonka serves on Procedural’s advisory board). The last speaker, Michael Frederickson, was responsible for modeling the 40,000 buildings in the city of London as seen in the movie Cars 2, and it appears that this will be the topic of his presentation. Presumably (given his participation in this course, and also given the magnitude of the task) this was done procedurally. While watching Cars 2 (story issues aside) I was struck by the visuals in the film – the urban environments in particular, especially London; I look forward to finding out how this was done.

3D Spatial Interaction: Applications for Art, Design, and Science

This course will be taught by Joseph LaViola (Assistant Professor, University of Central Florida) and Daniel Keefe (Assistant Professor, University of Minnesota). Last year at SIGGRAPH 2010, Prof. LaViola taught (with Richard Marks, the primary researcher behind Sony’s EyeToy and Move peripherals) a course about spatial interaction with videogame motion controllers. This year’s course, judging by its abstract, appears to be focused on applications other than videogames. These novel interfaces surely have interesting applications in many fields, and this course will be of interest to many. Both Prof. LaViola and Prof. Keefe have done important research in this field, and Prof. LaViola has authored a book on the subject.

Build Your Own Glasses-Free 3D Display

Last year, two of this course’s speakers, Douglas Lanman (Postdoctoral Associate, MIT Media Lab) and Matthew Hirsch (PhD Student, MIT Media Lab), taught a SIGGRAPH 2010 course called Build Your Own 3D Display. This year, they are joined by Gregg Favalora (Principal, Optics for Hire) and are focusing the course specifically on autostereoscopic displays, which do not require glasses. Douglas and Matthew have done important research into this area – most notably this SIGGRAPH Asia 2010 paper – and have taught versions of this course not only at SIGGRAPH 2010 (as mentioned), but also at SIGGRAPH Asia 2010. Gregg has 15 years of experience as an entrepreneur, inventor, and researcher and has authored multiple key publications and patents relating to autostereoscopic display design.

Advances in New Interfaces for Musical Expression

This course is presented by Michael Lyons (Professor, Ritsumeikan University) and Sidney Fels (Professor, University of British Columbia), who in 2001 organized the first workshop on New Interfaces for Musical Expression (NIME). This workshop, dedicated to scientific research on the development of new technologies for musical expression and artistic performance, has since blossomed into a full-fledged international conference. This course will summarize the content of the last several years of NIME, covering both theory and practice and presenting several case studies.

SIGGRAPH 2011 Courses – Part 3

Third post in a series about the SIGGRAPH 2011 courses (Part 1 and Part 2).

Stereoscopy From XY to Z

Although there had been fits and starts since the mid-1950s, stereoscopic (“3D”) feature films really kicked off in 2009. This was primarily due to the convergence of two factors: CG animation and Avatar. CG animated features are easier for stereoscopy since they don’t require bulky and expensive stereoscopic cameras; Disney Animation had been doing all their CG animated films in 3D since Chicken Little (2005), joined in 2009 by Pixar and DreamWorks with Up and Monsters vs. Aliens respectively. Avatar’s huge box-office success in the same year goosed studio executives into mandating stereoscopic releases of VFX-heavy live-action films as well. Although somewhat controversial among experts (mostly due to brightness issues), the increase in stereoscopic theatrical content resulted in a push for compatible televisions, Blu-ray players and game consoles at home. Around the same time, the PC side of the game market also saw an increase in stereoscopic support (mostly led by NVIDIA). By 2011, stereoscopy had become a dominant trend in computer graphics, with implications in areas ranging from videogame user interfaces to feature shot editing. Many of these implications are as yet not commonly understood, which increases the need for courses like this one.

The course is presented by Samuel Gateau (3D Software Engineer, NVIDIA) and Robert Neuman (Stereoscopic Supervisor, Walt Disney Animation Studios) who have presented earlier versions of it at SIGGRAPH Asia 2010 and at FMX 2011. This time Samuel and Robert are joined by Marc Salvati (R&D Software Engineer, OLM Digital). It appears that the course will cover both the technical and aesthetic aspects of stereoscopy, for games as well as film. The speaker lineup is well-suited for this scope; Samuel has helped many game developers integrate stereoscopy into their titles, Marc has worked on tools for converting Japanese animation to 3D (the topic of a separate talk this year), and Robert has supervised stereoscopy for several films at Disney Animation, most recently working on the stereoscopic conversions of classic hand-animated Disney films (also the topic of a separate talk).

Production Volume Rendering (Part I and Part II)

The SIGGRAPH 2010 course Volumetric Methods in Visual Effects was a great look into an important and little-understood area of production rendering, so I was happy to see that an updated and expanded version will be presented this year. Both courses are organized by Magnus Wrenninge (Senior Technical Director, Sony Pictures Imageworks) and Nafees bin Zafar (Senior Production Engineer, DreamWorks Animation). Magnus has been working on visual effects software at Imageworks (and previously at Digital Domain) for almost a decade, in later years mostly focusing on volumetric modeling and rendering. He is currently in the process of writing a book on the topic, which will include source code for a fully functional volume renderer. Nafees has worked on simulation and volumetrics tools (at DreamWorks and previously at Digital Domain) for over ten years, winning a Scientific and Engineering Academy Award in 2007. The course is divided into two parts. Part I (“Fundamentals”) is presented by Magnus and Nafees, and is an overview of the fundamental technologies behind computer-generated volumetric elements such as clouds, fire, and whitewater. At 90 minutes, Part I is an expansion of the first hour of last year’s course, and includes an introduction to the subject, followed by in-depth explanations of how volumetric effects are modeled and rendered.

Over three hours long, Part II (“Systems”) is a greatly expanded version of the second half of last year’s course. It will focus on specific VFX volumetric technologies, tools, workflows and case studies. Nafees and Magnus will each give a presentation on the systems used at their respective studios. In addition, there will be presentations by speakers from the following companies:

  • Double Negative: presented by Ollie Harding (R&D Programmer) and Gavin Graham (CG Supervisor). I wasn’t able to find out much about Ollie; Gavin has worked at Double Negative for over ten years, during which he did various shot-based effects work, assisted R&D in battle-testing in-house volumetric rendering and fluid simulation tools, and CG-supervised several effects-heavy feature films.
  • Rhythm & Hues: Jerry Tessendorf (former Principal Graphics Scientist) and Victor Grant (FX Supervisor). Jerry Tessendorf is currently Director of the Digital Production Arts Program at Clemson University, following an extensive and highly influential body of work in simulation and VFX production spanning three decades. Notable achievements include a Technical Achievement Academy Award and a series of hugely influential SIGGRAPH presentations on ocean wave simulation (the latest version of the notes and slides are well worth reading). Victor Grant has worked on VFX for many feature films over the past decade, specializing in volumetric modeling and rendering as well as particle and fluid simulation.
  • Side Effects Software: Andrew Clinton (Software Developer). Side Effects’ Houdini software is used extensively in the VFX industry; Andrew is responsible for the research and development of Houdini’s Mantra renderer. He has worked on improvements to the volumetric rendering engine, a micropolygon-like approach to volume rendering, a physically-based renderer, and a port of the renderer to the Cell processor.
  • Weta Digital: Antoine Bouthors (R&D Engineer): Weta is a new addition compared to last year’s course. Before joining Weta, Antoine worked on research including realistic rendering of clouds in real time.

Volumetric effects are one of the areas where the gap between game and film visuals is biggest; as game platforms become more powerful, game developers will start focusing R&D efforts on this topic. In parallel, VFX houses will develop ways to rapidly previsualize feature film volumetric effects, to allow for better artist control and directability. I predict that in the next few years these converging lines of research will “meet in the middle”, enabling unprecedented scale and quality of volumetric effects in games. Attending this course is a good way for game developers and real-time rendering researchers to get a head start on this process.

Compiler Techniques for Rendering

This course is a bit more specialized than the others I’ve discussed. It is focused on the uses of advanced compiler technology for rendering, covering five different projects which are on the cutting edge of this technology trend. Most of the techniques use LLVM and/or involve the compilation of shading languages. The course is comprised of five talks:

  • Intro to LLVM, and Native RSL Shader Compilation, presented by Mark Leone (Researcher, Weta Digital): Before joining Weta, Mark led development at Intel of a new shading language for native rendering on Larrabee, and previously worked on the RenderMan shading system at Pixar. His talk will begin with an overview of LLVM (useful background for several of the other talks), and continue with a description of the implementation of the PostHaste system, which analyzes RenderMan shaders and automatically identifies kernels within them that can be compiled for x86 native execution using LLVM.
  • Open Shading Language, presented by Larry Gritz (Principal Engineer, Sony Pictures Imageworks): Larry Gritz is the chief architect of the Imageworks in-house renderer, as well as the designer and open source administrator of the Open Shading Language (OSL) and OpenImageIO projects. Other rendering systems for which he’s had a leading architectural role include NVIDIA’s Gelato GPU-accelerated film-quality renderer, Exluna’s Entropy renderer, Pixar’s PhotoRealistic RenderMan, and BMRT. Larry’s talk describes the design and implementation of OSL, which was developed by Imageworks for use in its in-house renderer, and released as open source software. OSL is specifically designed for advanced rendering algorithms and has a number of key technologies whose implementations will be discussed: radiance closures, light path expressions, automatic differentiation, and LLVM just-in-time compilation.
  • AnySL: Efficient Portable Multi-Language Shading, presented by Philipp Slusallek (Scientific Director, German Research Center for Artificial Intelligence – DFKI): Philipp leads the “Agents and Simulated Reality” research lab at DFKI. He is also a full professor for Computer Graphics at Saarland University, where he holds the additional positions of Director of Research at the Intel Visual Computing Institute, principal investigator at the Cluster of Excellence in Multimodal Computing and Interaction, and founding speaker of the Competence Center for Computer Science. Philipp’s talk will describe the AnySL system, which compiles shaders from different languages into a common, portable representation, using a generic shading library. AnySL also incorporates an embedded compiler based on LLVM that instantiates this generic code in terms of the renderer’s native types and operations, and it supports programmable kernels for tasks other than shading – such as animation, geometry processing, tessellation, and image processing.
  • Automatic Shader Bounding for Efficient Global Illumination, presented by Bruce Walter (Research Associate, Cornell University Program of Computer Graphics): Bruce’s research focuses on expanding the capabilities of physically-based rendering and global illumination algorithms with respect to robustness, scalability, and generality. He has published many related research papers at SIGGRAPH and elsewhere, including my favorite BRDF paper. This talk will discuss research that was published in a SIGGRAPH Asia 2009 paper, which uses a compiler to automatically generate interval versions of programmable shaders. These interval versions can be used to provide the high-level query functions needed by physically-based rendering systems (such as ray tracers). (A tiny hand-written interval example follows this list.)
  • Compilation for GPU Accelerated Ray Tracing in OptiX, presented by Steven Parker (Director of HPC & Computational Graphics, NVIDIA): Steven also leads the OptiX ray tracing team; prior to joining NVIDIA he built a long record of research and publication in interactive ray tracing and scientific computing. Steven’s talk will discuss the domain-specific just-in-time compiler that lies at the core of the NVIDIA OptiX ray tracing engine. This compiler generates custom ray tracing kernels by combining user-supplied programs for ray generation, material shading, object intersection, and scene traversal. The CUDA C compiler is used for writing shader programs with function overloading, templates, and full pointer support, while the just-in-time compiler provides ray tracing-specific optimizations. Steven will discuss some of the compiler analysis techniques that enable a natural programming model, support a rich object model designed for compact scene representation, and provide dynamic dispatch for complex scenes and continuations for recursion, all while executing efficiently on a CUDA-enabled GPU.
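To make the interval-shader idea concrete, here is a minimal interval-arithmetic sketch (my own, not code from Bruce’s talk): every scalar in a shader is replaced by a [lo, hi] range, so a single evaluation over an interval of inputs yields a guaranteed bound on the shader’s output. The shade function below is a stand-in for an arbitrary compiled shader.

```cpp
#include <algorithm>
#include <cstdio>

// A closed interval [lo, hi]; replaces each scalar in the shader.
struct Interval {
    double lo, hi;
};

Interval operator+(Interval a, Interval b) {
    return { a.lo + b.lo, a.hi + b.hi };
}

Interval operator*(Interval a, Interval b) {
    // The product's bounds are the min/max of the four corner products.
    double p[4] = { a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi };
    return { *std::min_element(p, p + 4), *std::max_element(p, p + 4) };
}

// Stand-in "shader": f(x) = x * x + x, transformed to interval form.
Interval shade(Interval x) { return x * x + x; }

int main() {
    // Bound the shader's output over all x in [0.25, 0.5] in one evaluation.
    Interval bound = shade({ 0.25, 0.5 });
    std::printf("f([0.25, 0.5]) lies within [%g, %g]\n", bound.lo, bound.hi);
}
```

A renderer can use such conservative bounds to answer high-level queries – for example, whether a shader’s contribution over a region can ever matter – without sampling the shader exhaustively.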

Another project which seems to fit in with this “compilers for rendering” trend (though not covered in this course) is Microsoft’s recent work to enable symbolic differentiation in HLSL.
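Since automatic differentiation comes up both in OSL and (in symbolic form) in this Microsoft work, a tiny forward-mode sketch may help readers who haven’t seen it; this is the textbook dual-number construction, not code from either project:

```cpp
#include <cstdio>

// A dual number carries a value and its derivative together;
// arithmetic on duals applies the chain rule automatically.
struct Dual {
    double v;  // value
    double d;  // derivative with respect to the chosen input
};

Dual operator+(Dual a, Dual b) { return { a.v + b.v, a.d + b.d }; }
Dual operator*(Dual a, Dual b) { return { a.v * b.v, a.d * b.v + a.v * b.d }; }

int main() {
    // Differentiate f(x) = x * x + x at x = 3 in a single evaluation.
    Dual x = { 3.0, 1.0 };  // seed dx/dx = 1
    Dual f = x * x + x;
    std::printf("f(3) = %g, f'(3) = %g\n", f.v, f.d);  // prints 12 and 7
}
```

Derivatives like these are exactly what a shading system needs for tasks such as texture filtering and anti-aliasing of procedural patterns.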

SIGGRAPH 2011 Courses – Part 2

This is a continuation of the series of posts started here.

Character Rigging, Deformations, and Simulations in Film and Game Production

Character animation is one of those areas where film and game production have intriguing similarities as well as differences, especially in the ways that character meshes deform in response to animation and simulation (a minimal skinning sketch follows the speaker list below). This course includes three talks, each covering a different application domain: games, visual effects, and feature animation. These talks will be presented by:

  • David Coleman (Senior CG Supervisor, Electronic Arts Canada). David (who has worked at Electronic Arts for 15 years and is currently responsible for the central team that provides rigging for many of EA’s sports titles) will present the games portion of the course. He will discuss character rigging, deformations and simulations in game production, emphasizing the technical restrictions imposed by the real-time and interactive nature of games. This talk will also cover some strategies for setting up procedural secondary rigging systems in Maya and MotionBuilder, and at run-time in games.
  • Tim McLaughlin (Department Head and Associate Professor, Department of Visualization at Texas A&M University). Tim (who had 13 years of experience at ILM – most of it on digital creatures – before heading the Texas A&M Department of Visualization) will discuss rigging for visual effects. He will cover the unique requirements brought on by integration with live action, but also the affordances offered by the limited scope of performance requirements relative to feature animation and games. Tim will discuss rigging modularity, provisions for animator control, non-linear deformations, the areas of highest importance for deformations, and the efficient use of muscle systems.
  • Larry Cutler (Supervising Character TD, DreamWorks Animation). Larry (who has worked at DreamWorks Animation for 10 years, and at Pixar for four years before that) will discuss rigging issues for feature animation. Larry’s talk will deal with the impact that character design, modeling, and scalability across thousands of shots have on rigging, deformation, and simulation. He will discuss the issues arising from the unique needs of feature animation: accommodating an extreme range of motion, and an increased emphasis on art directability and animator control. Larry will also cover hair, cloth, and facial animation systems.
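As background for the deformation discussion, here is the baseline technique all three domains build upon: linear blend skinning, where each vertex blends the transforms of the bones that influence it. This is a deliberately stripped-down sketch of my own (translation-only bones; real rigs use full matrix or dual-quaternion transforms), not code from any of the talks.

```cpp
#include <cstdio>

struct Vec3 { double x, y, z; };

// A bone transform, reduced to a pure translation for brevity.
struct Bone { Vec3 t; };

// Linear blend skinning for one vertex with two bone influences: the
// skinned position is the weighted sum of each bone's transform
// applied to the rest-pose position (weights sum to 1).
Vec3 skin(Vec3 rest, Bone b0, double w0, Bone b1, double w1) {
    return {
        w0 * (rest.x + b0.t.x) + w1 * (rest.x + b1.t.x),
        w0 * (rest.y + b0.t.y) + w1 * (rest.y + b1.t.y),
        w0 * (rest.z + b0.t.z) + w1 * (rest.z + b1.t.z),
    };
}

int main() {
    Vec3 rest = { 1.0, 0.0, 0.0 };
    Bone upper = { { 0.0, 1.0, 0.0 } }, lower = { { 0.0, 2.0, 0.0 } };
    Vec3 p = skin(rest, upper, 0.75, lower, 0.25);
    std::printf("skinned position: (%g, %g, %g)\n", p.x, p.y, p.z);
}
```

Much of what the speakers cover – secondary rigging, muscle systems, animator-controlled correctives – exists precisely because this simple blend is not expressive enough on its own.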

Destruction and Dynamics for Film and Game Production

Another “X for film and games” course, this time focusing on rigid body dynamics and destruction/fracturing methods. The course will cover production aspects such as authoring tools and game engine integration, in addition to the computational and algorithmic aspects. Like the last course, this one will highlight interesting commonalities and differences between film and game practice. There are areas where each can learn from the other: the film techniques can point the way to future methods for games running on more powerful platforms, and the efficient game methods are useful for fast prototyping, previsualization, and even speeding up final shots in film.

The course will start with a 30-minute presentation by the course organizer, Erwin Coumans (Principal Physics Engineer, AMD). Erwin has worked on physics in games for over a decade, and is also the main author of the open-source Bullet Physics Library. Although Bullet was originally designed for game use, it has been used on many films as well, including big-budget Hollywood blockbusters such as How to Train Your Dragon, Sherlock Holmes and 2012. Erwin will give an overview of the course, as well as a brief introduction to the basic theory of rigid body dynamics and destruction/fracturing methods. He will also cover collision detection and contact handling, approximate methods for modeling stress and strain, and how to decide when and where to break rigid bodies into several parts (a minimal breaking-threshold sketch follows the talk list below). The course will continue with the following talks:

  • Authoring Destruction With the Dynamica Bullet Maya Plugin (15 minutes), by Michael Baker (Faculty, Art Institute of Las Vegas): Michael has worked on Las Vegas casino games, visual effects for various short films and games, and the Bullet Physics Library (in particular the Dynamica Maya plugin, which is the primary topic of his talk). Michael will discuss the development and use of Dynamica to support choreographed rigid body behavior such as progressive crumbling of pre-shattered objects, sequential structural failure, and timed directional explosions.
  • Destruction and Dynamics Artist Tools for Film (45 minutes), by Nafees Bin Zafar (Senior Production Engineer, DreamWorks Animation) and Mark T. Carlson (Lead Engineer, DreamWorks Animation): Nafees has worked on simulation and volumetrics tools (both at DreamWorks and in his previous job at Digital Domain) for over ten years, winning a Scientific and Engineering Academy Award in 2007. Mark has worked on cloth, fluid and crowd simulation for six years at DNA Productions, Walt Disney Animation, and DreamWorks Animation. This talk will cover third-party software integration in the movie pipeline, building artist tools with Bullet, and authoring of destruction using Maya and Houdini. Examples from recent DreamWorks Animation movies will showcase the techniques described.
  • Deformable Rigid Bodies and Fragment Clustering for Film (45 minutes), by Brice Criswell (Senior Software Engineer, Industrial Light & Magic): Brice has been developing production-related software for 12 years at ILM, and specializes in rigid body and crowd dynamics. Brice’s talk is divided into three presentations. The first discusses a deformable rigid system which efficiently simulates on-impact bending and denting of normally rigid bodies. The second covers a fragment clustering system which allows artists to initialize sets of geometry as a single rigid body, then dynamically break the objects during the progression of the simulation. The third covers the challenges involved in animating, simulating, and deforming the tentacle beard of the Davy Jones character in the Pirates of the Caribbean movies. Each of the presentations will detail algorithms as well as production issues, and will include VFX production examples from prominent feature films.
  • Procedurally Generating Fragmented Meshes for Games (15 minutes), by Phil Knight (Lead Programmer, Avalanche Software – a division of Disney Interactive Studios): Phil has 13 years of game development experience, working most recently on Cars 2, Toy Story 3, and Bolt, and previously on the Links and Amped series. His talk will cover a procedural technique for automatically generating fragmented meshes, especially useful for modeling large explosions with lots of fragmentation and debris. Besides detailing the technique itself, Phil will also describe the fragmentation tool (‘Frag’) which implements it, and its use in game production at Disney Interactive Studios.
  • Accelerating Rigid Body Simulation and Fracture Using the GPU (30 minutes), by Takahiro Harada (Researcher, AMD): Takahiro has done research and development in physics simulation at the University of Tokyo and Havok, as well as in his current position at AMD (where he focuses on the use of GPU computing for physics simulation). He will present a GPU-based rigid body simulation which can be used to quickly simulate the large numbers of rigid bodies typically created by object destruction. The talk starts with an overview of the simulation and proceeds to the detailed GPU implementation of each of its stages.
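To illustrate the breaking criterion Erwin mentions, here is a toy sketch of my own (not Bullet’s actual API): accumulate each body’s contact impulses and swap in pre-shattered fragments once an impulse exceeds the body’s strength. Real systems also model stress propagation and use the contact location to choose where to break.

```cpp
#include <cstdio>
#include <vector>

// One rigid body that may be replaced by pre-shattered fragments.
struct Body {
    const char* name;
    double strength;    // impulse threshold before the body breaks
    bool broken = false;
};

// Toy breaking rule: a body fractures when a single contact impulse
// exceeds its strength; on fracture, spawn pre-authored fragments
// (e.g. from a Voronoi shatter of the original mesh).
void applyContactImpulse(Body& body, double impulse, std::vector<Body>& fragments) {
    if (body.broken || impulse <= body.strength) return;
    body.broken = true;
    fragments.push_back({ "fragment_a", body.strength * 0.5 });
    fragments.push_back({ "fragment_b", body.strength * 0.5 });
    std::printf("%s broke under impulse %g\n", body.name, impulse);
}

int main() {
    Body wall = { "wall", 10.0 };
    std::vector<Body> fragments;
    applyContactImpulse(wall, 4.0, fragments);   // below threshold: wall holds
    applyContactImpulse(wall, 25.0, fragments);  // above threshold: wall breaks
    std::printf("%zu fragments spawned\n", fragments.size());
}
```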

PhysBAM: Physically Based Simulation

Similarly to the previous course, this one is targeted at physics simulation and has strong ties to film production. However, its structure is very different; instead of covering a variety of production examples, it focuses on a single code library: PhysBAM, initially developed by Ronald Fedkiw and continued by him and many others at Stanford. PhysBAM is used by many VFX and feature animation houses, including ILM, Disney Animation, and Pixar; large portions were recently released under an open-source license. The course is presented by Craig Schroeder (PhD Student, Stanford Computer Science Department). It will cover the PhysBAM library release: how to obtain the source code, set up the library, and use it to run example smoke and water simulations, as well as the visualization and rendering tools included in the release. Beyond the library itself, the course will explain the underlying techniques that make these simulations possible, in particular level set methods such as fast marching, fast sweeping, and the particle level set method. It will also address the important stages of a fluid simulation, including advection, viscosity, and projection.
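To give a flavor of the advection step, here is a minimal 1D semi-Lagrangian advection sketch – the standard textbook scheme, not code from PhysBAM itself: each grid point traces backward along the velocity and samples the previous field there, which is what keeps the method stable at large time steps.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Semi-Lagrangian advection of a scalar field q by a constant velocity u
// on a periodic 1D grid: trace each sample back by u*dt and interpolate.
std::vector<double> advect(const std::vector<double>& q, double u, double dt, double dx) {
    const int n = static_cast<int>(q.size());
    std::vector<double> out(n);
    for (int i = 0; i < n; ++i) {
        double x = i - u * dt / dx;                       // backtraced position, in cells
        double base = std::floor(x);
        double t = x - base;                              // fractional part for lerp
        int i0 = ((static_cast<int>(base) % n) + n) % n;  // periodic wrap
        int i1 = (i0 + 1) % n;
        out[i] = (1.0 - t) * q[i0] + t * q[i1];           // linear interpolation
    }
    return out;
}

int main() {
    std::vector<double> q = { 0, 0, 1, 0, 0, 0, 0, 0 };   // a bump of "smoke"
    q = advect(q, /*u=*/1.0, /*dt=*/0.5, /*dx=*/1.0);
    for (double v : q) std::printf("%.2f ", v);           // bump shifts right half a cell
    std::printf("\n");
}
```

A full smoke solver alternates steps like this with the viscosity and projection (pressure) solves mentioned above.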

There are 12 courses left to cover; I’ll do so over my next few blog posts.

SIGGRAPH 2011 Courses – Part 1

At 18 courses, the SIGGRAPH 2011 course program is smaller than it has been in previous years, but what it lacks in size it more than makes up in quality. I’ll go over the list with a focus on courses of interest to game developers and/or real-time rendering researchers. If you are going to be attending SIGGRAPH, this should help you decide which courses to attend – if not, you’ll at least know which course notes and Encore videos to hunt down after the conference. Since this post is turning out to be quite long, I’ll split it up into several parts, spread out over the next few days.

UPDATES:

  • 6/20/2011: Added details to Beyond Programmable Shading regarding Peter-Pike Sloan’s talk and the Software Rasterization on GPUs talk, as well as correcting the titles of several of the speakers.
  • 6/21/2011: Added links to the papers High-Performance Software Rasterization on GPUs and VoxelPipe: A Programmable Pipeline for 3D Voxelization (the second link is the paper webpage – no PDF yet).
  • 6/24/2011: Removed an incorrect detail about the DEAA technique.
  • 7/10/2011: Updated the description of the Battlefield 3 / Need for Speed: The Run talk in the Advances in Real-Time Rendering in Games course.

Advances in Real-Time Rendering in Games (Part I & Part II)

Since 2006, this course series (organized by Natalya Tatarchuk, formerly at AMD and now at Bungie) has been my favorite thing to see at SIGGRAPH. Each year it has showcased new content from the cutting edge of game and IHV graphics development. Since Natalya joined Bungie, the emphasis has been less on IHV demos and more on games, which in my opinion makes the course even better – this year looks like the best yet! Part I starts with a brief introduction by Natalya and continues with four talks, each between 45 and 60 minutes in length:

  • Bungie’s Graphics Secret Sauce, by Natalya Tatarchuk (Senior Graphics Researcher, Bungie) and Hao Chen (Engineering Lead, Bungie): Bungie’s talk will cover the graphics techniques developed for the award-winning game Halo: Reach, along with some new research undertaken for Bungie’s next title.
  • Rendering in Cars 2, by Christopher Hall, Robert Hall, and David Edwards (Programmers at Avalanche Software): this talk will describe rendering techniques used in Cars 2: The Video Game, including offloading of post-processing and stereoscopy computations onto the PlayStation 3’s SPUs. Other topics covered will include new developments in color precision, post-processing effects, shadows, and the use of light probes.
  • Secrets of CryENGINE 3 Graphics Technology, by Tiago Sousa (Principal R&D Graphics Engineer, Crytek), Nickolay Kasyan (Senior Rendering Engineer, Crytek), and Nicolas Schulz (Graphics Engineer, Crytek): an overview of the novel deferred lighting approach used in CryENGINE 3, along with an in-depth description of optimization techniques (both general and platform-specific), stereoscopic 3D rendering, and shadowing.
  • Two Uses of Voxels in LittleBigPlanet 2’s Graphics Engine, by Alex Evans (CTO & Co-Founder, Media Molecule) and Anton Kirczenow (Senior Programmer, Media Molecule): this talk will describe a PlayStation 3-centric implementation of real-time dynamic scene voxelization and demonstrate two ways this voxel representation was used for rendering and special effects in the game LittleBigPlanet 2.

Part II also starts with a short introduction by Natalya; this is followed by five 30-50 minute talks:

  • More Performance! Five Rendering Ideas from Battlefield 3 and Need For Speed: The Run, by John White (Senior Rendering Engineer, NFS) and Colin Barré-Brisebois (Rendering Engineer, BF3): this talk will cover several techniques from Battlefield 3 and Need for Speed: The Run designed to increase performance without sacrificing visual quality. These will include chroma sub-sampling for faster full-screen effects, a novel DirectX 9+ scatter-gather approach to bokeh rendering, improved temporally-stable dynamic ambient occlusion, HiZ reverse-reload for faster shadows, and tile-based deferred shading on Xbox 360 (the last topic is a good complement to Christina Coffin’s GDC 2011 presentation giving PlayStation 3 implementation details).
  • Physically-based Lighting in Call of Duty: Black Ops, by Dimitar Lazarov (Lead Graphics Engineer, Treyarch): Dimitar will give an overview of the lighting architecture used in the Call of Duty games to achieve competitive visual quality at 60 frames per second. He will then describe the process of introducing a physically-based lighting model to the series in Call of Duty: Black Ops, from the premise behind the model to the specific benefits and issues encountered when integrating it into the game.
  • Real-time Image Quilting: Arbitrary Material Blends, Invisible Seams, and No Repeats, by Hugh Malan (Graphics Programmer, CCP Games): a pixel shader-based image quilting technique which handles situations where standard environment texturing has problems: transitions between arbitrary neighboring materials, localized texture features due to custom geometry, and geometry-dependent edge effects. Production details such as vertex sharing and compaction techniques, texture storage options, and implementation issues for PC and console will also be covered.
  • Dynamic Lighting in God of War III, by Vassily Filippov (Lead Game Programmer, SCEA Santa Monica): this talk will cover a novel forward lighting approach used in God of War III to create rich dynamically lit environments with dozens of light sources applied to a single pixel. The description will include a complete mathematical explanation of the algorithm, as well as implementation details such as the combination of multiple lights into a single aggregate light per vertex on the PlayStation 3’s SPUs (a minimal aggregation sketch follows this list), a new light interpolation approach which improved lighting accuracy, and the application of the aggregate lights per pixel on the GPU. Usability constraints, edge cases, and ways to reduce artifacts will be covered in detail.
  • Pre-Integrated Skin Shading, by Eric Penner (Rendering Engineer, Electronic Arts Vancouver): Eric will describe a technique for rendering realistic skin in games, where rather than gathering neighboring light to simulate subsurface scattering, the effects of scattered light are pre-integrated. This achieves the non-local look of subsurface scattering using only locally stored information and a custom shading model.
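I don’t know the exact math Vassily will present, but the basic idea of collapsing many lights into one aggregate per vertex can be sketched as below; the intensity-weighted averaging here is my own guess at a plausible scheme, not the God of War III algorithm.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Collapse n lights into a single aggregate light at a vertex: sum the
// intensities, and average the light directions weighted by intensity
// so that bright lights dominate the aggregate direction.
void aggregate(const Vec3* dir, const double* intensity, int n,
               Vec3& outDir, double& outIntensity) {
    Vec3 sum = { 0, 0, 0 };
    outIntensity = 0.0;
    for (int i = 0; i < n; ++i) {
        sum.x += intensity[i] * dir[i].x;
        sum.y += intensity[i] * dir[i].y;
        sum.z += intensity[i] * dir[i].z;
        outIntensity += intensity[i];
    }
    outDir = normalize(sum);
}

int main() {
    Vec3 dir[2] = { { 1, 0, 0 }, { 0, 1, 0 } };
    double intensity[2] = { 3.0, 1.0 };  // the first light is 3x brighter
    Vec3 d;
    double s;
    aggregate(dir, intensity, 2, d, s);
    std::printf("aggregate direction (%g, %g, %g), intensity %g\n", d.x, d.y, d.z, s);
}
```

The per-vertex aggregate can then be interpolated and applied per pixel, which is presumably where the accuracy-improving interpolation mentioned above comes in.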

Filtering Approaches for Real-Time Anti-Aliasing

From a theoretical standpoint, performing anti-aliasing as a post-process is like locking the barn door after the horses have bolted. However, such techniques have recently proven to be surprisingly effective in practice – a flurry of algorithms, implementations, and variants has created one of the most important current real-time rendering trends. For this course, the organizers – Jorge Jimenez (Real-Time Graphics Researcher, University of Zaragoza) and Diego Gutierrez (Associate Professor, University of Zaragoza) – have tracked down the inventors of pretty much every important technique in this area and recruited them to present their work:

  • Morphological Antialiasing (MLAA), presented by Alexander Reshetov (Senior Staff Researcher, Intel) – this technique was presented as a paper at the High Performance Graphics (HPG) conference in 2009; the impressive results shown sparked most of the current interest in this general approach (a minimal edge-detection sketch in the spirit of these filters follows this list).
  • A Directionally Adaptive Edge Anti-Aliasing Filter, presented by Jason Yang (Principal Member of Technical Staff, AMD). This technique was also presented as an HPG 2009 paper, and was influential as well.
  • A GPU-friendly variant of MLAA, presented by Jorge Jimenez (Real-Time Graphics Researcher, University of Zaragoza). This variant was published in the book GPU Pro 2; the talk will also cover recent developments not included in the book.
  • A hybrid CPU/GPU MLAA variant implemented for Costume Quest on the Xbox 360, presented by Pete Demoreuille (Lead Programmer, Double Fine).
  • The PlayStation 3/SPU MLAA implementation first used in God of War III and subsequently made available to all PlayStation 3 developers as part of the EDGE library. Tobias Berghoff (Senior Programmer, SCEE) will detail the implementation (including recent improvements), and Cedric Perthuis (Senior Staff Graphics Engineer, SCEA Santa Monica) will talk about how the technique was integrated into the God of War III engine.
  • The SPU-based Anti-Aliasing technique (SPUAA) used on the PlayStation 3 version of The Saboteur, presented by Henry Yu (Founder and CEO, Kalloc Studios). This technique has long been a topic of speculation among game developers, and will be discussed here for the first time.
  • Subpixel Reconstruction Antialiasing (SRAA), presented by Morgan McGuire (Assistant Professor, Williams College and Visiting Scientist, NVIDIA). This technique was presented as a paper in the 2011 Symposium on Interactive 3D Graphics and Games (I3D).
  • Fast approXimate Anti-Aliasing (FXAA), presented by Timothy Lottes (Developer Technology, NVIDIA). Fast and effective, this technique is currently being evaluated by many developers for inclusion in their games.
  • Distance-to-Edge Anti-Aliasing (DEAA), presented by Hugh Malan (Graphics Programmer, CCP Games – formerly at Realtime Worlds).
  • Geometry Buffer Anti-Aliasing (GBAA), presented by Emil Persson (also known as “Humus” – Graphics Programmer, Avalanche Studios).
  • The Directionally Localized Anti-Aliasing (DLAA) technique used in Star Wars: The Force Unleashed II, presented by Dmitry Andreev (Senior Rendering Engineer, Visceral Games – formerly at LucasArts).
  • The temporal filtering anti-aliasing technique used in Crysis 2, presented by Tiago Sousa (Principal R&D Graphics Engineer, Crytek).
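For readers new to these filters, the stage most of them share (MLAA included) is locating discontinuity edges in the final image; the later, cleverer stages classify edge shapes and blend along them. Here is a minimal edge-detection sketch of my own – just the first stage, on a plain luminance image:

```cpp
#include <cmath>
#include <cstdio>

// First pass of an MLAA-style filter on a luminance image: mark pixels
// whose luminance differs from the left or top neighbor by more than a
// threshold. Published techniques then classify the edge shapes
// (L, Z, and U patterns) and blend along them; that part is omitted.
void detectEdges(const double* lum, int w, int h, bool* edge, double threshold) {
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            bool left = x > 0 && std::fabs(lum[y * w + x] - lum[y * w + x - 1]) > threshold;
            bool top  = y > 0 && std::fabs(lum[y * w + x] - lum[(y - 1) * w + x]) > threshold;
            edge[y * w + x] = left || top;
        }
    }
}

int main() {
    double lum[16];  // 4x4 test image: left half dark, right half bright
    for (int i = 0; i < 16; ++i) lum[i] = (i % 4 < 2) ? 0.1 : 0.9;
    bool edge[16];
    detectEdges(lum, 4, 4, edge, 0.25);
    for (int y = 0; y < 4; ++y, std::printf("\n"))
        for (int x = 0; x < 4; ++x)
            std::printf("%c", edge[y * 4 + x] ? 'E' : '.');
}
```

Where the techniques above differ most is in what happens after this stage: the shape classification, the blend-weight computation, and where (CPU, SPU, or GPU) each step runs.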

Beyond Programmable Shading (Part I & Part II)

Similarly to the “Advances in Real-Time Rendering” course, “Beyond Programmable Shading” is an “ensemble” course which has been presented annually at SIGGRAPH for several years running. As its name reflects, it deals with GPU-based graphics that go beyond the traditional graphics pipeline. This course has had uniformly high quality each year, and 2011 appears to be no exception. Part I starts with a 20-minute introduction by the course organizers – Aaron Lefohn (Lead Research Scientist, Intel) and Mike Houston (Fellow, AMD) – and continues with six 25-30 minute talks:

  • Peter-Pike Sloan (Research & Development Lead, Disney Interactive Studios) will give a talk (title to be determined) about the applicability of current graphics research to games, discussing examples of research that works in games today, as well as research that does not work – and why.
  • GPU Architecture, by Mike Houston (Fellow, AMD): an overview of GPU architecture – unlike similar talks in previous iterations of the course, this year’s version is extended to cover heterogeneous architectures.
  • Scheduling the Graphics Pipeline, by Jonathan Ragan-Kelley (PhD Student, MIT): this is an extension of a talk given by Jonathan in last year’s course – it will include significant new material, with more specifics on how scheduling works in particular GPU architectures.
  • Parallel Programming for Real-Time Graphics, by Aaron Lefohn (Lead Research Scientist, Intel): compared to the talk of the same name in last year’s course, this talk will be significantly rewritten and updated, with more concrete examples.
  • Software Rasterization on GPUs, by Samuli Laine (Senior Research Scientist, NVIDIA) and Jacopo Pantaleoni (Senior Architect, NVIDIA): software rasterization on GPUs can be an effective way to bypass the limitations of the GPU’s fixed-function rasterizer. Each of the speakers will discuss a paper he will publish at HPG 2011 – in Samuli’s case, High-Performance Software Rasterization on GPUs, and in Jacopo’s case, VoxelPipe: A Programmable Pipeline for 3D Voxelization (a toy edge-function rasterizer sketch follows this list).
  • The course organizers are still in the process of finalizing the topic and speaker of the last talk.
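To ground the topic, here is a toy scalar rasterizer using the classic edge-function inside test – the kind of loop these GPU implementations parallelize and optimize, and my own sketch rather than anything from the papers:

```cpp
#include <cstdio>

// Twice the signed area of triangle (a, b, p); the classic edge function.
// A point is inside the triangle when all three edge functions agree in sign.
double edge(double ax, double ay, double bx, double by, double px, double py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

int main() {
    // One triangle in a 16x16 pixel grid, wound so all edge values are
    // positive for interior points.
    double x0 = 2, y0 = 2, x1 = 13, y1 = 4, x2 = 5, y2 = 13;
    for (int y = 0; y < 16; ++y, std::printf("\n")) {
        for (int x = 0; x < 16; ++x) {
            double px = x + 0.5, py = y + 0.5;  // sample at the pixel center
            bool inside = edge(x0, y0, x1, y1, px, py) >= 0 &&
                          edge(x1, y1, x2, y2, px, py) >= 0 &&
                          edge(x2, y2, x0, y0, px, py) >= 0;
            std::printf("%c", inside ? '#' : '.');
        }
    }
}
```

The hard parts the papers address – binning triangles to tiles, keeping thousands of threads busy, and preserving primitive order – are exactly what this sketch leaves out.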

Part II starts with a brief welcome and re-introduction by Mike Houston. This is followed by four 30-40 minute talks, all new to this course series:

  • Toward a Blurry Rasterizer, by Jacob Munkberg (Research Scientist, Intel): this talk will cover the current state of the art in rasterizing triangles with motion and defocus blur – this is a very active area of research, which I suspect will yield some important GPU advances in the near future. Jacob has co-authored several important papers in this area – most notable are the Graphics Hardware 2007 paper Stochastic Rasterization using Time-Continuous Triangles and the HPG 2011 paper Hierarchical Stochastic Motion Blur Rasterization.
  • Order-Independent Transparency, by Marco Salvi (Research Scientist, Intel): similarly to the previous talk, this covers the current state of the art in an important topic on which the speaker has considerable expertise. Of Marco’s work on the topic, most notable is the HPG 2011 paper Adaptive Transparency (not yet available online, but his GDC 2011 talk on the topic – including source code – is available); a minimal fragment-sorting sketch follows this list.
  • Interactive Global Illumination, by Chris Wyman (Associate Professor, University of Iowa): this is the third “state-of-the-art talk” covering a relatively broad topic. Chris’ publications page includes numerous papers on this topic, some including source code.
  • User-Defined Pipelines for Ray Tracing, by Steven Parker (Director of High Performance Computing and Computational Graphics, NVIDIA): this is a more tightly focused talk than the previous three. It has the potential to be quite interesting, given the speaker’s central role in the development of the OptiX ray tracing system (he was the first author on the SIGGRAPH 2010 paper) as well as his area of responsibility at NVIDIA.
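As background for Marco’s talk: the exact (and memory-hungry) formulation of order-independent transparency is easy to state – collect every transparent fragment covering a pixel, sort by depth, and composite back to front. Here is a minimal single-pixel sketch of that baseline (my own; Adaptive Transparency is about approximating it under a fixed memory budget):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// One transparent fragment covering a pixel (grayscale for brevity).
struct Fragment {
    double depth;   // distance from the camera
    double color;
    double alpha;
};

// Exact order-independent transparency for a single pixel: sort the
// pixel's fragment list by depth and composite back to front with the
// standard "over" operator.
double resolve(std::vector<Fragment> frags, double background) {
    std::sort(frags.begin(), frags.end(),
              [](const Fragment& a, const Fragment& b) {
                  return a.depth > b.depth;  // farthest fragment first
              });
    double result = background;
    for (const Fragment& f : frags)
        result = f.alpha * f.color + (1.0 - f.alpha) * result;
    return result;
}

int main() {
    // Fragments arrive in arbitrary order, as they would from a rasterizer.
    std::vector<Fragment> frags = {
        { 1.0, 0.9, 0.5 },  // near, bright
        { 5.0, 0.1, 0.5 },  // far, dark
    };
    std::printf("final pixel value: %g\n", resolve(frags, 0.0));
}
```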

The course closes with a 15-minute wrap-up talk by the organizers (on the topic “What’s Next for Interactive Rendering Research?”), followed by a 45-minute panel discussion between the various course speakers.

I’ll continue going over the remaining SIGGRAPH 2011 courses in my next few blog posts.


Quick SIGGRAPH Roundup

I’m planning a series of more extensive posts on SIGGRAPH content (starting with the courses), but I’ll start with a quick roundup to help people decide on their attendance before the early-bird registration expires at the end of this week. The roundup is focused on those sessions of potential interest to professional game artists, professional game programmers, real-time rendering researchers and real-time rendering students. I’m not listing paper sessions – I typically skip those in favor of other sessions since the papers themselves tend to be readily available. I’ve also skipped the Reception and the various Birds of a Feather sessions for brevity since those tend to be more social (some Birds of a Feather sessions do have presentations, and others might be of particular interest, so it’s probably a good idea to check the BoF list). More information can be found on the individual SIGGRAPH web pages (linked where available) as well as the SIGGRAPH Advance Program.

UPDATES:

  • June 15, 2011: Added SIGGRAPH Dailies! and relevant Exhibitor Tech Talks; added links to individual Panels.
  • June 20, 2011: Added links to individual CAF Production Sessions, The Studio Workshops, The Studio Digital Artistry Sessions, and the Keynote.
  • June 23, 2011: Added links to remaining sessions, and corrected the classification of some of The Studio presentations.
  • June 24, 2011: Removed Reception and Birds of a Feather sessions for brevity; also corrected times of some Studio Talks.
  • July 15, 2011: Added individual NVIDIA Exhibitor Tech Talks.

Multiple Days

  • Electronic Theater (6:00-8:00 on August 8, 9, and 10)
  • Emerging Technologies (2:00-5:30 on August 7; 9:00-5:30 on August 8, 9, and 10; 9:00-1:00 on August 11; also open during Reception)
  • Exhibition (9:30-6:00 on August 9 and 10; 9:30-3:30 on August 11)
  • Posters (12:00-5:30 on August 7; 9:00-5:30 on August 8, 9, 10, and 11)
  • Real-Time Live! (4:30-5:15 on August 8, 9, and 10)
  • The Sandbox (12:00-5:30 on August 7; 9:00-5:30 on August 8, 9, and 10; 9:00-1:00 on August 11; also open during Reception)
  • There are also several co-located conferences which may be of interest

Sunday, August 7th

12:00-1:45:

12:30-1:45:

2:00-3:30:

2:00-5:15:

3:00-3:30:

3:45-4:15:

3:45-5:15:

4:30-5:00:

5:00-5:30:

6:00-8:00:

Monday, August 8th

9:00-9:30:

9:00-10:00:

9:00-10:30:

9:00-12:15:

9:30-10:30:

10:15-11:15:

10:40-12:10:

11:00-1:00:

11:30-12:30:

12:00-1:00:

12:45-1:30:

1:45-3:00:

2:00-2:30:

2:00-3:30:

2:00-5:15:

3:15-4:15:

3:45-4:15:

3:45-5:00:

3:45-5:15:

4:30-5:00:

4:30-5:30:

5:00-5:30:

Tuesday, August 9th

9:00-9:30:

9:00-10:30:

9:00-12:15:

10:30-11:30:

10:40-12:15:

10:45-12:15:

12:30-1:45:

1:15-1:45:

2:00-3:30:

2:00-3:30:

2:00-5:15:

3:00-3:30:

3:45-4:15:

3:45-4:40:

3:45-5:00:

3:45-5:15:

4:30-5:00:

Wednesday, August 10th

9:00-9:30:

9:00-10:30:

9:00-12:15:

9:45-10:45:

10:30-11:30:

10:40-12:10:

10:45-12:15:

11:15-12:15:

11:30-12:00:

12:30-1:45:

2:00-3:30:

2:00-5:15:

2:15-3:15:

3:45-5:00:

3:45-5:15:

4:30-5:00:

  • The Visual Style of “Legend of the Guardians: The Owls of Ga’Hoole” (The Studio Talk)

6:00-7:30:

Thursday, August 11th

9:00-10:30:

9:00-12:15:

10:40-12:15:

10:45-12:15:

2:00-3:30:

2:00-5:15:

3:45-5:15:

More Free GDC 2011 Content in Vault

In my previous GDC links post, I briefly mentioned the free section of the GDC Vault, and listed individual links to a few of the many videos and presentation slides available there. I’ll list more links to free Vault content in this post, mostly stuff of interest to readers of this blog that isn’t otherwise available online.

Videos (many of these have presentation slides available from one of the links included in my previous post):

Slides (skipping any talks linked in my previous post):