
SIGGRAPH 2009 Production Sessions

Another part of SIGGRAPH I like is the big film production sessions – they are like a DVD “behind the scenes” feature on steroids.  They do tend to have long lines, though.  This year, the SIGGRAPH production sessions have been brought under the wing of the Computer Animation Festival.  A full list of production sessions can be found here.  They all look pretty interesting, actually, but I think the following ones are most noteworthy:

Big, Fast and Cool: Making the Art for Fight Night 4 & Gears of War 2: This is the first SIGGRAPH production session discussing game production rather than film production, and I hope to see many more like it in future years.

The Curious Case of Benjamin Button marked a watershed in digital character technology – the first time anyone had successfully rendered a photorealistic human character with significant onscreen presence.  The production session for this film spends a fair amount of time discussing the character, and also touches upon some other interesting bits of tech used in the film.

ILM was heavily involved with three big, flashy effects shows this year: Transformers: Revenge of the Fallen, Terminator Salvation, and Star Trek.  The production session discussing all three is sure to be a lot of fun (unfortunately, there are also sure to be long lines).

Sony Pictures Imageworks’ Cloudy with a Chance of Meatballs has some very unusual scenes (including spaghetti twisters and Jell-O mountains); it is also unusual in being fully ray-traced.  The production session discusses both of these aspects.

Although not directly relevant to real-time rendering, I am fascinated by the way in which 3D modeling and rapid prototyping were used for facial expressions in the stop-motion film Coraline (and I wrote about it in a previous blog post).  There is a production session about this very topic – anyone else who thinks this is an interesting use of technology might want to attend this one.

SIGGRAPH 2009 Talks

The full list of SIGGRAPH 2009 talks is finally up here.

Talks (formerly known as sketches) are one of my favorite parts of SIGGRAPH.  They always have a lot of interesting techniques from film production (CG animation and visual effects), many of which can be adapted for real-time rendering.  There are typically some research talks as well; most are “teasers” for papers from recent or upcoming conferences, and some are of interest for real-time rendering.  This year, SIGGRAPH also has a few talks by game developers – hopefully next year will have even more.  Unfortunately, talks have the least documentation of all SIGGRAPH programs (except perhaps panels) – just a one-page abstract is published, so if you didn’t attend the talk you are usually out of luck.

The Cameras and Imaging talk session has a talk on the cameras used in Pixar’s “Up” which may be relevant to developers of games with scripted cameras (such as God of War).

From Indie Jams to Professional Pipelines has two good game development talks.  In Houdini in a Games Pipeline, Paulus Bannink of Guerrilla Games discusses how Houdini was used for procedural modeling in the development of Killzone 2.  Although this type of procedural modeling is fairly common in films, it is not typically employed in game development; it is of particular interest since most developers are looking for ways to increase the productivity of their artists.  In the talk Spore API: Accessing a Unique Database of Player Creativity, Shalin Shodhan, Dan Moskowitz and Michael Twardos discuss how the Spore team exposed a huge database of player-created assets to external applications via a public API.

The Splashing in Pipelines talk session has a talk by Ken Museth of Digital Domain about DB-Grid, an interesting data structure for volumetric effects; a GPU implementation of this could possibly be useful for real-time effects.  Another talk from this session, Underground Cave Sequence for “Land of the Lost”, sounds like the kind of film talk which often has nuggets that can be adapted to real-time use.

Making it Move has another game development talk, Fight Night 4: Physics-Driven Animation and Visuals by Frank Vitz and Georges Taorres from Electronic Arts.  Fight Night 4 is a game with extremely realistic visuals; the physics-based animation system described here is sure to be of interest to many game developers.  The talk about rigging the “Bob” character from Monsters vs. Aliens also sounds interesting; the technical challenges behind the rig of such an amorphous – yet engaging – character must have been considerable.

Partly Cloudy was the short film accompanying Pixar’s 10th feature film, Up.  Like all of Pixar’s short films, Partly Cloudy was a creative and technical triumph.  The talk by the director, Peter Sohn, also includes a screening of the film.

Although film characters have more complex models, rigs, and shaders than game characters, there are many similarities in how a character translates from initial concept to the (big or small) screen.  The session Taking Care of Your Pet has two talks discussing this process for characters from the movie Up.  There is also a session dedicated to Character Animation and Rigging which may be of interest for similar reasons.

Another game development talk can be found in the Painterly Lighting session; Radially Symmetric Reflection Maps by Jonathan Stone of Double Fine Productions describes an intriguing twist on prefiltered environment maps used in the game Brutal Legend.  The two talks on stylized rendering methods (Applying Painterly Concepts in a CG Film and Painting with Polygons) also look interesting; the first of these discusses techniques used in the movie Bolt.

Real-time rendering has long used techniques borrowed from film rendering.  One way in which the field has “given back” is the increasing adoption of real-time pre-visualization techniques in film production.  In this talk, Steve Sullivan and Michael Sanders from Industrial Light & Magic discuss various film visualization techniques.

The session Two Bolts and a Button has two film lighting talks that look interesting; one on HDRI-mapped area lights in The Curious Case of Benjamin Button, and one on lighting effects with point clouds in Bolt.

The Capture and Display session has two research talks from Paul Debevec’s group.  As you would expect, they both deal with acquisition of computer models from real-world objects.  One discusses tracking correspondences between facial expressions to aid in 2D parametrization (UV mapping), the other describes a method for capturing per-pixel specular roughness parameters (e.g. Phong cosine power) and is more fully described in an EGSR 2009 paper.  Given the high cost of creating realistic and detailed art assets for games, model acquisition is important for game development and likely to become more so.

Flower is the second game from thatgamecompany (not a placeholder; that’s their real name), the creators of Flow.  Flower is visually stunning and thematically unusual; the talk describing the creation of its impressionistic rendering style will be of interest to many.

Flower was one of two games selected for the new real-time rendering section of the Computer Animation Festival’s Evening Theater (which used to be called the Electronic Theater and was sorely missed when it was skipped at last year’s SIGGRAPH).  Fight Night 4 was the other; these two are accompanied by real-time rendering demonstrations from AMD and Soka University.  Several other games and real-time demos were selected for other parts of the Computer Animation Festival, including Epic Games’ Gears of War 2 and Disney Interactive’s Split Second.  These are demonstrated (and discussed) by some of their creators in the Real Time Live talk session.

The Effects Omelette session has been presented at SIGGRAPH for a few years running; it traditionally has interesting film visual effects work.  This year two of the talks look interesting for game developers: one on designing the character’s clothing in Up, and one on a modular pipeline used to collapse the Eiffel Tower in G.I. Joe: The Rise of Cobra.

Although most of the game content at SIGGRAPH is targeted at programmers and artists, there is at least one talk of interest to game designers: in Building Story in Games: No Cut Scenes Required, Danny Bilson from THQ and Bob Nicoll from Electronic Arts discuss how interactive entertainment can be used to tell a story.

As one would expect, the Rendering session has at least one talk of interest to readers of this blog.  Multi-Layer, Dual-Resolution Screen-Space Ambient Occlusion by Louis Bavoil and Miguel Sainz of NVIDIA uses multiple depth layers and resolutions to improve SSAO.  Although not directly relevant to real-time rendering, I am also interested in the talk Practical Uses of a Ray Tracer for “Cloudy With a Chance of Meatballs” by Karl Herbst and Danny Dimian from Sony Pictures Imageworks.  For years, animation and VFX houses used rasterization-based renderers almost exclusively (Blue Sky Studios, creators of the Ice Age series, being a notable exception).  Recently, Sony Pictures Imageworks licensed the Arnold ray-tracing renderer and switched to using it for features; Cloudy with a Chance of Meatballs is the first result.  Another talk from this session that I think is interesting is Rendering Volumes With Microvoxels by Andrew Clinton and Mark Elendt from Side Effects Software, makers of the procedural modeling tool Houdini.  The micropolygon-based REYES rendering system (on which Pixar’s PhotoRealistic RenderMan is based) has fascinated me for some time; this talk discusses how to add microvoxels to this engine to render volumetric effects.

Above, I mentioned previsualization as one case where film rendering is informed by game rendering.  A more direct example is shown in the talk Making a Feature-Length Animated Movie With a Game Engine (by Alexis Casas, Pierre Augeard and Ali Hamdan from Delacave), in the Doing it with Game Engines session (which I am chairing).  They actually used a game engine to render their film, using it not as a real-time renderer, but as a very fast renderer enabling rapid art iteration times.

All of the talks in the Real Fast Rendering session are on the topic of real-time rendering, and are worth attending.  One of these is by game developers: Normal Mapping With Low-Frequency Precomputed Visibility by Michal Iwanicki of CD Projekt RED and Peter-Pike Sloan of Disney Interactive Studios describes an interesting PRT-like technique which encodes precomputed visibility in spherical harmonics.
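To give a rough feel for the kind of machinery involved (this is my own illustrative sketch, not the formulation from the talk): once a visibility function has been precomputed and projected into low-order spherical harmonics, it can be evaluated very cheaply at runtime by dotting the stored coefficients against the SH basis in the light’s direction.

```python
import math

def sh_basis_2band(d):
    """First two spherical-harmonic bands (4 coefficients) for unit direction d."""
    x, y, z = d
    return [0.282095,        # Y_0^0  (constant band)
            0.488603 * y,    # Y_1^-1
            0.488603 * z,    # Y_1^0
            0.488603 * x]    # Y_1^1

def shade(vis_coeffs, light_dir, intensity):
    """Approximate visibility-masked lighting from a directional light.

    vis_coeffs holds the SH projection of the precomputed visibility.
    For a delta light, the sphere integral of visibility * light reduces
    to intensity times the SH reconstruction of visibility in light_dir.
    """
    basis = sh_basis_2band(light_dir)
    vis = sum(v * b for v, b in zip(vis_coeffs, basis))
    return max(0.0, intensity * vis)

# A fully unoccluded point: visibility == 1 everywhere projects to a
# single DC coefficient of 4*pi*Y_0^0, roughly 3.5449.
open_sky = [4.0 * math.pi * 0.282095, 0.0, 0.0, 0.0]
```

With the `open_sky` coefficients, `shade` reconstructs a visibility of roughly 1.0 from any light direction, as expected for an unoccluded point; real implementations use more bands and fold in the normal-mapped BRDF.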

Finally, the Rendering and Visualization session has a particularly interesting talk: Beyond Triangles: GigaVoxels Effects In Video Games by Cyril Crassin, Fabrice Neyret and Sylvain Lefebvre from INRIA, Miguel Sainz from NVIDIA and Elmar Eisemann from MPI Informatik.  Ray-casting into large voxel databases has aroused interest in the field since John Carmack made some intriguing comments on the topic (further borne out by Jon Olick’s presentation at SIGGRAPH last year).  The speakers at this talk have shown interesting work at I3D this year, and I look forward to seeing their latest advances.

SIGGRAPH 2009 courses

No sooner had I written about the full SIGGRAPH 2009 course list not being up yet than – bam! – there it was.  As I hinted, there is a lot of stuff there for real-time rendering folks.  Some highlights:

Advances in Real-Time Rendering in 3D Graphics and Games (two-parter; second part here):  This (somewhat awkwardly named) course has been my favorite thing at SIGGRAPH for the past three years.  Each year it presents all-new material.  Previous courses have seen the debut of important rendering techniques like SSAO, signed distance-field vector texturing, and wrinkle mapping, as well as details on the rendering secrets behind games like Halo 3, Crysis, Starcraft 2, Half-Life 2, Team Fortress 2, and LittleBigPlanet.  Not much is known about this year’s content, except that it will include details on Crytek’s new global illumination algorithm; but this is one course I know I’m going to!

Beyond Programmable Shading (another two-parter):  GPGPU was promoted by GPU manufacturers in an attempt to find non-graphics uses for their products, and then came full circle as people realized that drawing pretty pictures is really the best way to use massive general-purpose computing power.  Between CUDA, OpenCL, and Larrabee, this has been a pretty hot topic.  This is the second year that this course has been presented; last year had information on all the major APIs, and some case studies including a presentation on id’s Voxel Octree research.  A subsequent SIGGRAPH Asia presentation added some new material, such as a presentation on real-time implementations of RenderMan’s REYES algorithm.  This year, presenters include people from NVIDIA, AMD, Intel and Stanford; I expect this course to add significant new material, given the rapid development of the field.

Efficient Substitutes for Subdivision Surfaces: Tessellation is another hot topic; Direct3D 11 is mostly designed around this and Compute Shaders (the topic of the previous course).  There has been a lot of work on mapping subdivision surfaces and other types of high-order surface representations to D3D11 hardware.  Including presenters from ILM, Valve, and NVIDIA, this course promises to be a great overview of the state of the art.

Color Imaging: Color is one of the most fundamental topics in rendering.  This course is presented by some of the leading researchers on color and HDR imaging, and should be well worth attending.

Advanced Material Appearance Modeling: Previous versions of this course were the basis for an excellent book on material appearance modeling.  This is a great overview of an important rendering topic, and well worth attending if you haven’t seen it in previous years.

Visual Algorithms in Post-Production: It is well-known that you can find the latest academic rendering research at SIGGRAPH, but there is always a lot of material from the trenches of film effects and animation production as well.  A surprisingly large percentage of this is relevant for real-time and game graphics.  This course has presenters from film production as well as graphics academia describing ways in which academic research is used for film post-production.  I think our community can learn a lot from the film production folks; this course is high on my list.

The Digital Emily Project: Photoreal Facial Modeling and Animation: Last year, Digital Emily was one of the most impressive technology demonstrations; it was the result of a collaboration between Paul Debevec’s group at the USC Institute for Creative Technologies (ICT) and Image Metrics, a leading facial capture company.  In this course, presenters from both ICT and Image Metrics describe how this was done, as well as more recent research.

Real-Time Global Illumination for Dynamic Scenes: Real-time global illumination is another active topic of research.  This course is presented by the researchers who have done some of the best (and most practical-looking) work in this area.  It will be interesting to compare the techniques from this course with Crytek’s technique (presented in the “Advances in Real-Time Rendering” course).

SIGGRAPH 2009 papers

Ke-Sen Huang has had a SIGGRAPH 2009 papers page up for a while, and this weekend he’s added a bunch of new papers.

I found two of these to be of direct relevance to real-time rendering:

Gaussian KD-Trees for High-Dimensional Filtering: This paper generalizes the approach used in the SIGGRAPH 2007 bilateral grid paper.  Large-scale image filters are typically performed on downscaled buffers for better performance, but this cannot normally be done for bilateral filters (which are used in real-time rendering for things like filtering SSAO).  The bilateral grid is a three-dimensional low-resolution grid, where the third dimension is the image intensity (it could also be depth or some other scalar quantity).  However, bilateral grids cannot be used to accelerate bilateral filters based on higher-dimensional quantities like RGB color or depth + surface normal; this paper addresses that limitation.
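For readers unfamiliar with the bilateral grid, here is a minimal splat/blur/slice sketch (my own simplified illustration, not the paper’s implementation): pixels are accumulated into a coarse 3D grid indexed by (x, y, intensity), the grid is blurred, and each pixel then reads back the normalized value from its own grid cell.

```python
import numpy as np

def _blur_axis(a, axis):
    """Small separable blur (binomial kernel) along one axis of the grid."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    a = np.moveaxis(a, axis, -1)
    pad = np.pad(a, [(0, 0)] * (a.ndim - 1) + [(2, 2)], mode="edge")
    n = a.shape[-1]
    out = sum(k * pad[..., i:i + n] for i, k in enumerate(kernel))
    return np.moveaxis(out, -1, axis)

def bilateral_grid_filter(img, sigma_s=4.0, sigma_r=0.2):
    """Edge-preserving filter of a grayscale image in [0, 1] via a bilateral grid."""
    h, w = img.shape
    gh, gw = int(h / sigma_s) + 2, int(w / sigma_s) + 2
    gr = int(1.0 / sigma_r) + 2
    grid = np.zeros((gh, gw, gr))
    weight = np.zeros_like(grid)

    # Splat: accumulate each pixel's value and weight into a grid cell,
    # where the third grid axis is quantized intensity.
    ys, xs = np.mgrid[0:h, 0:w]
    gi = (ys / sigma_s).astype(int)
    gj = (xs / sigma_s).astype(int)
    gk = np.minimum((img / sigma_r).astype(int), gr - 1)
    np.add.at(grid, (gi, gj, gk), img)
    np.add.at(weight, (gi, gj, gk), 1.0)

    # Blur: a cheap Gaussian-like blur over the low-resolution grid.
    for axis in range(3):
        grid = _blur_axis(grid, axis)
        weight = _blur_axis(weight, axis)

    # Slice: each pixel reads back the normalized value at its own cell.
    # Pixels of very different intensity live in different range bins,
    # which is what preserves edges.
    return grid[gi, gj, gk] / np.maximum(weight[gi, gj, gk], 1e-8)
```

Because the blur runs on a grid far smaller than the image, the cost is nearly independent of the spatial filter size; the Gaussian KD-tree paper replaces the dense grid, whose memory grows exponentially with the number of range dimensions, with a sparse tree.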

Modeling Human Color Perception under Extended Luminance Levels: An understanding of human color perception is fundamental to computer graphics; many rendering processes are perceptual rather than physical (such as tone mapping and color correction), but even physical computations are affected by the properties of human vision (such as the range of visible wavelengths and the fact that human color perception is trichromatic, or three-dimensional).  Most computer graphics people are familiar with color spaces such as CIE XYZ, but color appearance models such as CIECAM02 are less familiar.  These are used to model the effects of adaptation, background, etc. on color perception.  Current color appearance models are based on perceptual experiments performed under relatively low luminance values; this paper extends the experiments to high values, up to about 17,000 candelas per square meter (white paper in noon sunlight), and proposes a new color appearance model based on the findings.  I also found the background and related work sections illuminating for their succinct overview of the current state of color science.
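As a concrete instance of a perceptual (rather than physical) rendering process, here is the classic extended Reinhard global tone-mapping operator – my own example, unrelated to this paper – which compresses scene luminance into display range using a curve motivated by perceived brightness rather than physics:

```python
def reinhard_tonemap(luminance, white_point=4.0):
    """Map scene luminance in [0, inf) to display range [0, 1].

    Extended Reinhard operator: L * (1 + L / Lw^2) / (1 + L).
    Luminances at the white point map to exactly 1.0; the curve is
    nearly linear for dark values and compresses highlights smoothly.
    """
    l = luminance
    return l * (1.0 + l / (white_point * white_point)) / (1.0 + l)
```

Variants of this curve are common in game HDR pipelines; a color appearance model like the one proposed in this paper could, in principle, drive a more perceptually principled mapping.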

Two more papers, although not directly relevant to real-time rendering, are interesting and thought-provoking:

Single Scattering in Refractive Media with Triangle Mesh Boundaries: This paper finds a rapid (although not quite real-time) solution to refraction in objects composed of faceted or smooth triangle meshes.  The methods described here are interesting and look like they could inspire some real-time techniques, perhaps on the next generation of graphics hardware.

Fabricating Microgeometry for Custom Surface Reflectance: This one is not useful for rendering, but is just plain cool.  Instead of using the microfacet model to predict the appearance of surfaces based on their structure, they turn the idea around and construct surfaces so that they have a desired appearance.  One of the examples they show (inspired by Figure 1 in this paper) is a material with a teapot-shaped highlight!  Well, with their current fabrication methods it is really a teapot-shaped reflection cast on a wall, but once manufacturers get their hands on this, all kinds of weird and wonderful materials will start showing up.

So far the yield of papers relevant to real-time rendering practitioners is disappointingly low; perhaps more relevant papers will show up when the official list is published.  In any case, the early list of courses has a lot of relevant material, and I have reason to believe the final list will have even more good stuff on it.  In addition, the Talks (formerly Sketches) program always has useful stuff, Will Wright is giving a keynote speech, and the Electronic Theater (which is back, renamed as the Evening Theater) now has real-time content, so there are more than enough reasons to attend SIGGRAPH this year (and it’s in New Orleans!).  Registration has already started!