SIGGRAPH 2011 Courses – Part 3

Third post in a series about the SIGGRAPH 2011 courses (Part 1 and Part 2).

Stereoscopy From XY to Z

Although there had been fits and starts since the mid-1950s, stereoscopic (“3D”) feature films really took off in 2009. This was primarily due to the convergence of two factors: CG animation and Avatar. CG animated features are easier to produce stereoscopically since they don’t require bulky and expensive stereoscopic cameras; Disney Animation had been doing all its CG animated films in 3D since Chicken Little (2005), joined in 2009 by Pixar and Dreamworks with Up and Monsters vs. Aliens respectively. Avatar’s huge box-office success in the same year goosed studio executives into mandating stereoscopic releases of VFX-heavy live-action films as well. Although somewhat controversial among experts (mostly due to brightness issues), the increase in stereoscopic theatrical content resulted in a push for compatible televisions, Blu-ray players, and game consoles at home. Around the same time, the PC side of the game market also saw an increase in stereoscopic support (mostly led by NVIDIA). By 2011, stereoscopy had become a dominant trend in computer graphics, with implications ranging from videogame user interfaces to feature film shot editing. Many of these implications are not yet widely understood, which increases the need for courses like this one.

The course is presented by Samuel Gateau (3D Software Engineer, NVIDIA) and Robert Neuman (Stereoscopic Supervisor, Walt Disney Animation Studios) who have presented earlier versions of it at SIGGRAPH Asia 2010 and at FMX 2011. This time Samuel and Robert are joined by Marc Salvati (R&D Software Engineer, OLM Digital). It appears that the course will cover both the technical and aesthetic aspects of stereoscopy, for games as well as film. The speaker lineup is well-suited for this scope; Samuel has helped many game developers integrate stereoscopy into their titles, Marc has worked on tools for converting Japanese animation to 3D (the topic of a separate talk this year), and Robert has supervised stereoscopy for several films at Disney Animation, most recently working on the stereoscopic conversions of classic hand-animated Disney films (also the topic of a separate talk).

Production Volume Rendering (Part I and Part II)

The SIGGRAPH 2010 course Volumetric Methods in Visual Effects was a great look into an important and little-understood area of production rendering, so I was happy to see that an updated and expanded version will be presented this year. Both courses are organized by Magnus Wrenninge (Senior Technical Director, Sony Pictures Imageworks) and Nafees bin Zafar (Senior Production Engineer, Dreamworks Animation). Magnus has been working on visual effects software at Imageworks (and previously at Digital Domain) for almost a decade, in later years mostly focusing on volumetric modeling and rendering. He is currently writing a book on the topic, which will include source code for a fully functional volume renderer. Nafees has worked on simulation and volumetrics tools (at Dreamworks and previously at Digital Domain) for over ten years, winning a Scientific and Engineering Academy Award in 2007. The course is divided into two parts. Part I (“Fundamentals”) is presented by Magnus and Nafees, and is an overview of the fundamental technologies behind computer-generated volumetric elements such as clouds, fire, and whitewater. At 90 minutes, Part I is an expansion of the first hour of last year’s course, and includes an introduction to the subject, followed by in-depth explanations of how volumetric effects are modeled and rendered.
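For readers new to the topic, the core of the rendering techniques the course covers can be sketched in a few lines: march a ray through a density field in small steps, accumulate emitted light, and attenuate it by the medium’s transmittance (the Beer-Lambert law). The density field, parameter names, and constant emission term below are my own illustrative choices, not material from the course:

```cpp
#include <cmath>
#include <cstdio>

// Illustrative density field: a soft spherical "cloud" centered at the origin.
// A production system would sample a voxel grid or procedural field instead.
float density(float x, float y, float z) {
    float r = std::sqrt(x * x + y * y + z * z);
    return r < 1.0f ? (1.0f - r) * 2.0f : 0.0f;
}

// March a ray from tMin to tMax, accumulating radiance from a constant
// emission term and attenuating by Beer-Lambert transmittance.
float raymarch(float ox, float oy, float oz,   // ray origin
               float dx, float dy, float dz,   // normalized ray direction
               float tMin, float tMax, int steps) {
    float dt = (tMax - tMin) / steps;
    float transmittance = 1.0f;   // fraction of light still reaching the eye
    float radiance = 0.0f;
    const float emission = 1.0f;  // constant source term, for simplicity
    for (int i = 0; i < steps; ++i) {
        float t = tMin + (i + 0.5f) * dt;
        float d = density(ox + t * dx, oy + t * dy, oz + t * dz);
        float stepTransmittance = std::exp(-d * dt);  // Beer-Lambert
        radiance += transmittance * (1.0f - stepTransmittance) * emission;
        transmittance *= stepTransmittance;
        if (transmittance < 1e-4f) break;  // early out: medium is opaque
    }
    return radiance;
}

int main() {
    // One ray marched straight down the z axis, through the cloud.
    printf("accumulated radiance: %f\n",
           raymarch(0, 0, -3, 0, 0, 1, 0.0f, 6.0f, 128));
}
```

Everything else in a production system — modeling the density field, lighting, shadowing, scattering — layers on top of a loop like this one, which is exactly the ground the two course parts cover.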

Over three hours long, Part II (“Systems”) is a greatly expanded version of the second half of last year’s course. It will focus on specific VFX volumetric technologies, tools, workflows, and case studies. Nafees and Magnus will each give a presentation on the systems used at their respective studios. In addition, there will be presentations by speakers from the following companies:

  • Double Negative: presented by Ollie Harding (R&D Programmer) and Gavin Graham (CG Supervisor). I wasn’t able to find out much about Ollie; Gavin has worked at Double Negative for over ten years, during which he did various shot-based effects work, assisted R&D in battle-testing in-house volumetric rendering and fluid simulation tools, and CG-supervised several effects-heavy feature films.
  • Rhythm & Hues: Jerry Tessendorf (former Principal Graphics Scientist) and Victor Grant (FX Supervisor). Jerry Tessendorf is currently Director of the Digital Production Arts Program at Clemson University, following an extensive and highly influential body of work in simulation and VFX production spanning three decades. Notable achievements include a Technical Achievement Academy Award and a series of hugely influential SIGGRAPH presentations on ocean wave simulation (the latest version of the notes and slides are well worth reading). Victor Grant has worked on VFX for many feature films over the past decade, specializing in volumetric modeling and rendering as well as particle and fluid simulation.
  • Side Effects Software: Andrew Clinton (Software Developer). Side Effects’ Houdini software is used extensively in the VFX industry; Andrew is responsible for the research and development of Houdini’s Mantra renderer. He has worked on improvements to the volumetric rendering engine, a micropolygon-like approach to volume rendering, a physically-based renderer, and a port of the renderer to the Cell processor.
  • Weta Digital: presented by Antoine Bouthors (R&D Engineer). Weta is a new addition relative to last year’s course. Before joining Weta, Antoine worked on research topics including the realistic rendering of clouds in real time.

Volumetric effects are one of the areas where the gap between game and film visuals is biggest; as game platforms become more powerful, game developers will start focusing R&D efforts on this topic. In parallel, VFX houses will develop ways to rapidly previsualize feature film volumetric effects, to allow for better artist control and directability. I predict that in the next few years these converging lines of research will “meet in the middle”, enabling unprecedented scale and quality of volumetric effects in games. Attending this course is a good way for game developers and real-time rendering researchers to get a head start on this process.

Compiler Techniques for Rendering

This course is a bit more specialized than the others I’ve discussed. It focuses on the use of advanced compiler technology for rendering, covering five different projects at the cutting edge of this trend. Most of the techniques use LLVM and/or involve the compilation of shading languages. The course comprises five talks:

  • Intro to LLVM, and Native RSL Shader Compilation, presented by Mark Leone (Researcher, Weta Digital): Before joining Weta, Mark led development at Intel of a new shading language for native rendering on Larrabee, and previously worked on the RenderMan shading system at Pixar. His talk will begin with an overview of LLVM (useful background for several of the other talks), and continue with a description of the PostHaste system, which analyzes RenderMan shaders and automatically identifies kernels within them that can be compiled for native x86 execution using LLVM.
  • Open Shading Language, presented by Larry Gritz (Principal Engineer, Sony Pictures Imageworks): Larry Gritz is the chief architect of the Imageworks in-house renderer, as well as the designer and open source administrator of the Open Shading Language (OSL) and OpenImageIO projects. Other rendering systems for which he’s had a leading architectural role include NVIDIA’s Gelato GPU-accelerated film-quality renderer, Exluna’s Entropy renderer, Pixar’s PhotoRealistic RenderMan, and BMRT. Larry’s talk will describe the design and implementation of OSL, which was developed by Imageworks for use in its in-house renderer and released as open source software. OSL is specifically designed for advanced rendering algorithms and has a number of key technologies whose implementations will be discussed: radiance closures, light path expressions, automatic differentiation, and LLVM just-in-time compilation.
  • AnySL: Efficient Portable Multi-Language Shading, presented by Philipp Slusallek (Scientific Director, German Research Center for Artificial Intelligence – DFKI): Philipp leads the “Agents and Simulated Reality” research lab at DFKI. He is also a full professor for Computer Graphics at Saarland University, where he holds the additional positions of Director of Research at the Intel Visual Computing Institute, principal investigator at the Cluster of Excellence in Multimodal Computing and Interaction, and founding speaker of the Competence Center for Computer Science. Philipp’s talk will describe the AnySL system, which compiles shaders from different languages into a common, portable representation, using a generic shading library. AnySL incorporates an embedded compiler based on LLVM that instantiates this generic code in terms of the renderer’s native types and operations. Beyond shading, AnySL supports programmable kernels for other tasks such as animation, geometry processing, tessellation, and image processing.
  • Automatic Shader Bounding for Efficient Global Illumination, presented by Bruce Walter (Research Associate, Cornell University Program of Computer Graphics): Bruce’s research focuses on expanding the capabilities of physically-based rendering and global illumination algorithms with respect to robustness, scalability, and generality. He has published many related research papers at SIGGRAPH and elsewhere, including my favorite BRDF paper. This talk will discuss research that was published in a SIGGRAPH Asia 2009 paper, which uses a compiler to automatically generate interval versions of programmable shaders. These interval versions can be used to provide the high-level query functions needed by physically-based rendering systems such as ray tracers (a minimal sketch of the interval idea appears after this list).
  • Compilation for GPU Accelerated Ray Tracing in OptiX, presented by Steven Parker (Director of HPC & Computational Graphics, NVIDIA): Steven also leads the OptiX ray tracing team; prior to joining NVIDIA he built a long record of research and publication in interactive ray tracing and scientific computing. Steven’s talk will discuss the domain-specific just-in-time compiler that lies at the core of the NVIDIA OptiX ray tracing engine. This compiler generates custom ray tracing kernels by combining user-supplied programs for ray generation, material shading, object intersection, and scene traversal. The CUDA C compiler is used for writing shader programs with function overloading, templates, and full pointer support, while a just-in-time compiler provides ray tracing specific optimizations. Steven will discuss some of the compiler analysis techniques that enable a natural programming model, support a rich object model designed for compact scene representation, and provide dynamic dispatch for complex scenes and continuations for recursion, all while executing efficiently on a CUDA-enabled GPU.
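As background for the shader-bounding talk above, the interval idea can be sketched in a few lines: replace each float in a shader with a conservative interval and overload the arithmetic so the output bounds every value the shader can take over a range of inputs. The Interval type and the toy shader below are my own illustration, not code from the SIGGRAPH Asia 2009 paper:

```cpp
#include <algorithm>
#include <cstdio>

// A conservative interval [lo, hi]. Compiling a shader with Interval in
// place of float yields bounds on the shader's output over a whole input
// range -- the kind of high-level query a global-illumination system can
// use when sampling or culling. Toy illustration, not the paper's code.
struct Interval {
    float lo, hi;
};

Interval add(Interval a, Interval b) { return {a.lo + b.lo, a.hi + b.hi}; }

Interval mul(Interval a, Interval b) {
    // The product's bounds are the min/max over all endpoint combinations.
    float p1 = a.lo * b.lo, p2 = a.lo * b.hi;
    float p3 = a.hi * b.lo, p4 = a.hi * b.hi;
    return {std::min(std::min(p1, p2), std::min(p3, p4)),
            std::max(std::max(p1, p2), std::max(p3, p4))};
}

// A toy "shader": f(u) = u * u + 0.5. An interval-generating compiler
// would derive this version automatically from the scalar shader source.
Interval shaderBounds(Interval u) { return add(mul(u, u), {0.5f, 0.5f}); }

int main() {
    // Bound the shader over the whole parametric range u in [0, 1].
    Interval out = shaderBounds({0.0f, 1.0f});
    printf("shader output lies within [%f, %f]\n", out.lo, out.hi);
}
```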

Another project that seems to fit this “compilers for rendering” trend (though not covered in the course) is Microsoft’s recent work to enable symbolic differentiation in HLSL.
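Differentiation of shader code actually comes up twice above: OSL uses automatic differentiation, and the Microsoft work tackles symbolic differentiation in HLSL. A dual-number sketch of forward-mode automatic differentiation (one common approach; not necessarily what either system implements) shows the core mechanism:

```cpp
#include <cmath>
#include <cstdio>

// Forward-mode automatic differentiation with dual numbers: carry a value
// and its derivative through every operation via the usual calculus rules.
// A generic sketch, not OSL's or the HLSL project's actual implementation.
struct Dual {
    float val;  // f(x)
    float der;  // f'(x)
};

Dual operator+(Dual a, Dual b) { return {a.val + b.val, a.der + b.der}; }
Dual operator*(Dual a, Dual b) {  // product rule
    return {a.val * b.val, a.der * b.val + a.val * b.der};
}
Dual sin(Dual a) {  // chain rule through a transcendental
    return {std::sin(a.val), std::cos(a.val) * a.der};
}

int main() {
    // Differentiate f(x) = x * sin(x) at x = 2 in a single forward pass.
    Dual x = {2.0f, 1.0f};  // der = 1 marks x as the active variable
    Dual f = x * sin(x);
    printf("f(2) = %f, f'(2) = %f\n", f.val, f.der);
    // Analytic check: f'(x) = sin(x) + x * cos(x).
}
```

Shading systems typically use derivatives like these for texture filtering and antialiasing; the appeal of pushing the work into the compiler is that shader authors get correct derivatives without writing any derivative code by hand.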