
This is the fourth and last in a series of posts on the SIGGRAPH 2011 course program, each describing several of the courses that will be presented at the conference. Links to previous posts in the series: Part 1, Part 2, Part 3.

Cinematography: The Visuals & the Story

Cinematography is the art of communicating a story via camera and lighting choices. As a game developer, I find it fascinating for several reasons. One is that it is such a well-established art; over a century old, and based upon many of the principles of still older arts such as photography and painting. The maturity of the field can be seen in the way that its practice is codified – there are clear roles in film production, and everyone knows what a director of photography, first camera assistant, etc. do from film to film. The field’s most prominent professional organization, the American Society of Cinematographers, was created in 1919, and its magazine American Cinematographer has been discussing tips and tricks of the trade since 1920. It’s an interesting contrast to game development – an extremely young discipline where most of the fundamentals are still being figured out.

Another reason I’m interested in cinematography is its relevance to game visuals; the primary problem (turning three-dimensional scenes into compelling screen images that carry a narrative) is the same. While issues of camera placement may be less relevant for some game genres (e.g. first person shooters), lighting, color, and scene composition considerations are relevant for almost any game.

The third reason is that most game developers (including myself until fairly recently) are either unaware of this vast wealth of relevant knowledge, or are indifferent to it. CG animated features have made great strides by incorporating principles of live-action cinematography; not many videogames are doing the same.

For these reasons, I’m glad to see a SIGGRAPH course covering cinematographic fundamentals. The speaker, Bruce Block, has had a lot of experience working in film (albeit not in the camera department) and has written a well-regarded and influential book (The Visual Story) about how visual structure is used to present story in film.

Storytelling With Color

The way in which color choices are applied throughout production is another area where I think games have a lot to learn from film. In film, the colors of almost every costume and piece of set decoration are part of a conscious choice to drive the narrative, establish a mood, or support character development. This was brought home to me last year when I visited Pixar and saw the “color script” for Toy Story 3 – a wall covered by postcard sized sketches, one for each shot in the film. Each rough sketch blocked out the shapes and colors in the shot, and when they were put together, you could clearly see how the carefully chosen color palette helped drive the story and emotional tone of the movie. Two of the Toy Story 3 color script images can be seen here, and the entire color script for a different Pixar film (Up) can be seen here.

This course will cover exactly these kinds of color choices, and will be presented by Kathy Altieri (Production Designer, Dreamworks Animation) and Dave Walvoord (Digital FX Supervisor, Dreamworks Animation). Kathy’s career in TV and film spans three decades; after working on backgrounds for multiple animated TV shows as well as classic animated feature films such as The Little Mermaid, Aladdin and The Lion King, she moved to Dreamworks, where she was Art Director on The Prince of Egypt and Production Designer on Spirit: Stallion of the Cimarron, Over the Hedge, and How to Train Your Dragon. Dave has 15 years of experience in VFX and CG feature animation, working at Blue Sky on films such as Fight Club and Ice Age before joining Dreamworks, where he was CG Supervisor on Shark Tale, Over the Hedge and Kung Fu Panda and Digital FX Supervisor on Kung Fu Panda 2.

Applying Color Theory to Digital Media and Visualization

This is another course on color, but focused more on theory and on non-entertainment applications, such as scientific visualization. The course is presented by Theresa-Marie Rhyne, a prominent visualization expert with three decades of experience as a researcher, educator, designer and artist. She has taught several courses on this topic, most recently at IEEE Visualization 2010 (a video of her slides from that talk is available online), and has a blog on the topic as well. Interestingly, she has already put up a video of the slides from the upcoming SIGGRAPH 2011 course.

Liquid Simulation With Mesh-Based Surface Tracking

While most fluid rendering and simulation work over the years has focused on level-set approaches, an important recent trend in this area consists of tracking a mesh over the surface of the fluid, thus enabling more detailed surfaces. This advanced course (prior knowledge of fluid simulation techniques is assumed) covers the current state of the art in this important area, and is presented by Chris Wojtan (Assistant Professor, Institute of Science and Technology Austria), Matthias Müller-Fischer (Research Lead, NVIDIA), and Tyson Brochu (PhD Candidate, University of British Columbia). Having performed much of the leading research in this area, the speakers are uniquely qualified to speak about the topic.

Although complex fluid simulations are used extensively in film VFX and animated features, they are currently too computationally expensive for games. As game platforms become more powerful, I believe this will change. There are already some impressive real-time demonstrations, for example the Raging Rapids Rides demo which will be shown at the SIGGRAPH 2011 Real-Time Live! program and the SIGGRAPH 2011 paper Real-Time Eulerian Water Simulation Using a Restricted Tall-Cell Grid, which has an impressive video here (check out the lighthouse part at the end). Note that one of the course speakers (Matthias) was involved with both of these examples.

Introduction to Modern OpenGL Programming

Dave Shreiner (co-author of the famous OpenGL Red Book, which has a new edition coming out this November) has taught an introductory course on OpenGL (almost) every year at SIGGRAPH since 1998. He was accompanied by various co-lecturers – most often Edward Angel – and evolved the course content to keep up with changes in the OpenGL API. The only two years Dave didn’t do this course were 2003 (when he did a “performance OpenGL” course instead of an introductory course – in some other years he did both), and 2010 (when there was no OpenGL course for the first time since 1992). Dave and Edward are back this year with an updated course, which should be of great interest to beginning graphics programmers, OpenGL programmers who have been using older versions of the API, or experienced graphics programmers with plans to start working with OpenGL.

A course on this topic couldn’t hope for better speakers. Besides his highly influential books and courses, Dave Shreiner also had an important role in the development of OpenGL (and its spinoff OpenGL ES) in the 15 years he worked at SGI (where OpenGL evolved from the proprietary IRIS GL library) and since, as Technical Advisory Panel Chair for The Khronos Group and Director of Graphics Technology at ARM. Edward Angel has taught at the University of New Mexico for over 30 years; he holds the positions of Professor Emeritus of Computer Science and Founding Director of the Art, Research, Technology and Science Laboratory (ARTS Lab). Edward has written several influential books on computer graphics, most notably the OpenGL Primer and Interactive Computer Graphics.

Modeling 3D Urban Spaces Using Procedural and Simulation-Based Techniques

As scene complexity increases, the amount of artist work (and thus the expense) required to create these scenes increases commensurately, a problem that afflicts both film and game production. Audience expectations are always increasing, and budgets cannot keep pace – more efficient ways to model large, complex scenes must be found. While most natural scenes are very complex, techniques for procedurally modeling them have been used in production for some time; see off-the-shelf products such as Vue and Speedtree, or in-house tools such as those used to model trees in Tangled. Urban scenes can be just as complex, but tools for modeling them procedurally have been less widely used (the creation of 1930s New York City in the 2005 remake of King Kong is a notable example – more details here). The last few years have seen a flourishing of research into procedural modeling of buildings and cities, and the fruits of this research are finding their way into production. This course will cover procedural as well as image-based and simulation-based modeling techniques, and is targeted at applications including computer games, movies, architecture, and urban planning.
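As a toy illustration of the grammar-based splitting that underlies much of this work (my own sketch, not material from the course), the snippet below recursively divides a facade into floors and window tiles. Production systems such as CityEngine’s CGA shape grammar are vastly richer; this only shows the flavor of the approach.

```python
# Toy "split grammar" sketch: recursively split a facade rectangle into floors
# and window tiles, emitting labeled rectangles. Illustration only.

def split(rect, axis, sizes):
    """Split rect=(x, y, w, h) along 'x' or 'y' into pieces of the given sizes."""
    x, y, w, h = rect
    out, offset = [], 0.0
    for s in sizes:
        if axis == "x":
            out.append((x + offset, y, s, h))
        else:
            out.append((x, y + offset, w, s))
        offset += s
    return out

def facade(rect, floor_height=3.0, window_width=1.5):
    """Generate (label, rect) pairs: a ground floor, upper floors, window tiles."""
    x, y, w, h = rect
    n_floors = max(1, int(h // floor_height))
    floors = split(rect, "y", [h / n_floors] * n_floors)
    shapes = []
    for i, fl in enumerate(floors):
        label = "ground_floor" if i == 0 else "floor"
        n_win = max(1, int(fl[2] // window_width))
        for tile in split(fl, "x", [fl[2] / n_win] * n_win):
            shapes.append((label + "_tile", tile))
    return shapes

if __name__ == "__main__":
    for label, r in facade((0.0, 0.0, 12.0, 9.0)):
        print(label, tuple(round(v, 2) for v in r))
```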

This course will have five speakers, each extremely well-suited to teach a course of this type: Peter Wonka (Associate Professor, Arizona State University), Daniel Aliaga (Associate Professor, Purdue University), Carlos Vanegas (Research Assistant, Purdue University), Pascal Mueller (Founder & CEO, Procedural Inc.), and Michael Frederickson (Technical Director, Pixar). The first four speakers have, between them, performed or led most of the notable academic research in this area. Pascal Mueller has founded a company (Procedural Inc.) based on his research, which sells a commercial software package (CityEngine) for procedural urban modeling (Peter Wonka serves on Procedural’s advisory board). The last speaker, Michael Frederickson, was responsible for modeling the 40,000 buildings in the city of London as seen in the movie Cars 2, and it appears that this will be the topic of his presentation. Presumably (given his participation in this course, and also given the magnitude of the task) this was done procedurally. While watching Cars 2 (story issues aside) I was struck by the film’s visuals – especially the urban environments, London in particular; I look forward to finding out how this was done.

3D Spatial Interaction: Applications for Art, Design, and Science

This course will be taught by Joseph LaViola (Assistant Professor, University of Central Florida) and Daniel Keefe (Assistant Professor, University of Minnesota). Last year at SIGGRAPH 2010, Prof. LaViola taught (with Richard Marks, the primary researcher behind Sony’s EyeToy and Move peripherals) a course about spatial interaction with videogame motion controllers. This year’s course, judging by its abstract, appears to be focused on applications other than videogames. These novel interfaces surely have interesting applications in many fields, and this course will be of interest to many. Both Prof. LaViola and Prof. Keefe have done important research in this field, and Prof. LaViola has authored a book on the subject.

Build Your Own Glasses-Free 3D Display

Last year, two of this course’s speakers, Douglas Lanman (Postdoctoral Associate, MIT Media Lab) and Matthew Hirsch (PhD Student, MIT Media Lab), taught a SIGGRAPH 2010 course called Build Your Own 3D Display. This year, they are joined by Gregg Favalora (Principal, Optics for Hire) and are focusing the course specifically on autostereoscopic displays, which do not require glasses. Douglas and Matthew have done important research into this area – most notably this SIGGRAPH Asia 2010 paper, and have taught versions of this course not only at SIGGRAPH 2010 (as mentioned), but also at SIGGRAPH Asia 2010. Gregg has 15 years experience as an entrepreneur, inventor and researcher and has authored multiple key publications and patents relating to autostereoscopic display design.

Advances in New Interfaces for Musical Expression

This course is presented by Michael Lyons (Professor, Ritsumeikan University) and Sidney Fels (Professor, University of British Columbia), who in 2001 organized the first workshop on New Interfaces for Musical Expression (NIME). This workshop, dedicated to scientific research on the development of new technologies for musical expression and artistic performance, has since blossomed into a full-fledged international conference. This course will summarize the content of the last several years of NIME, covering both theory and practice, and will present several case studies.


Third post in a series about the SIGGRAPH 2011 courses (Part 1 and Part 2).

Stereoscopy From XY to Z

Although there had been fits and starts since the mid-1950s, stereoscopic (“3D”) feature films really kicked off in 2009. This was primarily due to the convergence of two factors: CG animation and Avatar. CG animated features are easier for stereoscopy since they don’t require bulky and expensive stereoscopic cameras; Disney Animation had been doing all their CG animated films in 3D since Chicken Little (2005), joined in 2009 by Pixar and Dreamworks with Up and Monsters vs. Aliens respectively. Avatar’s huge box-office success in the same year goosed studio executives into mandating stereoscopic releases of VFX-heavy live-action films as well. Although somewhat controversial among experts (mostly due to brightness issues), the increase in stereoscopic theatrical content resulted in a push for compatible televisions, Blu-ray players and game consoles at home. Around the same time, the PC side of the game market also saw an increase in stereoscopic support (mostly led by NVIDIA). By 2011, stereoscopy had become a dominant trend in computer graphics, with implications in areas ranging from videogame user interfaces to feature shot editing. Many of these implications are as yet not commonly understood, which increases the need for courses like this one.

The course is presented by Samuel Gateau (3D Software Engineer, NVIDIA) and Robert Neuman (Stereoscopic Supervisor, Walt Disney Animation Studios) who have presented earlier versions of it at SIGGRAPH Asia 2010 and at FMX 2011. This time Samuel and Robert are joined by Marc Salvati (R&D Software Engineer, OLM Digital). It appears that the course will cover both the technical and aesthetic aspects of stereoscopy, for games as well as film. The speaker lineup is well-suited for this scope; Samuel has helped many game developers integrate stereoscopy into their titles, Marc has worked on tools for converting Japanese animation to 3D (the topic of a separate talk this year), and Robert has supervised stereoscopy for several films at Disney Animation, most recently working on the stereoscopic conversions of classic hand-animated Disney films (also the topic of a separate talk).

Production Volume Rendering (Part I and Part II)

The SIGGRAPH 2010 course Volumetric Methods in Visual Effects was a great look into an important and little-understood area of production rendering, so I was happy to see that an updated and expanded version will be presented this year. Both courses are organized by Magnus Wrenninge (Senior Technical Director, Sony Pictures Imageworks) and Nafees bin Zafar (Senior Production Engineer, Dreamworks Animation). Magnus has been working on visual effects software at Imageworks (and previously at Digital Domain) for almost a decade, in later years mostly focusing on volumetric modeling and rendering. He is currently in the process of writing a book on the topic, which will include source code for a fully functional volume renderer. Nafees has worked on simulation and volumetrics tools (at Dreamworks and previously at Digital Domain) for over ten years, winning a Scientific and Engineering Academy Award in 2007. The course is divided into two parts. Part I (“Fundamentals”) is presented by Magnus and Nafees, and is an overview of the fundamental technologies behind computer generated volumetric elements such as clouds, fire, and whitewater. At 90 minutes, Part I is an expansion of the first hour of last year’s course, and includes an introduction to the subject, followed by in-depth explanations of how volumetric effects are modeled and rendered.

Over three hours long, Part II (“Systems”) is a greatly expanded version of the second half of last year’s course. It will focus on specific VFX volumetric technologies, tools, workflows and case studies. Nafees and Magnus will each give a presentation on the systems used at their respective studios. In addition, there will be presentations by speakers from the following companies:

  • Double Negative: presented by Ollie Harding (R&D Programmer) and Gavin Graham (CG Supervisor). I wasn’t able to find out much about Ollie; Gavin has worked at Double Negative for over ten years, during which he did various shot-based effects work, assisted R&D in battle-testing in-house volumetric rendering and fluid simulation tools, and CG-supervised several effects-heavy feature films.
  • Rhythm & Hues: Jerry Tessendorf (former Principal Graphics Scientist) and Victor Grant (FX Supervisor). Jerry Tessendorf is currently Director of the Digital Production Arts Program at Clemson University, following an extensive and highly influential body of work in simulation and VFX production spanning three decades. Notable achievements include a Technical Achievement Academy Award and a series of hugely influential SIGGRAPH presentations on ocean wave simulation (the latest version of the notes and slides are well worth reading). Victor Grant has worked on VFX for many feature films over the past decade, specializing in volumetric modeling and rendering as well as particle and fluid simulation.
  • Side Effects Software: Andrew Clinton (Software Developer). Side Effects’ Houdini software is used extensively in the VFX industry; Andrew is responsible for the research and development of Houdini’s Mantra renderer. He has worked on improvements to the volumetric rendering engine, a micropolygon-like approach to volume rendering, a physically-based renderer, and a port of the renderer to the Cell processor.
  • Weta Digital: Antoine Bouthors (R&D Engineer): Weta is a new addition relative to last year’s course. Before joining Weta, Antoine worked on research topics including the real-time rendering of realistic clouds.

Volumetric effects are one of the areas where the gap between game and film visuals is biggest; as game platforms become more powerful, game developers will start focusing R&D efforts on this topic. In parallel, VFX houses will develop ways to rapidly previsualize feature film volumetric effects, to allow for better artist control and directability. I predict that in the next few years these converging lines of research will “meet in the middle”, enabling unprecedented scale and quality of volumetric effects in games. Attending this course is a good way for game developers and real-time rendering researchers to get a head start on this process.

Compiler Techniques for Rendering

This course is a bit more specialized than the others I’ve discussed. It is focused on the uses of advanced compiler technology for rendering, covering five different projects which are on the cutting edge of this technology trend. Most of the techniques use LLVM and/or involve the compilation of shading languages. The course comprises five talks:

  • Intro to LLVM, and Native RSL Shader Compilation, presented by Mark Leone (Researcher, Weta Digital): Before joining Weta, Mark led development at Intel of a new shading language for native rendering on Larrabee, and previously worked on the RenderMan shading system at Pixar. His talk will begin with an overview of LLVM (useful background for several of the other talks), and continue with a description of the implementation of the PostHaste system, which analyzes RenderMan shaders and automatically identifies kernels within them that can be compiled for x86 native execution using LLVM.
  • Open Shading Language, presented by Larry Gritz (Principal Engineer, Sony Pictures Imageworks): Larry Gritz is the chief architect of the Imageworks in-house renderer, as well as the designer and open source administrator of the Open Shading Language (OSL) and OpenImageIO projects. Other rendering systems for which he’s had a leading architectural role include NVIDIA’s Gelato GPU-accelerated film-quality renderer, Exluna’s Entropy renderer, Pixar’s PhotoRealistic RenderMan, and BMRT. Larry’s talk describes the design and implementation of OSL, which was developed by Imageworks for use in its in-house renderer, and released as open source software. OSL is specifically designed for advanced rendering algorithms and has a number of key technologies whose implementations will be discussed: radiance closures, light path expressions, automatic differentiation, and LLVM just-in-time compilation.
  • AnySL: Efficient Portable Multi-Language Shading, presented by Philipp Slusallek (Scientific Director, German Research Center for Artificial Intelligence – DFKI): Philipp leads the “Agents and Simulated Reality” research lab at DFKI. He is also a full professor for Computer Graphics at Saarland University, where he holds the additional positions of Director of Research at the Intel Visual Computing Institute, principal investigator at the Cluster of Excellence in Multimodal Computing and Interaction, and founding speaker of the Competence Center for Computer Science. Philipp’s talk will describe the AnySL system, which compiles shaders from different languages into a common, portable representation, using a generic shading library. AnySL also incorporates an embedded compiler based on LLVM that instantiates this generic code in terms of the renderer’s native types and operations. In addition, AnySL supports programmable kernels for tasks other than shading – such as animation, geometry processing, tessellation, and image processing.
  • Automatic Shader Bounding for Efficient Global Illumination, presented by Bruce Walter (Research Associate, Cornell University Program of Computer Graphics): Bruce’s research focuses on expanding the capabilities of physically-based rendering and global illumination algorithms with respect to robustness, scalability, and generality. He has published many related research papers at SIGGRAPH and elsewhere, including my favorite BRDF paper. This talk will discuss research that was published in a SIGGRAPH Asia 2009 paper, which uses a compiler to automatically generate interval versions of programmable shaders (a toy interval-arithmetic sketch appears after this list). These interval versions can be used to provide the high-level query functions needed by physically-based rendering systems (such as ray tracers).
  • Compilation for GPU Accelerated Ray Tracing in OptiX, presented by Steven Parker (Director of HPC & Computational Graphics, NVIDIA): Steven leads the OptiX ray tracing team; prior to joining NVIDIA he developed a long history of research and publication in interactive ray tracing and scientific computing. Steven’s talk will discuss the domain-specific just-in-time compiler that lies at the core of the NVIDIA OptiX ray tracing engine. This compiler generates custom ray tracing kernels by combining user-supplied programs for ray generation, material shading, object intersection, and scene traversal. The CUDA C compiler is used for writing shader programs with function overloading, templates, and full pointer support, while a just-in-time compiler provides ray tracing specific optimizations. Steven will discuss some of the compiler analysis techniques that enable a natural programming model, support a rich object model designed for compact scene representation, provide dynamic dispatch for complex scenes, and allow continuations for recursion, all while executing efficiently on a CUDA-enabled GPU.
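Here is the toy interval-arithmetic sketch promised in the shader-bounding item above; it is my own illustration of the underlying idea, not course material. A “shader” written against an interval type returns conservative bounds over a whole range of inputs, which is exactly the kind of high-level query a physically-based renderer can exploit. The real work compiles such interval versions automatically; here both the interval type and the shader are written by hand.

```python
# Toy interval arithmetic: evaluate a "shader" over intervals instead of points
# to get conservative bounds over a region of its inputs.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        other = _as_interval(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        other = _as_interval(other)
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    __radd__ = __add__
    __rmul__ = __mul__

    def __repr__(self):
        return f"[{self.lo:.3f}, {self.hi:.3f}]"

def _as_interval(v):
    return v if isinstance(v, Interval) else Interval(v, v)

def radial_brightness(u, v):
    """A simple 'shader': works on plain numbers or on Intervals unchanged."""
    return 0.2 + 0.8 * (u * u + v * v) * 0.5

if __name__ == "__main__":
    # Point evaluation vs. a conservative bound over the patch u, v in [0.1, 0.4].
    print("point :", radial_brightness(0.25, 0.25))
    print("bounds:", radial_brightness(Interval(0.1, 0.4), Interval(0.1, 0.4)))
```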

Another project which seems to fit in with this “compilers for rendering” trend (though not covered in this course) is Microsoft’s recent work to enable symbolic differentiation in HLSL.


This is a continuation of the series of posts started here.

Character Rigging, Deformations, and Simulations in Film and Game Production

Character animation is one of those areas where film and game production have intriguing similarities as well as differences; especially in the ways that the character meshes deform in response to animation and simulation. This course includes three talks, each covering a different application domain: games, visual effects, and feature animation. These talks will be presented by:

  • David Coleman (Senior CG Supervisor, Electronic Arts Canada). David (who has worked at Electronic Arts for 15 years and is currently responsible for the central team that provides rigging for many of EA’s sports titles) will present the games portion of the course. He will discuss character rigging, deformations and simulations in game production, emphasizing the technical restrictions imposed due to the real-time and interactive nature of games. This talk will also cover some strategies for setting up procedural secondary rigging systems in Maya, MotionBuilder and at run-time in games.
  • Tim McLaughlin (Department Head and Associate Professor, Department of Visualization at Texas A&M University). Tim (who had 13 years of experience at ILM – most of it on digital creatures – before heading the Texas A&M Department of Visualization) will discuss rigging for visual effects. He will cover the unique requirements brought on by integration with live action, as well as the affordances offered by the more limited scope of performance requirements relative to feature animation and games. Tim will discuss rigging modularity, provisions for animator control, non-linear deformations, areas of highest importance for deformations, and the efficient use of muscle systems.
  • Larry Cutler (Supervising Character TD, DreamWorks Animation). Larry (who worked at Dreamworks Animation for 10 years, and at Pixar for four years previously) will be discussing rigging issues for feature animation. Larry’s talk will deal with the impact of character design, modeling, and scalability for thousands of shots on rigging, deformation, and simulation. He will discuss the issues arising from the unique needs of feature animation: accommodating an extreme range of motion, and an increased emphasis on art directability and animator control. Larry will also cover hair, cloth, and facial animation systems.

Destruction and Dynamics for Film and Game Production

Another “X for film and games” course, this time focusing on rigid body dynamics and destruction / fracturing methods. The course will cover production aspects such as authoring tools and game engine integration, in addition to the computational and algorithmic aspects. Like the last course, this one will highlight interesting commonalities and differences between film and game practice. There are areas where each can learn from the other: the film techniques can point the way to future methods for games running on more powerful platforms; and the efficient game methods are useful for fast prototyping, previsualization and even speeding up final shots in film.

The course will start with a 30-minute presentation by the course organizer, Erwin Coumans (Principal Physics Engineer, AMD). Erwin has worked on physics in games for over a decade, and is also the main author of the open-source Bullet Physics Library. Although Bullet was originally designed for game use, it has been used on many films as well, including big-budget Hollywood blockbusters such as How to Train Your Dragon, Sherlock Holmes and 2012. Erwin will give an overview of the course, as well as a brief introduction to the basic theory of rigid body dynamics and destruction/fracturing methods. He will also cover collision detection and contact handling, approximate methods for modeling stress and strain, and how to decide when and where to break rigid bodies into several parts (a toy sketch of one such break criterion appears after the talk list below). The course will continue with the following talks:

  • Authoring Destruction With the Dynamica Bullet Maya Plugin (15 minutes), by Michael Baker (Faculty, Art Institute of Las Vegas): Michael has worked on Las Vegas casino games, visual effects for various short films and games, and the Bullet Physics Library (in particular the Dynamica Maya plugin which is the primary topic of his talk). Michael will discuss the development and use of Dynamica to support choreographed rigid body behavior such as progressive crumbling of pre-shattered objects, sequential structural failure and timed directional explosions.
  • Destruction and Dynamics Artist Tools for Film (45 minutes), by Nafees Bin Zafar (Senior Production Engineer, Dreamworks Animation) and Mark T. Carlson (Lead Engineer, Dreamworks Animation): Nafees has worked on simulation and volumetrics tools (both at Dreamworks and in his previous job at Digital Domain) for over ten years, winning a Scientific and Engineering Academy Award in 2007. Mark has worked on cloth, fluid and crowd simulation for six years at DNA Productions, Walt Disney Animation and Dreamworks Animation. This talk will cover 3rd party software integration in the movie pipeline, building artist tools with Bullet, and authoring of destruction using Maya and Houdini. Examples from recent Dreamworks Animation movies will showcase the techniques described.
  • Deformable Rigid Bodies and Fragment Clustering for Film (45 minutes), by Brice Criswell (Senior Software Engineer, Industrial Light & Magic): Brice has been developing production related software for 12 years with ILM, and specializes in rigid body and crowd dynamics. Brice’s talk is divided into three presentations. The first discusses a deformable rigid system which efficiently simulates on-impact bending and denting of normally rigid bodies. The second covers a fragment clustering system which allows artists to initialize sets of geometry as a single rigid body, then dynamically break the objects during the progression of the simulation. The third presentation covers the challenges involved in animating, simulating, and deforming the tentacle beard of the Davy Jones character in the Pirates of the Caribbean movies. Each of the talks will detail algorithms as well as production issues, and will include VFX production examples from prominent feature films.
  • Procedurally Generating Fragmented Meshes for Games (15 minutes), by Phil Knight (Lead Programmer, Avalanche Software – a division of Disney Interactive Studios): Phil has 13 years of game development experience, working most recently on Cars 2, Toy Story 3, and Bolt, and previously on the Links and Amped series. His talk will cover a procedural technique for automatically generating fragmented meshes, especially useful for modeling large explosions with lots of fragmentation and debris. Besides detailing the technique itself, Phil will also describe the fragmentation tool (‘Frag’) which implements it, and its use in game production at Disney Interactive Studios.
  • Accelerating Rigid Body Simulation and Fracture Using the GPU (30 minutes), by Takahiro Harada (Researcher, AMD): Takahiro Harada has performed research and development into physics simulation at The University of Tokyo and Havok as well as his current position at AMD (where he focuses on the use of GPU computing for physics simulation). He will present a GPU-based rigid body simulation which can be used to quickly simulate the large numbers of rigid bodies typically created by object destruction. The talk starts with an overview of the simulation and proceeds to the detailed GPU implementation of each stage of the simulation.
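As promised above, here is a toy sketch of one way a simulation might decide when a rigid body breaks: accumulate contact impulses and fracture once they exceed a per-body strength threshold, spawning pre-shattered fragments. This is an illustrative assumption on my part (the names and numbers are invented), not code from any of the talks.

```python
# Toy break criterion: accumulate contact impulses; once the total exceeds the
# body's strength, replace the body with its pre-shattered fragments.

class BreakableBody:
    def __init__(self, name, strength, pieces):
        self.name = name
        self.strength = strength      # impulse the body can absorb before breaking
        self.accumulated = 0.0
        self.pieces = pieces          # pre-shattered fragments to activate
        self.broken = False

    def on_contact(self, impulse):
        """Called by the collision solver with the impulse magnitude applied this step."""
        if self.broken:
            return []
        self.accumulated += impulse
        if self.accumulated > self.strength:
            self.broken = True
            return self.pieces        # fragments to spawn in place of this body
        return []

if __name__ == "__main__":
    wall = BreakableBody("wall", strength=50.0,
                         pieces=["wall_frag_%d" % i for i in range(4)])
    for hit in (10.0, 15.0, 30.0):    # three impacts of increasing strength
        fragments = wall.on_contact(hit)
        print(f"impulse {hit:5.1f} -> broken={wall.broken}, spawn={fragments}")
```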

PhysBAM: Physically Based Simulation

Similarly to the previous course, this is targeted at physics simulation and has strong ties to film production. However, its structure is very different; instead of covering a variety of production examples, it focuses on one code library – PhysBAM, initially developed by Ronald Fedkiw and continued by him and many others at Stanford. PhysBAM is used by many VFX and feature animation houses including ILM, Disney Animation, and Pixar; large portions were recently released under an open-source license. This course is presented by Craig Schroeder (PhD Student, Stanford Computer Science Department); it will cover information on the PhysBAM library release: how to obtain the source code, set up the library, and use it to run example smoke and water simulations, as well as descriptions of visualization and rendering tools included in the release. In addition to the PhysBAM library, the course will explain the underlying techniques that make these simulations possible, in particular level set methods such as fast marching, fast sweeping, and the particle level set method. It will also address the important aspects of a fluid simulation, including advection, viscosity, and projection.
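For readers new to these terms, here is a minimal sketch of one of the building blocks listed above, semi-Lagrangian advection, in 1D with periodic boundaries. It is the generic textbook step (trace backwards along the velocity, then interpolate), not code from the PhysBAM release.

```python
# Generic 1D semi-Lagrangian advection with periodic boundaries.

import numpy as np

def advect_1d(phi, velocity, dt, dx):
    """Advect scalar field phi by a constant velocity using a backtrace and lerp."""
    n = len(phi)
    x = np.arange(n) * dx
    x_back = (x - velocity * dt) % (n * dx)          # departure points (periodic domain)
    i0 = np.floor(x_back / dx).astype(int) % n
    i1 = (i0 + 1) % n
    frac = (x_back / dx) - np.floor(x_back / dx)
    return (1.0 - frac) * phi[i0] + frac * phi[i1]   # linear interpolation

if __name__ == "__main__":
    n, dx = 64, 1.0 / 64
    phi = np.exp(-((np.arange(n) * dx - 0.25) ** 2) / 0.002)   # a smooth bump
    for _ in range(32):
        phi = advect_1d(phi, velocity=1.0, dt=0.5 * dx, dx=dx)
    print("peak moved to x =", round(np.argmax(phi) * dx, 3))  # expect roughly 0.5
```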

There are 12 courses left to cover; I’ll do so over my next few blog posts.


At 18 courses, the SIGGRAPH 2011 course program is smaller than it has been in previous years, but what it lacks in size it more than makes up for in quality. I’ll go over the list with a focus on courses of interest to game developers and/or real-time rendering researchers. If you are going to be attending SIGGRAPH, this should help you decide which courses to attend – if not, you’ll at least know which course notes and Encore videos to hunt down after the conference. Since this post is turning out to be quite long, I’ll split it up into several parts, spread out over the next few days.

UPDATES:

  • 6/20/2011: Added details to Beyond Programmable Shading regarding Peter-Pike Sloan’s talk and the Software Rasterization on GPUs talk, as well as correcting the titles of several of the speakers.
  • 6/21/2011: Added links to the papers High-Performance Software Rasterization on GPUs and VoxelPipe: A Programmable Pipeline for 3D Voxelization (the second link is the paper webpage – no PDF yet).
  • 6/24/2011: Removed an incorrect detail about the DEAA technique.
  • 7/10/2011: Updated the description of the Battlefield 3 / Need for Speed: The Run talk in the Advances in Real-Time Rendering in Games course.

Advances in Real-Time Rendering in Games (Part I & Part II)

Since 2006, this course series (organized by Natalya Tatarchuk, formerly at AMD and now at Bungie) has been my favorite thing to see at SIGGRAPH. Each year it has showcased new content from the cutting edge of game and IHV graphics development. Since Natalya joined Bungie, the emphasis has been less on IHV demos and more on games, which in my opinion makes the course even better – this year looks like the best yet! Part I starts with a brief introduction by Natalya and continues with four talks, each between 45 and 60 minutes in length:

  • Bungie’s Graphics Secret Sauce, by Natalya Tatarchuk (Senior Graphics Researcher, Bungie) and Hao Chen (Engineering Lead, Bungie): Bungie’s talk will cover the graphics techniques developed for the award-winning game Halo: Reach, along with some new research undertaken for Bungie’s next title.
  • Rendering in Cars 2, by Christopher Hall, Robert Hall, and David Edwards (Programmers at Avalanche Software): this talk will describe rendering techniques used in Cars 2: The Video Game, including offloading of post-processing and stereoscopy computations onto the Playstation 3’s SPUs. Other topics covered will include new developments in color precision, post processing effects, shadows, and the use of light probes.
  • Secrets of CryENGINE 3 Graphics Technology, by Tiago Sousa (Principal R&D Graphics Engineer, Crytek), Nickolay Kasyan (Senior Rendering Engineer, Crytek), and Nicolas Schulz (Graphics Engineer, Crytek): an overview of the novel deferred lighting approach used in CryENGINE 3, along with an in-depth description of optimization techniques (both general and platform-specific), as well as stereoscopic 3D rendering and shadowing techniques.
  • Two Uses of Voxels in LittleBigPlanet 2’s Graphics Engine, by Alex Evans (CTO & Co-Founder, Media Molecule) and Anton Kirczenow (Senior Programmer, Media Molecule): this talk will describe a PlayStation 3-centric implementation of real-time dynamic scene voxelization and demonstrate two ways this voxel representation was used for rendering and special effects in the game LittleBigPlanet 2.

Part II also starts with a short introduction by Natalya; this is followed by five 30-50 minute talks:

  • More Performance! Five Rendering Ideas from Battlefield 3 and Need For Speed: The Run, by John White (Senior Rendering Engineer, NFS) and Colin Barré-Brisebois (Rendering Engineer, BF3): this talk will cover several techniques from Battlefield 3 and Need for Speed: The Run designed to increase performance without sacrificing visual quality. These will include chroma sub-sampling for faster full-screen effects, a novel DirectX 9+ scatter-gather approach to bokeh rendering, improved temporally-stable dynamic ambient occlusion, HiZ reverse-reload for faster shadows, and tile-based deferred shading on Xbox 360 (the last topic is a good complement to Christina Coffin’s GDC 2011 presentation giving Playstation 3 implementation details).
  • Physically-based Lighting in Call of Duty: Black Ops, by Dimitar Lazarov (Lead Graphics Engineer, Treyarch): Dimitar will give an overview of the lighting architecture used in the Call of Duty games to achieve competitive visual quality at 60 frames per second. He will then describe the process of introducing a physically-based lighting model to the series in Call of Duty: Black Ops, from the premise behind the model to the specific benefits and issues encountered when integrating it into the game.
  • Real-time Image Quilting: Arbitrary Material Blends, Invisible Seams, and No Repeats, by Hugh Malan (Graphics Programmer, CCP Games): A pixel shader-based image quilting technique which handles situations where standard environment texturing has problems: transitions between arbitrary neighbor materials, localized texture features due to custom geometry, and geometry-dependent edge effects. Production details such as vertex sharing and compaction techniques, texture storage options, and implementation issues for PC and console will also be covered.
  • Dynamic Lighting in God of War III, by Vassily Filippov (Lead Game Programmer, SCEA Santa Monica): this talk will cover a novel forward lighting approach used in God of War III to create rich dynamically lit environments with dozens of light sources applied to a single pixel. The description will include a complete mathematical explanation of the algorithm, as well as implementation details such as the combination of multiple lights into a single aggregate light per vertex on the Playstation 3’s SPUs, a new light interpolation approach which improved lighting accuracy, and the application of the aggregate lights per pixel on the GPU. Usability constraints, edge cases and ways to reduce artifacts will be covered in detail.
  • Pre-Integrated Skin Shading, by Eric Penner (Rendering Engineer, Electronic Arts Vancouver): Eric will describe a technique for rendering realistic skin in games, where rather than gathering neighboring light to simulate subsurface scattering, the effects of scattered light are pre-integrated. This allows for achieving the non-local effects of subsurface scattering using only locally stored information and a custom shading model.
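To give a feel for the pre-integration idea in the last talk, here is a small sketch of how such a lookup table might be built offline: for each combination of N·L and surface curvature, diffuse lighting is integrated around a ring against a scattering profile. The single-Gaussian profile and the parameter ranges below are my own simplifying assumptions, not values from the talk.

```python
import numpy as np

def profile(dist_mm):
    """Toy scattering profile: a single Gaussian falloff (a stand-in for a
    measured skin diffusion profile)."""
    return np.exp(-dist_mm ** 2 / 0.5)

def preintegrated_diffuse(cos_theta, radius_mm, samples=256):
    """Integrate wrapped diffuse lighting around a ring of the given curvature."""
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    x = np.linspace(-np.pi, np.pi, samples)
    light = np.clip(np.cos(theta + x), 0.0, None)                 # local N.L around the ring
    weight = profile(2.0 * radius_mm * np.sin(np.abs(x) * 0.5))   # falloff over chord distance
    return float(np.sum(light * weight) / np.sum(weight))

def build_lut(size=32):
    """LUT indexed by (N.L in [-1, 1], curvature radius in mm)."""
    lut = np.zeros((size, size))
    for j, r in enumerate(np.linspace(0.5, 16.0, size)):
        for i, ndotl in enumerate(np.linspace(-1.0, 1.0, size)):
            lut[j, i] = preintegrated_diffuse(ndotl, r)
    return lut

if __name__ == "__main__":
    lut = build_lut()
    print("small radius (strong scatter), N.L ~ 0:", round(lut[0, 16], 3))
    print("large radius (near-Lambertian), N.L ~ 0:", round(lut[-1, 16], 3))
```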

Filtering Approaches for Real-Time Anti-Aliasing

From a theoretical standpoint, performing anti-aliasing as a post-process is locking the barn door after the horses have bolted. However, such techniques have recently proven to be surprisingly effective in practice – a flurry of algorithms, implementations, and variants have created one of the most important real-time rendering trends. For this course, the organizers – Jorge Jimenez (Real-Time Graphics Researcher, University of Zaragoza) and Diego Gutierrez (Associate Professor, University of Zaragoza) – have tracked down the inventors of pretty much every important technique in this area and recruited them to present their work:

  • Morphological Antialiasing (MLAA), presented by Alexander Reshetov (Senior Staff Researcher, Intel) – this technique was presented as a paper at the High Performance Graphics (HPG) conference in 2009; the impressive results shown sparked most of the current interest in this general approach.
  • A Directionally Adaptive Edge Anti-Aliasing Filter, presented by Jason Yang (Principal Member of Technical Staff, AMD). This technique was also presented as a HPG 2009 paper, and was influential as well.
  • A GPU-friendly variant of MLAA, presented by Jorge Jimenez (Real-Time Graphics Researcher, University of Zaragoza). This variant was published in the book GPU Pro 2; the talk will also cover recent developments not included in the book.
  • A hybrid CPU/GPU MLAA variant implemented for Costume Quest on the Xbox 360, presented by Pete Demoreuille (Lead Programmer, Double Fine).
  • The Playstation 3/SPU MLAA implementation first used in God of War III and subsequently made available to all Playstation 3 developers as part of the EDGE library. Tobias Berghoff (Senior Programmer, SCEE) will detail the implementation (including recent improvements), and Cedric Perthuis (Senior Staff Graphics Engineer, SCEA Santa Monica) will talk about how the technique was integrated into the God of War III engine.
  • The SPU-based Anti-Aliasing technique (SPUAA) used on the Playstation 3 version of The Saboteur, presented by Henry Yu (Founder and CEO, Kalloc Studios). This technique has long been a topic of speculation among game developers, and will be discussed here for the first time.
  • Subpixel Reconstruction Antialiasing (SRAA), presented by Morgan McGuire (Assistant Professor, Williams College and Visiting Scientist, NVIDIA). This technique was presented as a paper in the 2011 Symposium on Interactive 3D Graphics and Games (I3D).
  • Fast approXimate Anti-Aliasing (FXAA), presented by Timothy Lottes (Developer Technology, NVIDIA). Fast and effective, this technique is currently being evaluated by many developers for inclusion in their games.
  • Distance-to-Edge Anti-Aliasing (DEAA), presented by Hugh Malan (Graphics Programmer, CCP Games – formerly at Realtime Worlds).
  • Geometry Buffer Anti-Aliasing (GBAA), presented by Emil Persson (also known as “Humus” – Graphics Programmer, Avalanche Studios).
  • The Directionally Localized Anti-Aliasing (DLAA) technique used in Star Wars: The Force Unleashed 2, presented by Dmitry Andreev (Senior Rendering Engineer, Visceral Games – formerly at LucasArts).
  • The temporal filtering anti-aliasing technique used in Crysis 2, presented by Tiago Sousa (Principal R&D Graphics Engineer, Crytek).

Beyond Programmable Shading (Part I & Part II)

Similarly to the “Advances in Real-Time Rendering” course, “Beyond Programmable Shading” is an “ensemble” course which has been presented annually at SIGGRAPH for several years running. As its name reflects, it deals with GPU-based graphics that go beyond the traditional graphics pipeline. This course has had uniformly high quality each year, and 2011 appears to be no exception. Part I starts with a 20-minute introduction by the course organizers – Aaron Lefohn (Lead Research Scientist, Intel) and Mike Houston (Fellow, AMD) – and continues with six 25-30 minute talks:

  • Peter-Pike Sloan (Research & Development Lead, Disney Interactive Studios) will give a talk (title to be determined) about the applicability of current graphics research to games, discussing examples of research that works in games today, as well as research that does not work – and why.
  • GPU Architecture, by Mike Houston (Fellow, AMD): an overview talk covering GPU architecture – unlike similar talks in previous iterations of the course, the architecture talk is extended this year to include heterogeneous architectures.
  • Scheduling the Graphics Pipeline, by Jonathan Ragan-Kelley (PhD Student, MIT): this is an extension of a talk given by Jonathan in last year’s course – it will include significant new material, including more specifics on how scheduling works in particular GPU architectures.
  • Parallel Programming for Real-Time Graphics, by Aaron Lefohn (Lead Research Scientist, Intel): compared to the talk of the same name in last year’s course, this talk will be significantly re-written and updated, including an increase in the number of concrete examples.
  • Software Rasterization on GPUs, by Samuli Laine (Senior Research Scientist, NVIDIA) and Jacopo Pantaleoni (Senior Architect, NVIDIA): software rasterization on GPUs can be an effective way to bypass the limitations of the GPU’s fixed-function rasterizer. Each of the speakers will be discussing papers they will publish at HPG 2011 – in Samuli’s case, High-Performance Software Rasterization on GPUs and in Jacopo’s case, VoxelPipe: A Programmable Pipeline for 3D Voxelization.
  • The course organizers are still in the process of finalizing the topic and speaker of the last talk.

Part II starts with a brief welcome and re-introduction by Mike Houston. This is followed by four 30-40 minute talks, all new to this course series:

  • Toward a Blurry Rasterizer, by Jacob Munkberg (Research Scientist, Intel): this talk will cover the current state of the art in rasterizing triangles with motion and defocus blur – this is a very active area of research, which I suspect will yield some important GPU advances in the near future. Jacob has co-authored several important papers in this area – most notable are the Graphics Hardware 2007 paper Stochastic Rasterization using Time-Continuous Triangles and the HPG 2011 paper Hierarchical Stochastic Motion Blur Rasterization.
  • Order-Independent Transparency, by Marco Salvi (Research Scientist, Intel): similarly to the previous talk, this covers the current state of the art in an important topic on which the speaker has considerable expertise. Of Marco’s work on the topic, most notable is the HPG 2011 paper Adaptive Transparency (not yet available online but his GDC 2011 talk on the topic – including source code – is available).
  • Interactive Global Illumination, by Chris Wyman (Associate Professor, University of Iowa): this is the third “state-of-the-art talk” covering a relatively broad topic. Chris’ publications page includes numerous papers on this topic, some including source code.
  • User-Defined Pipelines for Ray Tracing, by Steven Parker (Director of High Performance Computing and Computational Graphics, NVIDIA): this is a more tightly focused talk than the previous three. It has the potential to be quite interesting, given the speaker’s central role in the development of the OptiX ray tracing system (he was the first author on the SIGGRAPH 2010 paper) as well as his area of responsibility at NVIDIA.

The course closes with a 15-minute wrap-up talk by the organizers (on the topic “What’s Next for Interactive Rendering Research?”), followed by a 45-minute panel discussion between the various course speakers.

I’ll continue going over the remaining SIGGRAPH 2011 courses in my next few blog posts.

(Attachment: Hybrid CPU/GPU MLAA on the Xbox 360 – Pete Demoreuille)


You’ve probably heard about bitcoins by now, the currency of cryptoanarchist libertarian computer geeks or something. It turns out that GPUs are particularly good at mining bitcoins, compared to CPUs: check out this chart – the key factor is Mhash/sec (though Mhash/Joule is also an entertaining concept). The most interesting page (for me) at the site is their explanation of why a GPU is (so much) faster than a CPU for this task. Not a shocker for anyone reading this blog; we all know that GPGPU can rip through certain tasks at amazing speeds. What’s more interesting to me is how and why one IHV’s GPUs are considerably faster than the other’s. I won’t spoil the surprise here, see the page to learn more.
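To make “Mhash/sec” concrete: mining boils down to double-SHA-256 hashing an 80-byte block header over and over while varying a nonce, looking for a hash below the network target. The toy benchmark below (with a made-up header and a simplified leading-zero-bits “difficulty”) shows the loop that a GPU runs thousands of copies of in parallel.

```python
# Toy bitcoin-style mining loop: double SHA-256 over a header + nonce,
# measuring hash throughput. Header contents here are stand-ins.

import hashlib, struct, time

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header76: bytes, zero_bits: int, max_nonce: int = 200_000):
    """Scan nonces; 'difficulty' here is just a count of leading zero bits."""
    start = time.time()
    for nonce in range(max_nonce):
        digest = double_sha256(header76 + struct.pack("<I", nonce))
        value = int.from_bytes(digest, "big")
        if value >> (256 - zero_bits) == 0:
            return nonce, max_nonce / (time.time() - start)
    return None, max_nonce / (time.time() - start)

if __name__ == "__main__":
    fake_header = b"\x00" * 76   # stand-in for version/prev-hash/merkle/time/bits
    nonce, rate = mine(fake_header, zero_bits=14)
    print(f"found nonce: {nonce}, throughput: {rate / 1e6:.3f} Mhash/sec")
```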


We’ve talked about this before: ACM’s copyright policy stated that they, not you, control the copyright of any images you publish in their journals, proceedings, or other publications. For example, if your hometown newspaper wanted to publish a “local boy makes good” story and wished to include samples of your work, it needed to ask the ACM for permission (and pay the ACM $28 per image). Not a huge problem, but it’s a bureaucratic roadblock for a reasonable request. Researchers are usually surprised to hear they have lost this right.

While it was possible to be assertive and push to retain copyright to your images (or even article) and just grant ACM unlimited permission – certainly firms such as Pixar and Disney have done so with their content – the default was to give the ACM this copyright control.

James O’Brien brought it to our attention that this policy has been revised, and I asked Stephen Spencer (SIGGRAPH’s Director of Publications) for details. His explanation follows.

ACM has recently changed its copyright policy to include the option, under certain circumstances, of retaining copyright on embedded content in material published by ACM. Embedded content can now fall into one of three categories: copyright of the content is transferred to ACM as part of the rest of the paper (the default), the content is “third-party” material (not created by the author(s)), or the content is considered an “artistic image.”

The revised copyright form includes this definition of “artistic images”:

“An exception to copyright transfer is allowed for images or figures in your paper which have ‘independent artistic value.’ You or your employer may retain copyright to the artistic images or figures which you created for some purpose other than to illustrate a point in this paper and which you wish to exploit in other contexts.”

The ACM Copyright Policy page also documents this change in policy.

ACM’s electronic copyright system is being updated to implement this change; authors who wish to declare one or more pieces of embedded content in their papers as “artistic images” should contact Stephen Spencer (at <[email protected]>) to receive a PDF version of the revised copyright form.

The copyright form includes instructions for declaring embedded content as “artistic images,” both in your paper and on the copyright form.

—-

Note that this change is “going forward”; if you have already given ACM the copyright, you cannot get it back. Understandable, as otherwise there could be a flood of requests for recategorization.

I’m happy to see this change; it is a good step in the right direction.


Here’s a book CFP, proposals due right after SIGGRAPH. I have to admit, I was a little skeptical when I heard of this as a book idea. However, OpenGL truly is undergoing a resurgence as of late. Not so much on desktops and laptops, though more games are indeed getting made for Macs. Mark DeLoura has a good article on engines in “Game Developer,” May 2011, noting that 15% of traditional “big game” developers plan on Mac versions of their games, vs. a mere 2% in 2009. As it is, most every serious game engine is cross-platform, so OpenGL’s special features (and bugs) are not so vital to engine users. Rather, the handheld market is where OpenGL is the only game in town. So, knowing how to make this API sing is pretty vital if you’re working in that area.

The editors: Patrick Cozzi you probably don’t know (yet), though I did point earlier to a poster for this year’s SIGGRAPH that he coauthored (it’s a clever technique). Among other things, he’s first author on a book that’s not out yet, but will be by SIGGRAPH: 3D Engine Design for Virtual Globes (you can download book samples and the code). Christophe Riccio you may have heard of if you work with the OpenGL SDK. He maintains the OpenGL Samples, GLM (math), and GLI (imaging). These guys look like good people for the job: energetic and intelligent. So, here’s the CFP – you can comment on it at their blog. Me, just reading their list of topics of interest, I’ll get a copy even if they get articles on just a very few of these. If $50 (or whatever) saves us a day of going down a wrong path, it’s worth it.


It is with great enthusiasm that we invite you to contribute to OpenGL Insights, a book containing original articles on OpenGL, OpenGL ES, and WebGL techniques by the OpenGL community and for the OpenGL community: from game programmers to web developers to researchers. OpenGL Insights will be published by A K Peters, Ltd. / CRC Press in time for SIGGRAPH 2012.

Given the wide array of OpenGL platforms, from Mac desktops to Android phones to web browsers, we invite you to submit article proposals on all aspects of OpenGL development, including performance tuning, recent GL features/extensions, application architecture, vendor-specific techniques, WebGL, and interoperability with other APIs. We are interested in proposals based on your unique real-world experience using OpenGL. Some ideas include:

  • OpenGL performance, for example:
    • Best performance practices for using vertex buffers
    • Best performance practices for texture streaming
    • Performance and memory profiling techniques
    • 64-bit performance considerations
    • Multithreading with OpenGL
  • Modern OpenGL 3 and 4 programming, for example:
    • Introduction to tessellation
    • Image load and store
    • Programmable multisampling
    • Using shader subroutines effectively
    • Managing uniform data
    • Strategies for debugging OpenGL applications
  • Application architecture, for example:
    • Porting between Direct3D and OpenGL
    • Writing portable code between OpenGL, OpenGL ES, and WebGL
    • Designing an OpenGL-based graphics engine
    • A testing framework for OpenGL applications
    • Shader architecture best practices, e.g., shader binaries and separate shaders
    • Cross-platform programming with OpenGL
    • Tools, libraries
  • Vendor-specific techniques, for example:
    • Understanding and optimizing for specific hardware and driver implementations: AMD, Apple, ARM, Imagination Technologies, Intel, NVIDIA, Qualcomm, S3 Graphics, etc.
    • Bindless Graphics: GL_NV_shader_buffer_load and GL_NV_vertex_buffer_unified_memory
    • How VAO works on AMD drivers
    • Taking advantage of deferred tile rendering on PowerVR
    • How the GLSL compiler works
    • Understanding multithreaded OpenGL drivers
  • OpenGL ES, for example:
    • Best practices for targeting both desktop and mobile devices
    • Targeting multiple mobile device platforms
    • Developing with power consumption in mind
    • Differences between desktop and mobile devices
  • WebGL, for example:
    • Introduction to WebGL for web developers
    • Introduction to WebGL for OpenGL developers
    • Optimizing WebGL applications
    • Writing large-scale software in JavaScript
    • Understanding web browser implementations of WebGL
    • WebGL interoperability with WebCL
  • Interoperability, for example:
    • Hybrid OpenGL and OpenCL/CUDA rendering pipelines
    • Working with both OpenGL and Direct3D
    • OpenGL interoperability with OpenCL and CUDA
  • Inspirational thoughts and experiences:
    • OpenGL’s 20th anniversary: history and evolution
    • ARB members and OpenGL developers interviews
    • The making of OpenGL software
    • Daily programmer experiences with OpenGL

These are, of course, examples. Please don’t feel limited to these areas.

The planned schedule is:

  • August 15, 2011 – Proposals due
  • September 1, 2011 – Authors selected
  • November 1, 2011 – Articles due
  • December 1, 2011 – Peer review feedback due
  • December 15, 2011 – Revised articles due, all articles sent to publisher
  • January 1, 2012 – Supplemental material due, e.g., videos, source code, etc.
  • SIGGRAPH 2012 – Book released

Please send proposals to [email protected] using this example proposal as a template by August 15th.

Proposals should include the title, your name and affiliation, a one-page abstract, and anything else you feel helps convey your article such as related images or references. Proposals must demonstrate the author’s real-world OpenGL experience and ability to write clearly. Proposals can have multiple authors, and a single author can submit multiple proposals. There is no required article length, but we expect most articles will be 5-20 pages. Example code can be written in any language on any platform.

Please feel free to contact us for additional discussion. We’re looking forward to putting together a valuable book for the OpenGL community.

Thanks,

Patrick Cozzi and Christophe Riccio, Editors

www.openglinsights.com


I3D 2011 Report

This report about I3D 2011 is from Mauricio Vives, a coworker at Autodesk; he also has photos from the symposium. The papers listing can be found at Ke-Sen Huang’s page and at the ACM Digital Library’s page (tabbed version here). The ACM site has some interesting info, by the way, such as acceptance rates and “most downloaded, most cited” for all I3D articles. I should also remind people: if you’re a member of ACM SIGGRAPH, you automatically have access to all SIGGRAPH-sponsored ACM Digital Library materials (which is essentially all of their graphics papers).


In case you missed them, Naty’s reports (so far) about I3D are also on this blog: Keynote, Industry Session, and Banquet Talk.

Papers



Session: Filtering and Reconstruction


A Local Image Reconstruction Algorithm for Stochastic Rendering

In a stochastic rasterizer, effects like defocus (depth of field) and transparency need a very large number of samples (64–256) to remove noise and look decent. This paper describes a technique to remove the noise with far fewer samples, by sorting samples and selectively blurring them. The results look quite good, though the technique does have the disadvantage of blurring transparency.


Subpixel Reconstruction Antialiasing for Deferred Shading

The post-process MLAA antialiasing technique works to find sharp edges in an image and smooth them. It provides pretty good results. This technique (SRAA) is similar, but operates with supersampled depth and normal data to produce even better results with predictable performance. For example, it looks comparable to 16x SSAA (super-sampled antialiasing) in just 2 ms on a GTX 480 at HD resolution. As with MLAA, only single-sample shading is used, and it is compatible with deferred lighting. [The use of depth and normals means it is focused on geometric edge finding, so the technique is not applicable to shadow edges and NPR post-process thickened edges, for example. - Eric]


High Quality Elliptical Texture Filtering on GPU

Elliptical filtering is necessary for removing artifacts from some high-frequency textures, but is too slow for real-time applications. This is a technique for approximating true elliptical filtering using hardware anisotropic filtering. They claim it is easy to integrate, and they provide drop-in GLSL code to replace existing texture lookups.


Session: Lighting in Participating Media


Transmittance Function Mapping

This is about rendering light and shadow in participating media (e.g. fog, smoke) with single scattering. Two techniques are described: a faster one called volumetric shadow mapping for homogeneous media, and a slower one called transmittance function mapping for heterogeneous media. This is suitable for both offline rendering and interactive applications, but it is not quite real-time (> 30 fps) yet.


Real-Time Volumetric Shadows using 1D Min-Max Mipmaps

This is again about volumetric shadows, but with real-time results for homogeneous media, using only pixel shaders. A combination of epipolar warping on the shadow map, and a 1D min-max mipmap for storage, makes this possible. This is a pretty clever combination of techniques. [This paper was the "NVIDIA Best Paper Presentation".]
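As a concreteness aid, here is a small CPU sketch of the 1D min-max mipmap itself: each level stores the min and max depth of two nodes from the level below, which is what lets the ray march skip spans that are known to be entirely lit or entirely in shadow. This is not the paper's GPU implementation, and the epipolar warping is omitted entirely; all names are hypothetical.

```cpp
// Minimal CPU sketch of a 1D min-max mipmap over depths along one epipolar
// ray. This is just the data structure; the paper builds it on the GPU and
// traverses it in a pixel shader, and the epipolar rectification is omitted.
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

using Level = std::vector<std::pair<float, float>>; // (min, max) per node

std::vector<Level> buildMinMaxMipmap(const std::vector<float>& depths)
{
    std::vector<Level> levels;
    Level base;
    base.reserve(depths.size());
    for (float d : depths)
        base.emplace_back(d, d);           // leaf nodes: min == max == depth
    levels.push_back(std::move(base));

    while (levels.back().size() > 1) {
        const Level& prev = levels.back();
        Level next((prev.size() + 1) / 2);
        for (size_t i = 0; i < next.size(); ++i) {
            size_t a = 2 * i;
            size_t b = std::min(a + 1, prev.size() - 1);
            next[i] = { std::min(prev[a].first,  prev[b].first),
                        std::max(prev[a].second, prev[b].second) };
        }
        levels.push_back(std::move(next));
    }
    return levels;
}

int main()
{
    // Depths along one ray of the rectified shadow map (made-up values).
    std::vector<float> depths = { 0.2f, 0.3f, 0.25f, 0.9f, 0.85f, 0.4f, 0.5f, 0.45f };
    auto mip = buildMinMaxMipmap(depths);
    for (size_t l = 0; l < mip.size(); ++l)
        std::printf("level %zu has %zu node(s)\n", l, mip[l].size());
}
```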


Real-Time Volume Caustics with Adaptive Beam Tracing

This is an extension of previous beam-tracing work to allow for real-time caustics, with receiving surfaces and volumes handled separately. A coarse grid of beams is refined based on the geometry and viewer to put detail where it is needed most – the kind of adaptivity an effect like caustics needs to look good.


Session: Collision & Sound


Sound Synthesis for Impact Sounds in Video Games

This paper was motivated by having very little memory on a game console to store physics-based sounds, e.g. for footsteps or colliding objects. This uses a combination of modal synthesis and spectral modeling synthesis to produce extremely small sound definitions that could be varied on demand to provide a variety of believable effects. I don’t know much about sound generation, but the audio results (from the only audio paper in the conference) were very good.


Collision-Streams: Fast GPU-based Collision Detection for Deformable Models

Fast Continuous Collision Detection using Parallel Filter in Subspace

Both of these papers address the same problem: how to quickly perform continuous collision detection, which works by interpolating motion between time steps. I am a novice when it comes to collision detection, so I didn’t take away much from these.


Session: Shadows

This is the session that was most interesting to me, as I work on shadow algorithms.


Shadow Caster Culling for Efficient Shadow Mapping

This paper has a simple idea: when rendering a shadow map, cull objects that don’t produce a visible shadow to improve performance. Occlusion culling is first used to determine which shadow receivers are visible to the camera; these are then rendered into the light view to produce a receiver mask in the stencil buffer. Occlusion queries against this mask can then be used to skip casters that cannot cast a visible shadow. There are a few methods to render the receiver mask, trading off complexity and accuracy. This was mostly tested with city scenes, where it produced the same images at about 5x the speed.
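The paper does the caster test on the GPU, with hardware occlusion queries against the stencil receiver mask; purely to illustrate the flow, here is a toy CPU sketch that replaces the stencil mask and queries with a coarse light-space grid. All names and values below are hypothetical, and this is not the paper's method, just the shape of the idea.

```cpp
// Simplified CPU illustration: build a mask of where visible shadow receivers
// land in light space, then skip casters whose light-space footprint misses
// the mask. The paper does this with a stencil receiver mask and hardware
// occlusion queries; this coarse-grid version only mirrors the logic.
#include <cstdio>
#include <vector>

struct Rect { float minX, minY, maxX, maxY; };   // axis-aligned, light space

struct ReceiverMask {
    static const int N = 64;                     // coarse grid resolution
    bool cell[N][N] = {};

    void mark(const Rect& r) {                   // splat a receiver footprint
        for (int y = toCell(r.minY); y <= toCell(r.maxY); ++y)
            for (int x = toCell(r.minX); x <= toCell(r.maxX); ++x)
                cell[y][x] = true;
    }
    bool overlaps(const Rect& r) const {         // does a caster touch any receiver?
        for (int y = toCell(r.minY); y <= toCell(r.maxY); ++y)
            for (int x = toCell(r.minX); x <= toCell(r.maxX); ++x)
                if (cell[y][x]) return true;
        return false;
    }
    static int toCell(float v) {                 // [0,1) light space -> cell index
        int c = int(v * N);
        return c < 0 ? 0 : (c >= N ? N - 1 : c);
    }
};

int main()
{
    ReceiverMask mask;
    mask.mark({ 0.10f, 0.10f, 0.40f, 0.40f });   // visible receiver footprint

    std::vector<Rect> casters = { { 0.15f, 0.15f, 0.25f, 0.25f },   // overlaps: keep
                                  { 0.70f, 0.70f, 0.90f, 0.90f } }; // misses: cull
    for (size_t i = 0; i < casters.size(); ++i)
        std::printf("caster %zu: %s\n", i,
                    mask.overlaps(casters[i]) ? "render into shadow map" : "culled");
}
```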


Colored Stochastic Shadow Maps

This included a nice overview of the problem, terminology, and existing techniques: how do you render colored shadows from translucent objects? This extends existing techniques for rendering stochastic transparency to also support colored transparent shadows. Since it is based on stochastic techniques, it has a filter that can be adjusted to trade performance for quality. The paper has simple explanations of the algorithm, and it seems from the presentation that this is fairly easy to integrate into a system that already handles basic stochastic shadow maps.


Sample Distribution Shadow Maps

This was the paper I was most interested in seeing, since it is an extension to cascaded shadow maps (CSM), a popular shadowing technique. The idea here is quite simple: reduce perspective aliasing with shadow maps by analyzing the distribution of samples visible to the viewer. This can be used to determine tight near/far planes for CSM partitions, and for tightly fitting the light frustum to just the visible shadow receivers.

The analysis requires a reduction operation, which can be done with rasterization (D3D9/10) or compute shaders (D3D11). With this algorithm, a 2K shadow map can achieve sub-pixel sampling in many cases; see the example below.
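To make the idea concrete, here is a minimal CPU sketch of the simplest variant of the scheme: find the tight depth range of the samples the camera actually sees (the paper does this with a GPU reduction), then place logarithmic cascade splits inside that range. This is only the shape of the computation, not the authors' code, and the sample values are invented.

```cpp
// CPU sketch of the core SDSM idea: find the range of view-space depths the
// camera actually sees, then fit the cascade partitions to that tight range
// (logarithmic splits shown here). The paper does the min/max reduction on
// the GPU; this loop stands in for it, and the depth values are made up.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    // Pretend these are the visible samples' view-space depths.
    std::vector<float> depth = { 3.2f, 4.1f, 7.8f, 15.0f, 41.0f, 88.0f, 120.0f };

    float zMin = depth[0], zMax = depth[0];
    for (float z : depth) {                       // stand-in for the GPU reduction
        zMin = std::min(zMin, z);
        zMax = std::max(zMax, z);
    }

    const int cascades = 4;
    std::printf("tight range: [%.1f, %.1f]\n", zMin, zMax);
    for (int i = 0; i <= cascades; ++i) {
        // Logarithmic split positions inside the tight [zMin, zMax] range.
        float split = zMin * std::pow(zMax / zMin, float(i) / cascades);
        std::printf("split %d: %.2f\n", i, split);
    }
}
```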


Session: Refraction & Global Illumination


Voxel-based Global Illumination

It wouldn’t be an interactive graphics conference without a paper about interactive global illumination, and here is one. The idea behind this technique is to create an atlas-based voxelization of the scene (every frame), and then perform fast ray casting into that grid. Direct lighting can be stored in the grid, or reflective shadow maps can be used; the latter seems to be preferred.

The technique is of course able to capture lighting and details that screen-space methods can’t. However, it can have artifacts that require a denser grid and a tuned offset to avoid self-intersections. Also, as noted earlier, it requires a texture atlas for generating the voxel grid. The results are interactive, if not quite real-time.
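To give a flavor of the gather step, here is a toy CPU ray march through a voxel grid that stores direct lighting. The paper builds the grid from a texture atlas every frame and ray casts on the GPU with a proper traversal, so treat this purely as an illustration; all names and values are hypothetical.

```cpp
// Toy sketch of the gather step: march a ray through a voxel grid that stores
// occupancy plus direct lighting, and return the first lit voxel hit. This
// fixed-step CPU version only shows the idea, not the paper's implementation.
#include <cstdio>

const int N = 32;                         // grid resolution (per axis)
struct Voxel { bool occupied; float radiance; };
Voxel grid[N][N][N];                      // filled during voxelization (not shown)

struct Vec3 { float x, y, z; };

// March from 'origin' along 'dir' (assumed normalized, in grid units) and
// return the radiance of the first occupied voxel, or 0 if the ray exits.
float traceVoxelRay(Vec3 origin, Vec3 dir)
{
    const float step = 0.5f;              // half-voxel steps, crude but simple
    Vec3 p = origin;
    for (int i = 0; i < 4 * N; ++i) {
        int x = int(p.x), y = int(p.y), z = int(p.z);
        if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N)
            return 0.0f;                  // left the grid: no hit
        if (grid[x][y][z].occupied)
            return grid[x][y][z].radiance;
        p = { p.x + dir.x * step, p.y + dir.y * step, p.z + dir.z * step };
    }
    return 0.0f;
}

int main()
{
    grid[16][16][20] = { true, 0.8f };    // one lit voxel for the example
    float r = traceVoxelRay({ 16.2f, 16.2f, 2.0f }, { 0.0f, 0.0f, 1.0f });
    std::printf("gathered radiance: %.2f\n", r);
}
```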


Real-Time Rough Refraction

Here the authors are solving the interesting, though somewhat specific, problem of rendering transparent materials (no scattering) with rough surfaces, which is different from rendering translucent surfaces. Glossy reflection has been handled before; this is about glossy refraction. In this case, rays are refracted twice, entering and exiting the surface, and the paper provides a fast way to perform what would otherwise be a double integration.

I wasn’t able to follow all of the math in this one, but the results are indeed real-time (> 30 fps), and comparable to what you would get out of a ray-tracer in dozens of seconds or minutes. There is also a cheap approximation for total internal reflection. This paper was selected as one of the best papers of the conference. An example with increasing roughness is shown below.


Screen-Space Bias Compensation for Interactive High-Quality Global Illumination with Virtual Point Lights

One of the problems with using virtual point lights (VPLs) for global illumination is that you must clamp each VPL’s contribution above a certain threshold to avoid pinpoint light artifacts. This paper presents a fast way to compute the amount of light that was clamped away, which can then be added back to the VPL result. It is done in screen space, which can lead to some issues (as you might expect), but it is fully interactive and easy to integrate into other renderers that already use VPLs.
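A tiny worked example of the clamping being compensated for: the geometry term of a VPL grows like 1/d² as a receiver approaches the light, so renderers bound it, and the clamped-away residual is the energy this paper estimates and adds back (which it does in screen space, not with a toy loop like this). All values and names below are made up.

```cpp
// Tiny illustration of the VPL clamping problem the paper addresses. The
// geometry term of a virtual point light blows up near the VPL, so renderers
// clamp it to avoid bright "pinpoint" splotches; the clamped-away remainder
// is what the paper's screen-space pass estimates and adds back.
#include <algorithm>
#include <cstdio>

int main()
{
    const float kClamp = 4.0f;            // arbitrary bound on the geometry term
    const float cosAtSurface = 0.8f;      // cos(theta) at the receiving point
    const float cosAtVpl     = 0.7f;      // cos(theta) at the VPL

    for (float d = 2.0f; d >= 0.125f; d *= 0.5f) {
        float geometry = cosAtSurface * cosAtVpl / (d * d);   // unbounded term
        float clamped  = std::min(geometry, kClamp);          // what gets rendered
        float residual = geometry - clamped;                   // what gets added back
        std::printf("d = %.3f  geometry = %7.2f  clamped = %.2f  residual = %7.2f\n",
                    d, geometry, clamped, residual);
    }
}
```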


Session: Human Animation

This is certainly not my area of expertise, so I only have a few comments about these papers.


Motion Rings for Interactive Gait Synthesis

This is human walking motion interpolation made more efficient, responding within a quarter-gait, which is necessary for interactive applications. It relies on a parameterized motion loop (called a “ring” here), and uneven terrain is handled with IK adjustments.


Realtime Human Motion Control with A Small Number of Inertial Sensors

This paper describes how to combine high-quality prerecorded motion capture data with live motion from just a few simple (noisy) sensors to enhance what would otherwise be very poor input. This is validated by comparing against the high-quality original data for motions like walking, boxing, golf swings, and jumping.


A Modular Framework for Adaptive Agent-Based Steering

Crowd simulations need hundreds of characters, but often lack local (per-person) intelligence. This paper presents a framework for dynamically choosing between one of several local steering strategies. This is able to handle fairly tight and deadlocked situations, such as two people walking toward each other down a narrow hall, though some of the resulting motion is awkward.


Session: Geometric and Procedural Modeling


Editable Polycube Map for GPU-based Subdivision Surfaces

This is an extension of previous “polycube map” work to allow transferring geometric detail from a high-resolution triangle mesh to subdivision surfaces. A simple modeling system was presented where the user creates a very coarse polycube and sketches a handful of correspondences between it and the high-resolution mesh. The results are really quite remarkable, and you can see the process below.


GPU Curvature Estimation on Deformable Meshes

It is not unusual to perform vertex skinning or iso-surface extraction on the GPU now. However, for some effects like ambient occlusion or NPR edge extraction, it is useful to have the mathematical curvature of the deformed surface, and reading the geometry back to compute it on the CPU is slow or impossible. This paper presents a method for estimating curvature on the GPU in real time, even for very detailed models, much faster than could be done on the CPU. [This paper was a "Best Paper - Honorable Mention".]


Urban Ecosystem Design

I have seen several papers at SIGGRAPH and elsewhere about procedurally generating urban environments: basically streets and buildings. This paper procedurally adds plants (mostly trees) to such urban layouts. City blocks are assigned a level of human “manageability” which determines how organized or wild the plants in that block will be. From there, growth and competition rules are applied. Only the city geometry is taken into account, so this can be used with systems where other information (like land use) is not available. It was implemented with CUDA, and as an example, it can simulate 70 years of growth for 250,000 plants in about two minutes.


Session: Interactivity and Interaction


Data Management for SSDs for Large-Scale Interactive Graphics Applications

Here “SSD” refers to solid-state disks, so this is about organizing a graphics database to allow for efficient out-of-core rendering using SSDs instead of traditional hard disks with spinning platters. Since SSDs don’t need data ordering, locally (as opposed to globally) optimized layouts work well with them. The presentation seemed to be lacking in detail, but the demo was fairly impressive: a very large scene on disk being displayed and edited very smoothly with very little RAM. I would certainly recommend that developers working with large graphics databases read this paper for ideas.


Coherent Image-Based Rendering of Real-World Objects

This paper attempts to generate a depth map from images captured from a few cameras. The goal is to build a virtual dressing room with a “mirror”: the mirror is a display that shows you with different virtual clothes on. The entire system runs with CUDA on a single machine with a single GPU, at interactive rates even with full body movement. It exploits frame coherence, i.e., reusing parts of previous frames, to reduce latency. This was surprisingly the only paper on image-based rendering, despite the first keynote (below) being entirely about IBR.


Slice WIM: A Multi-Surface, Multi-Touch Interface for Overview+Detail Exploration of Volume Datasets in Virtual Reality

Here WIM is “world in miniature,” a technique that presents a small version of a virtual environment to help the user navigate the environment. This paper extends the technique to aid in the visualization of complex volume data generated from slices, especially for medical imaging. The resulting system consists of a large main display, a smaller horizontal touch display through which the user can manipulate the view and slices, and a head-tracked VR display for the WIM. Better to just show you! [This paper was a "Best Paper - Honorable Mention".]


TVCG Papers

A few papers from IEEE Transactions on Visualization and Computer Graphics were also presented as “guests” of the conference. They didn’t seem as relevant, but I list them here:

  • Interactive Visualization of Rotational Symmetry Fields on Surfaces
  • Real-Time Ray-Tracing of Implicit Surfaces on the GPU
  • Simulating Multiple Character Interactions with Collaborative and Adversarial Goals
  • Directing Crowd Simulations Using Navigation Fields


Posters

I didn’t spend much time looking at the posters this year, but I did make note of a few that I would like to investigate further; see below. The full poster list is here [ACM Digital Library subscribers and SIGGRAPH members can download from here].

  • gHull: A Three-dimensional Convex Hull Algorithm for Graphics Hardware
  • Interactive Indirect Illumination Using Voxel Cone Tracing  [This was "Best Poster - Winner".]
  • Level-of-Detail and Streaming Optimized Irradiance Normal Mapping
  • Poisson Disk Ray-Marched Ambient Occlusion


Talks


Image-Based Rendering: A 15-Year Retrospective

This talk was given by Rick Szeliski from Microsoft Research. As the title implies, it was an overview of image-based rendering research, covering topics such as panoramas, image-based modeling, photo tourism, and (in particular) light fields. The fundamental problem of such research is determining how to make new images of a scene from existing ones. At one point he mentioned that Autodesk “probably” has something for image-based modeling, and indeed we do; I sent that link to him after the conference. Naty Hoffman of Activision has a longer discussion of this talk in an earlier blog posting.


From Papers to Pixels: How research finds its way into Games

This talk by Dan Baker of Firaxis was a light “rant” about what researchers should do to get their research into games. For example, they need to consider quality and ease of integration, and realize that hardware advances will make certain techniques obsolete quickly. Most of it was reasonable, but in my opinion it included a number of controversial points – for example, that “nobody uses OpenGL” and that there is too much research into GPGPU.

He also started out by stating that the vast majority of the research for I3D is for games, and that everything else is “boring”… calling out “3D CAD” in particular as boring! By the end, I was quite tempted to provide an on-the-spot rebuttal from the Autodesk perspective.


A Game Developer’s Wishlist for Researchers

Chris Hecker, formerly of Maxis, presented his opinion on almost the same topic, i.e. what game developers want from researchers – I liked it better. His priorities are robustness, simplicity, and performance, in that order, but researchers often make the mistake of putting performance first. The nature of interactivity means that robustness is absolutely critical: you can’t afford errors even every few hundred frames when you are rendering at 30 fps. He also stated that papers frequently exclude negative results and worst-case scenarios, which would be helpful for assessing robustness.

I could say more, but Chris has the full talk available here. See for yourself why he says, “We are always about to fail.”


GPU Computing: Past, Present, and Future

This was the banquet talk, given by David Luebke of NVIDIA. It was a fairly light look at the state of GPU computing, where it has come and where it will go. A lot of it went against Dan Baker’s earlier comments about GPGPU, and David made sure to point that out (and I agree).

Some of this talk had the feel of a marketing presentation for NVIDIA’s CUDA and GPU computing products like Tesla, but it is hard to deny that this area is important. He cited several cases where GPU computing is saving lives, e.g. assisting in heart surgery, malaria treatment, and cancer detection. Of course, he also mentioned graphics, scientific computing, data mining, speech processing, etc. At one point he (amusingly) pointed out that all of this innovation and technology has its roots in humble VGA controller hardware.


Mobile Computational Photography

As someone who recently started going beyond point-and-shoot photography with a DSLR camera, this talk was quite interesting. Kari Pulli of Nokia described an API for controlling digital cameras called FCam… and it’s pretty cool what skilled hands are able to do with this. Note that this isn’t really about high-end cameras: the idea is to allow even cell phone cameras to take great photos.

FCam basically makes a supporting camera programmable, allowing you to do things that the camera normally can’t do. Most of it revolves around taking multiple images and combining them to produce a better result. For example, from his talk and the site:

… take two photos with different settings back-to-back. The first frame has a high ISO and short exposure time, and the second has a low ISO and longer exposure time. The first frame tends to be noisy, while the second instead exhibits blur due to camera shake. This application combines the two frames to create an output that’s better than either.

This is absolutely something I wish I could do with my current camera. Another example he showed was combining a few images with narrow depths of field into a single image with a wide depth of field (i.e. fully sharp). Another automatically took photos until one was captured with minimal camera shake, keeping only the “good” one. All of this is done right on the camera, which is a great improvement over the typical post-processing workflow, and it can leverage metadata that only the camera has.
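For a rough feel of the two-shot example quoted above, here is a toy 1D sketch that takes the low frequencies from the clean long-exposure frame and the detail from the sharp short-exposure frame. It assumes the frames are already aligned, and it is emphatically not the FCam application's actual algorithm; a real version would also suppress noise in the detail layer, e.g. with a joint bilateral filter guided by the clean frame.

```cpp
// Toy 1D illustration of the two-shot idea: take the low frequencies (overall
// exposure, low noise) from the long-exposure frame and the high frequencies
// (sharp detail) from the short-exposure frame. Real implementations work on
// aligned 2D images with much better filters; this only sketches the concept.
#include <cstdio>
#include <vector>

// Simple box blur as a stand-in for a low-pass filter.
std::vector<float> boxBlur(const std::vector<float>& v, int radius)
{
    std::vector<float> out(v.size());
    for (int i = 0; i < (int)v.size(); ++i) {
        float sum = 0.0f; int count = 0;
        for (int j = i - radius; j <= i + radius; ++j)
            if (j >= 0 && j < (int)v.size()) { sum += v[j]; ++count; }
        out[i] = sum / count;
    }
    return out;
}

int main()
{
    // One scanline from each (made-up) frame, already aligned and normalized.
    std::vector<float> sharpButNoisy  = { 0.2f, 0.8f, 0.1f, 0.9f, 0.2f, 0.7f };
    std::vector<float> cleanButBlurry = { 0.4f, 0.5f, 0.5f, 0.6f, 0.5f, 0.4f };

    std::vector<float> base       = boxBlur(cleanButBlurry, 1); // low frequencies
    std::vector<float> lowOfSharp = boxBlur(sharpButNoisy, 1);

    for (size_t i = 0; i < base.size(); ++i) {
        // Detail layer = sharp frame minus its own low frequencies, added to the
        // clean base. (A real pipeline would denoise this detail layer too.)
        float fused = base[i] + (sharpButNoisy[i] - lowOfSharp[i]);
        std::printf("%.2f ", fused);
    }
    std::printf("\n");
}
```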


There have been a lot of awards recently of interest to readers of this blog; I thought it would be useful to provide an overview, as well as covering some of the more obscure awards.

The Oscars are by far the most well-known awards, bestowed annually since 1929 by the Academy of Motion Picture Arts and Sciences. Oscars are voted on by Academy voting members (who total about 6,000) in the relevant disciplines (e.g. directors vote for Best Director, actors for Best Actor, etc.). The Scientific and Technical Awards are especially notable; they are given in a separate ceremony in early February, two weeks before the main awards ceremony celeb-fest.

Although some of the Sci-Tech awards are still for “analog” stuff like camera mounts, lenses, and film emulsions, in recent times most of the honored developments have been digital. All except three of this year’s winners were for digital advances (the other three were for computer-controlled camera and prop cable suspension systems, so partially digital as well).

The most directly relevant award was for a development described in a SIGGRAPH paper (the 2004 paper, “An Approximate Global Illumination System for Computer Generated Films”, was even mentioned in the award text). The award was given to Eric Tabellion and Arnauld Lamorlette, “for the creation of a computer graphics bounce lighting methodology that is practical at feature film scale”. This technique (as described in the 2004 paper) is a fast one-bounce GI method that uses interesting approximations for both geometry and surface material. The paper is well worth reading; the technique was highly influential for film rendering and some of the ideas are relevant for real-time rendering as well.

Another computer graphics-related award was given to Dr. Mark Sagar “for his early and continuing development of influential facial motion retargeting solutions”. Dr. Sagar pioneered the use of the Facial Action Coding System (FACS) for film production, starting with Monster House and supervising its use on King Kong and Avatar; this system is now widely used, with growing adoption by the game industry as well. Dr. Sagar also won an Academy Sci-Tech award last year, for his work on Light Stage.

As seen in the cable suspension case, Academy Sci-Tech awards tend to come in “clumps”. As a particular technology area is recognized as important by the Academy, several different groups who did important work in that area receive awards in one year. For example, a bunch of last year’s Sci-Tech awards were related to the digital intermediate (DI) process. The biggest “clump” this year was for another graphics-related topic: render queues (software used to manage render farms, the earliest – and still most widespread in film production – form of parallel graphics processing).

The final computer graphics-related Sci-Tech award was given to Tony Clark, Alan Rogers, Neil Wilson and Rory McGregor “for the software design and continued development of cineSync, a tool for remote collaboration and review of visual effects” (cineSync is developed and sold by Rising Sun Research).

Some of the main Academy Awards (announced in late February) are also of interest to readers of this blog; there’s a lot of information about these awards out there so I’ll just mention the winners for Visual Effects (Inception), Animated Feature Film (Toy Story 3), and Animated Short Film (The Lost Thing).

The closest video game equivalent to the Oscars are the Interactive Achievement Awards, bestowed annually by the Academy of Interactive Arts and Sciences at the annual D.I.C.E. Summit in early February. Similarly to the Oscars, they are voted for by registered AIAS members, who must be working in the appropriate game development discipline to vote on a given award. This year’s awards of interest to readers of this blog include: Outstanding Achievement in Animation (God of War III), Outstanding Achievement in Art Direction (Red Dead Redemption), and Outstanding Achievement in Visual Engineering (Heavy Rain).

The Game Developers Choice Awards are also prestigious, and are bestowed at the annual Game Developers Conference (which takes place in late February or early March). One must be a registered member of the Gamasutra website (owned by United Business Media, which also owns the Game Developers Conference) to nominate or vote, and the advisory committee which oversees the process is chosen by the editors of Game Developer Magazine (also owned by United Business Media) and Gamasutra. The Game Developers Choice Awards are unusual in thus being managed by a for-profit corporation rather than a nonprofit professional organization. This year’s awards of interest: Best Technology (Red Dead Redemption), and Best Visual Arts (Limbo).

Regarding video game awards, one notable event that happened this year (on February 12th) was the first Grammy award for music composed for a video game. The Grammys don’t have a dedicated award for video game music – this award was for a song, Baba Yetu, originally composed by Christopher Tin for Civilization IV and released on the 2009 album Calling All Dawns (which itself won a Grammy in addition to the award for Baba Yetu).

The awards given by the British Academy of Film and Television Arts (BAFTA) are almost as well-known in the UK as the Oscars are in the US. BAFTA gives awards for TV shows and video games as well as movies.

The British Academy Film Awards were held in mid-January. Awards of interest: Animated Film (Toy Story 3), Short Animation (The Eagleman Stag – which was stop-motion, not CG), and Special Visual Effects (Inception).

One of the British Academy Television Craft Awards was of interest: Visual Effects (The Day of the Triffids).

The British Academy Video Game Awards were held in mid-March. Awards of interest: Artistic Achievement (God of War III), and Technical Innovation (Heavy Rain). A minor controversy erupted after Red Dead Redemption did not win any awards – it turns out that it was not entered by the developers (Rockstar Games), most likely for reasons related to a perceived snub that Grand Theft Auto IV (also developed by Rockstar) received in the 2009 awards.

The last set of awards I will discuss is perhaps the most directly relevant for this blog, though not as well-known as the ones previously mentioned. The Visual Effects Society (VES) is a professional organization representing practitioners in visual effects and computer-generated animation for TV, film and video games. Among their other activities, they host the VES Awards every year in early February. Due to these awards’ focus, most of them are of interest – the full list can be found here. I’ll highlight some of the most interesting awards categories, but first I wanted to mention this year’s VES Lifetime Achievement Award recipient, Ray Harryhausen. Harryhausen is a giant in the field; his pioneering stop-motion effects work on many films, from Mighty Joe Young (1949) to Clash of the Titans (1981), inspired many of today’s most prominent filmmakers. I’ve been going through his films on recent weekends with the wife and kids; most of them are great fun and well worth seeing. I’m not sure why it took the VES nine years to recognize Harryhausen (even the Academy of Motion Picture Arts and Sciences, which snubbed him for the special effects Oscars throughout his career, finally awarded him the Gordon E. Sawyer Award in 1992).

This year’s VES video game award winners: Outstanding Real-Time Visual Effects in a Video Game (Halo: Reach; presented to Marcus Lehto, Joseph Tung, Stephen Scott, and CJ Cowan from Bungie – two clips related to the submission are available on YouTube: “work to be considered” clip, “before and after” clip), Outstanding Animated Character in a Video Game (StarCraft II – Sarah Kerrigan; presented to Fausto De Martini, Xin Wang, Glenn Ramos, and Scott Lange from Blizzard), Outstanding Visual Effects in a Video Game Trailer (World of Warcraft – for the Cataclysm cinematic; presented to Marc Messenger and Phillip Hillenbrand, Jr. from Blizzard).

Notable VES feature film-related awards: VES Visionary Award (Christopher Nolan), Outstanding Visual Effects in a Visual-Effects Driven Feature Motion Picture (Inception), Outstanding Supporting Visual Effects in a Feature Motion Picture (Hereafter), Outstanding Animation in an Animated Feature Motion Picture (How to Train Your Dragon), Outstanding Achievement in an Animated Short (Day & Night), Outstanding Animated Character in a Live Action Feature Motion Picture (Harry Potter and the Deathly Hallows: Part 1 – Dobby), Outstanding Animated Character in an Animated Feature Motion Picture (How to Train Your Dragon – Toothless), Outstanding Effects Animation in an Animated Feature Motion Picture (How to Train Your Dragon).

Recognition of exceptional work is an important part of the advancement of any professional field; it’s good to see that the field of computer graphics is so well-covered in this respect.


A partial, early list of SIGGRAPH 2011 courses has recently been published. SIGGRAPH has published such preliminary lists in previous years, typically representing around half to a third of the final course list.

The list includes six very promising courses:

  1. Advances in Real-Time Rendering in Games: Part I – this is the next iteration in a course series, organized by Natalya Tatarchuk, that has been presented at SIGGRAPH every year (with new content) since 2006. This course has been a highlight of every SIGGRAPH it has appeared in and I’m pleased to see it coming back. The instructors are not yet listed, but Natasha has always been able to round up a top-notch speaker roster, and I am confident she will do so again this year. “Advances…” has always been a full-day course, though since 2008 (when SIGGRAPH canceled the full-day course format) it’s been divided into two half-day courses. Only one of the two halves appears on this list; hopefully this is a simple oversight and SIGGRAPH didn’t reject the other half of the course!
  2. Character Rigging, Deformations, and Simulations in Film and Game Production – I’m always happy to see “X in film and games”-types courses. If well-organized and presented, such courses detail the current cutting-edge of actual production practice in both industries, emphasizing interesting differences and commonalities between the two. Such crossover content is an important feature of SIGGRAPH not found in industry-specific conferences like GDC. The topic is important; many games don’t put enough of an emphasis on animation quality. The speaker list is strong, including Tim McLaughlin (a graphics researcher at Texas A&M University who also has a nice body of film VFX work he did at ILM), Larry Cutler (a character technical director at Dreamworks Animation, formerly at Pixar), and David Coleman (a Senior CG Supervisor at Electronic Arts Canada, where he leads the EA Sports rigging team).
  3. Cinematography: The Visuals & the Story – I’m very happy to see this course on the list. I have become increasingly fascinated with cinematography over the last few years; there is a lot that video games can learn from cinematography, from creative topics like lighting and composition to technical ones such as depth of field and tone mapping. This course is taught by Bruce Block, a film producer and visual consultant who wrote a very well-regarded and influential book called The Visual Story, about how visual structure is used to present story in film. I’m trying to get a course put together for next year which would cover the topic from a different angle, as presented by working film cinematographers; the two courses should make a nicely complementary pair.
  4. Destruction and Dynamics for Film and Game Production – Another “X in film and games” course on a key topic, organized by Erwin Coumans (AMD; formerly at SCEA R&D, Havok and Guerrilla Games). Erwin is the creator of the open-source Bullet Physics engine, which has been used in many films and games. Other speakers include Takahiro Harada (a GPU physics researcher at AMD, formerly Havok and the University of Tokyo), Nafees Bin Zafar (a senior production engineer at DreamWorks Animation who won an Academy Scientific & Engineering Award for his fluid simulation work at Digital Domain), Mark Carlson (an FX R&D programmer at DreamWorks Animation, formerly at Disney Animation), Brice Criswell (a senior software engineer at ILM), Michael Baker (no affiliation listed – I’m guessing it’s the Michael Baker who teaches at the Art Institute of Las Vegas and develops tools for the Dynamica Bullet Maya plugin), and Erin Catto (a principal software engineer at Blizzard who also developed the very widely used Box2D open source 2D physics engine).
  5. PhysBAM: Physically Based Simulation – Another physics course, but with a different emphasis. It focuses on the PhysBAM simulation library developed at Stanford University and used by ILM, Disney Animation, and Pixar. Parts of PhysBAM are already open source – since the course webpage refers to “the soon-to-be-released simulation library PhysBAM”, presumably the rest will be available soon. The course is presented by Craig Schroeder (a PhD student at Stanford).
  6. Storytelling With Color – Anyone who saw my color course last year knows that I believe that getting the technical side of color right is important, for both film and games. But the reason it is important comes from the creative side – the way that a selection of colors can drive story or establish a mood. This course covers that topic, and should be of great interest to many game developers. It will be presented by Kathy Altieri (a production designer at DreamWorks Animation who worked on films including The Prince of Egypt, Over the Hedge, and How to Train Your Dragon, and previously at Disney Animation on The Little Mermaid, Aladdin, and The Lion King).

If the rest of the content is anywhere near as good as this preliminary set of courses appears to be, SIGGRAPH 2011 will be a conference to remember!

