
I3D 2011 Report – Part II: Industry Session

An industry session kicked off the second day of I3D (there were papers on the first day that I’m skipping over since I’ll do a paper roundup post later).

This session consisted of two talks by game industry speakers – Dan Baker (Firaxis) and Chris Hecker (currently working on the indie game SpyParty). I’ll summarize each talk as well as the discussion that followed.

Dan Baker’s talk: “From Papers to Pixels: how Research Finds (or Often Doesn’t) its Way into Games”

Dan started by stressing that although most interactive graphics research has games as the primary target application, researchers’ priorities and the needs of the game industry are often misaligned. Game developers prize papers that increase visual quality or productivity, describe art-directable techniques, and inspire further work.

Dan complained that researchers appear to prize novelty over practicality. Often papers intended to inspire future research instead create things nobody needs, like a cheese grater-mousetrap combination (an actual patent registration). Papers are judged by other researchers who often lack the background needed to determine their utility.

As Dan pointed out, the vast majority of graphics papers over the years have not been used. He illustrated this by contrasting two papers which had citation rates wildly out of step with their actual real-world impact.

The first example was the progressive meshes paper – one of the most widely cited papers in the field, yet one with very little practical impact. Why wasn’t it used? Dan identified three primary reasons:

  1. It solved an already-solved problem – artists were already trained to create low-polygon models and weren’t looking for automatic tools they couldn’t control and that would require changes to the toolchain. Building the mesh was a relatively small part of the art asset pipeline in any case.
  2. The process was fragile and the quality of the final result varied.
  3. Hardware advances rapidly reduced the importance of vertex and triangle throughput as bottlenecks.

In contrast, a paper on vertex cache optimization (written by the same author – Hugues Hoppe) had a very low citation rate. However, it had a profound impact in practice. Almost every game pipeline implements similar techniques – it is considered professional negligence at this point not to (a sketch of the metric these optimizers target follows the list below). Why is this paper so heavily used by game developers?

  1. It was easy to implement and integrate into the asset pipeline.
  2. It did not impact visual quality.
  3. It increased performance across the board.
  4. It remained valuable in the face of ongoing hardware changes.
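
To make the vertex-cache point concrete, here is a minimal sketch (mine, not from the talk) of the metric such optimizers try to minimize: the average cache miss ratio (ACMR), the number of vertices transformed per triangle under a simulated post-transform cache. The FIFO model and 32-entry cache size are illustrative assumptions; real GPU caches vary.

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

// ACMR: transformed vertices per triangle for an indexed triangle list,
// under a simulated FIFO post-transform vertex cache (size is an assumption).
// Lower is better: ~0.5 is the ideal for a regular mesh, 3.0 the worst case.
double acmr(const std::vector<unsigned>& indices, std::size_t cacheSize = 32)
{
    std::deque<unsigned> cache; // FIFO model of the post-transform cache
    std::size_t misses = 0;
    for (unsigned idx : indices) {
        if (std::find(cache.begin(), cache.end(), idx) == cache.end()) {
            ++misses;                  // vertex not in cache: re-transform it
            cache.push_back(idx);
            if (cache.size() > cacheSize)
                cache.pop_front();     // evict the oldest entry
        }
    }
    return static_cast<double>(misses) / (indices.size() / 3);
}
```

A vertex cache optimizer reorders the index buffer to lower this number; measuring ACMR before and after is an easy way to verify that an implementation is working.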

The reason papers languish is not developer apathy; if a paper shows promise of solving an important problem, game developers will try to implement it. Dan mentioned talking to graphics programmers at a game development conference shortly after the variance shadow maps paper was published – all of them had tried to implement it (although they eventually abandoned it due to artifacts). Dan gave three rules to help researchers seeking to do relevant research for game development:

  1. Hardware advances can rapidly render techniques irrelevant, but this can typically be predicted based on current trends.
  2. Give developers something they need – an incremental improvement to something useful is better than a profound advance in an esoteric area.
  3. Radical changes in the way things are done are difficult to sell.

Dan gave several examples of graphics research used by Firaxis in Civilization V (participating media, subsurface scattering, Banks and Ashikhmin-Shirley anisotropic BRDFs, wavelet splines, and smoothed particle hydrodynamics), and suggested that researchers interact more with game developers (via sabbaticals and internships). He ended by listing some areas he wants to see more research in:

  • MIP generation for linear reconstruction
  • Temporally stable bloom
  • Texture compression
  • Shader anti-aliasing
  • Better normal generation from height data
  • Better cloth shading

Chris Hecker’s talk: “A Game Developer’s Wish List for Researchers”

Chris recorded the audio of his talk, and has a Flash animation of the slides synchronized to the audio (as well as separate downloads) on his website.

One comment Chris made resonated with me, although it was unrelated to the main topic of his talk. He said he thought games are around where films were in 1905 in terms of development. I’ve been looking a lot into the history of films lately, and that sounds about right to me – games still have a long way to go.

Anyway, back to the theme of what game developers want from researchers. Chris stated that although it is commonly thought that the top (indeed only) priority for game developers is performance, the top three game technology priorities in his opinion are robustness, simplicity, and performance, in that order. He went into some more detail on each of these.

Robustness: this is important because of interactivity. Players can get arbitrarily close to things, look at them from different angles, etc., and everything needs to hold up. Chris described several dimensions of robustness: what are the edge cases (when does it break), what are the failure modes (how does it break), what are the downsides to using the technique, is the parameter space simply connected (i.e., can you tweak and interpolate the parameters and get reasonable results), and are the negative results described (things the researchers tried that didn’t work). Published papers in particular have a robustness problem – when game developers try to implement them, they typically don’t work. Page limits militate against a proper analysis of drawbacks, implementation tips, etc. Now that most journals and conference proceedings are going paperless, Chris claimed there is no reason to keep these restrictive page limits.

Simplicity: Chris stated that games are always at the edge of systemic complexity failure. If the toolchain is not on the edge of collapse, that means the game developers missed out on an opportunity to add more stuff. It’s the classic “straw that breaks the camel’s back” problem – the added complexity needed to implement a given technique might not seem large by itself, but the existing complexity of the system needs to be taken into account. If the technique does not provide a 10X improvement in some important metric, adding any complexity is not worth it. Simplicity, even crudeness, is a virtue in game development. Like robustness, simplicity also has many dimensions: are the implications of the technique explainable to artists, does it have few parameters, is the output intuitive, are the results art-directable, is it easy to integrate into the tools pipeline, what dependencies (code, preprocessing, markup, order, compatibility) does the technique have, etc.

Performance: as Chris described it, this is different from classic computer-science performance. The constant matters, typically more than the O() notation. Researchers should report the time their technique takes in milliseconds, rather than a frames-per-second count, which is useless since it includes the overhead of rendering some arbitrary scene. It’s important to discuss worst-case performance and optimize for that, rather than for the average case. Researchers should not focus only on embarrassingly parallel algorithms, and should do real comparisons of their technique’s performance. A “real comparison” means comparing against real techniques used in practice, against real (typically highly optimized) implementations used in the field, using real inputs, real scenes, and real working-set sizes. The issue of “real scenes” in particular is more nuanced than commonly thought – it’s not enough to have the same triangle count as a game scene. Any given game scene will have a particular distribution of triangle sizes, and particular “flavors” of overdraw, materials, shadows, lighting, etc., that all have significant performance implications.
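
To illustrate Chris’s milliseconds point with a small example of my own (not from the talk): frame rate is the reciprocal of frame time, so equal fps drops do not represent equal costs, while millisecond costs add linearly and can be compared directly.

```cpp
#include <cstdio>

// Cost of a technique in milliseconds, given frame rates measured
// without and with it. Milliseconds are comparable; fps deltas are not.
double cost_ms(double fpsBefore, double fpsAfter)
{
    return 1000.0 / fpsAfter - 1000.0 / fpsBefore;
}

int main()
{
    // A 150 fps drop at the high end costs *less* than a 5 fps drop at the low end:
    std::printf("300 -> 150 fps: %.2f ms\n", cost_ms(300.0, 150.0)); // ~3.33 ms
    std::printf(" 40 ->  35 fps: %.2f ms\n", cost_ms(40.0, 35.0));   // ~3.57 ms
}
```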

Chris talked about the importance of providing source code. Researchers typically think of their paper as rigorous and their source code as “dirty” or “hacky”. However, the source code is the most rigorous description of the algorithm: you can’t handwave details or gloss over edge cases in source code. Availability of source code greatly increases the chance of game developers using the technique. Given that only a small fraction of papers are relevant to game developers (and only a small fraction of those work as advertised), the cost of implementing each paper just to find out whether it works is prohibitively high.

As for what to research, Chris stated that researchers should avoid “solutions looking for problems” – they should talk to game developers to find out what the actual urgent problems are. If the research is in the area of graphics, it needs to integrate well with everything else. AI research is especially tricky since AI is game design; the algorithm’s behavior will directly affect the gameplay experience. In the case of animation research, it is important to remember that interactivity is king; the nicest looking animation is a failure if it doesn’t respond fluidly to player commands. Perceptual models and metrics are an important area of research – what is important when it comes to visuals?

Chris ended his talk with a few miscellaneous recommendations to researchers:

  • Don’t patent! If you do, warn us in the abstract so we can skip reading the paper.
  • Put the paper online, not behind a paywall.
  • Publish negative results – knowing what didn’t work is as important as knowing what did.
  • Answer emails – often developers have questions about the technique (not covered in the paper due to page limits).
  • Play games! Without seeing examples of what games are doing now, it’s hard to know what they will need in the future.

The discussions that followed:

These two talks sparked a lot of discussion in the Q&A sections and during subsequent breaks. The primary feeling among the researchers was that game developers have a very one-sided view of the relationship. While researchers do want their research to have practical impact, they also have more direct needs, such as funding. Computer graphics research used to be largely funded by the military; this source dried up a while ago, and many researchers are struggling for funding. If it’s true that games are the primary beneficiary of computer graphics research, shouldn’t the game industry be the primary funding source as well?

Regarding the use of “real” data, most researchers are eager to do so, but very few game companies will provide it! Valve is a notable exception, and indeed several of the papers at I3D used level data from Left 4 Dead 2. More game developers need to provide their data if they hope for research that works well on their games.

Companies in other industries do a much better job of working with academic researchers, establishing mutually beneficial relationships. Industry R&D groups (Disney Research, HP Labs, Adobe’s Advanced Technology Labs, Microsoft Research, Intel Research, NVIDIA Research, etc.) are a key interface between industry and academia; if more game companies established such groups, that could help.

Photos from I3D 2011

Provided by Mauricio Vives: feast your squinties. Lots of headshots, which I’m sure the speakers (and their moms) will appreciate. I certainly do; it’s great putting a face to a name.

Also, save the date: as the last slide shows, March 9–11, 2012 is the next I3D, in Costa Mesa, California. I’m suitably impressed that next year’s co-chairs (Sung-Eui Yoon and Gopi Meenakshisundaram) already have a place and date. This was possible since they’re co-locating I3D to directly follow IEEE VR, which is March 4–8.

One photo, “Carlo’s Models”.

I3D 2011 Report – Part I: Keynote

Today was the first day of I3D 2011. I3D (full name: ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games) is a great little conference – it’s small enough so you can talk to most of the people attending, and the percentage of useful papers is relatively high.

I’ll split my I3D report into several smaller blog posts to ensure timely updates; this one will focus on the opening keynote.

The keynote was titled “Image-Based Rendering: A 15-Year Retrospective”, presented by Richard Szeliski (Microsoft Research), who’s been involved with a lot of the important research in this area. He started with a retrospective of the research, and then followed with a discussion of a specific application: panoramic image stitching. Early research in this area led to QuickTime VR, and panoramic stitching is now supported in many cameras as a basic feature. Gigapixel panoramas are common – see a great example of one from Obama’s inaugural address by Gigapan (360 Cities is another good panorama site). Street-level views such as Google Street View and Bing’s upcoming Street Slide feature use sequences of panoramas.

Richard then discussed the mathematical basis of image-based rendering: a 4D space of light rays (reliant on the fact that radiance is constant along a ray – in participating media this no longer holds, and you have to go to the full 5D plenoptic field). Having some geometric scene information is important; it is hard to get good results with just a 4D collection of rays (this was the main difference between the first two implementations – lumigraphs and light fields). Several variants of these were developed over the years (e.g., unstructured lumigraphs and surface light fields).
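
To make the dimensionality argument concrete (the notation here is mine; the talk did not necessarily use these symbols):

```latex
% Full plenoptic field: radiance arriving at position (x, y, z)
% from direction (\theta, \phi) -- five dimensions.
L = L(x, y, z, \theta, \phi)

% In free space (no participating media) radiance is constant along a
% ray, so one dimension is redundant. A ray can instead be indexed by
% its intersections (u, v) and (s, t) with two parallel planes -- the
% 4D light field / lumigraph parameterization:
L = L(u, v, s, t)
```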

Several successful developments used lower-dimensional “2.5D” representations such as layered depth images (a “depth image” is an image with a depth value as well as a color associated with each pixel). Richard remarked that Microsoft’s Kinect has revolutionized this area by making depth cameras (which used to cost many thousands of dollars) into $150 commodities; a lot of academic research is now being done using Kinect cameras.
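
As a data-structure sketch (my own simplification, not code from the talk), a layered depth image generalizes a depth image by storing several depth samples per pixel, so that surfaces hidden in the reference view survive reprojection to nearby viewpoints:

```cpp
#include <cstdint>
#include <vector>

// One sample: a color plus its depth along the pixel's viewing ray.
struct LdiSample {
    std::uint32_t rgba;
    float depth;
};

// A layered depth image: a variable-length list of samples per pixel,
// ordered front to back. A plain depth image is the one-sample case.
struct LayeredDepthImage {
    int width = 0, height = 0;
    std::vector<std::vector<LdiSample>> pixels; // width * height lists
};
```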

Image-based modeling started primarily with Paul Debevec’s “Facade” work in 1996. The idea was to augment a very simple geometric model (which back then was created manually) with “view dependent textures” that combine images from several directions to remove occluded pixels and provide additional parallax. Now this can be done automatically at a large scale – Richard showed an aerial model of the city of Graz which was automatically generated in 2009 from several airplane flyovers – it allowed for photorealistic renders with free camera motion in any direction.
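
The core of view-dependent texturing can be sketched as a weighting scheme (my simplification, omitting Facade’s per-pixel visibility handling): each source photograph is weighted by how closely its capture direction matches the current view direction.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Blend weights for view-dependent texture mapping: photos captured from
// directions close to the current view dominate the blend. All direction
// vectors are assumed normalized.
std::vector<float> vdtmWeights(const std::vector<Vec3>& captureDirs,
                               const Vec3& viewDir)
{
    std::vector<float> w(captureDirs.size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < captureDirs.size(); ++i) {
        w[i] = std::max(0.0f, dot(captureDirs[i], viewDir)); // cosine falloff
        sum += w[i];
    }
    if (sum > 0.0f)
        for (float& wi : w)
            wi /= sum; // normalize so the weights sum to one
    return w;
}
```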

Richard also discussed environment matting, as well as video-based techniques such as video rewrite, video matting, and video textures. This last paper in particular seems to me like it should be revisited for possible game applications – it’s a powerful extension of the common practice of looping a short sequence of images on a sprite or particle.
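
Reduced to a sketch, the heart of the video textures idea is to find pairs of visually similar frames and loop between them (this is my simplification; the actual paper compares short windows of frames to preserve dynamics, and chooses transitions probabilistically):

```cpp
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

using Frame = std::vector<float>; // grayscale pixels; all frames equal size

// Sum of squared pixel differences between two frames.
static float frameDistance(const Frame& a, const Frame& b)
{
    float d = 0.0f;
    for (std::size_t p = 0; p < a.size(); ++p) {
        const float diff = a[p] - b[p];
        d += diff * diff;
    }
    return d;
}

// Find the most seamless loop of at least minLength frames: the pair
// (i, j) with the smallest difference. Play frames [i, j), then jump
// back from frame j - 1 to frame i, indefinitely.
std::pair<std::size_t, std::size_t>
bestLoop(const std::vector<Frame>& frames, std::size_t minLength)
{
    float best = std::numeric_limits<float>::max();
    std::pair<std::size_t, std::size_t> loop{0, 0};
    for (std::size_t i = 0; i + minLength < frames.size(); ++i)
        for (std::size_t j = i + minLength; j < frames.size(); ++j) {
            const float d = frameDistance(frames[i], frames[j]);
            if (d < best) { best = d; loop = {i, j}; }
        }
    return loop;
}
```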

Richard next talked about cases where showing the results of image-based rendering in a less realistic or lower-fidelity way can actually be better – similar to the “uncanny valley” in human animation.

The keynote ended by summarizing the areas where image-based rendering currently works well, and areas where more work is needed. Automatic 3D pose estimation of the camera from images is pretty robust, as are automatic aerial modeling and manually augmented ground-level modeling. However, some challenges remain: accurate boundaries and matting, reflections & transparency, integration with recognition algorithms, and user-generated content.

For anyone interested in more in-depth research into these topics, Richard has written a computer vision book, which is also available freely online.

Just two weeks until the SIGGRAPH General Submission deadline!

SIGGRAPH 2011 will be in Vancouver, on August 7-11, 2011. I’ve given presentations at SIGGRAPH several times; each time was a great experience where I learned a lot and met some pretty awesome graphics people from the world’s top research institutions, film production companies, and game development studios.

SIGGRAPH has several programs at which game developers can show their work; I wanted to point out that two of the most important (Talks and Dailies) have deadlines on February 18th, less than two weeks away! Fortunately submitting a proposal to one of these programs doesn’t take much time. However, getting approval from your boss may take a while, so you don’t want to wait.

SIGGRAPH Talks are 20-minute presentations which typically contain “nuggets” of novel film or game production tech. These can be rendering or shading techniques, tools for artists, enhancements done to support a tricky character design, etc. If it’s something a programmer or technical artist is proud of having done and it’s at least tangentially graphics-related, chances are it would make a good Talk submission. Submitting a talk only requires creating a one-page abstract; if the talk is accepted, you have until August to make 20 minutes worth of slides – not too bad. To get an idea of the level of detail expected in the abstract, and of the variety of possible talks, here are some film and game Talk abstracts from 2009 and 2010: Houdini in a Games Pipeline, Spore API: Accessing a Unique Database of Player Creativity, Radially-Symmetric Reflection Maps, Underground Cave Sequence for Land of the Lost, Hatching an Imaginary Bird, Fast Furry Ray Gathering, and NPR Gabor Noise for Coherent Stylization. If you are reading this, please consider submitting the coolest thing you did last year as a Talk; the small time investment will repay itself many times over.

SIGGRAPH Dailies are relatively new (first introduced at SIGGRAPH 2010). These are very short (under two minutes!) presentations of individual art assets, such as models, animations, particle effects, shaders, etc. Unlike the rest of SIGGRAPH, which emphasizes novel techniques, Dailies emphasize excellence in the result. Every good game or movie has many individual bits of excellence, each the result of an artist’s talent, imagination, and sweat. These are often overlooked, or unknown outside the studio; Dailies aim to correct that. Dailies submissions are even easier than Talk submissions. All that is required is a short (60-90 second) video of the art asset – no audio, just something simple like an animation loop or model turntable. You will also need a short backstory: something that gives a feeling for the effort that went into the work, including any notable production frustrations, unlikely inspirations, sudden strokes of genius, etc. Don’t write too much – it should take about as long to say as the video lasts (60-90 seconds). To get a better idea of what a Dailies presentation looks like, here are two examples. The list of Dailies presented at SIGGRAPH 2010 can be found here; it runs the gamut from Pixar and Disney movies to student projects. I suspect not many artists read this blog, so any game programmers reading this, please forward it to the artists at your studio.

Good luck with your submissions!