Author Archives: Naty

FMX and Triangle Game Conference 2010

This post is about two conferences that might not be as familiar to readers of this blog as SIGGRAPH and GDC.

FMX is an annual conference run by the Baden-Württemberg Film Academy, held in Stuttgart, Germany, at the beginning of May.  This year marks the 15th FMX, and the content looks quite promising.

There are a bunch of game talks, including talks about Split/Second, Heavy Rain, Fight Night 4, Alan Wake, and Habbo Hotel, two talks on God of War III, and one on Arkham Asylum (the last three are repeats of GDC talks).  However, most of the talks relate to film production, including presentations on The Princess and the Frog, Tangled, 2012, Alice in Wonderland, Ice Age 3, Iron Man 2, Clash of the Titans, Sherlock Holmes, District 9, Shutter Island, Planet 51, Avatar, A Christmas Carol, The Imaginarium of Dr. Parnassus, How to Train Your Dragon, and Where the Wild Things Are.  FMX 2010 also has master classes on various DCC applications and middleware libraries, recruiting talks, presentations of selected SIGGRAPH 2009 papers, and more.  Attendance fees are quite reasonable (200 Euros, 90 for students), so this conference should be good value for readers in Europe who can travel cheaply to Germany.

The Triangle Game Conference is held in Raleigh, North Carolina.  Its name comes from the “research triangle” defined by Raleigh, Durham, and Chapel Hill.  This area is home to several prominent game companies, such as Epic Games, Red Storm Entertainment, and branches of Electronic Arts and Insomniac Games.  I first heard of this conference last year, when it hosted a very good talk by Crytek on deferred shading in CryEngine 3.  This year, the content looks interesting, if a bit mixed; the presentations by Epic and Insomniac seem to be the best ones.  Definitely worth attending if you’re in that area, but I wouldn’t travel far for it.

Morphological Antialiasing in God of War III

Eric wrote a post back in July about a paper called Morphological Antialiasing, which had been presented at HPG 2009 (source code for the paper is available here).  The paper described a post-processing algorithm which could smooth out edges as if by magic.  Although the screenshots were impressive, the technique seemed too expensive to be practical for games on current hardware; there were also reports of bad artifacts when the technique was applied to moving images.  For these reasons I didn’t pay much attention to it.  It was reported (including by us) that the game The Saboteur was using this technique on the PS3, but this turned out to be a false alarm.

However, God of War III is actually using Morphological Antialiasing.  I’ve looked closely at the game, and the technique they use does not appear to exhibit significant motion artifacts; it definitely looks better than the 2x MSAA it replaced (which was used in the E3 2009 demo).  According to the game’s art director, the technique used “goes beyond” the original paper; this may mean that they improved it in ways that reduce the motion artifacts.

My initial impression that the technique was too expensive did not take into account the impressive horsepower of the PS3’s Cell chip.  After optimization, the technique runs in 20 milliseconds on a single SPU; running it on five SPUs in parallel brings that down to 4 milliseconds.  Most importantly, turning off MSAA saved them 5 milliseconds of GPU time, which on the PS3 is a significant gain (the GPU is most often the bottleneck in PS3 games).
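
For readers curious what such a filter looks like, here is a heavily simplified, CPU-side sketch of the basic idea in Python: find luminance discontinuities, treat contiguous runs of edge pixels as approximate silhouette edges, and blend across each edge with coverage-derived weights.  This is a toy illustration only (horizontal edges, a crude triangular coverage ramp, none of the original paper’s L/Z/U pattern classification), and emphatically not the God of War III SPU implementation:

    import numpy as np

    def luminance(img):
        # img: float32 array of shape (H, W, 3), values in [0, 1]
        return img @ np.array([0.299, 0.587, 0.114], dtype=np.float32)

    def mlaa_horizontal(img, threshold=0.1):
        """Blend along horizontal discontinuity runs; vertical edges and
        the original paper's edge-shape classification are omitted."""
        out = img.copy()
        lum = luminance(img)
        H, W = lum.shape
        # Step 1: mark pixels whose luminance differs sharply from the pixel above.
        edge = np.zeros((H, W), dtype=bool)
        edge[1:, :] = np.abs(lum[1:, :] - lum[:-1, :]) > threshold
        # Steps 2-3: find contiguous horizontal runs of edge pixels and blend
        # each pixel with its upper neighbor, using a triangular coverage ramp
        # that is strongest at the ends of the run (where the inferred
        # silhouette line crosses pixel centers) and zero in the middle.
        for y in range(1, H):
            x = 0
            while x < W:
                if not edge[y, x]:
                    x += 1
                    continue
                start = x
                while x < W and edge[y, x]:
                    x += 1
                run_len = x - start
                for i in range(run_len):
                    d = min(i, run_len - 1 - i)                  # distance to nearest run end
                    w = 0.5 * max(0.0, 1.0 - 2.0 * d / run_len)  # crude coverage estimate
                    out[y, start + i] = (1.0 - w) * img[y, start + i] \
                                        + w * img[y - 1, start + i]
        return out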

Cascaded Light Propagation Volumes for Indirect Illumination

This I3D 2010 paper by Anton Kaplanyan (Crytek) and Carsten Dachsbacher (University of Stuttgart) is now online on Crytek’s publications page, with video and talk slides.  The paper extends, and describes in more detail, the real-time global illumination technique Anton presented at SIGGRAPH 2009.  The most significant extension over the SIGGRAPH 2009 presentation is the addition of coarse occlusion for indirect bounce light.

Although there have been many recent papers on “real-time” global illumination, this technique is the only one so far that is at all feasible for current-generation consoles.  Crytek’s new CryEngine 3 has implementations of it on Xbox 360 and PlayStation 3, and the technique will presumably be used in the upcoming Crysis 2 as well.
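
To give a rough flavor of the grid-propagation idea (not Crytek’s actual method), here is a toy sketch reduced to a single isotropic value per cell.  The real technique stores low-order spherical harmonics coefficient vectors per cell, propagates them directionally between face neighbors, and (in this paper) attenuates them with coarse occlusion; none of that is modeled here:

    import numpy as np

    def propagate(grid, steps, damping=0.9):
        # grid: float32 array (X, Y, Z) holding injected bounce-light intensity.
        # Returns accumulated radiance after iteratively redistributing the
        # injected light to face neighbors, LPV-style.
        accum = grid.copy()
        for _ in range(steps):
            # Each cell receives an equal share of each of its six face
            # neighbors' light (isotropic flood; real LPV weights this by
            # directional SH lobes).  np.roll wraps at the volume borders;
            # a real implementation would clamp instead.
            grid = damping * sum(np.roll(grid, s, axis=a)
                                 for a in (0, 1, 2) for s in (1, -1)) / 6.0
            accum += grid
        return accum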

Game Engine Gems

A little under a year ago, we mentioned the call for papers for a new “gems” book: Game Engine Gems, edited by Eric Lengyel.  It’s available for pre-order on Amazon, which is not surprising since it’s due to launch at GDC, just two weeks from now.  The table of contents is available on the book’s website and looks promising.  Although the book contains some rendering chapters, its focus is broader, similar to the successful Game Programming Gems series, to which it is likely to be a worthy competitor.  I’ve added it to our upcoming books list.

Game developers – submit a SIGGRAPH Talk before February 18!

The deadline for submitting a Talk to SIGGRAPH is February 18 – less than two weeks away as I’m writing this.  Although the time is short, all game developers working in graphics should seriously consider submitting one; it’s not a lot of work, and the potential benefits are huge.  As a member of the 2010 conference committee, I thought I’d take a little time to elucidate.

SIGGRAPH 2010 is in Los Angeles this summer.  Although most people think of SIGGRAPH in connection with academic papers, it is also where film production people share practical tips and tricks, show off cool things they did on their last film, learn from their colleagues, and make professional connections.  Over the last few years, there has been a steadily growing game developer presence as well, which is exciting because SIGGRAPH is a unique opportunity for these two graphics communities to meet and learn from each other. The convergence between the technology, production methods, and artistic vision of film and games is a critical trend in both industries, and SIGGRAPH is where the rubber meets the road.

In 2010, SIGGRAPH is making a big push to increase the amount of game content.  Stop and think for a minute; isn’t there something you’ve done over the past year or two that’s just wicked awesome?  Wouldn’t it be cool to show it off not just to your fellow game developers, but to people from companies like ILM, Pixar and Sony Pictures Imageworks?  Imagine the conversations you could have, about adapting your technique for film use or improving it with ideas taken from film production!

Most film production content is presented as 20-minute Talks (formerly called Sketches); this format makes the most sense for game developers as well.  Submitting a Talk requires only a one-page abstract and takes little time.  If you happen to have some video or additional documentation ready, you can attach it as supplementary material.  This can help the reviewers assess your technique, but is not required.  If your talk is accepted, you have until the day of your presentation in late July to prepare slides (just 20 minutes’ worth).

To give a sense of the level of detail expected in the one-page abstract, here are three examples.

A little time invested in submitting a Talk for SIGGRAPH 2010 can pay back considerable dividends in career development and advancement, so go for it!

I3D 2010 Papers

The Symposium on Interactive 3D Graphics and Games (I3D) has been a great little conference since its genesis in the mid-80s, featuring many influential papers over this period.  You can think of it as a much smaller SIGGRAPH, focused on topics of interest to readers of this blog.  This year, the I3D papers program is especially strong.

Most of the papers have online preprints (accessible from Ke-Sen Huang’s I3D 2010 paper page), so I can now do a proper survey.  Unfortunately, two of the papers (Stochastic Transparency and LEAN Mapping) were available to me only under condition of non-disclosure.  Both are very good; I look forward to being able to discuss them publicly (at the latest, when I3D starts on February 19th).

Other papers of interest:

  • Fourier Opacity Mapping riffs off the basic concept of Variance Shadow Maps, Exponential Shadow Maps (see also here), and Convolution Shadow Maps.  These techniques store a compact statistical depth distribution at each texel of a shadow map; here, the quantity stored is opacity as a function of depth, similar to the Deep Shadow Maps technique commonly used in film rendering.  This is applied to shadows from volumetric effects (such as smoke), including self-shadowing.  This paper is particularly notable in that the technique it describes has been used in a highly regarded game (Batman: Arkham Asylum).
  • Volumetric Obscurance improves upon SSAO by making better use of each depth buffer sample; instead of treating each sample as a point sample (a simple binary comparison between the depth buffer and the sampled depth), it treats each one as a line sample (taking full account of the difference between the two values); a minimal sketch of this idea appears after this list.  It is similar to a concurrently developed paper (Volumetric Ambient Occlusion); the techniques from either paper can be applied to most SSAO implementations to improve quality or increase performance.  The Volumetric Obscurance paper also includes the option of extending the idea further and performing area samples; this can produce a simple crease-shading effect with a single sample, but does not scale well to multiple samples.
  • Spatio-Temporal Upsampling on the GPU – games commonly use cross-bilateral filtering to upsample quantities computed at low spatial resolutions.  There have also been several recent papers about temporal reprojection (reprojecting values from previous frames for reuse in the current frame); Gears of War 2 used this technique to improve the quality of its ambient occlusion effects.  This paper combines the two, filtering samples across both space and time (a sketch of the cross-bilateral part appears after this list).
  • Efficient Irradiance Normal Mapping – at GDC 2004, Valve introduced their “Irradiance Normal Mapping” technique for combining a low-resolution precomputed lightmap with a higher-resolution normal map.  Similar techniques are now common in games, e.g. spherical harmonics (used in Halo 3), and directional lightmaps (used in Far Cry).  Efficient Irradiance Normal Mapping proposes a new basis, similar to spherical harmonics (SH) but covering the hemisphere rather than the entire sphere.  The authors show that the new basis produces superior results to previous “hemispherical harmonics” work.  Is it better than plain spherical harmonics?  The answer depends on the desired quality level; with four coefficients, both produce similar results.  However, with six coefficients the new basis performs almost as well as quadratic SH (nine coefficients), making it a good choice for high-frequency lighting data.
  • Interactive Volume Caustics in Single-Scattering Media – I see real-time caustics as more of an item to check off a laundry list of optical phenomena than something that games really need, but they may be important for other real-time applications.  This paper handles the even more exotic combination of caustics with participating media (I do think participating media in themselves are important for games).  From a brief scan of the technique, it seems to involve drawing lines in screen space to render the volumetric caustics.  They do show one practical application for caustics in participating media – underwater rendering.  If this case is important to your application, by all means give this paper a read.
  • Parallel Banding Algorithm to Compute Exact Distance Transform with the GPU – I’m a big fan of Valve’s work on using signed distance fields to improve font rendering and alpha testing.  These distance fields are typically computed offline (a process referred to as “computing a distance transform”, sometimes “a Euclidean distance transform”).  For this reason, brute-force methods are commonly employed (one is sketched after this list), though there has been a lot of work on more efficient algorithms.  This paper gives a GPU-accelerated method which could be useful if you are looking to speed up your offline tools (or if you need to compute alpha silhouettes on the fly for some reason).  Distance fields have other uses (e.g. collision detection), so there may very well be other applications for this paper.  Notably, the paper’s project page includes links to source code.
  • A Programmable, Parallel Rendering Architecture for Efficient Multi-Fragment Effects – one of the touted advantages of Larrabee was the promise of flexible graphics pipelines supporting multi-fragment effects (A-buffer-like techniques such as order-independent transparency and rendering to deep shadow maps).  Despite a massive software engineering effort (and an instruction set tailored to help), Larrabee has not yet been able to demonstrate software rasterization and blending running at speeds comparable to dedicated hardware.  The authors of this paper attempt to do the same on off-the-shelf NVIDIA hardware using CUDA – a very aggressive target!  Do they succeed?  It’s hard to say.  They do show performance pretty close to the same scene rendered through OpenGL on the same hardware, but until I have time to read the paper more carefully (with an eye on caveats and limitations) I reserve judgment.  I’d be curious to hear what other people have to say on this one.
  • On-the-Fly Decompression and Rendering of Multiresolution Terrain (link is to an earlier version of the paper) – the title pretty much says it all.  They get compression ratios between 3:1 and 12:1, which isn’t bad for on-the-fly GPU decompression.  A lot of water has gone under the terrain-rendering bridge since I last worked on a terrain renderer, so it’s hard for me to judge how this compares to previous work; if you’re into terrain rendering, give it a read.
  • Radiance Scaling for Versatile Surface Enhancement – this could be thought of as an NPR technique, but it’s a lot more subtle than painterly techniques.  It’s more like a “hyper-real” or “enhanced reality” technique, like ambient occlusion (which darkens creases a lot more than a correct global illumination solution, but often looks better; 3D Unsharp Masking achieves a more extreme version of this look).  Radiance Scaling for Versatile Surface Enhancement is a follow-on to a similar paper by the same authors, Light Warping for Enhanced Surface Depiction.  Light warping changes illumination directions based on curvature, while radiance scaling scales the illumination instead, which enables cheaper implementations and increased flexibility.  With some simplifications and optimizations, the technique should be fast enough for most games, making this paper useful to game developers trying to give their game a slightly stylized or “hyper-real” look.
  • Cascaded Light Propagation Volumes for Real-time Indirect Illumination – this appears to be an updated (and hopefully extended) version of the CryEngine 3 technique presented by Crytek at a SIGGRAPH 2009 course (see slides and course notes).  This technique, which computes dynamic approximate global illumination by propagating spherical harmonics coefficients through a 3D grid, was very well-received, and I look forward to reading the paper when it is available.
  • Efficient Sparse Voxel Octrees – there has been a lot of excited speculation around raycasting sparse voxel octrees since John Carmack first hinted that the next version of id Software’s rendering engine might be based on this technology.  A SIGGRAPH 2008 presentation by Jon Olick (then at id) raised the excitement further (demo video with unfortunate soundtrack here).  The Gigavoxels paper is another example of recent work in this area.  Efficient Sparse Voxel Octrees promises to extend this work in interesting directions (according to the abstract – no preprint yet, unfortunately).
  • Assisted Texture Assignment – the highly labor-intensive (and thus expensive) nature of art asset creation is one of the primary problems facing game development.  According to its abstract (no preprint yet), this paper proposes a solution to part of this problem – assigning textures to surfaces.  There is also a teaser posted by one of the authors, which looks promising.
  • Epipolar Sampling for Shadows and Crepuscular Rays in Participating Media with Single Scattering – volumetric effects such as smoke, shafts of light (also called “god rays” or crepuscular rays) and volumetric shadows are important in film rendering, but usually missing (or coarsely approximated) in games.  Unfortunately, nothing is known about this paper except its title and the identities of its authors.  I’ll read it (and pass judgment on whether the technique seems practical) when a preprint becomes available (hopefully soon).
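
For the Volumetric Obscurance entry above, here is a minimal sketch contrasting a classic SSAO point sample with a line sample, for a single sample ray at one shaded pixel.  The names and the depth convention (larger z is farther from the camera) are illustrative, not taken from the paper:

    def clamp(x, lo, hi):
        return max(lo, min(hi, x))

    def point_sample_occlusion(depth_buffer_z, sample_z):
        # Classic SSAO: a binary occluded/unoccluded test per sample.
        return 1.0 if depth_buffer_z < sample_z else 0.0

    def line_sample_occlusion(depth_buffer_z, seg_near_z, seg_far_z):
        # Volumetric Obscurance: the sample is the segment [seg_near_z,
        # seg_far_z] through the sampling sphere; return the fraction of it
        # lying behind the depth buffer -- a continuous value instead of 0/1.
        seg_len = seg_far_z - seg_near_z
        covered = clamp(seg_far_z - depth_buffer_z, 0.0, seg_len)
        return covered / seg_len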
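
For the Spatio-Temporal Upsampling entry above, here is a minimal sketch of the spatial half (cross-bilateral upsampling) using a depth-similarity weight only; production versions typically add bilinear spatial weights and normal comparisons, and this paper adds temporally reprojected samples on top.  All parameter names are illustrative:

    import numpy as np

    def cross_bilateral_upsample(coarse, coarse_depth, full_depth, sigma_d=0.1):
        # coarse, coarse_depth: (h, w) arrays; full_depth: (2h, 2w) array.
        H, W = full_depth.shape
        out = np.zeros((H, W), dtype=np.float32)
        for y in range(H):
            for x in range(W):
                cy, cx = y // 2, x // 2
                num = den = 0.0
                for dy in (0, 1):          # gather the 2x2 coarse neighborhood
                    for dx in (0, 1):
                        sy = min(cy + dy, coarse.shape[0] - 1)
                        sx = min(cx + dx, coarse.shape[1] - 1)
                        # Weight each coarse sample by depth agreement, so the
                        # upsampled value does not bleed across discontinuities.
                        dd = full_depth[y, x] - coarse_depth[sy, sx]
                        w = np.exp(-dd * dd / (2.0 * sigma_d * sigma_d))
                        num += w * coarse[sy, sx]
                        den += w
                out[y, x] = num / max(den, 1e-6)
        return out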
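
For the Parallel Banding Algorithm entry above, here is the sort of brute-force signed Euclidean distance transform a naive offline tool might use: every pixel is compared against every pixel of the opposite set, which is exactly the cost the paper’s GPU algorithm avoids.  The sign convention (positive inside) is illustrative:

    import numpy as np

    def signed_distance_field(mask):
        # mask: boolean (H, W) array, True inside the glyph/shape.
        # Assumes both the inside and outside sets are non-empty.
        inside = np.argwhere(mask).astype(np.float32)
        outside = np.argwhere(~mask).astype(np.float32)
        H, W = mask.shape
        sdf = np.empty((H, W), dtype=np.float32)
        for y in range(H):
            for x in range(W):
                # Distance to the nearest pixel of the opposite set; positive
                # inside the shape, negative outside.
                other = outside if mask[y, x] else inside
                d = np.sqrt(((other - (y, x)) ** 2).sum(axis=1).min())
                sdf[y, x] = d if mask[y, x] else -d
        return sdf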

The remaining papers are outside my area of expertise, so it’s hard for me to judge their usefulness.

2009 Academy Sci & Tech Awards

Oops – I forgot to include Christophe Hery in the point-based color bleeding award below.  This has now been fixed; apologies and congratulations to Christophe.  Many thanks to Margarita Bratkova for pointing out the error!

Last week, the Academy of Motion Picture Arts and Sciences (best known for its annual Academy Awards, or “Oscars”) announced the winners of its 2009 Scientific & Technical Awards.  No Awards of Merit (the highest award level) were given this year – those are the ones that come with an “Oscar” statuette and are shown in the Academy Awards telecast (RenderMan and Maya have won Awards of Merit in previous years).

Two computer graphics-related Scientific and Engineering Awards were given this year; these are the second-highest award level and come with a bronze tablet:

  • Per Christensen, Michael Bunnell and Christophe Hery for point-based indirect illumination; in an interesting inversion of usual practice, this fast approximate global illumination / ambient occlusion technique started out as a real-time GPU technique and ended up as an offline CPU rendering technique (first used in Pirates of the Caribbean: Dead Man’s Chest, it is now a standard part of Pixar’s RenderMan).  A recent SIGGRAPH Asia paper describes a closely related technique.
  • Paul Debevec, Tim Hawkins, John Monos and Mark Sagar for Light Stage and image-based character relighting.  The work done by Paul Debevec and his team at USC’s Institute for Creative Technologies on image-based capture and lighting has been hugely influential, resulting in widespread adoption of light probes, multi-exposure HDR image capture, and many other techniques commonly used in games as well as film.

One of the Technical Achievement Awards (the third level, which comes with a certificate) is also of interest to readers of this blog:

  • Hayden Landis, Ken McGaugh and Hilmar Koch for ambient occlusion.  The pioneering work on ambient occlusion for film production was done by these guys at ILM; it was first published in the RenderMan in Production course at SIGGRAPH 2002 (the relevant chapter of the course notes can be found here).  Of course, ambient occlusion is heavily used in real-time applications as well.

In an interesting related development, eight separate Scientific and Engineering Awards and two Technical Achievement Awards were given for achievements related to the digital intermediate process (digital scanning and processing of film data), many of them for look-up-table (LUT) based color correction (LUTs have also been used for color correction in games).  The Academy tends to batch up awards in this way for technologies whose “time has come” (two years ago there were a lot of fluid simulation awards).  Given that another of the Technical Achievement Awards was for a motion capture system, we can see how quickly digital technology has come to dominate the film industry.  As recently as 2005, most of the awards were for things like camera systems; this year only one award (for a lens motor system) was for non-digital technology.

Congratulations to all the winners!

Sony Pictures Imageworks open source projects

In my HPG 2009 report, I mentioned that Sony Pictures Imageworks was releasing several of their projects as open source, most notably a shading language, OSL, tailored to ray-tracing. For a long time, there was no actual information available on OSL, but now (tipped off by a recent ompf post) I see that some has appeared.

OSL is hosted on Google Code; the main page is here, and an introductory document can be found here.  The language has several features that seem well-designed for ray-tracing; someone with more knowledge in this area will have to weigh in on its usefulness.

US Gov Requests Feedback on Open Access – ACM Gets it Wrong (Again)

By Naty Hoffman

In 2008, legislation was passed requiring all NIH-funded researchers to submit their papers to an openly available repository within a year of publication.  Even this modest step towards full open access was immediately attacked by rent-seeking scientific publishers.

More recently the White House Office of Science and Technology Policy started to collect public feedback on expanding open access.  The first phase of this process ends on December 20th.

ACM’s official comment makes it clear that it is joining the rent-seekers.  This is perhaps not surprising, considering the recent ACM take-down of Ke-Sen Huang’s paper link pages (Bernard Rous, who signed the comment, is also the person who issued the take-down).  In the paper link case, ACM did eventually see reason.  At the time, I naively believed this marked a fundamental change in ACM’s approach; I have been proven wrong.

ACM’s comment can be found towards the bottom of the linked page; I will quote and respond to the salient parts here.

ACM: “We think it is imperative that deposits be made in institutional repositories vs. a centralized repository…”

A centralized repository is more valuable than a scattering of papers on authors’ institutional web pages.  ACM evidently agrees, given that it has gone to the trouble of setting up just such a repository (the Digital Library).  ACM’s only problem with a central, open-access repository is that it would compete with its own (closed) one.  Since an open repository contributes far more value to the community than one locked behind a paywall, ACM appears to value its revenue streams over the good of the community it supposedly exists to serve.

ACM: “…essentially everything ACM publishes is freely available somewhere on the Web… In our community, as in others, voluntary posting is working.”

This is demonstrably false.  Almost every graphics conference has papers which are not openly available.  Many computing fields are even worse off.

Most infuriatingly, ACM presents a false balance between its own needs and the needs of the computing community:

ACM: “…there is a fundamental balance or compromise in how ACM and the community have approached this issue – a balance that serves both… We think it is imperative that any federally mandated open access policy maintain a similar balance… There is an approach to open access that allows the community immediate access to research results but also allows scholarly publishers like ACM to sustain their publishing programs. It is all about balance.”

What nonsense is this?  The ACM has no legitimate needs or interests other than those of its members!  How would U.S. voters react to a Senator claiming that a given piece of legislation (say, one reducing restrictions on campaign financing) “strikes a fundamental balance between the needs of the Senate and those of the United States of America”?  ACM has lost its way, profoundly and tragically.

As much as Mr. Rous would like to think otherwise, ACM’s publishing program is not an end in itself, but a means to an end.  ACM arguing that an open repository of papers would be harmful because it “undermines the unique value” of ACM’s closed repository is like the Salvation Army arguing that a food stamp program is harmful because it “undermines the unique value” of their soup kitchens.

If you are an ACM member, these statements were made in your name.  Regardless of membership, if you care at all about access to research publications, please make your opinion known.  Read the OSTP blog post carefully, and post a polite, well-reasoned argument in the comments.  Note that you first need to register and log in – the DigitalKoans blog has the details:

Note: To post comments on the OSTP Blog, you must register and login. There are registration and login links on the sidebar of the blog home page at the bottom right (these links are not on individual blog postings).

Hurry!  The deadline for Phase I comments (which include the ACM comment) is December 20th, though you can make your opinion known in the other phases as well.