The evils of fps

I completely agree with this blog post by Humus on the uselessness of the performance numbers in most rendering papers.  This is something that often comes up when reviewing papers.  Frames-per-second (fps) numbers are less than useless, since they include extraneous information (the time taken to render parts of the scene not using the technique in question) and make it very difficult to do meaningful comparisons.  The performance measurement game developers care about is the time to execute the technique in milliseconds.
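
As an illustration of why milliseconds are the right unit (a small sketch with made-up numbers, not data from any paper): the same fps drop corresponds to very different costs depending on the baseline frame rate, while the millisecond difference measures the technique's cost directly.

```python
# Illustrative only: converting frame rates to per-frame times in milliseconds.
def frame_time_ms(fps):
    """Milliseconds per frame for a given frames-per-second figure."""
    return 1000.0 / fps

def technique_cost_ms(fps_without, fps_with):
    """Cost of a technique in ms: the difference of the per-frame times."""
    return frame_time_ms(fps_with) - frame_time_ms(fps_without)

# Two scenes each lose "100 fps" when the technique is turned on...
print(technique_cost_ms(300.0, 200.0))  # ~1.7 ms  -- cheap
print(technique_cost_ms(160.0,  60.0))  # ~10.4 ms -- very expensive
# ...so identical fps deltas hide a sixfold difference in actual cost,
# which is why milliseconds are the meaningful measurement.
```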

Some papers do get it right, for example this one.  The authors use milliseconds for detailed performance comparisons, only using fps to show how overall performance varies with camera and light position (which is a rare legitimate use of fps).

SIGGRAPH 2009 Posters

The complete list of accepted posters can be found here.  Posters are in a sense the “smallest” SIGGRAPH contribution – each one constitutes a single (large) page of description.  All the posters are in one large room, so it doesn’t take long to just walk past them and see what looks interesting.  There are also two sessions (each one hour long) where a presenter stands beside each poster and discusses it with anyone who is interested.

The poster list has no abstracts, just titles.  Judging from those, the ones that I find potentially interesting are:

  • Polygonal Functional Hybrids for Computer Animation and Games
  • The UnMousePad – The Future of Touch Sensing
  • Data-Driven Diffuse-Specular Separation of Spherical Gradient Illumination
  • Lace Curtain: Modeling and Rendering of Woven Structures Using BRDF/BTDF
  • Beyond Triangles: Gigavoxels Effects in Video Games
  • Cosine Lobe-Based Relighting From Gradient Illumination Photographs
  • Curvature-Dependent Local Illumination Approximation for Translucent Materials
  • Direct Illumination From Dynamic Area Lights
  • Gaussian Projection: A Novel PBR Algorithm for Real-Time Rendering
  • Interactive Lighting Manipulation Application on GPU
  • Reflection Model of Metallic Paints for Reflectance Acquisition
  • Variance Minimization Light-Probe Sampling

SIGGRAPH 2009 Birds of a Feather events

These events are proposed and organized by SIGGRAPH attendees, not the conference organizers (who simply approve them and provide rooms).  They range from large, elaborate presentations to small meetings.  The list of Birds of a Feather (BoF) events (with dates, times, and locations) is available here.

The OpenGL BoF is one of the largest and longest-running.  Each year, it is the premier event to hear about the latest developments in OpenGL.  This year it is joined (actually, preceded by one day) by the OpenCL BoF, which discusses this new API for general-purpose GPU computation.

Although most render farms are used for film production, many game developers also have render farms which they use for lighting and visibility precomputations.  For this reason, some game developers may wish to attend the Renderfarming, Job Queueing, and Distributed Rendering Performance event.

The Computer Graphics for Simulation BoF and the Dynamic Simulation Birds of a Feather also seem relevant for game developers, although they are not specifically concerned with rendering.

The Interactive Ray Tracing BoF has already been mentioned by Eric, and is of interest to many readers of this blog.

One of the uses of real-time graphics with which I have very little familiarity is visualization of molecules, which has its own BoF event (the Molecular Graphics BoF).

Besides BoFs for people interested in specific graphics topics or products, there are also BoF events for particular groups within the graphics community, such as the Women in Animation BoF and SIGGIG: Gays in Graphics.  One of the most interesting of these is the Computer Graphics Pioneers Reception, which is intended for people who have been contributing to computer graphics for at least 20 years (if you fit this description and are interested in being a registered Computer Graphics Pioneer, the membership details are here).

Other BoFs are intended for people from particular regions or who went to certain schools, like the Taipei ACM SIGGRAPH Reunion, the ACCAD/OSU Alumni Gathering, the Tokyo ACM SIGGRAPH Chapter Party, the UNC SIGGRAPH Alumni Reception, Reuniao dos Brasileiros – Brazilian BoF, Purdue University Reunion, RIT Alumni Reception, and Reunion de los Mexicanos – Mexican BoF.

Finally, many regular SIGGRAPH attendees like to go to the Sake Barrel Opening Party BoF.  I haven’t attended one yet, but perhaps I will this year.

SIGGRAPH 2009 Exhibitor Tech Talks

These are sponsored talks, each specific to a single company’s products.  Even so, they often have good information.  The complete list of exhibitor tech talks is here; NVIDIA is giving so many talks that it rates a separate page.

AMD has a talk called Next-Generation Graphics: The Hardware and the APIs which, from the abstract, seems to be about AMD’s DirectX11-level hardware and how to access its features using OpenGL extensions.  NVIDIA has two talks about using CUDA for non-traditional graphics: Alternative Rendering Pipelines on NVIDIA CUDA and Efficient Ray Tracing on NVIDIA GPUs.  Although both talks discuss ray tracing, the first also covers a CUDA implementation of the REYES algorithm (which powers Pixar’s RenderMan).  I think REYES is far more interesting for real-time use than ray tracing; similar algorithms have dominated film rendering for many years (although ray tracing is slowly gaining).

Another interesting NVIDIA talk (3D Vision Technology – Develop, Design, Play in 3D Stereo) discusses stereo rendering.  This is an area that has had many false starts over the last ten years, but now it seems like it might actually make it into the mainstream, driven by stereo film content and advances in home television displays.

Although it is not strictly about real-time rendering, AMD’s GPU-Accelerated Production Rendering is part of an interesting trend where GPUs are used not for real-time rendering, but to accelerate offline rendering.  Some of the techniques used here may inform future high-quality real-time rendering.

7 Things for July 20th

While at SIGGRAPH I like to look at new books at the booths. One you may wish to check out is Graphics Shaders: Theory and Practice, from AK Peters (or just use “Look Inside” on Amazon). I received a review copy and skimmed through it. If you’re interested in programming in GLSL 1.2 (part of OpenGL 2.1), consider looking at this one. A minor problem is that it’s not quite as up-to-date as the Orange Book (now on OpenGL 3.1), but the difference in core concepts between language versions is not large. The Graphics Shaders book is full color and comes with a lot of GLSL code examples. It has a bias towards scientific visualization, though not so much that it neglects the basics. I particularly enjoyed the chapter on noise, as it gave one of the clearest explanations I’ve seen of the differences between various types of basic interactive noise functions. One or two elements in the book are a little weak – the flowcharts for pipelines are often too small and difficult to read, for example – but all in all this looks like a solid contribution to the field. Don’t expect coverage of more elaborate effects; shadows, for example, are not touched upon. It does cover the basics, plus some additional topics like image post-processing (not normally covered in texts I’ve seen). One of the authors wrote a nice learning tool for GLSL, glman, free for download. If you find you like this tool, definitely consider the book.

Another book I noticed recently is Fluid Simulation for Computer Graphics. This is a topic I know little about; I was just interested to see that there’s a book on it at all. It looks pretty equation-filled, so is definitely for the serious practitioner.

Speaking of fluid simulation, Intel has an article on this topic for games. One of the chief strengths of any publication is that its staff decides, based on merit, what is published and what is culled. So I have to admit to being leery of anything that says “Sponsored Feature”, as that means editorial review and decision-making are gone. I tend to err on the side of ignoring such articles (there’s plenty to read already). That said, Intel’s had quite a number of these articles recently, including such topics as instancing, ocean fog, FFTs for image processing, and quite a few on parallelism.

In the “clearing the queue” category of links, I don’t think I ever pointed out this handy page, which presents all AMD/ATI and NVIDIA presentations at GDC 2009.

There’s now a (not very active, but at least it exists) Microsoft DirectX blog.

On the OpenGL front, NVIDIA has introduced bindless graphics to help avoid L2 cache misses. I will be interested to see how APIs evolve, as the bottlenecks in the current APIs stem not so much from CPU or GPU limitations as from the API constructs themselves.

Thing for the day: an advertisement with interesting stippling.

7 Things for July 19th

Seven more:

  • Michael Abrash has an in-depth article on rasterization on Larrabee. Perhaps a little too in-depth at times; just skim past the assembly instructions. I also found myself asking, “why do that?” – the key is to just keep reading. He tries to make his examples simple and comprehensible, but at the cost of sometimes feeling like they’re oversolving the problem. They aren’t; it’s just that the solution is in fact used in different circumstances in order to be efficient.
  • SIGGRAPH has an interactive rendering event summary page. This page is more for the art production side of things, though; Naty’s course, talk, and production session summaries are more comprehensive and more useful for programmer attendees.
  • NVIDIA has a number of events they’re involved in at SIGGRAPH 2009. Here’s the list.
  • I love this sort of madness: a business-card ray tracer that does depth of field.
  • Accumulated SSAO: the idea of reprojection – reusing previous results by finding where they lie in this frame’s view – seems a tad expensive for interactive rendering. It’s hard to know anything about performance and quality from this page, but I thought it was interesting to see; a rough sketch of the general reprojection idea appears after this list.
  • I mentioned Processing in the last post. Another language-related resource for graphics and game programming is pygame, a set of Python modules for writing games. A friend said he found this system to be pretty great, that he could whip up a fairly involved game idea in a few hours.
  • Scribblenauts sounds like the coolest game that will ever come out, period. Even if it’s only 1/10th as good as the previews read, it looks to be pretty darn entertaining.
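
As promised above, here is a rough sketch of the general reprojection idea (my own illustration with numpy, not the poster authors' method): reconstruct the pixel's world-space position, project it through the previous frame's view-projection matrix, and reuse the cached AO value only if the previous frame's depth agrees.

```python
import numpy as np

def reproject_ao(world_pos, prev_view_proj, prev_ao, prev_depth, depth_eps=1e-3):
    """Fetch last frame's AO for a world-space point, if it was visible then.

    world_pos:      (3,) world-space position reconstructed for this pixel
    prev_view_proj: (4, 4) previous frame's view-projection matrix
    prev_ao:        (H, W) previous frame's ambient-occlusion buffer
    prev_depth:     (H, W) previous frame's depth buffer (NDC z mapped to [0, 1])
    Returns (ao, valid): the cached AO value and whether the reprojection hit.
    """
    h, w = prev_ao.shape
    clip = prev_view_proj @ np.append(world_pos, 1.0)   # to last frame's clip space
    if clip[3] <= 0.0:                                  # behind last frame's camera
        return 0.0, False
    ndc = clip[:3] / clip[3]                            # perspective divide
    if np.any(np.abs(ndc[:2]) > 1.0):                   # was off-screen last frame
        return 0.0, False
    x = int((ndc[0] * 0.5 + 0.5) * (w - 1))             # NDC -> texel coordinates
    y = int((ndc[1] * 0.5 + 0.5) * (h - 1))
    depth = ndc[2] * 0.5 + 0.5
    if abs(prev_depth[y, x] - depth) > depth_eps:       # disoccluded: cache is stale
        return 0.0, False
    return float(prev_ao[y, x]), True                   # reuse the accumulated result
```

On a reprojection miss (off-screen or disoccluded), a real implementation would fall back to freshly computed AO for that pixel rather than returning zero.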

7 Things for July 18th

Well, I have 69 links stored up; wade through them here if you want unedited content. I’ve decided that getting 7 links out per post is a good round number, so here’s the first.

  • This is my screen-saver du jour: Pixel City (put the .scr file in your Windows directory). It’s fully described (along with source) in this great set of articles; if you’re too busy to read it all (though you should: it’s a fun read and he has some interesting insights), watch the video summary on that page. If you feel like researching the area of procedural modeling of cities more thoroughly, start here.
  • The book Real-Time Cameras, which is about camera control for games, now has a sample excerpt on Gamasutra.
  • NPR: Forrester Cole has two worthwhile GPU methods for deriving visible line segments for a set of edges (e.g., computing partial visibility of geometric lines). He’s put source code for his methods up at his site, the program “dpix”. Note: you’ll need Qt to compile & link.
  • The author of the Legalize Adulthood blog has recently had a number of posts on using DirectX10.
  • DirectX9 is still with us. Richard Thomson has a free draft of his book about DirectX 9 online. He knows what he’s about; witness his detailed pipeline posters. The bad news is that the book’s coverage of shaders is mostly about 1.X shaders (a walk down memory lane, if by “lane” you mean “horrifically complex assembly language”). The good news is that there’s some solid coverage of the theory and practice of vertex blending, for example. Anyway, grist for the mill – you might find something of use.
  • Around September I have 6 weeks off, so like every other programmer on the planet I’ve contemplated playing around with making a program for the iPhone. The economics are terrible for most developers, but I’d do it just for fun. It’s also interesting to see people thinking about what this new platform means for games. Naturally, Wolfenstein 3D, the “Hello World” of 3D games, has been ported. Andrew Glassner recommended this book for iPhone development; he said it’s the best one he found for beginners.
  • Speaking of Andrew, he pointed me at an interesting little language he’s been messing with, Processing. It’s essentially Java with a lot of built-in 2D (and to a lesser extent, 3D) graphics support: color, primitives, transforms, mouse control, lerps, window, etc., all right there and trivial to use. You can make fun little programs in just a page or two of code. That said, there are some very minor inconsistencies, like transparency not working against the background fill color. Pretty elaborate programs can be made, and it’s also handy for just drawing stuff easily via a program. Here’s a simple image I did in just a few lines, based on mouse moves:
    [Image: Processing output]
That’s seven – ship it.

Interactive Ray Tracing BOF at SIGGRAPH 2009

Pete Shirley’s organizing an interactive ray tracing Birds of a Feather meeting at SIGGRAPH 2009. The details, as copied from here:

Interactive Ray Tracing
A variety of academic and industry leaders provide presentations and demos, with questions and discussions encouraged.

Tuesday, 5 – 6 pm
Sheraton New Orleans
Waterbury Ballroom
Peter Shirley
pshirley (at) nvidia.com

I’ll be there to help out. Pete’s already lined up demos from NVIDIA, Intel, Mental, an Imageworks affiliate, Breda University (Arauna), and Caustic. Right now we’re searching out academic groups or anyone else who wants to show what they’re doing in the area. If you’ve got something to show or know someone who does, please contact Pete and me.

SIGGRAPH 2009 Production Sessions

Another part of SIGGRAPH I like is the big film production sessions – they are like a DVD “behind the scenes” on steroids.  They do tend to have long lines, though.  This year, the SIGGRAPH production sessions have been brought under the wing of the Computer Animation Festival.  A full list of production sessions can be found here.  They all look pretty interesting, actually, but I think the following ones are most noteworthy:

Big, Fast and Cool: Making the Art for Fight Night 4 & Gears of War 2: This is the first SIGGRAPH production session discussing game production rather than film production, and I hope to see many more like it in future years.

The Curious Case of Benjamin Button marked a watershed in digital character technology – the first time anyone had successfully rendered a photorealistic human character with significant onscreen presence.  The production session for this film spends a fair amount of time discussing the character, and also touches upon some other interesting bits of tech used in the film.

ILM was heavily involved with three big, flashy effects shows this year: Transformers: Revenge of the Fallen, Terminator Salvation, and Star Trek.  The production session discussing all three is sure to be a lot of fun (unfortunately, there are also sure to be long lines).

Sony Pictures Imageworks’ Cloudy with a Chance of Meatballs has some very unusual scenes (including spaghetti twisters and Jell-O mountains); it is also unusual in being fully ray-traced.  The production session discusses both of these aspects.

Although not directly relevant to real-time rendering, I am fascinated by the way in which 3D modeling and rapid prototyping were used for facial expressions in the stop-motion film Coraline (and I wrote about it in a previous blog post).  There is a production session about this very topic – anyone else who thinks this is an interesting use of technology might want to attend this one.

SIGGRAPH 2009 Talks

The full list of SIGGRAPH 2009 talks is finally up here.

Talks (formerly known as sketches) are one of my favorite parts of SIGGRAPH.  They always have a lot of interesting techniques from film production (CG animation and visual effects), many of which can be adapted for real-time rendering.  There are typically some research talks as well; most are “teasers” for papers from recent or upcoming conferences, and some are of interest for real-time rendering.  This year, SIGGRAPH also has a few talks by game developers – hopefully next year will have even more.  Unfortunately, talks have the least documentation of all SIGGRAPH programs (except perhaps panels) – just a one-page abstract is published, so if you didn’t attend the talk you are usually out of luck.

The Cameras and Imaging talk session has a talk on the cameras used in Pixar’s “Up”, which may be relevant to developers of games with scripted cameras (such as God of War).

From Indie Jams to Professional Pipelines has two good game development talks: Houdini in a Games Pipeline by Paulus Bannink of Guerrilla Games discusses how Houdini was used for procedural modeling in the development of Killzone 2.  Although this type of procedural modeling is fairly common in films, it is not typically employed in game development.  This is of particular interest since most developers are looking for ways to increase the productivity of their artists.  In the talk Spore API: Accessing a Unique Database of Player Creativity, Shodhan Shalin, Dan Moskowitz, and Michael Twardos discuss how the Spore team exposed a huge database of player-created assets to external applications via a public API.

The Splashing in Pipelines talk session has a talk by Ken Museth of Digital Domain about DB-Grid, an interesting data structure for volumetric effects; a GPU implementation of this could possibly be useful for real-time effects.  Another talk from this session, Underground Cave Sequence for “Land of the Lost”, sounds like the kind of film talk that often has nuggets which can be adapted to real-time use.

Making it Move has another game development talk, Fight Night 4: Physics-Driven Animation and Visuals by Frank Vitz and Georges Taorres from Electronic Arts.  Fight Night 4 is a game with extremely realistic visuals; the physics-based animation system described here is sure to be of interest to many game developers.  The talk about rigging the “Bob” character from Monsters vs. Aliens also sounds interesting; the technical challenges behind the rig of such an amorphous – yet engaging – character must have been considerable.

Partly Cloudy was the short film accompanying Pixar’s 10th feature film, Up.  Like all of Pixar’s short films, Partly Cloudy was a creative and technical triumph.  The talk by the director, Peter Sohn, also includes a screening of the film.

Although film characters have more complex models, rigs, and shaders than game characters, there are many similarities in how a character translates from initial concept to the (big or small) screen.  The session Taking Care of Your Pet has two talks discussing this process for characters from the movie Up.  There is also a session dedicated to Character Animation and Rigging which may be of interest for similar reasons.

Another game development talk can be found in the Painterly Lighting session; Radially Symmetric Reflection Maps by Jonathan Stone of Double Fine Productions describes an intriguing twist on prefiltered environment maps used in the game Brutal Legend.  The two talks on stylized rendering methods (Applying Painterly Concepts in a CG Film and Painting with Polygons) also look interesting; the first of these discusses techniques used in the movie Bolt.

Real-time rendering has long used techniques borrowed from film rendering.  One way in which the field has “given back” is the increasing adoption of real-time pre-visualization techniques in film production.  In this talk, Steve Sullivan and Michael Sanders from Industrial Light & Magic discuss various film visualization techniques.

The session Two Bolts and a Button has two film lighting talks that look interesting; one on HDRI-mapped area lights in The Curious Case of Benjamin Button, and one on lighting effects with point clouds in Bolt.

The Capture and Display session has two research talks from Paul Debevec’s group.  As you would expect, they both deal with acquisition of computer models from real-world objects.  One discusses tracking correspondences between facial expressions to aid in 2D parametrization (UV mapping), the other describes a method for capturing per-pixel specular roughness parameters (e.g. Phong cosine power) and is more fully described in an EGSR 2009 paper.  Given the high cost of creating realistic and detailed art assets for games, model acquisition is important for game development and likely to become more so.

Flower is the second game from thatgamecompany (not a placeholder; that’s their real name), the creators of Flow.  Flower is visually stunning and thematically unusual; the talk describing the creation of its impressionistic rendering style will be of interest to many.

Flower was one of two games selected for the new real-time rendering section of the Computer Animation Festival’s Evening Theater (which used to be called the Electronic Theater and was sorely missed when it was skipped at last year’s SIGGRAPH).  Fight Night 4 was the other; these two are accompanied by real-time rendering demonstrations from AMD and Soka University.  Several other games and real-time demos were selected for other parts of the Computer Animation Festival, including Epic Games’ Gears of War 2 and Disney Interactive’s Split Second.  These are demonstrated (and discussed) by some of their creators in the Real Time Live talk session.

The Effects Omelette session has been presented at SIGGRAPH for a few years running; it traditionally has interesting film visual effects work.  This year two of the talks look interesting for game developers: one on designing the character’s clothing in Up, and one on a modular pipeline used to collapse the Eiffel Tower in G.I. Joe: The Rise of Cobra.

Although most of the game content at SIGGRAPH is targeted at programmers and artists, there is at least one talk of interest to game designers: in Building Story in Games: No Cut Scenes Required, Danny Bilson from THQ and Bob Nicoll from Electronic Arts discuss how interactive entertainment can be used to tell a story.

As one would expect, the Rendering session has at least one talk of interest to readers of this blog.  Multi-Layer, Dual-Resolution Screen-Space Ambient Occlusion by Louis Bavoil and Miguel Sainz of NVIDIA uses multiple depth layers and resolutions to improve SSAO.  Although not directly relevant to real-time rendering, I am also interested in the talk Practical Uses of a Ray Tracer for “Cloudy With a Chance of Meatballs” by Karl Herbst and Danny Dimian from Sony Pictures Imageworks.  For years, animation and VFX houses used rasterization-based renderers almost exclusively (Blue Sky Studios, creators of the Ice Age series, being a notable exception).  Recently, Sony Pictures Imageworks licensed the Arnold ray-tracing renderer and switched to using it for features; Cloudy with a Chance of Meatballs is the first result.  Another talk from this session I think is interesting: Rendering Volumes With Microvoxels by Andrew Clinton and Mark Elendt from Side Effects Software, makers of the procedural modeling tool Houdini.  The micropolygon-based REYES rendering system (on which Pixar’s Photorealistic Renderman is based) has fascinated me for some time; this talk discusses how to add microvoxels to this engine to render volumetric effects.

Above, I mentioned previsualization as one case where film rendering is informed by game rendering.  A more direct example is shown in the talk Making a Feature-Length Animated Movie With a Game Engine (by Alexis Casas, Pierre Augeard and Ali Hamdan from Delacave), in the Doing it with Game Engines session (which I am chairing).  They actually used a game engine to render their film, using it not as a real-time renderer, but as a very fast renderer enabling rapid art iteration times.

All of the talks in the Real Fast Rendering session are on the topic of real-time rendering, and are worth attending.  One of these is by game developers: Normal Mapping With Low-Frequency Precomputed Visibility by Michal Iwanicki of CD Projekt RED and Peter-Pike Sloan of Disney Interactive Studios describes an interesting PRT-like technique which encodes precomputed visibility in spherical harmonics.
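
The one-page abstract gives few specifics, but as a rough illustration of the general approach of encoding visibility in spherical harmonics (my own sketch, assuming a standard low-order SH projection, not the authors' technique): a per-texel visibility function can be projected onto a handful of SH coefficients offline and cheaply evaluated at runtime.

```python
import numpy as np

def sh_basis(d):
    """First two spherical-harmonic bands (4 coefficients) at unit direction d."""
    x, y, z = d
    return np.array([0.282095,        # Y_0^0  (constant term)
                     0.488603 * y,    # Y_1^-1
                     0.488603 * z,    # Y_1^0
                     0.488603 * x])   # Y_1^1

def project_visibility(visibility, n_samples=4096, seed=0):
    """Offline step: Monte Carlo projection of a visibility function onto 4 SH coefficients."""
    rng = np.random.default_rng(seed)
    coeffs = np.zeros(4)
    for _ in range(n_samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                    # uniform direction on the sphere
        coeffs += visibility(d) * sh_basis(d)
    return coeffs * (4.0 * np.pi / n_samples)     # scale by solid angle over sample count

def reconstruct(coeffs, d):
    """Runtime step: low-frequency visibility estimate in direction d (a dot product)."""
    return float(coeffs @ sh_basis(d))

# Example: a surface point that sees only the upper hemisphere (z > 0).
vis = lambda d: 1.0 if d[2] > 0.0 else 0.0
c = project_visibility(vis)
print(reconstruct(c, np.array([0.0, 0.0,  1.0])))  # ~1.25: open direction (2-band SH overshoots the step)
print(reconstruct(c, np.array([0.0, 0.0, -1.0])))  # ~-0.25: blocked direction (clamped to 0 in practice)
```

At runtime the reconstruction is just a short dot product per direction, which is what makes this kind of low-frequency encoding attractive for per-pixel lighting.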

Finally, the Rendering and Visualization session has a particularly interesting talk: Beyond Triangles: GigaVoxels Effects In Video Games by Cyril Crassin, Fabrice Neyret and Sylvain Lefebvre from INRIA, Miguel Sainz from NVIDIA and Elmar Eisemann from MPI Informatik.  Ray-casting into large voxel databases has aroused interest in the field since John Carmack made some intriguing comments on the topic (further borne out by Jon Olick’s presentation at SIGGRAPH last year).  The speakers at this talk have shown interesting work at I3D this year, and I look forward to seeing their latest advances.