
7 Things for February 6

With the excitement of Groundhog Day and James Joyce’s birthday over, it’s time to take off the silly paper hats and get back to writing “7 things” columns. Here goes:

  • Jeremy Shopf gives a nice summary of recent ambient occlusion papers. AO is becoming the new Shadows—every conference must have a paper on the topic. Honestly, it’s amazing that some of these ideas haven’t popped up earlier, like the line integral method. If you accept the basic approximation of AO from the start, then it’s a matter of how best to integrate the hemisphere around the point (the standard formulation is sketched just after this list). I’m not downplaying the contribution of this research. Just the opposite: it’s more along the lines of “d’oh, brilliant, and why didn’t anyone think of that earlier?” The answer is both “because those guys are smart” and “they actually tried it out, vs. thinking of an idea and not pursuing it.”
  • Thinking about C++ and looking at my old utilities post, I realized I forgot an add-on I use just about every day: Visual Assist X. This product makes Visual Studio much more usable for C++. Over the years it’s become indispensable to me, as more and more features get integrated into how I work. I started off small: there’s a great button that simply switches you between the .cpp and .h version of the file. Then I noticed that other button which takes a set of lines I’ve selected and comments them out in a single mouse press, and the other button that uncomments them back. Then I found I could add a control that lets me type in a few characters to find a code file, or find a class. On and on it goes… Anyway, there’s a free trial, and for individuals it’s an entirely reasonable (for what you get) $99 license. By the way, you really don’t need to get the maintenance renewal every year.
  • As you may know, MIT has had a mandate for a number of years to put all of its courses online in some form—there are now 1900 of them. The EE & CS department, naturally enough, has quite a selection. The third most visited course on the whole site is Introduction to Computer Science and Programming, from Fall 2008 (and I approve: they use Python!). There’s only one computer graphics course, from 2003, but it covers unchanging principles and concepts so the “ancient” date is a minor problem.
  • Naty pointed out this article about deferred rendering. He notes, “A nice description of a deferred rendering system used in a demo—of particular interest is the use of raytraced distance fields for rendering fluids, and the integration of this into the overall deferred system.”
  • A month and a half ago I listed some articles about reconstructing the position or linear z-depth in a shader. Here’s another.
  • It’s the ongoing debate, back again. No, not dark vs. milk chocolate, nor Ferrari vs. Porsche, but DirectX vs. OpenGL. My own feeling is “whatever, we support both”. By the way, the upcoming book GPU PRO (which also has a blog, and has just been listed on Amazon) includes an in-depth article on porting from DX9 to OpenGL 2.0. Mark Kilgard’s presentation also discusses the differences, including the coordinate space and window space conventions.
  • I love human pixels. The Arirang Festival in North Korea is a famous example; check out Google Images. But that’s just a card stunt, impressive as it is. This video shows a technique I hadn’t seen before (note that some of it is sped up—check the speed of the people on the field—but still fantastic). There are other videos, such as this and this.
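To make the “basic approximation” in the first item concrete, the usual definition of ambient occlusion at a point p with normal n is a visibility integral over the hemisphere (this is the standard textbook formulation, not anything specific to the papers Jeremy covers):

    \mathrm{AO}(p, \mathbf{n}) \;=\; \frac{1}{\pi} \int_{\Omega} V(p, \omega)\, (\mathbf{n} \cdot \omega)\, d\omega

where V(p, ω) is 1 if a ray from p in direction ω is unoccluded (often only out to some falloff distance) and 0 otherwise. The papers differ mainly in how they sample or approximate this integral: ray casting, line integrals, volumetric estimates, and so on.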

2009 Academy Sci & Tech Awards

Oops – I forgot to include Christophe Hery in the point-based color bleeding award below.  This has now been fixed; apologies and congratulations to Christophe.  Many thanks to Margarita Bratkova for pointing out the error!

Last week, the Academy of Motion Picture Arts and Sciences (best known for its annual Academy Awards, or “Oscars”) announced the winners of its 2009 Scientific & Technical Awards.  No Awards of Merit (the highest award level) were given this year – those are the ones that come with an “Oscar” statuette and are shown in the Academy Awards telecast (Renderman and Maya have won Awards of Merit in previous years).

Two computer graphics-related Scientific and Engineering Awards were given this year; these are the second-highest award level and come with a bronze tablet:

  • Per Christensen, Michael Bunnell and Christophe Hery for point-based indirect illumination; in an interesting inversion of the usual practice, this fast approximate global illumination / ambient occlusion technique started out as a real-time GPU technique and ended up as an offline rendering CPU technique (first used in Pirates of the Caribbean: Dead Man’s Chest, it is now a standard part of Pixar’s Renderman).  A recent SIGGRAPH Asia paper describes a closely related technique.
  • Paul Debevec, Tim Hawkins, John Monos and Mark Sagar for Light Stage and image-based character relighting.  The work done by Paul Debevec and his team at USC’s Institute for Creative Technologies on image-based capture and lighting has been hugely influential, resulting in widespread adoption of light probes, multi-exposure HDR image capture, and many other techniques commonly used in games as well as film.

One of the Technical Achievement Awards (the third level, which comes with a certificate) is also of interest to readers of this blog:

  • Hayden Landis, Ken McGaugh and Hilmar Koch for ambient occlusion.  The pioneering work on ambient occlusion for film production was done by these guys at ILM; first publication was at the Renderman in Production course at SIGGRAPH 2002 (the relevant chapter of the course notes can be found here).  Of course, ambient occlusion is heavily used in real-time applications as well.

In an interesting related development, eight separate Scientific and Engineering Awards and two Technical Achievement Awards were given for achievements related to the digital intermediate process (digital scanning and processing of film data), many of them for look-up-table (LUT) based color correction (LUTs have also been used for color correction in games).  The Academy tends to batch up awards in this way for technologies whose “time has come” (two years ago there were a lot of fluid simulation awards).  Given that another of the Technical Achievement Awards was for a motion capture system, we can see how quickly digital technology has come to dominate the film industry.  As recently as 2005, most of the awards were for things like camera systems; this year only one of the awards (for a lens motor system) was for non-digital technology.

Congratulations to all the winners!

7 things for December 22

Some great bits have accumulated. Here they are:

  • I3D 2010 paper titles are up! Most “how would that work?!” type of title: “Stochastic Transparency”.
  • Eurographics 2010 paper titles are up! Most intriguing title: “Printed Patterns for Enhanced Shape Perception of Papercraft Models”.
  • An article in The Economist discusses how consumer technologies are being used by military forces. There are minor examples, like Xbox controllers being used to control robotic reconnaissance vehicles. I was interested to see BAE Systems (a company that isn’t NVIDIA) talk about how using GPUs can replace other computing equipment for simulation at 1/100th the price. Of course, Iraq knew this 9 years ago.
  • I wish I had noticed this page a week ago, in time for Xmas (where X equals, nevermind): Christer Ericson’s recommended book page. I know of many of the titles, but hadn’t heard of The New Turing Omnibus before – this sounds like the perfect holiday gift for any budding computer science nerd, and something I think I’d enjoy, too. Aha, hmmm, wait, Amazon has two-day shipping… done!
  • A problem with the z-buffer, when used with a perspective view, is that the z-depths do not linearly correspond to actual world distances along the camera’s view direction. This article and this one (oh, and this is related) give ways to get back to this linear space; a minimal sketch of the idea appears after this list. Why get the linear view-space depth? Two reasons immediately come to mind: proper computation of atmospheric effects, and detecting edges where the z-depth changes, for non-photorealistic rendering.
  • Wolfgang Engel (along with comments by others) has a great summary of order-independent transparency algorithms to date. I wonder when the day will come that we can store some number of layers per pixel without any concern about memory costs and access methods. Transparency is what kills algorithms like deferred shading, because all the layers are not there at the time when shading is resolved. Larrabee could have handled that… ah, well, someday.
  • Morgan McGuire has a paper on Ambient Occlusion Volumes (motto: shadow volumes for ambient light). I’ll be interested to see how this compares with Volumetric Obscurance in I3D 2010 (not up yet for download).
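As a footnote to the z-buffer item above, here is a minimal sketch of one common way to recover linear view-space depth from a stored depth value. It assumes an OpenGL-style depth-buffer value in [0,1] produced by a standard perspective projection; the linked articles cover other conventions and full position reconstruction, and this snippet is just an illustration, not code from any of them.

    // Recover linear view-space depth (distance in front of the camera) from a
    // depth-buffer value d in [0,1], assuming a standard OpenGL-style
    // perspective projection with the given near and far clip distances.
    float linearizeDepth(float d, float nearPlane, float farPlane)
    {
        // Depth-buffer value to normalized device coordinate z in [-1,1].
        float zNdc = 2.0f * d - 1.0f;
        // Invert the projection's nonlinear z mapping.
        return (2.0f * nearPlane * farPlane) /
               (farPlane + nearPlane - zNdc * (farPlane - nearPlane));
    }

The same inversion works directly in a shader; a Direct3D-style [0,1] clip depth just needs a slightly different formula.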

Amazon Stock Market update: one nice thing about having an Amazon Associates account is that prices at various dates are visible. The random walk that is Amazon’s pricing structure becomes apparent for our book: December 1st: $71.20, December 11-14: $75.65, December 18-22: $61.68. Discounted for the holidays? If so, Amazon’s marketing is aiming at a much different family demographic than I’m used to. “Oh, daddy, Principia Mathematica? How did you know? I’ve been wanting it for ever so long!”

This and That

I’ll someday run out of titles for these occasional summaries of new(ish) resources, but in the meantime, this one’s “This and That”.

Christer Ericson’s article on dealing with grouping and sorting objects for rendering is excellent. It mostly depends on input latency, but has concepts that can be applied in immediate mode.
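To give a flavor of the idea, here is a minimal sketch of one common scheme in this vein: pack the render state that is expensive to change into a single integer key per draw call, then sort the draw list by that key so state changes get batched. The field layout and widths below are made up for illustration; they are not taken from Christer’s article.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // A draw call tagged with a bit-packed sort key. State that is most
    // expensive to change goes in the most significant bits, so sorting
    // groups those changes together.
    struct DrawCall {
        uint64_t key;  // packed sort key
        int      id;   // index of whatever data is needed to issue the draw
    };

    // Illustrative key layout (field widths are arbitrary for this sketch):
    // [ layer:8 | shader:16 | material:16 | mesh:16 | unused:8 ]
    uint64_t makeKey(uint64_t layer, uint64_t shader, uint64_t material, uint64_t mesh)
    {
        return (layer << 56) | (shader << 40) | (material << 24) | (mesh << 8);
    }

    // Sorting by key groups draws that share shaders, materials, and meshes,
    // minimizing state changes when the list is submitted in order.
    void sortDraws(std::vector<DrawCall>& draws)
    {
        std::sort(draws.begin(), draws.end(),
                  [](const DrawCall& a, const DrawCall& b) { return a.key < b.key; });
    }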

An element that continues to renew the field of computer graphics is that the rules change. This article is about taking Quake 2 (from 1997) and moving it to a modern GPU.

If you haven’t seen it yet, Farbrausch’s demo “debris” is truly impressive. It’s only 183,462 bytes, and is absolutely packed with procedural content. Download here (last link works). Or be lazy and watch on YouTube.

NVIDIA’s pulled together its resources for shadow generation and ambient occlusion all onto one handy page (plus ray tracing – just one entry so far, but it’s a good one).

How to deal with various rendering paradigms on multiple platforms? GRAMPS looks intriguing.

Gamasutra put a useful Game Developer article online, all about commercial middleware game engines currently available.

OpenGL will always exist, since Macs and Linux need it. It’s easier to use in college courses because of its clarity and readability. But otherwise the pendulum’s swung far towards DirectX. Phil Taylor comments on and gives some historical context to the controversy around the latest release, OpenGL 3.0.

A nice trend for OpenGL is that people continue to write useful bits, such as GLee, which manages extensions.

New info on older effects: blur and glow, volumetric clouds, and particle systems.

The glorious teapot. I like “a wireframe view”. Yes, the real thing is taller than the synthetic model, as the model makers were compensating for non-square pixels.

“What’s the future hold?” is always a fun topic, one we’ve used to end each edition of our book. I liked this presentation on SlideShare for its sheer “here are a hundred things that hurtle us towards the Singularity” feel, though I don’t buy it for a minute. SlideShare, where it is hosted, is a pleasant medium-attention-span kind of place, with all sorts of random and fun slidesets.

Finally, I am pleased to find that LittleBIGPlanet is just as gorgeous as it looked like it would be. I’ve played it myself for only a bit, but walking by when my kids are playing I find I have to stop and stare.