Monthly Archives: April 2009

Eurographics Workshop on Natural Phenomena 2009

EWNP has had interesting papers in recent years, but when it skipped 2008 I thought it was gone.  However, it came back in 2009 with five papers, all of which are online except for one:

Procedural Modeling of Leather Texture with Structural Elements:  Not currently available online, but judging from a previous paper by these authors this appears to be about procedural modeling of the cracks and bumps in leather surfaces.  Most real-time applications will use photographed or manually created textures for this, so it is probably not of wide interest to real-time developers.

Interactive Modeling of Virtual Ecosystems: Automatic modeling of plants taking lighting, obstacles, etc. into account.  Might be useful as an automatic modeling tool.

A Geometric Algorithm for Snow Distribution in Virtual Scenes: What the title says; might be useful for automated scene modeling, but probably not for runtime use.

Corotated SPH for Deformable Solids: Smoothed Particle Hydrodynamics (SPH) is commonly used in film production for liquids, smoke, etc.  This paper discusses how to extend the technique to model deformable solids.  Probably not real-time anytime soon.

Real-Time Open Water Environments with Interacting Objects:  This combines the Tessendorf FFT-based method for ambient waves with a different method for interactive waves (waves interacting with dynamic objects).  This is the most relevant paper for real-time rendering; worth a read.

Tessendorf’s FFT method is the current gold standard for non-interactive ocean waves, and is widely used in game and film production.  A description of it can be found on his publication page, under Simulating Ocean Water.  Tessendorf’s publication page has many more papers of interest, including an algorithm (called iWave) for interactive waves and reports on particle and volume rendering for film production.
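
For a flavor of how the FFT method is seeded, here is a minimal sketch of the Phillips spectrum Tessendorf describes; a full implementation fills a grid of complex amplitudes from this spectrum and inverse-FFTs the animated spectrum every frame to get the height field. The names and constants below are my own illustrative choices, not taken from any particular implementation.

```cpp
// Sketch of the Phillips spectrum used to seed Tessendorf-style FFT ocean
// waves: P(k) = A * exp(-1/(kL)^2) / k^4 * (k_hat . w_hat)^2, with L = V^2/g.
// A real implementation fills h0(k) = (xi_r + i*xi_i) * sqrt(P(k)/2) with
// Gaussian random numbers xi, then inverse-FFTs the animated spectrum.
#include <cmath>

struct Vec2 { float x, y; };

float phillipsSpectrum(Vec2 k, Vec2 windDir /* unit length */,
                       float windSpeed, float amplitude)
{
    const float g = 9.81f;                       // gravity, m/s^2
    float kLen = std::sqrt(k.x * k.x + k.y * k.y);
    if (kLen < 1e-6f) return 0.0f;               // skip the singular DC term

    float L = windSpeed * windSpeed / g;         // largest wave from this wind
    float kDotW = (k.x * windDir.x + k.y * windDir.y) / kLen;

    return amplitude
         * std::exp(-1.0f / ((kLen * L) * (kLen * L)))
         / (kLen * kLen * kLen * kLen)
         * (kDotW * kDotW);                      // damps waves across the wind
}
```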

Insomniac have a particularly efficient and flexible implementation of a variant of Tessendorf’s method, which they extended to support interactive waves as well.  This method was used in the game Resistance 2, and Insomniac Games have kindly published not just a white paper on the technique, but actual working code! This is part of their admirable Nocturnal Initiative for technology sharing.  The Nocturnal Initiative website is highly recommended, as it includes code which has been used in successful game projects by one of the most highly-regarded studios in the industry.

Another interesting approach to interactive waves is Wave Particles, which is described here.

Graphics Interface 2009 papers

The list of papers accepted to Graphics Interface 2009 (with abstracts) is now online.  Graphics Interface has had some pretty good real-time rendering papers: here is a handful of examples from the last few years.  Judging from this year’s abstracts, the following look particularly interesting:

Fast Visualization of Complex 3D Models Using Displacement Mapping: This looks like a combination of the “sparse voxel ray casting” approach popularized by id software with “relief mapping” approaches.

Depth of Field Postprocessing for Layered Scenes Using Constant-Time Rectangle Spreading: This is closely related to one of my favorite I3D 2009 posters, “Faster Filter Spreading and Its Applications”.  The basic idea (which has also been discussed in this paper by Dan Piponi) is to “splat” rectangles in constant time (independent of the rectangle size!) by “splatting” just the corners into a buffer, from which a summed-area table is constructed (using existing fast methods), yielding the desired image.  This can be extended to more general splats.  Although there is no preprint yet, the tech report is available.
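
To make the corner trick concrete, here is a minimal sketch for a single-channel image; the names are mine, and a real depth-of-field filter would splat per-pixel color weighted by circle-of-confusion size rather than a bare value.

```cpp
// Sketch of constant-time rectangle "splatting": each rectangle deposits only
// four signed corner values; a single prefix-sum pass (the same pass used to
// build a summed-area table) then fills in every covered pixel.
#include <vector>

struct Image {
    int w, h;
    std::vector<float> p;
    Image(int w, int h) : w(w), h(h), p(w * h, 0.0f) {}
    float& at(int x, int y) { return p[y * w + x]; }
};

// Mark a filled rectangle [x0,x1) x [y0,y1) with value v -- cost is O(1),
// independent of the rectangle's size.
void splatRect(Image& img, int x0, int y0, int x1, int y1, float v)
{
    img.at(x0, y0) += v;
    if (x1 < img.w)               img.at(x1, y0) -= v;
    if (y1 < img.h)               img.at(x0, y1) -= v;
    if (x1 < img.w && y1 < img.h) img.at(x1, y1) += v;
}

// After all splats, one 2D running-sum pass reconstructs the final image.
void resolve(Image& img)
{
    for (int y = 0; y < img.h; ++y)              // sum along rows
        for (int x = 1; x < img.w; ++x)
            img.at(x, y) += img.at(x - 1, y);
    for (int y = 1; y < img.h; ++y)              // then along columns
        for (int x = 0; x < img.w; ++x)
            img.at(x, y) += img.at(x, y - 1);
}
```

Note that splatRect costs the same whether the rectangle covers four pixels or the whole screen, which is exactly what makes large blurs affordable.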

An Analytical Approach to Single Scattering for Anisotropic Media and Light Distributions:  A follow-on paper to one published in Eurographics 2009, it adds anisotropic phase functions and more general lighting.  The basic solution is similar to an earlier paper by Bo Sun and others, but using a slightly different approach that enables increased precision.

Rendering the Effect of Labradorescence: This is of interest to me as an optical reflectance geek, but I doubt anyone will be using it in a game anytime soon.  This paper presents a physically-based method of rendering a complex optical phenomenon that exists in gems such as Labradorite and Spectrolite.

Ke-Sen Huang’s Graphics Interface 2009 page should be a good place to hunt for preprints of these papers as they appear.

Good list of classic graphics papers

Old graphics papers don’t get enough respect nowadays; for example, Porter and Duff’s original paper is still the best place to get a good understanding of alpha blending (which too many people get wrong nowadays). There are many more gems to be found in papers from the 70s and 80s.  A while ago, I pointed out Pixar’s online paper library, which includes a lot of “golden oldies” (as well as good new stuff).  I just saw this great list of old papers on the codersnotes blog. I heartily concur with Kayamon’s assessment of the value of an ACM digital library subscription, though I wish ACM would find a way to go the Open Access route.  It’s not just a matter of expense; the registration wall adds a huge amount of friction to the process of finding information.
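
As a refresher on what Porter and Duff actually define, here is their “over” operator in premultiplied-alpha form (premultiplication being the detail most often fumbled); a minimal sketch with type names of my own choosing.

```cpp
// The Porter-Duff "over" operator, which is what most alpha blending means.
// Colors are premultiplied by their alpha, the convention the original paper
// uses: each channel of the result is src + (1 - src.a) * dst.
struct RGBA { float r, g, b, a; };   // premultiplied: r,g,b already scaled by a

RGBA over(const RGBA& src, const RGBA& dst)
{
    float k = 1.0f - src.a;
    return { src.r + k * dst.r,
             src.g + k * dst.g,
             src.b + k * dst.b,
             src.a + k * dst.a };
}
```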

Exploiting coherence at GDC 2009

A few months back, I wrote a blog post discussing techniques which exploit coherence, either spatial (like multiresolution rendering) or temporal (like reprojection caching).

Both of these were represented at GDC this year.  Jeremy Shopf presented a talk on Mixed Resolution Rendering, and the ambient occlusion technique presented in the talk Rendering Techniques in Gears of War 2 (available on the GDC Vault site) made use of both methods.  The ambient occlusion factors were rendered at a downsampled resolution. In addition, reprojection caching was used to reduce temporal aliasing.  This is the first use I have seen of reprojection caching in a shipping game.
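
For readers who haven’t seen reprojection caching before, the core step is simple: take the surface position behind the current pixel, run it through last frame’s view-projection matrix, and if it lands on screen, blend in the cached value from the history buffer. The sketch below is a generic illustration of that step, not Epic’s implementation; all types and names are placeholders.

```cpp
// Generic reprojection sketch: find where a world-space point was on screen
// last frame, so the previously shaded value can be reused (blended in).
#include <cmath>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };              // m[row][col]

Vec4 mul(const Mat4& M, const Vec4& v)       // r = M * v
{
    return { M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
             M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
             M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
             M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w };
}

// worldPos is the reconstructed surface position (w = 1) behind the pixel;
// prevViewProj is last frame's view-projection matrix. Returns true and the
// history-buffer UV if the point was inside last frame's view.
bool reproject(const Vec4& worldPos, const Mat4& prevViewProj,
               float& u, float& v)
{
    Vec4 clip = mul(prevViewProj, worldPos);
    if (clip.w <= 0.0f) return false;        // behind last frame's camera
    float ndcX = clip.x / clip.w;
    float ndcY = clip.y / clip.w;
    if (std::fabs(ndcX) > 1.0f || std::fabs(ndcY) > 1.0f)
        return false;                        // fell outside last frame's view
    u = ndcX * 0.5f + 0.5f;                  // NDC [-1,1] -> texture [0,1]
    v = ndcY * 0.5f + 0.5f;                  // (flip v for your convention)
    return true;
}
```

On a hit, the freshly computed value is blended with the history sample; on a miss (disocclusion, or an abrupt camera change), the cached value is discarded and only the fresh value is used.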

In my previous blog post, I was skeptical of reprojection approaches, since it seemed to me that as an optimization method they did not address the worst case (where the camera angle changes abruptly).  Using such approaches to improve quality instead (as Epic did) makes more sense.

More GDC conference links

More material from GDC is coming online each day. We have already mentioned the tutorial slides, as well as Intel’s page. GDC’s Vault site has video which is only available to registered attendees (except for sponsored sessions), but the slide decks are available to everyone.  NVIDIA recently put up a new page with their material – even the material previously available from GDC’s own sites is worth getting from here, since the versions on NVIDIA’s page are significantly more up to date.  The videos for NVIDIA’s sponsored sessions are free for everyone and are linked from the NVIDIA page as well.

Lots of OpenGL and OpenCL stuff is available on the Khronos web site, and Jeremy Shopf and Jim Tilander have their respective slides up as well. A Google search for ‘“GDC 2009” slides’ should turn up more as time goes by.

Fast and Furious

Given my last links post referenced “The Fast and the Furious”, I might as well call this post by the 4th movie in that series. Which is bizarrely titled by simply removing two “the”s from the original title. So the 5th movie will just be “Fast Furious”? I can imagine this subtract-a-the for other movies: “Fellowship of Ring”, “Silence of Lambs”, “Singin’ in Rain”, “Back to Future”. Anyway, the goal of this post is to whip through the rest of my links backlog.

I’m still catching up with reading the post-GDC flurry of resources and blog posts and whatnot – you’re on your own. Well, mostly. One or two things: watch the last half of the Unreal 3 new features demo – some nice-looking stuff. Also, the GDC tutorials are available for download; the first set of 7 are what you want. Lots of DirectX 10 and 11 material, from my quick skim. The third talk, by the DICE guys, looked to have some interesting things to say about cascaded shadow maps. Here’s another older presentation, about the Frostbite rendering engine, parallelism, software occlusion culling, ray tracing, and other nice tidbits. What’s also interesting about this one is that it uses a slide hosting site, SlideShare, to hold the presentation. Speaking of slidesets, there are also these from the parallel graphics computing course at SIGGRAPH Asia 2008. 

But say you find you’re required to attend a conference between GDC and SIGGRAPH (if so, I want your job). In that case, EGSR 2009 is coming up at the end of June, in Girona, Spain. This is the conference for rendering research in general. There’s still a week before abstracts are due, so get cracking.

In my last links post I asked for open source that loaded and exported a variety of model types and allowed mesh manipulation. Two people answered back, both pointing to MeshLab. The blog about this package is also worth skimming through.

Also in the previous links article I mentioned the server-side graphics computation model presented by OnLive.  I should also mention AMD’s Fusion Render Cloud project, in the same space. Hmmm, maybe this really could work, with compression, and if you don’t mind some lag.

Gamasutra has a “sponsored” article, but a good one, on Intel’s Threading Building Blocks. I can attest that this component truly does help you take advantage of multiple cores. Knowing at least a bit about TBB is worth your while. Also on Gamasutra is part two of the data alignment article.
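
To give a sense of what using TBB looks like, here is a minimal parallel loop in current TBB style (the array and the per-element work are placeholders).

```cpp
// Minimal Threading Building Blocks example: a parallel loop over an array.
// TBB splits the range into chunks and schedules them across available cores.
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <cstddef>
#include <vector>

int main()
{
    std::vector<float> data(1 << 20, 1.0f);

    tbb::parallel_for(
        tbb::blocked_range<std::size_t>(0, data.size()),
        [&](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                data[i] = data[i] * 2.0f + 1.0f;   // placeholder per-element work
        });
    return 0;
}
```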

There’s a nice rundown of Killzone 2’s graphical features on Brian Karis’ blog.

The sIBL site has some HDR environment maps and manipulation software for download.

Paul Merrell has made plugins available for Max and Blender for his city synthesis procedural modeling research at UNC Chapel Hill.

This diagram of Windows’ graphics makes me think, “it’s just that easy.”

NVScale is an OpenGL-based SDK that lets you use up to four GPUs to store and render extremely large models. It’s nice to see NVIDIA supporting this (non-gaming) area of rendering.

Well-produced tutorial on volume rendering, along with demo code, by Kyle Hayward: part 1, part 2.

There are lots of articles about XNA graphics programming being posted at Ziggyware.

Nothing to do with computer graphics, but this seems like the best computer science class ever.

When nerds and lace-making meet: fractal doilies.

 

Left-Handed vs. Right-Handed World Coordinates

Two years ago I read a blog entry by Pete Shirley about left-handed vs. right-handed coordinates. I started to have a go at explaining these as simply as I could, but kept putting it off, to avoid saying something stupid or confusing. Having just dealt with this issue yet again at work, it’s time to write down my mental model.

One problem in thinking about this area is that there are two places where handedness matters: world coordinates (where stuff is in space) and view coordinates (what we use for the view and perspective matrices). So this first post will be just about world coordinates, as a start. Basic, but let’s get it down to begin.

The way I think about RH vs. LH for world space is there’s an objective reality out there. You are trying to define where stuff is in that reality. You stand on a plane and decide that looking East is the X+ axis, looking North is the Y+ axis (typical Cartesian coordinates). For the Z+ axis you decide that altitudes are positive numbers. That’s an RH coordinate system, and that’s why it’s the one used by most modeling packages, AFAIK (please do let me know if there are any LH modelers). We all likely know about how the right hand is used to explain the counterclockwise twist the three axes form, the right-hand rule. I was also happy to see on this same page how to label your two fingers and thumb to show the coordinate system.

You meet with Marvin the Moleman. He likes Y+ North, X+ East, just like you, but Z+ for him is downwards; his numbers increase as he digs his holes. He’s LH. So he hands you a model of his mole-lair, fully modeled in 3D. Fine, the transform to the RH space you like is a Z-axis reflection, i.e., negate the Z coordinates and, as needed, the normal values. He also gives you a 2D textured rectangle showing the floor plan, a 2D object. Viewing his dataset from above, your and his XYZ coordinates (and UV coordinates) happen to exactly match; the Z flip does nothing to these coordinates, since Z is 0. You flip only the rectangle’s normal direction.

There are an infinite number of ways to transform between LH and RH, not just negate Z. A plane with any orientation and location can be used to mirror the vertices of the model; some planes are just more useful and convenient than others. A quick rule is that negating one axis or swapping two axes transforms from one coordinate system to the other.
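
Here is what that conversion looks like in code, using the mole-lair plane (negate Z) as the mirror; a minimal sketch, with a generic placeholder mesh structure.

```cpp
// Handedness conversion via the simplest mirror: negate Z on positions and
// normals. The index (vertex) order is deliberately left alone; see the
// discussion of winding below.
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh {
    std::vector<Vec3> positions;
    std::vector<Vec3> normals;
    std::vector<int>  indices;   // triangle list, untouched by the conversion
};

void flipHandedness(Mesh& mesh)
{
    for (Vec3& p : mesh.positions) p.z = -p.z;
    for (Vec3& n : mesh.normals)   n.z = -n.z;
}
```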

One interesting property of such conversions is that, even though the normals get flipped along some plane (or perhaps I should say, because the normals get flipped), the clockwise order of the vertices is not affected by conversion between these two model coordinate systems. Which is counterintuitive, on the face of it: if, for example, you do a planar mirror transform of a model so that you can render it again as an object reflected in a mirror (p. 386 onward in RTR3), the mirroring matrix most definitely does change the ordering of the vertices. A clock seen in a mirror is reversed in the direction its hands travel.

The point is simply this: LH and RH are indeed just two ways of describing the same underlying world space. Conversion between the two does not change the world. A clock in the real world will always have its hands move in a clockwise direction, regardless of how you describe that world. I find this “conversion that does not modify clockwise” operation to be like the Zen koan, “It is not the wind that moves. It is not the flag that moves. It is your mind that moves.”

One last bit I found interesting: latitude/longitude. Typically we describe a location on the earth as lat/long/altitude, with North positive for latitude, East positive for longitude. So lat/long/altitude is left-handed, assigning them in this XYZ order. But, I’ve also seen such coordinates listed in longitude/latitude order, e.g., TerraServer USA uses this order. Which is right-handed, since the X and Y coordinates are swapped. In this case all the values are the same, but it’s simply the ordering that changes the handedness.
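
A quick way to check any such convention is the scalar triple product of the three axis directions, taken in the order you list them: positive means right-handed, negative means left-handed. A small sketch, using local East/North/Up directions at a point on the ground:

```cpp
// Handedness check: for axes (X, Y, Z) given as world-space directions,
// dot(cross(X, Y), Z) > 0 means the triple is right-handed.
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

bool isRightHanded(Vec3 x, Vec3 y, Vec3 z) { return dot(cross(x, y), z) > 0.0f; }

int main()
{
    // Directions in which latitude, longitude, and altitude increase.
    Vec3 east{1, 0, 0}, north{0, 1, 0}, up{0, 0, 1};

    std::printf("lat/long/alt (N, E, Up): %s\n",
                isRightHanded(north, east, up) ? "right-handed" : "left-handed");
    std::printf("long/lat/alt (E, N, Up): %s\n",
                isRightHanded(east, north, up) ? "right-handed" : "left-handed");
    return 0;
}
```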

My next posting on this subject will be about LH vs. RH for viewing vs. world coordinates, which is where the real confusion comes in.

Connections: Larrabee, Michael Abrash, Intel, Dr. Dobb’s and me

There has been a spate of Larrabee information during the last two weeks.  Two GDC talks (slides near the bottom of this page), a prototype library, and an article by Michael Abrash on the Dr. Dobb’s website.

Dr. Dobb’s Journal has been out of print since February, but for many years it was one of the leading software publications.  When initially published in 1976 (as Dr. Dobb’s Journal of Computer Calisthenics & Orthodontia) it was the first journal focusing on software development for microcomputers.  Michael Abrash wrote many articles for Dr. Dobb’s over the years, including a series on the Quake software renderer in the mid 90’s.  This series made a great impression on me; when it was published I was considering a career change from microprocessor design to graphics programming.  At the time, I was working on Intel’s P55C processor, publicly known as “Pentium with MMX Technology”.  This chip was notable both for being the first X86 processor with a SIMD (single instruction multiple data) instruction set, and for being the last CPU to use the in-order Pentium micro-architecture.

When Michael Abrash wrote the Quake articles, game rendering was 100% software, mostly written in assembly language.  Abrash was the uber-game programmer, having worked on DOOM, written the Quake renderer, and published (in addition to his Dr. Dobb’s articles) many influential books about graphics programming, assembly and optimization (the last of which is available online).

Within a few years (around the time I finally made the jump from CPU design to game graphics programming), it seemed to many that graphics hardware and compiler improvements had made software rendering and hand-coded assembly obsolete.  This was mirrored by my own experience; I was hired to my first game industry job on the strength of a software rasterization demo (written mostly in assembler) and by the time the game shipped, it required graphics hardware and contained very little assembly (none written by me).  Abrash started applying his considerable skills to what he saw as the next unsolved hard problem: natural language processing.

But he couldn’t stay away from graphics for long; when Microsoft started working on the XBox console he got involved in its design.  In the early 2000s, he figured out that there was a market for software renderers after all, mostly due to the mess of caps bits, non-orthogonal feature support, and flaky compliance that characterized low-end graphics hardware at the time (Intel was among the greatest offenders; compounding the problem, its graphics chips sold very well so there were a lot of them out there).  With Mike Sartain (another XBox designer), he wrote Pixomatic, a software renderer published by RAD Game Tools (until then mostly known for the Miles sound library, perhaps the most widely-used middleware in the games industry).  Of course, he published another series of articles in Dr. Dobb’s about the experience, where he discussed how he made use of SIMD instruction sets such as MMX and SSE when optimizing Pixomatic.

I found this particularly interesting due to my personal involvement with these instruction sets.  After working on the first MMX hardware implementation I helped define its successor, which was twice as wide (128 bits instead of 64) and added support for floating-point SIMD.  This instruction set was at first called MMX2, then VX, and finally split into two separate instruction sets: SSE and SSE2.  By this time SIMD instruction set extensions were becoming quite popular; AMD had their own version called 3DNow!, and PowerPC had the AltiVec instruction set.  Intel kept on adding new SIMD extensions: SSE3, SSSE3, SSE4.1, SSE4.2, and AVX.

As Abrash details in the Larrabee article, Larrabee got started when he decided to talk to Intel about some ideas for SIMD instructions to accelerate software rasterization.  As a result, Larrabee includes a powerful set of SIMD instructions.  Much wider than previous instruction sets (512 bits instead of 128, or 256 in the case of AVX), Larrabee’s instruction set contains several instructions tailored to software rasterization.  It is also general enough to allow for automatic code vectorization of a wide variety of loops.  Abrash had a key role in the design of the instruction set, bringing software rasterization back into the mainstream.

Besides a good instruction set, Larrabee also needed an efficient hardware design with a large number of cores.  Each of these cores needed to be very efficient in terms of performance-per-Watt and per-transistor.  Since the Larrabee team started out as a skunkworks, they couldn’t afford to design a brand-new core, so they looked at previous Intel cores, and the old in-order Pentium core (almost the same one I used in the P55C) was the one chosen.

What I find fascinating about this story is that Abrash managed to follow rasterization all the way around the Wheel of Reincarnation.  This term refers to the common process where a piece of computing functionality is first implemented in software, then moves to special-purpose hardware which gradually becomes more general until it rivals a CPU in complexity, at which point the functionality is folded back into software.  It was coined in a 1968 article by T. H. Myer and Ivan Sutherland (the latter is widely considered the father of computer graphics).