Tag Archives: SSAO

Seven things for July 25

Here we go:

  • Beautiful demo of various effects, the realtime hybrid raytracing demo RIGID GEMS. Do note there are controls. The foreground blurs for the depth-of-field are a little unconvincing, but the rest is lovely! (thanks to Steve Worley for the tip)
  • Books to check out at SIGGRAPH, or now (I’m sure there are more – let me know): OpenGL Insights and Shadow Algorithms Data Miner. Five chapters of OpenGL Insights are free to read here. Quite a few graphics books have been published since last SIGGRAPH; we have them listed here.
  • Scalable Ambient Obscurance looks worthwhile, and there’s even a demo and source.
  • I can’t say I grok it all yet, but Bringing AAA Graphics to Mobile Platforms (from GDC) has a lot of chewy information on what’s fast and slow on typical mobile hardware, as well as how it works. PDF version on the Unreal Engine site.
  • A somewhat older (a whole year or so old) article on changing resolution on the fly to maintain frame rate. (Thanks, Mauricio)
  • 3D printing opens up a wide range of legal issues; It Will Be Awesome if They Don’t Screw it Up gives an overview of some of these. There are a number of areas where the law hasn’t had to concern itself yet.
  • Echo chamber: stuff you should probably know about already, but just in case. 1) Ouya, a monster money-raiser Kickstarter project for an open console. Tim Lottes comments; my take is “Android games on a console? Weak.” but I’d love to see them succeed. 2) Source Filmmaker, a free film making system from Valve. People are getting busy with it.

To conclude, a photo that looks like a rendering bug; read about it here. If you like these sorts of things, see more at the “2 True 2B Good” collection.

I3D 2010 Report

From Mauricio Vives, our first guest blogger; I thank him for this valuable, detailed report.

Written February 26, 2010.

This past weekend I attended the 2010 Symposium on Interactive 3D Graphics and Games, known more simply as “I3D.” It is sponsored by ACM SIGGRAPH, and was held this year in Bethesda, Maryland, just outside Washington. Disclaimer: I work for Autodesk, so much of this report comes from the perspective of a design software developer, but any opinions expressed are my own.

Overview

I3D is a small conference of about 100 people that covers computer graphics and interaction research, principally as it applies to games. I also attended the conference in 2008 near San Francisco, when it was co-chaired by my colleague at Autodesk, Eric Haines.

About half of the attendees are students or professors from universities all over the world, and the rest are from industry, typically game developers. As far as I could tell, I was the only attendee from the design software industry. NVIDIA was well represented both in attendees and presentations, and the other company with significant representation was Firaxis, a local game developer best known for the Civilization series.

The program has a single track, with all presentations given in the same room. Unlike SIGGRAPH, this means that you can literally see everything the conference has to offer, though it is necessarily more focused. As you will see below, I was impressed with the quality and quantity of material presented.

Since this conference is mostly about games, all of the presented research has a focus on a real-time implementation, often for games running at 60 frames per second. Games have a very low tolerance for low frame rates, but they often have static environments and constrained movement, which allows for precomputation and hence high performance and convincing results. Conversely, customers of design software like Autodesk’s products produce arbitrary and changing data, and want the most accurate possible results, so precomputation and approximations are less useful, though a frame rate as low as 5 or 10 fps is often tolerable.

However, an emerging trend in graphics research for games is to remove limitations while maintaining performance, and that was very evident at I3D. The papers and posters generally made a point to remove limitations, in particular so that geometry, lighting, and viewpoints can be fully dynamic, without lengthy precomputation. This is great news for leveraging these techniques beyond games.

In terms of technology, this is almost all about doing work on GPUs, preferably with parallel algorithms. NVIDIA’s CUDA was very well-represented for “GPGPU” techniques that could not use the normal graphics pipeline. With the wide availability of CUDA, a theme in problem-solving is to express as much as possible with uniform grids and throw a lot of threads at it! As far as I could tell, Larrabee was entirely absent from the conference. Direct3D 11 was mentioned only in passing; almost all of the papers used D3D9, D3D10, or OpenGL for rendering.

And a random statistic: a bit more than half of the conference budget was spent on food!

Links

The conference web site, which includes a list of papers and posters, is here.

The Real-Time Rendering blog has a recent post by Naty Hoffman that discusses many of the papers and has links to the relevant author web sites.

Photos from the conference are available at Flickr here. I also took photos at I3D 2008, held at the Redwood City campus of Electronic Arts, which you can find here.

Papers

The bulk of the conference program consisted of paper presentations, divided into a few sessions with particular themes. I have some comments on each paper below, with more on the ones of greater personal interest.

Physics Simulation

Fast Continuous Collision Detection using Deforming Non-Penetration Filters

There is discrete collision detection, where CD is evaluated at various time intervals, and continuous CD, where an exact, analytic result is computed. This paper is about quickly computing continuous CD using some simple expressions that vastly reduce the number of tests between primitives.

Interactive Fluid-Particle Simulation using Translating Eulerian Grids

This was authored by NVIDIA researchers. The goal is a fluid simulation that looks better as processors get more and faster cores, i.e., scalable physics. This is actually a combination of techniques implemented primarily with CUDA, and rendered with a particle system. It allows for very dense and detailed results, and uses a simple trick to have the results continue outside the simulation “box.”

Character Animation

Here there was definitely a theme of making it easier for artists to prepare and animate characters.

Learning Skeletons for Shape and Pose

This is about creating skeletons (bones and weights) automatically from a few starting poses and shapes. The author noted that this was likely the only paper developed almost entirely with MATLAB (!).

Frankenrigs: Building Character Rigs From Multiple Sources

This paper has a similar goal: use existing artist-created character rigs to automatically create rigs for new characters, with some artist control to adjust the results. This relies on a database of rigged parts that an art team probably already has, thus it is a data-driven solution for the time-consuming tasks in character rigging.

Synthesis and Editing of Personalized Stylistic Human Motion

This is about taking a walk animation for a single character, and using that to generate new walk animations for the same character, or transfer them to new characters.

Fast Rendering Representations

Real-Time Multi-Agent Path Planning on Arbitrary Surfaces

Path finding in games is a huge problem, but it is normally constrained to a planar surface. This paper implements path planning on any surface, and does it interactively on both the CPU and GPU using CUDA.

Efficient Sparse Voxel Octrees

Is it time for voxel rendering to make a comeback? These researchers at NVIDIA think so. Here they want to represent a 3D scene using voxels with as little memory as possible, and render it efficiently with ray casting. In this case, the voxels contain slabs (they call them contours) that better define the surface. Ray casting through the generated octree is done using special coordinates and simple bit manipulation. LOD is pretty easy: voxels that are too small are skipped, or the smallest level is constrained, similar to MIP biasing.

This paper certainly had some of the most impressive results from the conference. The demo has a lot of detail, even for large environments, where you would think voxels wouldn’t work that well. One of the statistics about storage was that the system uses 5-8 bytes per voxel, which means an area the size of a basketball court could be covered with 1 mm resolution on a high-end NVIDIA GPU. It combines a lot of techniques that could be useful in other domains, like point cloud rendering. Anyway, I recommend looking at the demo video, and if you want to know more, see the web site, which has code and the compiled demo.
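
As a taste of the bit-manipulation flavor of such structures, here is a common sparse-octree indexing trick, as a generic sketch of my own (the paper’s actual node layout, with contours and packed child pointers, is more elaborate): store an 8-bit mask of which children exist, and index a compacted child array with a population count, so empty octants cost nothing.

    # Generic sparse-octree node sketch; not the paper's actual GPU layout.
    class Node:
        def __init__(self, child_mask, children):
            self.child_mask = child_mask  # bit i set => octant i has a child
            self.children = children      # compacted list of existing children

    def get_child(node, i):
        """Return child i (0-7), or None if that octant is empty."""
        if not (node.child_mask >> i) & 1:
            return None
        # rank of child i among existing children = popcount of the lower bits
        index = bin(node.child_mask & ((1 << i) - 1)).count("1")
        return node.children[index]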

On-the-Fly Decompression and Rendering of Multiresolution Terrain

This paper targets GIS and sci-vis applications that want lossless compression, instead of more-common lossy compression. The technique offers variable rate compression, with 3-12x compression in practice. The decoding is done entirely on the GPU, which means no bus bottleneck, and there are no conditionals on decoding, so it can be very parallel. Also of interest is that decoding is done right in the rendering path, in the geometry shader (not in a separate CUDA kernel), and it is thus simple to perform lighting with dynamically generated normals. This is another paper that has useful ideas, even if you aren’t necessarily dealing with terrain.

GPU Architectures & Techniques

A Programmable, Parallel Rendering Architecture for Efficient Multi-Fragment Effects

The problem here is rendering effects that require access to multiple fragments, especially order-independent transparency, which the current hardware graphics pipeline does not handle well. The solution is impressive: build an entirely new rendering pipeline using CUDA, including transforms, culling, clipping, rasterization, etc. (This is the sort of thing Larrabee has promised as well, except the system described here runs on available hardware.)

This pipeline is used to implement a multi-layer depth buffer and color buffer (A-buffer), both fixed size, where fragments are inserted in depth-sorted order. Compared to depth peeling, this method saves on rendering passes, so it is much faster and has very similar results. The downside is that it is slower than the normal pipeline for opaque rendering, and sorting is not efficient for scenes with high depth complexity. Overall, it is fast: the paper quoted frame rates in the several hundreds, though really the benchmarks should have been complex enough to drop below 100 fps to make the results relevant.
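
To make the A-buffer structure concrete, here is a minimal CPU sketch of my own (not the paper’s CUDA implementation) of the two core operations: depth-sorted insertion into a fixed-size per-pixel list, and a front-to-back resolve.

    import bisect

    K = 8  # fixed number of fragments kept per pixel

    def insert_fragment(fragments, depth, rgb, alpha):
        """fragments: list of (depth, rgb, alpha), ascending depth, length <= K."""
        bisect.insort(fragments, (depth, rgb, alpha))
        if len(fragments) > K:
            fragments.pop()  # discard the farthest fragment

    def resolve(fragments, background=(0.0, 0.0, 0.0)):
        """Front-to-back 'over' compositing of the sorted fragment list."""
        color = [0.0, 0.0, 0.0]
        transmittance = 1.0
        for _, rgb, alpha in fragments:
            for c in range(3):
                color[c] += transmittance * alpha * rgb[c]
            transmittance *= 1.0 - alpha
        return tuple(color[c] + transmittance * background[c] for c in range(3))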

Parallel Banding Algorithm to Compute Exact Distance Transform with the GPU

The distance transform, used to build distance maps like Voronoi diagrams, is useful for a number of image processing and modeling tasks. This has already been computed approximately on GPUs, and exactly on CPUs. This claims to be the first exact solution that runs entirely on the GPU. The big idea, as you might expect, is to implement all phases of the solution in a parallel way, so that it uses all available GPU threads. This uses CUDA, and the results are quite fast, even faster than the existing approximate algorithms.
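
For comparison, the exact sequential algorithm this kind of work parallelizes is the classic lower-envelope-of-parabolas method of Felzenszwalb and Huttenlocher; here is a minimal Python version of my own (the paper’s contribution is decomposing this sort of work into phases that keep thousands of GPU threads busy).

    BIG = 1e20  # "infinity" for empty pixels; finite keeps the math NaN-free
    INF = float("inf")

    def dt1d(f):
        """Exact 1D squared-distance transform of sampled cost function f."""
        n = len(f)
        d = [0.0] * n
        v = [0] * n          # sites of the parabolas in the lower envelope
        z = [0.0] * (n + 1)  # boundaries between adjacent parabolas
        k = 0
        z[0], z[1] = -INF, INF
        for q in range(1, n):
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
            while s <= z[k]:
                k -= 1
                s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
            k += 1
            v[k], z[k], z[k + 1] = q, s, INF
        k = 0
        for q in range(n):
            while z[k + 1] < q:
                k += 1
            d[q] = (q - v[k]) ** 2 + f[v[k]]
        return d

    def edt2d(sites):
        """sites: 2D nested list of booleans (True = site). Exact squared EDT."""
        rows = [dt1d([0.0 if s else BIG for s in row]) for row in sites]
        cols = [dt1d(list(col)) for col in zip(*rows)]
        return [list(row) for row in zip(*cols)]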

Spatio-Temporal Upsampling on the GPU

The results of this paper are almost like magic, at least to my eyes. Upsampling is about rendering at a smaller resolution or fewer frames, and interpolating the in-between results somehow, because the original data is not available or slow to obtain. Commonly available 120 Hz / 240 Hz TVs now do this in the temporal space. There is a lot of existing research on leveraging temporal or spatial coherence, but this work uses both at once. It takes advantage of geometry correlation within images, e.g. using normals and depths, to generate the new useful information.

I didn’t follow all of the details, but the results were surprisingly free of artifacts, at least for the scenes demonstrated. This could be useful any place you might want progressive rendering, such as real-time ray tracing, where rendering at full resolution is very expensive. This technique or some of the ones it references (like this one) could offer much better results than rendering at a lower resolution and doing simple filtering, as is often done for progressive rendering.
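
Here is a sketch of the geometry-aware weighting idea, in my own simplification (the actual paper combines spatial and temporal samples): when pulling coarse shading samples up to a fine pixel, weight each by how well its depth and normal agree with the pixel’s, so shading never bleeds across silhouettes.

    def bilateral_weight(depth_hi, normal_hi, depth_lo, normal_lo,
                         depth_sigma=0.05):
        """Down-weight coarse samples whose geometry disagrees with the pixel."""
        w_depth = 1.0 / (depth_sigma + abs(depth_hi - depth_lo))
        n_dot = sum(a * b for a, b in zip(normal_hi, normal_lo))
        w_normal = max(0.0, n_dot) ** 8  # sharp falloff across creases
        return w_depth * w_normal

    def upsample(pixel_geom, coarse_samples):
        """coarse_samples: list of (bilinear_w, color, depth, normal) neighbors."""
        depth_hi, normal_hi = pixel_geom
        total_w, total_c = 0.0, [0.0, 0.0, 0.0]
        for bw, color, d, n in coarse_samples:
            w = bw * bilateral_weight(depth_hi, normal_hi, d, n)
            total_w += w
            for c in range(3):
                total_c[c] += w * color[c]
        if total_w < 1e-6:
            return None  # no reliable neighbor: fall back to rendering this pixel
        return tuple(c / total_w for c in total_c)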

Scattering and Light Propagation

Cascaded Light Propagation Volumes for Real-Time Indirect Illumination

This paper almost certainly had the most “street cred” by virtue of being developed by game developer Crytek. Simply put, this is a lattice-based technique for real-time indirect lighting. The most important features are that it is fully dynamic, scalable, and costs around 5 ms per frame. A very quick overview of how it works: render a reflective shadow map for each light, initialize the grid with this information to define many secondary light sources, iteratively propagate light from each cell into its 6 adjacent cells (evaluated over 30 directions, the visible faces of the neighboring cells), approximate the results with spherical harmonics, and render.

To manage performance and storage, this uses cascades (several levels of detail) relative to the viewer, hence the use of the term “cascaded” in the title. The same data and technique can be used to render secondary occlusion, multiple bounces, glossy reflections, participating media using ray marching… just a crazy amount of nice rendering stuff. The use of a lattice has some of its own quality limitations, which they discuss, but nothing too bad for a game. This was a lot to take in, and I did not follow all of the details, but the results were very inspiring. Apparently this will appear in the next version of their game engine, which means consumers will soon come to expect this. Crytek apparently also discussed this at SIGGRAPH last year.
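
To make the propagation step concrete, here is a deliberately crude sketch of my own: scalar intensity only, where the real technique propagates spherical-harmonics coefficients and weights the neighbor faces, so all directionality is lost here. The wrap-around borders from np.roll are another simplification; a real implementation clamps at the volume edges.

    import numpy as np

    def propagate(injected, iterations=8):
        """injected: 3D array of light injected from the reflective shadow maps."""
        moving = injected.copy()   # light still in flight
        accum = injected.copy()    # total light arriving at each cell
        for _ in range(iterations):
            nxt = np.zeros_like(moving)
            for axis in range(3):
                for shift in (-1, 1):
                    # each cell hands 1/6 of its light to each face neighbor
                    nxt += np.roll(moving, shift, axis=axis) / 6.0
            accum += nxt
            moving = nxt
        return accum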

Interactive Volume Caustics in Single-Scattering Media

Caustics is basically “light focusing,” and scattering media is basically “fog / smoke / water,” so this is about rendering them together interactively, e.g. stage lights at a concert with a fog machine, or sunlight under water. It is fully dynamic, and offers surprisingly good quality under a variety of conditions. It is perhaps too slow for games, but would be fine for design software or a hardware renderer that can take a few seconds to render.

Epipolar Sampling for Shadows and Crepuscular Rays in Participating Media with Single Scattering

This paper has a really long title, but what it is trying to do is simple: render rays of light, a.k.a. “god rays.” Normally this is done with ray marching, but that is still too slow for reasonable images, and simple subsampling doesn’t represent the rays well. The authors observed that radiance along the ray “lines” doesn’t change much, except at occlusions, which leads to the very clever idea of the paper: construct the (epipolar) lines in 2D around the light source, and sparsely sample along the lines, adding more samples at depth changes. The sampling data is stored as a 2D texture, one row per line, with the samples in columns. It’s fast, and looks great.
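
Here is a toy 1D version of the sparse-then-refine sampling along one line, of my own devising; the depths array and ray_march callback are hypothetical stand-ins for the depth buffer and the expensive in-scattering evaluation.

    import numpy as np

    def shade_epipolar_line(depths, ray_march, base_step=16, depth_eps=0.05):
        """depths: per-pixel depth along one epipolar line.
        ray_march(i): expensive in-scattering evaluation at pixel i."""
        n = len(depths)
        samples = set(range(0, n, base_step)) | {n - 1}
        # refine where depth changes sharply: occlusion boundaries create
        # the crisp edges of the light shafts
        coarse = sorted(samples)
        for a, b in zip(coarse, coarse[1:]):
            if abs(depths[b] - depths[a]) > depth_eps:
                samples.update(range(a, b + 1, max(1, (b - a) // 8)))
        xs = sorted(samples)
        ys = [ray_march(i) for i in xs]
        # interpolate the cheap in-between pixels from the sparse samples
        return np.interp(np.arange(n), xs, ys)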

NPR and Surface Enhancement

Interactive Painterly Stylization of Images, Videos and 3D Animations

This is another title that directly expresses its goal. Here the “painterly” results are built by a pipeline for stroke generation, with many thousands of strokes per image, which also leverages temporal coherence for animations. It can be used on videos or 3D models, and runs entirely on the GPU. If you are working with NPR, you should definitely look at their site, the demo video, and the referenced papers.

Simple Data-Driven Modeling of Brushes

A lot of drawing programs have 2D brushes, but real 3D brushes can represent and replace a large number of 2D brushes. However, geometrically modeling the brush directly can lead to bending extremes that you (as an artist) usually want to avoid. In this paper from Microsoft Research, the modeling is data-driven, based on measuring how real brushes deform in two key directions. The brush is geometrically modeled with only a few spines having a variable number of segments as bones.

This has some offline precomputation, but most of the implementation is computed at run time. This was one of the few papers with a live demo, using a Wacom tablet, and it was made available for attendees to play with. See an example from an attendee at the Flickr gallery here.

Radiance Scaling for Versatile Surface Enhancement

This is about rendering geometry in such a way that surface contours are not obscured by shading. This is the problem that techniques like the “Gooch” style try to solve. However, the technique in this paper does it without changing the perceived material, sort of like an advanced sharpening filter for 3D models.

It describes a scaling function based on curvature, reflectance, and some user controls, which is then trivially multiplied with the normally rendered image. The curvature part is from a previous paper by the authors, and reflectance is based on BRDF, where you can enhance BRDF components independently. You should definitely have a quick look at the results here.

Shadows and Transparency

Volumetric Obscurance

This is yet another screen-space ambient occlusion (SSAO) technique. Instead of point sampling, it samples lines (or beams of area) to estimate the volume of sample spheres that is obscured by surrounding geometry. It claims to get smoother results than point sampling, without requiring expensive blurring, and with performance equal to (or even better than) that of point sampling. You can see some results at the author’s site here. While it has a few interesting ideas, this may or may not be much better than an existing SSAO implementation you may already have. I found the AO technique in one of the posters (see below) more compelling.
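
My reading of the line-sampling idea, as a hedged sketch (the paper also discusses area samples and other estimators): the length of each sample line inside a sphere around the pixel is known analytically, and the depth buffer simply clips it. Every name and parameter below is my own invention for illustration.

    import math, random

    def volumetric_obscurance(depth, px, py, screen_r=8.0, eye_r=0.5, n=16):
        """depth(x, y): linear eye-space depth lookup. Estimate the unoccluded
        fraction of a sphere of radius eye_r centered on the pixel's surface
        point by integrating line segments through the sphere."""
        d0 = depth(px, py)
        visible = 0.0
        for _ in range(n):
            # uniform sample on the unit disk (rejection sampling)
            while True:
                u, v = random.uniform(-1, 1), random.uniform(-1, 1)
                if u * u + v * v <= 1.0:
                    break
            # half-length of the chord through the sphere, in eye units
            half = max(1e-4, math.sqrt(1.0 - u * u - v * v) * eye_r)
            d = depth(px + u * screen_r, py + v * screen_r)
            # part of the segment [d0 - half, d0 + half] in front of the scene
            visible += max(0.0, min(1.0, (d - (d0 - half)) / (2.0 * half)))
        return visible / n  # 1 = fully open, 0 = fully obscured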

Stochastic Transparency

This was selected as the best paper of the conference. Like one of the earlier papers, this tries to deal with order-independent transparency, but it does it very differently. The author described it as “using random numbers to approximate order-independent transparency.” It has a nice overview of existing techniques (sorting, depth peeling, A-buffer). The new technique does away with any kind of sorting, is fast, and requires fixed memory, but is only approximate. It was demonstrated interactively on some very challenging scenes, e.g. thousands of transparent strands of hair and blades of grass.

The idea is to collect rough statistics about pixels, similar to variance shadow maps, using a combination of screen-door transparency, multisampling (MSAA), and random masks per fragment (with D3D 10.1). This can generate a lot of noise, so much of the presentation was devoted to mitigating that, such as using per-primitive random number seeding to look OK in motion. This is also extended to shadow maps for transparent shadows. Since this takes advantage of MSAA and is parallel, quality and performance will increase with normal trends in hardware. It was described as not quite fast enough for games (yet), but (again) it might be fast enough for other applications.
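
The core statistical trick is simple enough to simulate on the CPU. Here is a toy Monte Carlo version of my own, with per-sample random numbers standing in for the D3D 10.1 coverage masks: in expectation, each fragment contributes alpha times the product of (1 - alpha) of everything in front of it, which is exactly sorted “over” compositing, with no sorting performed.

    import random

    BACKGROUND = (0.0, 0.0, 0.0)

    def stochastic_resolve(fragments, num_samples=64):
        """fragments: list of (depth, rgb, alpha) in ARBITRARY order.
        Each sample keeps only the nearest fragment whose random mask
        covers it; averaging the samples estimates the transparent result."""
        samples = []
        for _ in range(num_samples):
            best_depth, best_rgb = float("inf"), BACKGROUND
            for depth, rgb, alpha in fragments:
                # screen-door test: fragment covers this sample with
                # probability alpha; ordinary depth test does the rest
                if random.random() < alpha and depth < best_depth:
                    best_depth, best_rgb = depth, rgb
            samples.append(best_rgb)
        return tuple(sum(s[c] for s in samples) / num_samples
                     for c in range(3))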

Fourier Opacity Mapping

The goal of this work is to add self-shadowing to smoke effects, but it needs to be simple to integrate, scalable, and execute in just a few milliseconds. The technique is based on opacity shadow mapping (2001), which stores a transmittance function per texel, but has significant visual artifacts. Here a Fourier basis is used to encode the function, and you can adjust the number of coefficients (samples) to determine the quality / performance tradeoff. Using just a few coefficients results in “ringing” of the function, but it turns out that’s OK for smoke and hair. The technique was apparently implemented successfully in last year’s Batman: Arkham Asylum.
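
Here is my paraphrase of the math as a sketch, with function names of my own; in the real technique the coefficient accumulation is just additive blending of particles into small map textures, and the reconstruction happens in the shader.

    import math

    def fom_coefficients(particles, K=4):
        """particles: (depth, opacity) pairs along a light ray, depth in [0,1].
        Accumulate truncated Fourier coefficients of the extinction function."""
        a = [0.0] * (K + 1)
        b = [0.0] * (K + 1)
        for d, opacity in particles:
            sigma = -math.log(max(1e-6, 1.0 - opacity))  # opacity -> density
            a[0] += 2.0 * sigma
            for k in range(1, K + 1):
                a[k] += 2.0 * sigma * math.cos(2.0 * math.pi * k * d)
                b[k] += 2.0 * sigma * math.sin(2.0 * math.pi * k * d)
        return a, b

    def transmittance(a, b, z):
        """Reconstruct transmittance at depth z by integrating the series."""
        tau = 0.5 * a[0] * z
        for k in range(1, len(a)):
            w = 2.0 * math.pi * k
            tau += (a[k] / w) * math.sin(w * z) \
                 + (b[k] / w) * (1.0 - math.cos(w * z))
        return math.exp(-max(0.0, tau))  # clamp: ringing can go negative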

Normals and Textures

Assisted Texture Assignment

This paper is about making it much easier and faster for artists to assign textures to game environments (levels). It is an ambiguous problem, with limited input to make decisions. The solution relies on adjacency and shape similarities, e.g. two surfaces that are parallel are likely to have the same texture. The artist picks a surface, and related surfaces are automatically chosen. After a few textures are assigned, the system produces a list of candidate textures based on previous choices. There is some preprocessing that has to be performed, but once ready, the system seems to work great. Ultimately this is not about textures; rather, this is an advanced selection system.

LEAN Mapping

“LEAN” is a long acronym for what is essentially antialiasing of bump maps. Without proper filtering, minified bump maps produce incorrect specular highlights: the highlights change intensity and shape as the bump map gets small in screen space. The paper implements a technique for filtering bump maps using some additional data on the distribution of bumped normals, which can be filtered like color textures. The math to derive this is not trivial, but the implementation is simple and inexpensive.

The results look great in motion, at glancing angles, minified, magnified, and with layered maps. It also has the distinctive property of turning grooves into anisotropy under minification, something I have never seen before.
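
The heart of the technique fits in a few lines. Here is a sketch of my own of building and consuming the two extra maps: store the first and second moments of the bump normals’ slopes, filter both like ordinary textures, and recover a covariance that drives an anisotropic highlight.

    import numpy as np

    def lean_maps(normals):
        """normals: (H, W, 3) array of unit bump-map normals, nz > 0.
        Returns the two maps; both can be MIP-filtered like color textures."""
        bx = normals[..., 0] / normals[..., 2]
        by = normals[..., 1] / normals[..., 2]
        B = np.stack([bx, by], axis=-1)                      # first moments
        M = np.stack([bx * bx, by * by, bx * by], axis=-1)   # second moments
        return B, M

    def filtered_roughness(B, M):
        """After filtering, the covariance of the slope distribution, which
        feeds a Beckmann-style anisotropic highlight at render time."""
        cov_xx = M[..., 0] - B[..., 0] ** 2
        cov_yy = M[..., 1] - B[..., 1] ** 2
        cov_xy = M[..., 2] - B[..., 0] * B[..., 1]
        return cov_xx, cov_yy, cov_xy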

Efficient Irradiance Normal Mapping

There are a few well-known techniques in games for combining light mapping and normal mapping, but they are very rough approximations of the “ground truth” results. This paper introduces an extension based on spherical harmonics, but only over a hemisphere, that significantly improves the quality of irradiance normal mapping. Strangely, no mention was made of performance, so I would have to assume that it runs as fast as the existing techniques, just with different math.

Posters

The posters session was preceded by a brief “fast forward” presentation with each author having a minute to describe their work. There were about 20 posters total, and I have comments on a few of them.

Ambient Occlusion Volumes (link)

This is a geometric solution to the problem of rendering convincing ambient occlusion, compared to the screen-space (SSAO) techniques which are faster, but less accurate. The results are very close to ray-traced results, and while it appears to be too slow for games right now (about 30 ms to render), that will change with faster hardware.

Real Time Ray Tracing of Point-based Models (link)

The title says it all. I didn’t look into this too much, but I wanted to highlight it because it is getting cheaper to get point cloud data, and it would be great to be able to render that data with better materials and lighting.

Asynchronous Rendering

This poster has an awfully generic name, but it is really about splitting rendering work between a server and a low-spec client, like a mobile phone. In this case, the author demonstrated precomputed radiance transfer (PRT) for high-quality global illumination, where the heavy processing was done on the server, while still allowing the client (here it was an iPhone) to render the results and allow for interactive lighting adjustments. For me the idea alone was interesting: instead of just having the server or client do all the work, split it in a way that leverages the strengths of each.

Speakers

A few academic and industry speakers were invited to give 90-minute presentations.

Biomechanical and Artificial Life Simulation of Humans for Computer Animation and Games

The keynote address was given by Demetri Terzopoulos of UCLA. I was not previously familiar with his work, but apparently he has a very long resume of work in computer graphics, including one of the most cited papers ever. The talk was an overview of his research from the last 15 years on modeling human geometry, motion, and behavior. He started with the face, then the neck, and then the entire body, each modeled in extensive detail. His most recent model has 75 bones, 846 muscles, and 354,000 soft tissue elements.

The more recent work is in developing intelligent agents in urban settings, each with a set of social behaviors and goals, though with necessarily simple physical models. The eventual and very long-term goal is to have a full-detail physical model coupled with convincing and fully autonomous behavior.

Interactive Realism: A Call to Arms

The dinner talk was given by Peter Shirley of NVIDIA. This was the “motivational” talk, with his intended goal of having computer graphics that are both pleasing and predictive. Some may think that we have already reached the point of graphics that are “good enough,” but he disagrees. He referenced recent games and research to point out the areas that he feels need the most work. From his slides, these are:

  • Volume lighting / shadowing
  • Indoor-outdoor algorithms
  • Coarse / fine lighting
  • Artist / designer-in-the-loop
  • Motion blur and defocus blur
  • Material models
  • Polarization
  • Tone mapping

He concluded with some action items for the attendees, which includes reforming the way computer graphics research is done, and lobbying for more funding. From the talk and subsequent Q&A, it looks like a lot of people are not happy with the way SIGGRAPH handles papers, a world I know very little about.

The Evolution of Precomputed Lighting for Games

The capstone address was given by Peter-Pike Sloan of Disney Interactive Studios. He presented essentially a history of precomputed lighting for games, from Quake to Halo 3 and beyond. Such lighting trades off flexibility for quality and performance, i.e. you can get very convincing and fast lighting with some important restrictions. This turned out to be a surprisingly large topic, split mostly between techniques for static and dynamic elements, like environments and characters, respectively.

You may wonder why this is relevant beyond a history lesson, given that the trend in research is toward techniques that require no precomputation, lighting included. But precomputed lighting is still relevant for low-end hardware, like mobile devices, and for cases where artist control is more important than automated results.

Wrap It Up!

Thanks for making it this far. As you can see, it was a very busy weekend! Like the 2008 conference, this was a great opportunity to see the state-of-the-art in computer graphics and interaction research in a more intimate setting. I hope this was useful, and please reply here if you have any comments.

Shadow Survey from SIGGRAPH Asia 2009 course

I asked Daniel Scherzer about my post about his book. He said it’s about right (and the long subtitle is indeed a Verlag decision).

The cool thing that turned up: their upcoming STAR survey on hard shadows will be more theoretical and detailed than his thesis’ survey. It will be similar to the hard shadow section in the SIGGRAPH Asia 2009 course Casting Shadows in Real Time (which has a solid 90 pages on direct illumination shadow algorithms, plus more on related methods).

I’ve updated the original post with this information, but wanted to make a separate post so that people wouldn’t miss this valuable overview on shadows and ambient occlusion (just a tiny bit on the latter).

Clearing the Queue

I’ve had a goal this week (it should be clear by now) of clearing my queue of stored-up RTR links by my birthday, today! (Hint: I want a pony.) So excuse the excessively-long list o’ links. Next task on my list: update the main RTR page itself.

  • StructureSynth. This looks pretty cool, and I love procedural models (my ancient SPD package was all about this, back in the days when downloading models was oppressively slow). I do wish they just provided an executable – building looks like a pain.
  • That previous link was on Meshula.net, which also blogs about Pixel Bender Fractals. Great stuff, sort of steampunk computer graphics: you must click this link, if no other on this page, and look on in awe.
  • Shapeways has a blog, and it’s not just dull company announcements. I’m glad they find people as pixels as interesting as I do. They also cover exporting Spore characters to Collada files (which is a great addition to Spore) and creating physical models from these.
  • In related news, The Economist has a reasonable summary of some trends in 3D printing. Their Technology Quarterly also has articles on Augmented Reality, 3D displays, and CAPTCHAs, among other topics.
  • This is one more reason the Internet is great: an in-depth article on normal compression techniques, weighing the pros and cons of each. This sort of article would probably not see the light of day in traditional publications, even Game Developer – too long for them, but all the info presented here is worthwhile for a developer making this decision. Aras’ blog has other nice bits such as packing a float into RGBA (sketched after this list) and SSAO blurring.
  • I need to add a link to the article itself to the object intersection page, but Morgan McGuire recently verified that he found this ray/box algorithm super-fast in SIMD. Code’s downloadable from that page; a free version of the article is downloadable here. Morgan uses this test in the ray tracer for his cool photon mapping paper at HPG 2009; if nothing else, you should at least see the video.
  • In related news, I am happy to see that AK Peters is beginning to put past journal of graphics tools articles online. At $15 each, the price of an article is quite high for individuals (or at least this individual), but current journal of graphics (gpu, & game) tools subscribers have full access to this archive for free. The mechanism to get access is a little clunky right now: if you’re a subscriber, you need to register with Metapress, then tell AK Peters your userid and they’ll provide you access.
  • Related to this, I hope Google Books conquers the world (or anyone else doing similar work, as long as it isn’t Apple or Amazon or other overcharging closed-box “we’re just protecting the authors, who get 10% or less for a purely digital sale with nil physical cost to us per unit” retailers – rant over, and I do understand there are fixed start-up costs for the retailer/publisher/etc., but really…). Google Books is so darn handy for looking up short articles in books in Google’s repository, such as this one giving a clean way to build an orthonormal basis given a vector, from Graphics Tools: The JGT Editors’ Choice (also sketched after this list).
  • Humus provides a whole slew of new cubemaps he captured, if you’re getting tired of Grace Cathedral.
  • CUDA itself (vs. others) may or may not be a critical technology, but what it shows about the underlying GPU architecture is fascinating.
  • It should be mentioned: August 2009 DirectX SDK is available. Includes the first official release of DirectX 11.
  • This is hilarious, and possibly even useful!
  • I love seeing things like this: build your own multitouch display. Not that I’ll ever do it, but I hope others will.
  • You might be sick of Larrabee news (ship one, already!), but I found Phil Taylor’s article pleasantly hype-free and informative.
  • ATI’s Eyefinity (cute marketing name, I must admit – now I want to use the word everywhere) seems to me to solve a problem that rarely occurs: too much GPU for too few screens. Still, it’s nice to have the option. Eyefinity allows up to six monitors to be driven by a single GPU. I guess Eyefinity is useful when running older flight simulator programs on newer GPUs; otherwise, Eyefinity is pretty irrelevant. Eyefinity, eyefinity, eyefinity. At work I find two displays is plenty, one to run, one to debug. Anyway, the sweet spot for the monitor:GPU ratio is 13:1, as can be seen here:
    Flight Simulator - living the dream
  • There’s an article on instancing animated grass using DX10 on Gamasutra.
  • Humus’ post on z interpolation is a good summary of the topic. He gives some of the key tricks, e.g., if you’re using floating point, use near=1.0 and far=0.0 to help preserve precision.
  • Here’s a basic tutorial on different projection methods used in videogames, with lots of visual examples (add “Zaxxon” and it’s complete, for me). The one new tidbit I learnt from it was about reverse perspective, an effect I’ve accidentally created every now and then when I screw up a projection matrix.
  • While I’ve been on break (one of the reasons I’ve been posting so much – Autodesk gives wonderful 6 week “sabbaticals”, aka “long vacations”, to U.S. employees every four years you’re there; it’s like being French or Swedish every fourth year), the rest of the company’s been busy: this new sketch application for the iPhone looks pretty cool, at the usual $2.99 “cup of coffee” type price.
  • Caustics can be dangerous. I can attest to this myself; a goofy award Andrew Glassner gave me long ago sat on my windowsill for years (I moved once, as you should discern from the picture), until I noticed what was happening to the base:
    caustics
  • I usually don’t have time to keep up with Slashdot, but SeenOnSlash, the funny bits of Slashdot, is sometimes entertaining. Graphics-related example: AMD’s latest chip.
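
Two of the items above are small enough to sketch here. First, packing a float into RGBA: a generic base-256 scheme (my own, not necessarily Aras’ exact shader formulation, which has to work within frac and multiply) treats the value as 32-bit fixed point, one byte per channel.

    def pack_float_rgba(v):
        """Encode v in [0, 1) as four bytes (one per channel), MSB first."""
        x = int(v * 2 ** 32)  # 32-bit fixed point
        return ((x >> 24) & 255, (x >> 16) & 255, (x >> 8) & 255, x & 255)

    def unpack_float_rgba(r, g, b, a):
        return ((r << 24) | (g << 16) | (b << 8) | a) / 2 ** 32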
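
Second, building an orthonormal basis from a vector, in the spirit of the JGT piece (my sketch, not the article’s code): cross the vector with whichever coordinate axis it is most orthogonal to, then complete the frame with a second cross product.

    import math

    def orthonormal_basis(n):
        """Given unit vector n, return unit tangent t and bitangent b."""
        x, y, z = n
        ax, ay, az = abs(x), abs(y), abs(z)
        if ax <= ay and ax <= az:     # x is the smallest component
            t = (0.0, -z, y)          # e_x cross n
        elif ay <= az:                # y is the smallest component
            t = (z, 0.0, -x)          # e_y cross n
        else:                         # z is the smallest component
            t = (-y, x, 0.0)          # e_z cross n
        length = math.sqrt(t[0] ** 2 + t[1] ** 2 + t[2] ** 2)
        t = (t[0] / length, t[1] / length, t[2] / length)
        b = (n[1] * t[2] - n[2] * t[1],
             n[2] * t[0] - n[0] * t[2],
             n[0] * t[1] - n[1] * t[0])
        return t, b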

7 Things for July 19th

Seven more:

  • Michael Abrash has an in-depth article on rasterization on Larrabee. Perhaps a little too in-depth at times; just skim past the assembly instructions. I also found myself asking, “why do that?” – the key is to just keep reading. He tries to make his examples simple and comprehensible, but at the cost of sometimes feeling like they’re oversolving the problem. They aren’t; it’s just that the solution is in fact used in different circumstances in order to be efficient.
  • SIGGRAPH has an interactive rendering event summary page. This page is more for the art production side of things, though; Naty’s summaries of the courses, talks, and production sessions are more comprehensive and more useful for programmer attendees.
  • NVIDIA has a number of events they’re involved in at SIGGRAPH 2009. Here’s the list.
  • I love this sort of madness: a business-card ray tracer that does depth of field.
  • Accumulated SSAO: the idea of reprojection – using previous results by finding where they lie in this frame’s view – is one that seems a tad expensive for interactive rendering (the reprojection step itself is sketched after this list). It’s hard to know anything about performance and quality from this page, but I thought it was interesting to see.
  • I mentioned Processing in the last post. Another language-related resource for graphics and game programming is pygame, a set of Python modules for writing games. A friend said he found this system to be pretty great, that he could whip up a fairly involved game idea in a few hours.
  • Scribblenauts sounds like the coolest game that will ever come out, period. Even if it’s only 1/10th as good as the previews read, it looks to be pretty darn entertaining.
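
As promised in the Accumulated SSAO item, the reprojection step is just a transform by last frame’s matrices. A generic sketch of my own (not the author’s code, and ignoring the usual window-coordinate y flip):

    import numpy as np

    def reproject(world_pos, prev_view_proj, width, height):
        """world_pos: (3,) point visible this frame; prev_view_proj: 4x4
        view-projection matrix from LAST frame. Returns last frame's pixel
        coordinates, or None if the point was off screen then."""
        p = np.append(world_pos, 1.0)
        clip = prev_view_proj @ p
        if clip[3] <= 0.0:
            return None  # behind last frame's camera
        ndc = clip[:3] / clip[3]
        x = (ndc[0] * 0.5 + 0.5) * width
        y = (ndc[1] * 0.5 + 0.5) * height
        if 0 <= x < width and 0 <= y < height:
            return x, y  # fetch last frame's AO here and blend
        return None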

More Statistics

One followup to Naty’s article (below): Ke-Sen Huang’s page has submission and acceptance stats for many recent conferences.

If you have five minutes to kill, it’s fun to search on various phrases at the Google Trends site. Buzzwords like “cloud computing” have trackable data, but most graphics terms don’t have enough traffic to be worth recording. Here are some examples of graphics-related terms that have sufficient hit-counts:

  • Ray Tracing – I like how Google Trends points out relevant articles for various spikes.
  • SSAO – some definite spikes there, and what’s with all the traffic from Brazil? Is this the end of some word in Portuguese? But there aren’t really hits before 2007, so I guess it’s real…
  • Collision detection, SIGGRAPH, and computer graphics – is interest in these areas waning, or are they simply established and not newsworthy? But then, GPU is going up.
  • Companies and products are fun to try: Larrabee, NVIDIA, Crytek.
  • You can also compare various terms. Here’s “DirectX programming, OpenGL programming, iPhone programming”. Pretty easy to guess which one is going up. Surprisingly un-spikey for DirectX and OpenGL.
  • And of course, Real-Time Rendering – Various random spikes; South Korea loves us.
Happy hunting, and please do comment if you find any interesting results.

Yet Another Free Book

So I had such plans for all the things I’d get done during the holiday break. Well, at least I fixed our bathtub faucet, and kept the world safe from/for zombies in Left4Dead versus mode.

In contrast, Wolfgang Engel, Jack Hoxley, Ralf Kornmann, Niko Suni, and Jason Zink did something nice for the world: gave us a DirectX 10 book for free online. There’s more information about it at the site hosting it, gamedev.net. To quote Jack Hoxley, “It’s more of a hands-on guide to the API at a fairly introductory/intermediate level so doesn’t really break any new ground or introduce any never-seen-before cool new tricks, but it should bump up the amount of D3D10 content is available for free online.”

There are some great topics covered, including a thorough treatment of shading models, lots about post-processing effects, and an SSAO implementation (I disagree a bit with their specific implementation in theory – convex objects should never have self-shadowing, which is why you usually start by ignoring the half of the samples that are obscured, as sketched below – but SSAO is so hacky that it should be considered an artistic effect as much as a photorealistic one). Lots of chewy stuff here.
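
To be clear about the rejection rule I mean, here is a generic point-sampling SSAO sketch of my own (not the book’s code, and crude in every other respect); the tangent-plane test is the key line.

    import random

    def ssao(depth, normal, px, py, screen_r=8.0, bias=1e-3, n_samples=16):
        """depth(x, y), normal(x, y): linear depth and eye-space normal
        lookups. Returns an ambient accessibility estimate in [0, 1]."""
        n = normal(px, py)
        d0 = depth(px, py)
        occluded, used = 0, 0
        for _ in range(n_samples):
            o = [random.uniform(-1.0, 1.0) for _ in range(3)]
            # tangent-plane test: samples under the surface would count as
            # "occluded" even on a convex object, so skip them
            if o[0] * n[0] + o[1] * n[1] + o[2] * n[2] <= 0.0:
                continue
            used += 1
            d = depth(px + o[0] * screen_r, py + o[1] * screen_r)
            if d < d0 - bias:  # something sits in front of the sample
                occluded += 1
        return 1.0 - occluded / max(1, used)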

Don’t be fooled because the book is only on the web, by the way. This is a high-quality effort: well-illustrated, the sections I sampled were readable and worthwhile, and there are solid code snippets throughout. The authors didn’t work out a print deal that they liked, so released the book to the web. You can see its original listing on Amazon. To quote Jack again, “We’re all glad it’s now out … for everyone to use.”

If you find errors or problems in the book, please let the authors know – the whole book is on a wiki, so you can add discussion notes (note: I found the wiki doesn’t work well with Chrome, but Internet Explorer worked fine). As the gamedev.net article notes, this release may form the basis of a book on DirectX 11, so could be considered something of a free beta. Please do reward the authors for their hard work by contributing feedback to them.

Update: Do keep in mind that this is a first draft (i.e., cut them some slack). Reading more bits, quality varies by section. I trust the authors will read and fix each others’ work as time goes on. I do like the wiki element. For example, there are some comments from Greg Ward in the corresponding discussion page for the implementation of the Ward shading model that should help improve their text.

2009 Conference Paper Preprints

The ever-amazing Ke-Sen Huang already has paper pages up for I3D 2009 and Eurographics 2009. Both conferences are currently in that twilight zone where the authors have been notified (and are putting notifications and preprints on their web pages) but the official paper list has not yet been published.

There are already several interesting papers there: Approximating Dynamic Global Illumination in Image Space (available here) extends the popular SSAO (screen-space ambient occlusion) technique to support directional occlusion and single-bounce diffuse reflection. Automatic Linearization of Nonlinear Skinning (available here) introduces a method to automatically place virtual bones, resulting in quality similar to dual quaternion skinning but using traditional linear skinning. Multiresolution Splatting for Indirect Illumination (available here) speeds up reflective shadow maps by using a multiresolution data structure. Bounding volume hierarchies are important for many algorithms (including ray tracing), so a method to rapidly construct them on the fly is useful. Such a method is detailed in Fast BVH Construction on GPUs (paper web page here). The final paper has a somewhat self-explanatory title: Temporal Glare: Real-Time Dynamic Simulation of the Scattering in the Human Eye (available here).

Two papers, although lacking preprints as of yet, have particularly interesting titles, and I look forward to reading them: Soft Irregular Shadow Mapping: Fast, High-Quality, and Robust Soft Shadows and Real-Time Fluid Simulation using Discrete Sine/Cosine Transforms.

Interesting bits

I’ve been collecting links via del.icio.us of things for the blog. Let’s go:

Antialiasing thick lines by using textures is an old technique. Areakkusu’s site is nice in that it has good examples and code.

The Level of Detail blog has a great pointer to Slisesix’s amazing demo. “Demo” as in “Demoscene,” where his program is a mere 4k bytes in size. It’s not animated, not real-time, but shows how distance fields could be used for ambient occlusion approximation. Definitely check out all the links: Alex Evans (of LittleBigPlanet) has a worthwhile talk, and Iñigo’s presentation is even better: good technical content and real-time programs running inside the slides.
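
The distance-field AO idea is small enough to show whole. This is my paraphrase of the trick from Iñigo’s presentation, with parameter names of my own: march along the normal and compare the distance walked against the distance field; nearby geometry makes the field value fall short of the walked distance.

    def distance_field_ao(dist, p, n, steps=5, delta=0.2, strength=1.0):
        """dist(q): signed distance from point q to the scene;
        p: surface point; n: unit surface normal at p."""
        occlusion = 0.0
        for i in range(1, steps + 1):
            t = i * delta
            q = (p[0] + n[0] * t, p[1] + n[1] * t, p[2] + n[2] * t)
            # in open space dist(q) == t and the term vanishes; nearby
            # geometry makes dist(q) < t; halve the weight at each step
            occlusion += (t - dist(q)) / 2 ** i
        return max(0.0, min(1.0, 1.0 - strength * occlusion))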

I’d rather avoid logrolling in this blog, but did want to mention enjoying Christer Ericson’s post on graphical shader systems. I have to agree that such systems are bad for creating efficient shaders, but these tools do at least allow a wider range of people to experiment and explore. There are a lot of worthwhile followup comments on this thread.

Oogst has a clever trick he calls interior mapping, for rendering walls, floors, and ceilings for buildings seen from the outside. Define a texture to be used for each interior element, and have the pixel shader compute from the eye direction what would be seen inside. There’s no actual geometry; it’s all just computing the ray intersection using (wait for it) a floor function. Humus has demo code available for this technique, using DirectX 10. Admittedly, the various tiles repeat and there are other limits, but actual interiors are vastly superior to the usual dirty or reflective windows currently used in games, with no extra geometry added.
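
Here is my reconstruction of the intersection math (not Oogst’s or Humus’ code): from the point on the facade, find the nearest interior plane along the view ray on each axis, where the floor function selects the right plane from the infinite grid of walls, floors, and ceilings.

    import math

    def interior_hit(p, d, spacing=1.0, eps=1e-8):
        """p: point on the facade (nudged just inside the building volume);
        d: normalized view direction heading into the building.
        Interior planes lie every `spacing` units on each axis."""
        best_t, best_axis = float("inf"), -1
        for axis in range(3):
            if abs(d[axis]) < eps:
                continue  # ray parallel to this family of planes
            # index of the next plane along the ray, courtesy of floor()
            k = math.floor(p[axis] / spacing) + (1 if d[axis] > 0.0 else 0)
            t = (k * spacing - p[axis]) / d[axis]
            if 0.0 < t < best_t:
                best_t, best_axis = t, axis
        hit = tuple(p[a] + d[a] * best_t for a in range(3))
        return hit, best_axis  # axis picks the wall vs. floor/ceiling texture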

Bavoil and Sainz have a new approach for Screen-Space Ambient Occlusion, using a more elaborate form of horizon mapping: http://developer.nvidia.com/object/siggraph-2008-HBAO.html. Code’s available in NVIDIA’s DX 10 SDK.

If you missed Jon Olick’s talk at SIGGRAPH about voxel octree representation, Timothy Farrar has a summary. Personally, I think Jon’s research is very much that – research, not something that is immediately practical – but I love seeing how changing capabilities and increased flexibility can lead to different approaches.

On Amazon: 4 graphics books for the price of 2, minus the papery bits. Pharr and Humphrey’s “Physically Based Rendering” (PBR) and Luebke’s “Level of Detail for 3D Graphics” are certainly worthwhile, the other two I don’t know about (though look worthwhile and are well-rated). I don’t know a thing about the electronic media used; I’m guessing the books are DRM’ed, not naked PDFs. Searchable is certainly nice. While it’s too bad you can’t just buy the ones you want (I smell a marketing department having some “what can we get them to pay for what bundle?” meetings, given the negligible physical cost), I did notice an interesting thing on Amazon I hadn’t seen before for each book except PBR: “Upgrade this book for $18.39 more, and you can read, search, and annotate every page online.” You can also upgrade books you’ve previously purchased on Amazon.

On Gamasutra, an article summarizing DirectX 11. I liked it: to the point, and with some useful figures.

Every once in a while someone will say he has a new graphics rendering method that’s awesome, but won’t explain it because of some reason (usually involving money or fame). Here’s one, from Sunfish Studio: no micropolygons, no point sampling. OK, so that leaves – what? – voxels? If anyone knows what this is about, please comment; I’m curious.

GameDeveloperTools.com is a new site that tracks news and has users rate books. To be honest, a lot more voting needs to happen to make the ratings useful – I’d stick with Amazon for now. The main use is that you can look at specific categories, which are a bit better than Amazon’s somewhat random sorting of graphics books (e.g., our book is in three categories on Amazon, competing against artists’ books on using mental ray and RenderMan).

Finally, this, well, this is not interactive graphics, but is just so cool: parking signs understandable from only certain locations.