Monthly Archives: September 2008

Latency

Herb Sutter’s site has some interesting material about CPU architectures. His article “The Free Lunch is Over” is a bit dated (everyone should know by now that multicore is upon us), but it does a good job of pounding home that concurrency is the way of the future (i.e., like, now). It also has some memorable lines, such as “Cache is King” and “Andy Giveth, and Bill Taketh Away.” I contemplate the latter every time I open up a Word document and it takes 25 seconds to appear.

What I noticed today, thanks to Eric Preisz’s new indexbuffer site, was that Herb has a newer presentation available, “Machine Architecture: Things Your Programming Language Never Told You.” It covers in depth the topic of latency and how the CPU attempts to hide it. It’s worth a look if you’re at all interested in the topic; there’s material here that I hadn’t seen presented before. I must admit I skimmed over the odd things that compilers might do to code, but overall I found it worthwhile.
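If you want to feel memory latency (and the cache hiding it) firsthand, a classic little experiment is to sum the same array in two different orders. The sketch below is mine, not anything from Herb’s talk: the row-major loop walks memory sequentially and mostly hits the cache, while the column-major loop strides across it and stalls on main memory far more often.

#include <vector>
#include <cstdio>
#include <ctime>

const int ROWS = 4096, COLS = 4096;

int main() {
    std::vector<float> a(ROWS * COLS, 1.0f);
    float sum = 0.0f;

    std::clock_t t0 = std::clock();
    for (int r = 0; r < ROWS; ++r)          // row-major: sequential addresses
        for (int c = 0; c < COLS; ++c)
            sum += a[r * COLS + c];
    std::clock_t t1 = std::clock();
    for (int c = 0; c < COLS; ++c)          // column-major: large strides, many misses
        for (int r = 0; r < ROWS; ++r)
            sum += a[r * COLS + c];
    std::clock_t t2 = std::clock();

    std::printf("row-major: %ld ticks, column-major: %ld ticks (sum=%f)\n",
                (long)(t1 - t0), (long)(t2 - t1), sum);
    return 0;
}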

Face and Skin Papers at SIGGRAPH Asia 2008

Ke-Sen Huang has recently added three papers relating to human face and skin rendering to his excellent list of SIGGRAPH Asia 2008 papers. Human faces are among the hardest objects to render realistically, since people are used to examining faces very closely.

The first two papers focus on modeling the effect of human skin layers on reflectance. The authors of the first paper, “Practical Modeling and Acquisition of Layered Facial Reflectance,” work in Paul Debevec’s group at the USC Institute for Creative Technologies, which has done a lot of influential work on acquiring reflectance from human faces (the results of which are now being offered as a commercial product). Previous work used polarization to separate reflectance into specular and diffuse components. Here the diffuse component is further separated into single scattering, shallow multiple scattering, and deep multiple scattering (using structured light). Specular and diffuse albedo are captured per-pixel. Unfortunately, specular roughness (lobe width) is only captured for each of several regions, not per-pixel; however, since normals are captured at very high resolution, they could presumably be used to generate per-pixel roughness values, which could be useful when rendering at lower resolutions, as we discuss in Section 7.8.1 of Real-Time Rendering. The scattering model is based on the dipole approximation of subsurface scattering introduced by Henrik Wann Jensen and others. NVIDIA has shown real-time rendering of such models using multiple texture-space diffusion passes.
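For reference, and writing this from memory (so check Jensen’s paper before relying on it), the dipole approximation mentioned above gives the diffuse reflectance due to subsurface scattering as a function of the distance r between the points where light enters and exits the surface:

R_d(r) = \frac{\alpha'}{4\pi}\left[ z_r\,(1+\sigma_{tr}d_r)\,\frac{e^{-\sigma_{tr}d_r}}{d_r^{3}} + z_v\,(1+\sigma_{tr}d_v)\,\frac{e^{-\sigma_{tr}d_v}}{d_v^{3}} \right]

where d_r = \sqrt{r^2+z_r^2} and d_v = \sqrt{r^2+z_v^2} are the distances to the positive “real” source at depth z_r = 1/\sigma_t' and the negative “virtual” source at height z_v = z_r(1+4A/3), \sigma_{tr} = \sqrt{3\sigma_a\sigma_t'} is the effective transport coefficient, \alpha' = \sigma_s'/\sigma_t' is the reduced albedo, and A accounts for internal reflection caused by the refractive index mismatch at the surface.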

The authors of the second paper, “A Layered, Heterogeneous Reflectance Model for Acquiring and Rendering Human Skin,” have also written several important papers on skin reflectance, focused more on simulating the physical processes from first principles. They model human skin as a collection of heterogeneous scattering layers separated by infinitesimally thin heterogeneous absorbing layers. They design their model for efficient GPU evaluation, similar to NVIDIA’s approach mentioned above (one of this paper’s authors also worked on the NVIDIA skin demo). “Efficient” here is a relative term, since their model is too complex to be real-time on current hardware, and as presented it is probably too complicated for game use. However, ideas gleaned from this paper are likely to be useful for skin rendering in games. The authors also present a protocol for measuring the parameters of their model.
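To give a rough idea of how such models get made GPU-friendly (in the spirit of NVIDIA’s demo, though the names and numbers below are placeholders of my own, not the published fit): the radially symmetric diffusion profile R(r) is approximated by a weighted sum of Gaussians, each of which can be applied as a cheap separable blur of the irradiance texture, with the blurred results then combined per color channel.

#include <cmath>

// Approximate a diffusion profile as a weighted sum of Gaussians:
// R(r) ~= sum_i w_i * G(v_i, r). On the GPU, each Gaussian term becomes one
// separable blur pass over the irradiance texture; the weights below are used
// when combining the blurred textures. Variances and weights are placeholders.
struct GaussianTerm { float variance; float weight; };

float diffusionProfile(float r, const GaussianTerm* terms, int count) {
    float sum = 0.0f;
    for (int i = 0; i < count; ++i) {
        float v = terms[i].variance;
        float g = std::exp(-r * r / (2.0f * v)) / (2.0f * 3.14159265f * v);
        sum += terms[i].weight * g;  // weight this Gaussian's contribution
    }
    return sum;
}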

The third paper, “Facial Performance Synthesis Using Deformation-Driven Polynomial Displacement Maps,” is also from Debevec’s USC group and focuses on animation rather than reflectance. They use the same facial capture setup, but with different software, to capture animated facial deformations instead of reflectance (this too has been turned into a commercial product). This paper is interesting because it extends previous coarse/fine deformation approaches to multiple scales, and uses a novel method to relate the different scales to each other. They use a polynomial displacement map, which has the same form as Polynomial Texture Mapping (an interesting technique in its own right) but is used for deformation rather than shading. This method also bears some resemblance to the wrinkle map approach AMD used for their Ruby Whiteout demo, presented at SIGGRAPH 2007.
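For those unfamiliar with Polynomial Texture Mapping: it stores six coefficients per texel and evaluates a biquadratic polynomial in two parameters (for PTMs, the light direction projected into the tangent plane). My understanding is that the polynomial displacement map evaluates the same form, driven by the coarse-scale deformation instead, and outputs a displacement rather than a luminance. A sketch of the evaluation (the names are mine):

// Evaluate the six-coefficient biquadratic used by Polynomial Texture Maps.
// For PTMs, (u, v) is the projected light direction and the result is a
// luminance; for a polynomial displacement map the same form would yield a
// displacement value instead.
float evalBiquadratic(const float a[6], float u, float v) {
    return a[0] * u * u + a[1] * v * v + a[2] * u * v
         + a[3] * u + a[4] * v + a[5];
}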

Tim Sweeney Interview

Tim Sweeney is a cofounder of Epic Games and lead developer behind the graphics engines for the Unreal series of games. Jon Stokes has a meaty interview with him, up on Ars Technica; go read it!

Tim talks about how the GPU has become general enough that we will soon be able to get away from rasterization as the only rendering algorithm. Ten years ago, having to go through a fixed API to do all interactive graphics was limiting. Widening it out with programmable shaders gives more flexibility, but at the cost of a more complex programming environment; nowadays you’re programming two separate computers that talk to each other. The shift to parallel programming is already a major change in how we need to think about computers, one that hasn’t become a core concept for most of us yet (myself included; I’m doing my best to wrap my head around Intel’s Threading Building Blocks, for example). Doing such programming in a few different languages is a “feature” we’d all love to see go away.
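For the curious, here’s the flavor of TBB code I’ve been fiddling with: a minimal parallel loop over an array (the functor and array here are made up for the example). The appeal is that the library decides how to carve the range into chunks and schedule them across threads.

#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>

// Body functor: scales one chunk of the array. TBB splits the full range and
// hands each chunk to a worker thread.
struct ScaleBody {
    std::vector<float>* data;
    float s;
    void operator()(const tbb::blocked_range<size_t>& r) const {
        for (size_t i = r.begin(); i != r.end(); ++i)
            (*data)[i] *= s;
    }
};

void scaleAll(std::vector<float>& data, float s) {
    ScaleBody body;
    body.data = &data;
    body.s = s;
    tbb::parallel_for(tbb::blocked_range<size_t>(0, data.size()), body);
}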

With Larrabee, CUDA, and compute shaders, the trend toward more flexibility continues, though in different flavors. It seems unlikely to me that the pipeline model for rendering will itself fade in popularity any time soon, though rasterization (traditional GPUs) vs. tiling (Larrabee, handhelds) will continue to be a debate. Tim mentions voxel rendering techniques (really, heightfield rendering, in the old games) as something that died once the GPU took over. True. Such techniques are making a return on the GPU even today, via relief mapping and adaptive tessellation. We’re also seeing volume rendering by marching along rays; if an algorithm can be refit to work on a GPU, it will find some use.
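As a reminder of how simple the core of that heightfield rendering is, here’s a toy CPU version of the linear search that relief mapping typically runs in the pixel shader. The sampleHeight function is a made-up stand-in for a height texture lookup, and real implementations refine the hit with a binary search afterwards.

// Step along a ray through a heightfield until it drops below the surface.
float sampleHeight(float /*u*/, float /*v*/) { return 0.5f; }  // placeholder lookup

bool marchHeightfield(float u, float v, float du, float dv,
                      float h, float dh, int steps,
                      float* hitU, float* hitV) {
    for (int i = 0; i < steps; ++i) {
        u += du; v += dv; h += dh;       // advance one step along the view ray
        if (h <= sampleHeight(u, v)) {   // ray is now below the heightfield: hit
            *hitU = u;
            *hitV = v;
            return true;
        }
    }
    return false;                        // no intersection within the step budget
}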

So I agree, the increase in flexibility will be all to the good in letting programmers again do much more than render textured opaque triangles via a Z-buffer really fast (and most everything else not-so-fast). Frankly, I believe much of the buzz about interactive ray tracing is more an expression of yearning by us graphics programmers to actually program again, instead of just calling an API. The April Fool’s Day spoof about ray tracing in DirectX 11 fooled a number of people I know, I believe because they wished it were true. Having hacked my fair share of rendering algorithms, I certainly see the appeal.

I think Tim’s a bit overoptimistic on the time frame in which such changes will occur. First, everyone needs to get this future hardware. Sure, NVIDIA points out there are 70 million CUDA-capable graphics cards out there today, but no one is floating CUDA-based programs as alternative interactive renderers at this point (though NVIDIA’s experiments with CUDA ray tracing are wonderful to see). DirectX 9 graphics cards will be around for years to come. Just as significant, making such techniques part of the normal development toolchain also takes a while. I think of how long normal (dot-product) bump mapping, introduced around 2001, took to become a feature that was actually used in games: first most GPUs had to support it, then tools had to generate and manage the maps, then artists had to be trained to use the tools, and so on.

When the second edition of our book came out, it was a few hundred pages longer than the first. I held out the hope to Tomas that our third edition would be shorter. My logic was that, with programmable shaders coming to the fore, we wouldn’t have to cover all the little variants that were possible, but rather could just present pure algorithms and not worry about the implementation details.

This came true to some extent. For example, we could cut out chunks of text about extremely specific ways to efficiently compute the Fresnel term, or examples showing how assembly instructions are packed together in a pixel shader. There is now plenty of room on the GPU for shader instructions, so such detail no longer made sense; it would be like a programming languages book listing all the programs that could be written in the language. We still do have to spend time dealing with the vagaries of the APIs, such as the relatively space-inefficient ways in which triangles are fed through the pipeline (e.g., a “compact” representation of a cube must use 24 separate vertices, when all that is really needed are 8 points and 6 normals).
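To give a flavor of the kind of detail that no longer needs pages of shader-packing tricks: the full Fresnel equations are almost always replaced in shaders by Schlick’s approximation, which is just a handful of instructions. A minimal version, with f0 the reflectance at normal incidence:

// Schlick's approximation to Fresnel reflectance.
float fresnelSchlick(float f0, float cosTheta) {
    float m = 1.0f - cosTheta;
    return f0 + (1.0f - f0) * m * m * m * m * m;  // (1 - cos(theta))^5 falloff
}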

Counterbalancing such cuts in text, we found we had many more algorithms to write about. With the increase in abilities in each successive generation of GPUs and APIs, coupled with research into ways to efficiently map algorithms onto new architectures, the book became considerably longer (and certainly heavier, since each illustration’s atoms now needed 3 bytes instead of 1). So, I’m not holding out much hope for a shorter edition next time around; there’s just so much cool stuff that we can now do, and more yet to come.

Incidentally, we had asked Tim for a pithy quote for our new Hardware chapter. He said he didn’t have anything, but passed on one from Bily Zelsnack. This quote was tempting, but instead we used it in our last chapter: “Pretty soon, computers will be fast,” which I just love for some reason. It may sometimes take 20 seconds to open a file folder on Windows today, but I remain hopeful that someday, someday…

GPU REYES Implementation

Pixar’s RenderMan rendering package is based on the REYES rendering pipeline (an acronym for the humble phrase “Renders Everything You Ever Saw”). Most film studios use Pixar’s RenderMan, and many others use renderers operating on similar principles. A close reading of the original REYES paper shows a pipeline that was designed to be extremely efficient (it had to be, to run on 1980s hardware!) and to produce very high quality images. I have long thought that this pipeline is a good fit for graphics hardware (given some minor changes or an increase in generality), and is perhaps a better fit for today’s dense scenes than the traditional triangle pipeline. A paper to be published at SIGGRAPH Asia this year describes a GPU implementation of the subdivision stages of the REYES pipeline, which is a key step towards a full GPU REYES implementation. The authors use CUDA for the subdivision stages, and then pass the resulting micropolygons to a traditional rendering pass. Although combining CUDA and traditional rendering in this manner introduces performance problems, newer APIs such as DX11 compute shaders have been designed to perform well under such conditions. Of course, this algorithm would be a great fit for Larrabee.
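For those who haven’t read the REYES paper, the front end being mapped to CUDA here is conceptually simple: recursively split each surface patch until its screen-space bound is small enough, then dice it into a grid of micropolygons for shading and sampling. A rough sketch of that loop follows; the types and methods are placeholders of my own, not anything from the paper.

#include <vector>

// Stand-in patch type: just enough to show the bound/split/dice recursion.
struct Patch {
    float sizeEstimate;                         // stand-in for a screen-space bound
    float screenSize() const { return sizeEstimate; }
    void  split(Patch* a, Patch* b) const {     // split into two subpatches
        a->sizeEstimate = b->sizeEstimate = 0.5f * sizeEstimate;
    }
};
struct MicropolygonGrid { };                    // placeholder for a diced grid
MicropolygonGrid dice(const Patch&) { return MicropolygonGrid(); }

// Recursively split patches until small enough, then dice into micropolygons.
void boundSplitDice(const Patch& p, float maxSize,
                    std::vector<MicropolygonGrid>& out) {
    if (p.screenSize() <= maxSize) {
        out.push_back(dice(p));                 // small enough: dice for shading
    } else {
        Patch a, b;
        p.split(&a, &b);                        // too big: split and recurse
        boundSplitDice(a, maxSize, out);
        boundSplitDice(b, maxSize, out);
    }
}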

Anyone interested in the implementation details of the REYES algorithm should also read “How PhotoRealistic RenderMan Works,” which is available as a chapter in the book Advanced RenderMan and in the SIGGRAPH 2000 RenderMan course notes.

I found this paper on Ke-Sen Huang’s SIGGRAPH Asia preprint page. Ke-Sen performs an invaluable service to the community by providing links to preprints of papers from all the major graphics-related conferences. This preprint page is all the more impressive when you realize that SIGGRAPH Asia has not even published a list of accepted papers yet!

At long last, in stock

Lately I’ve been looking at Amazon’s listing of our book daily, to see if it’s in stock. Finally, today, it is, for the first time ever, a mere 40 days after its release. It’s not our publisher’s fault at all (A.K. Peters rules, OK?), and the book’s not that popular (AFAIK); it evidently just takes a while for delivered books to percolate out into Amazon’s system. Amazon under-ordered, so I believe that by the time the books they first ordered made it to the distribution centers, they were already sold out, putting the book out of stock again. Lather, rinse, repeat. So maybe I should be sad that it’s now in stock.

Anyway, the amusing part of visiting each day has been looking at the discount given on the book. It’s nice to see a discount at all, as Amazon didn’t discount our previous book for the first few years. With the current 28% discount, our new edition is effectively $5 less than the previous edition’s original price. That cheers me up, as I like to imagine that students are saving money; my older son will be in college next year, and any royalties I make from our book will effectively get recycled over the next four years into buying his texts. His one book for a summer course this year was a black & white softbound book, 567 pages, and cost an astounding (to me) $115, and that was “discounted” from $128.95. I’m now encouraging my younger son to skip college and go into the lucrative field of transistor repair.

Amazon’s discount has varied like a random walk among four values: 0%, 22%, 28%, and 33%. Originally, in July, it was list price, then the discount was set at 33% (so Amazon was paying more for the book than they were selling it for), then back to normal, then 33%. Around August 14th I started checking once a week or so and also looking at Associates sales (a program I recommend if you’re a book author, as it’s found money – it pays for this website). Again the book went back to no discount, then on August 20th started at 0%, went to 22% off, then 33% off, all in the same day. The next day there was no discount, then the day after it went back to 33%. August 28, when I checked again, it was at 22%, and this discount held through the end of the month. On September 1st it went up to 28% off, and there it’s been for a whole 9 days.

The oddest bit was that, in searching around for prices (Amazon’s is indeed the best, at least as of today), I noticed that the first edition of our book, from 1999, sells used for twice as much or more than our new book. Funny world.

By the way, if you are looking to write a book and want to understand royalties and going rates a little bit better, see my old article on this topic. Really, it’s not my article, it’s a collection of responses from authors I know. Some of it’s a bit confrontational and might make you a little paranoid, but I think it’s worth a read. If you’re writing technical books to get rich, you’re fooling yourself, but on the other hand there’s no reason to let someone take advantage of you. My favorite author joke, from Michael Cohen via John Wallace, is that there are dozens of dollars to be made writing a book, dozens I tell you. It can be a bit better than that if you’re lucky, but still comes out to about minimum wage when divided by the time spent. But for me it’s a lot more fun and educational work than flipping burgers, and the money is not why we wrote our book. We did it for the wild parties and glamorous lifestyle.

Update: heh, that didn’t last long. I wrote this entry Sept. 9th. As of the 10th, the book is (a) out of stock again and (b) down to a 2% discount. 2%?! Truly obscure.

Gamefest presentations and other links

Christer Ericson points out in a recent blog post that Microsoft has uploaded the Gamefest 2008 slides. These include a lot of relevant information, especially in relation to Direct3D 11. Christer’s post has many other links to interesting stuff – I particularly liked Iñigo Quilez’s slides on raycasting distance fields. Distance fields (sometimes referred to as Euclidean distance transforms, though that properly refers to the process of creating such a distance field) are very useful data structures. As Valve showed at SIGGRAPH last year, they can also be used for cheap vector shapes (the basic form of their technique is a better way to generate data for alpha testing, with zero runtime cost!).
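The heart of Valve’s trick is tiny: store a (bilinearly filtered) signed distance to the shape’s edge in a low-resolution texture, then in the pixel shader either alpha-test against the midpoint value or smoothstep around it for antialiased edges. Here’s a CPU-side sketch of that last step, just to show the idea; the 0.5 edge value and edgeWidth parameter are my own illustration, not Valve’s exact shader.

// Turn a filtered distance-field sample (remapped so 0.5 is the edge) into
// coverage, either as a hard alpha test or a smoothstep for soft edges.
float coverageFromDistance(float dist, bool antialias, float edgeWidth) {
    if (!antialias)
        return dist > 0.5f ? 1.0f : 0.0f;       // plain alpha test: no extra cost
    float t = (dist - (0.5f - edgeWidth)) / (2.0f * edgeWidth);
    if (t < 0.0f) t = 0.0f;                     // clamp to [0, 1]
    if (t > 1.0f) t = 1.0f;
    return t * t * (3.0f - 2.0f * t);           // smoothstep across the edge
}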

Bézier, Gouraud, Fresnel

Vincent Scheib’s terminology rant included how to pronounce “SIGGRAPH” (i.e., like “sigma”), a pet peeve of mine. This reminded me of the following.

While writing the book, we wanted to give phonetic spellings of various common graphics terms – after hearing someone pronounce “Gouraud” as “Goo-raude” I thought it worth the time. In searching around, I realized that people seemed to somewhat disagree about Bézier, so finally I asked a few people from France. Frédo Durand gave the best response, sending an audio clip of him pronouncing the words. So, without further ado, here’s the audio clip. Now you know.

Update: here’s a nice article about pronunciation of many other computer graphics related terms.

I3D 2009 CFP

I3D is a symposium (fancy word for “small conference”) that is a great way to spend three days with academics and industry people in the fields of interactive 3D graphics and games. The program is a single track, consisting primarily of original research papers, but with other events such as keynotes, posters, roundtables, etc. See 2008’s program for an example.

I’m papers cochair this year with Morgan McGuire, so I’m biased, but I love the venue. Instead of the 28,000+ people of SIGGRAPH, you have around 100 or so people at a hotel or campus (2008’s was at EA’s headquarters). The best part is that you are likely to have a good conversation with anyone you meet, since they’re all there for the same reason. There is also plenty of time in the evening to socialize. I thoroughly enjoyed organizing the pub quiz for 2008.

If a paper is too daunting a task, consider submitting a poster or demo; it’s an easy way to get your idea out there and get feedback from others in the field.

The call for participation (CFP) is at http://i3dsymposium.org.

The quick summary:

Conference: Feb. 27 – March 1, 2009
Location: Radisson Hotel Boston, Boston, MA

Paper submissions: October 24, 2008
Poster and demo submissions: December 19, 2008

A cool stat I learned of recently is that I3D is #25 among all computer science journals in impact; the only graphics publication with more impact is SIGGRAPH itself, at #9: http://citeseer.ist.psu.edu/impact.html (the study is from 2003; I’d love to find a newer one if anyone knows of it).

Hope to see you there.