
This, That, and the Other

Time to clear the collection of links and tidbits.

First, two new graphics books have come to my attention: Essentials of Interactive Computer Graphics and Computer Facial Animation, Second Edition. The first is an introductory textbook for teaching, well, just that. Real-Time Rendering was never meant as an introduction to the field of interactive graphics; we’ve always seen it as the book to hit after you know the basics. The Essentials book is squarely focused on those basics, and is more event-oriented and application-driven: GUIs and MFC, instancing and scene graphs, the transformation pipeline. It’s truly aimed at computer graphics in general, not 3D lit scenes. Shading is barely mentioned, for example. The book comes with a CD of software libraries developed in the latter half of the book. See the book’s website for much more information and supplemental materials (e.g., PowerPoint slide sets for teaching from the book!).

Computer Facial Animation is an area I know little about. Which makes this book intriguing to page through: how much there is to know! The first few chapters are dedicated to anatomy and early ways of recording facial expressions. The rest covers all sorts of areas: speech synchronization, hair modeling, face tracking, muscle simulation, skin textures, even photographic lighting techniques. This is one I’ll leave on my desk and hope to pick up at lunch now and again (along with those other books on my desk that beg to be read, like Color Imaging – I need more lunches).

Which reminds me of this nice talk by Kevin Bjorke: Beautiful Women of the Future. The first half is more aesthetic, with some interesting fact nuggets; the second half is a worthwhile overview of interactive skin and hair rendering techniques.

It’s worth noting that there are many computer graphics books excerpted on Google Books. Our portal page, item #6, lists a few good ones.

Game Developer Magazine’s Front-Line Award Winners have been announced. Our book was nominated, but to be honest I’m not terribly upset it didn’t win (our second edition won it before); instead, a new book on (video)game design, The Art of Game Design, got the honors in the book category. The rest of the award winners are (almost) no doubt deserving, but the winner list provides little new information. It’s the usual suspects: Photoshop CS3, Havok, Torque, Visual Studio 2008 (really? I’d go with Visual Assist X, which adds a bunch of useful bits to VS 2008 to make it more usable). I haven’t seen the Game Developer article itself, which should be more interesting, since it includes the list of runners-up.

Update: it’s a day later, and the Front Line awards article is available online. Good deal!

I just noticed that Jeremy Birn has been holding lighting contests for synthetic scenes. Though they’re meant more for the mental ray users of the world, I like them just because there are some nice models to load up in my test applications.

We mentioned SIGGRAPH Asia before; see the papers collection here and some GPU-specific presentations here.

A fair bit going on in the blogosphere:

  • Christer Ericson has an article on optimizing particle system display. I hadn’t considered some of these techniques before.
  • Bill Mill has a worthwhile rant on publishing code along with research results. This often isn’t done, because there’s little benefit to the author. Some researchers will do it anyway, for various reasons (altruism, fame, etc.), but I wish the research system were structured to require such code. It’s certainly encouraged for the journal of graphics tools, for example, but even then the frequency is not that high.
  • Wolfgang Engel has lots of posts about programming for the iPhone & Touch; I was more interested in his comments about caching shadow maps.

Everyone should know about the Steam Hardware Survey. The cool thing is that they recently started adding a history for some stats and (dare I dream it?) pie charts to the site. Much easier to grok at a glance.

Tutorials galore:

Need a huge (or medium, or small), free texture of the whole earth? Go here.

Google’s knol project collects short articles on various topics. Here’s a reasonable sample: a short history of theories of vision. To be honest, though, the site overall seems a bit of a dumping ground. This sort of lameness shows why editorial supervision (whether by a single person or a wiki community) is a good thing.

DirectX 10 corrects a long-standing “feature” of previous versions of DirectX: the half-pixel offset. OpenGL’s always had it right (and there really is a right answer, as far as I’m concerned). I was happy to find this full explanation of the DirectX 9 problem on Microsoft’s website.
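For anyone who has fought this bug, here is a minimal sketch (plain C, with illustrative names, not code from Microsoft’s article) of the usual Direct3D 9 workaround for a full-screen quad: nudge the clip-space positions by half a pixel so that pixel centers and texel centers line up. Under Direct3D 10 and OpenGL conventions no such nudge is needed.

    /* Sketch of the common Direct3D 9 half-pixel fix for a full-screen quad.
     * D3D9 puts the first pixel's center at (0,0) while the first texel's
     * center sits at (0.5,0.5), so the quad must be shifted by half a pixel
     * for pixels and texels to map one-to-one. Names are illustrative. */
    #include <stdio.h>

    typedef struct { float x, y, z, w, u, v; } Vertex;

    /* Shift clip-space positions by -0.5 pixel in x and +0.5 pixel in y
     * (clip-space y points up, screen-space y points down). Half a pixel
     * in clip space is 0.5 * (2 / width) = 1 / width. */
    static void apply_half_pixel_offset(Vertex *verts, int count,
                                        float width, float height)
    {
        for (int i = 0; i < count; ++i) {
            verts[i].x -= 1.0f / width;
            verts[i].y += 1.0f / height;
        }
    }

    int main(void)
    {
        /* Full-screen quad in clip space, with texture coordinates. */
        Vertex quad[4] = {
            { -1.0f,  1.0f, 0.0f, 1.0f, 0.0f, 0.0f },
            {  1.0f,  1.0f, 0.0f, 1.0f, 1.0f, 0.0f },
            { -1.0f, -1.0f, 0.0f, 1.0f, 0.0f, 1.0f },
            {  1.0f, -1.0f, 0.0f, 1.0f, 1.0f, 1.0f },
        };
        apply_half_pixel_offset(quad, 4, 1280.0f, 720.0f);
        printf("top-left vertex is now (%f, %f)\n", quad[0].x, quad[0].y);
        return 0;
    }

The equivalent fix can instead be applied to the texture coordinates (adding half a texel), but offsetting the positions is the form most people see in the Microsoft documentation.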

Our book had a little review in the February 2009 issue of PC Gamer, by Logan Decker, executive editor, on page 80. I liked the first sentence: “I don’t know why I didn’t immediately set fire to this reference for graphics professionals the moment I saw all the equations. But I actually read it, and if you skip the math bits as I did, you’ll get brilliantly lucid explanations of concepts like vertex morphing and variance shadow mapping—as well as a new respect for the incredible craftsmanship that goes into today’s PC games.”

This one’s made the rounds, but just in case: the Mona Lisa with 50 semi-transparent polygons, evolved (sort of). Here’s a little eye candy (two links). Plus, panoramas galore.

Finally, guard your dreams.

Face and Skin Papers at SIGGRAPH Asia 2008

Ke-Sen Huang has recently added three papers relating to human face and skin rendering to his excellent list of SIGGRAPH Asia 2008 papers. Human faces are among the hardest objects to render realistically, since people are used to examining faces very closely.

The first two papers focus on modeling the effect of human skin layers on reflectance. The authors of the first paper, “Practical Modeling and Acquisition of Layered Facial Reflectance,” work in Paul Debevec’s group at the USC Institute for Creative Technologies, which has done a lot of influential work on acquisition of reflectance from human faces (the results of which are now being offered as a commercial product). Previous work focused on using polarization to separate reflectance into specular and diffuse components. Here the diffuse term is further separated into single scattering, shallow multiple scattering, and deep multiple scattering (using structured light). Specular and diffuse albedo are captured per pixel. Unfortunately specular roughness (lobe width) is captured only for each of several regions and not per pixel, but since normals are captured at very high resolution they could presumably be used to generate per-pixel roughness values, which could be useful when rendering at lower resolutions, as we discuss in Section 7.8.1 of Real-Time Rendering. The scattering model is based on the dipole approximation of subsurface scattering introduced by Henrik Wann Jensen and others. NVIDIA has shown real-time rendering of such models using multiple texture-space diffusion passes.
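As a refresher (my own gloss, not something taken from the paper), the dipole model approximates the diffuse reflectance due to subsurface scattering at distance r from the point of illumination as:

    R_d(r) = \frac{\alpha'}{4\pi}\left[ z_r\,(1+\sigma_{tr} d_r)\,\frac{e^{-\sigma_{tr} d_r}}{d_r^{3}}
           + z_v\,(1+\sigma_{tr} d_v)\,\frac{e^{-\sigma_{tr} d_v}}{d_v^{3}} \right],
    \quad d_r = \sqrt{r^2+z_r^2}, \quad d_v = \sqrt{r^2+z_v^2},
    \quad z_r = 1/\sigma_t', \quad z_v = z_r\left(1+\tfrac{4A}{3}\right),
    \quad \sigma_{tr} = \sqrt{3\,\sigma_a\,\sigma_t'}, \quad \alpha' = \sigma_s'/\sigma_t'

Here sigma_s' and sigma_a are the reduced scattering and absorption coefficients of the material, and A accounts for internal Fresnel reflection at the surface. As I recall, NVIDIA’s texture-space diffusion demos approximate the resulting diffusion profile with a sum of Gaussian blurs applied over several texture-space passes, which is what makes the approach real-time.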

The authors of the second paper, “A Layered, Heterogeneous Reflectance Model for Acquiring and Rendering Human Skin,” have also written several important papers on skin reflectance, focused more on simulating the underlying physical processes from first principles. They model human skin as a collection of heterogeneous scattering layers separated by infinitesimally thin heterogeneous absorbing layers. They design their model for efficient GPU evaluation, similar to NVIDIA’s approach mentioned above (one of this paper’s authors also worked on the NVIDIA skin demo). “Efficient” here is a relative term, since their model is too complex to be real-time on current hardware, and as presented it is probably too complicated for game use. However, ideas gleaned from this paper are likely to be useful for skin rendering in games. The authors also present a protocol for measuring the parameters of their model.

The third paper, “Facial Performance Synthesis Using Deformation-Driven Polynomial Displacement Maps,” is also from Debevec’s USC group and focuses on animation rather than reflectance. They use the same facial capture setup, but with different software to capture animated facial deformations instead of reflectance (this too has been turned into a commercial product). This paper is interesting because it extends previous coarse/fine deformation approaches to multiple scales, and uses a novel method to relate the different scales to each other. They use a polynomial displacement map, which has the same form as Polynomial Texture Mapping (an interesting technique in its own right) but is applied to deformation rather than shading. This method also bears some resemblance to the wrinkle-map approach AMD used for their Ruby Whiteout demo, which they presented at SIGGRAPH 2007.
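For anyone unfamiliar with Polynomial Texture Mapping, the form in question is a per-texel biquadratic: each texel stores six coefficients a_0 through a_5, and at run time the map is evaluated as

    L(u,v;\,l_u,l_v) = a_0(u,v)\,l_u^2 + a_1(u,v)\,l_v^2 + a_2(u,v)\,l_u l_v
                     + a_3(u,v)\,l_u + a_4(u,v)\,l_v + a_5(u,v)

where (l_u, l_v) is the light direction projected into the texture’s local frame. As I understand it (this is my summary, not the paper’s wording), the polynomial displacement maps keep this biquadratic form but drive it with coarse-scale deformation parameters instead of the light direction, and output a displacement rather than a color.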