Book


Short version: the Interactive 3D Graphics course is now entirely out – the last five units have been added: Lights, Cameras, Texturing, Shader Programming, Animation. Massive (22K people registered so far), worldwide (around 128 countries, with more than 70% of students from outside the U.S.). Uses three.js atop WebGL. Start at any time, work at your own pace, only basic programming skills needed. Free.

That’s the elevator talk, Twitterized (well, maybe 3 tweets’ worth). I won’t blab on and on about it – just a few things.

First, it’s so cool to be able to show a student a video, then give a quiz, then let them interact with a demo, then have them write some code for an exercise, all in the browser. Udacity rocketh, both the web programmers and video editors.

Second, I’m very happy about how a whole bunch of lessons turned out. The tough part in all this is trying to not lose your audience. I think I push a bit hard at times, but some of my explanations I like a lot. Mipmapping, antialiasing, gamma correction – a number of the later lectures in particular felt quite good to me, and I thought things hung together well. Shhh, don’t tell me otherwise. Really, it’s not pride so much; I’m just happy to have figured out good ways to explain some things simply.

Third, I wrote a book, basically: it’s about 850 full-sized pages and about 145,000 words. It’s free to download, along with the videos and code. I think of this course as the precursor to Real-Time Rendering, sort of like “Star Wars: Episode 1”, except it’s good. I should really say “we wrote a book”: Gundega Dekena, Patrick Cozzi, Mauricio Vives, and near the end Branislav Ulicny (AlteredQualia) offered a huge amount of help in reviewing, catching various mistakes and suggesting numerous improvements. Many others kindly helped with video clips, interviews, permission to show demos, on and on it goes. Thanks all of you!

Fourth, I love that the demos from the course are online for anyone to point at and click on. Some of these demos are not absolutely fascinating, but each (once you know what you’re looking at) is handy in its own way for explaining some graphics phenomenon. The code’s all downloadable, so others can use these demos as a basis to make better ones. I’ve wanted this sort of thing for 16 years – took a while to arrive, but now it’s finally here.

Fifth, working with students from around the world is wonderful! I love helping people on the forums with just a bit of effort on my end. Also, I just noticed a study group starting up. I’ve also enjoyed seeing contest entries, e.g., here are the drinking bird entries; click a pic to see it in WebGL:


What’s it like to actually make a MOOC? See John Owens’ excellent article – my experience is pretty much the same.

A close-up in the recording studio, my little world for a few weeks:


RTR in 3D

All of Google Books are in 3D today, even the excerpts from our second edition:

RTR in 3D


A demo of the game Just Cause 2 is available on Steam today. What’s interesting is that this is only the third DirectX 10-only game to be released. There have been any number of DirectX 10-enhanced games, but until a few months ago there was just one DirectX 10-only game release, Stormrise, a mediocre game released in March 2009. Shattered Horizon then came out in November from Futuremark, who are known more for their graphics benchmarks. Just Cause 2 is a sequel, distributed by a well-known publisher. Humus describes the logic of going DirectX 10-only.

I’m looking forward to seeing how DirectX 11’s DirectCompute gets used in commercial applications. Perhaps the day there’s a DirectX 11-only game of any significance is the day we need to start writing a fourth edition. Let’s see: DirectX 10 was released in November 2006 with Vista, so it took about three and a quarter years for an anticipated game to be released that was DirectX 10-only (and even now many consider it dangerous to do so). DirectX 11 was released in October 2009, so if the same rule holds, we’ll need to start writing in February 2013. Pre-order today!

Even now, 13% of Steam gamers have only SM 2.0. Games like World of Warcraft and Left 4 Dead 2 don’t require more, for example. So what’s the magic percentage where the AAA games decide to set the minimum level to the next shader model? I don’t recall it being much of a deal between shader model 2.0 and 3.0 games; there was a little hype, but I think this was because going from SM 2.0 to 3.0 involved just a card upgrade, vs. an OS upgrade. Which is funny, in that an OS upgrade is usually cheaper than a new GPU, but I think it’s also because it’s more critical, like a heart transplant vs. a cornea transplant.

Poking around, I found the interesting graphs below. I’m sure games have been left off, and some are miscategorized, e.g. Cryostasis is the only one under SM 4.0, and it doesn’t require DirectX 10. But let’s assume this data is semi-reasonable; I’m guessing the games are categorized more by a “recommended configuration” than a minimum. So Shader Model 1.x game releases (and remember, 1.x was pretty darn limited) peaked in 2006; 2.0 peaked in 2007 but outnumbered 3.0 until 2009. SM 3.0 hasn’t peaked yet, I’d say (ignore the 2010 and 2011 graph values at this point, of course). Remember that SM 2.0 hardware came out around 2002, so it peaked 5-6 years later and was still strong 7 years later (and perhaps longer, we’ll see). SM 3.0 came out in 2004, and seems likely to continue to be strong through 2010 and into 2011. 4.0 came out in 2006, so I’d go with it peaking in 2011-2012, from just staring at these charts. All of which entirely ignores the swirl of other data – Vista and Windows 7, Xbox trends, GPU trends, blah-di-blah – but it’ll be interesting to see if this prediction is about right. (Click on a graph for the list of games for that shader model.)

[Graphs: game releases per year for Shader Model 1.x, Shader Model 2.0, and Shader Model 3.0, each linking to the list of games for that shader model.]


Some great bits have accumulated. Here they are:

  • I3D 2010 paper titles are up! Most “how would that work?!” type of title: “Stochastic Transparency”.
  • Eurographics 2010 paper titles are up! Most intriguing title: “Printed Patterns for Enhanced Shape Perception of Papercraft Models”.
  • An article in The Economist discusses how consumer technologies are being used by military forces. There are minor examples, like Xbox controllers being used to control robotic reconnaissance vehicles. I was interested to see BAE Systems (a company that isn’t NVIDIA) talk about how using GPUs can replace other computing equipment for simulation at 1/100th the price. Of course, Iraq knew this 9 years ago.
  • I wish I had noticed this page a week ago, in time for Xmas (where X equals, never mind): Christer Ericson’s recommended book page. I know of many of the titles, but hadn’t heard of The New Turing Omnibus before – this sounds like the perfect holiday gift for any budding computer science nerd, and something I think I’d enjoy, too. Aha, hmmm, wait, Amazon has two-day shipping… done!
  • A problem with the z-buffer, when used with a perspective view, is that the stored z-depths do not linearly correspond to actual world distances along the camera’s view direction. This article and this one (oh, and this is related) give ways to get back to this linear space; see the sketch just after this list. Why get the linear view-space depth? Two reasons immediately come to mind: proper computation of atmospheric effects, and edge detection from z-depth changes for non-photorealistic rendering.
  • Wolfgang Engel (along with comments by others) has a great summary of order-independent transparency algorithms to date. I wonder when the day will come that we can store some number of layers per pixel without any concern about memory costs and access methods. Transparency is what kills algorithms like deferred shading, because all the layers are not there at the time when shading is resolved. Larrabee could have handled that… ah, well, someday.
  • Morgan McGuire has a paper on Ambient Occlusion Volumes (motto: shadow volumes for ambient light). I’ll be interested to see how this compares with Volumetric Obscurance in I3D 2010 (not up yet for download).
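
As a concrete take on the z-buffer item above, here’s a minimal sketch (mine, not from the linked articles) of recovering linear view-space depth from a D3D-style z-buffer value in [0,1]. The standard perspective projection stores z_buf = (f/(f−n)) · (1 − n/z_view), which inverts to the one-liner below:

```cpp
#include <cstdio>

// Hedged sketch: invert the perspective z-buffer mapping to recover
// linear view-space depth. Assumes a D3D-style buffer value in [0,1]
// and near/far plane distances n and f.
float linearizeDepth(float zBuf, float n, float f) {
    return n * f / (f - zBuf * (f - n));
}

int main() {
    const float n = 0.1f, f = 100.0f;
    // The nonlinearity is dramatic: halfway through the buffer's range
    // is still only about two near-plane distances from the eye.
    printf("%f\n", linearizeDepth(0.5f, n, f)); // ~0.1998
    printf("%f\n", linearizeDepth(1.0f, n, f)); // 100.0, the far plane
}
```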

Amazon Stock Market update: one nice thing about having an Amazon Associates account is that prices at various dates are visible. The random walk that is Amazon’s pricing structure becomes apparent for our book: December 1st: $71.20, December 11-14: $75.65, December 18-22: $61.68. Discounted for the holidays? If so, Amazon’s marketing is aiming at a much different family demographic than I’m used to. “Oh, daddy, Principia Mathematica? How did you know? I’ve been wanting it for ever so long!”


… at least judging from an email received by Phil Dutre, which he passed on. Key excerpt follows:

Dear Amazon.com Customer,

As someone who has purchased or rated Real-Time Rendering by Tomas Moller, you might like to know that Online Interviews in Real Time will be released on December 1, 2009.  You can pre-order yours by following the link below.

With a title-finding algorithm of this quality, Amazon appears to be in need of more CS majors.

Don’t fret, by the way, I’ll be back to pointing out resources come the holidays; things are just a bit busy right now. In the meantime, you can contemplate Morgan McGuire’s gallery of real photos that appear to have rendering artifacts or look like computer graphics. It’s small right now – send him contributions!


A professor contacted us about whether we had digital copies of our figures available for use on her course web pages for students. Well, we certainly should (and our publisher agrees), and we would have done this a while ago if we had thought of it. So, after a few hours of copying and saving with MWSnap, I’ve made an archive of most of the figures in Real-Time Rendering, 3rd edition. It’s a 34 MB download:

http://www.realtimerendering.com/downloads/RTR3figures.zip

This archive should make preparation a lot more pleasant and less time-consuming for instructors, vs. scanning in pages of our book or redrawing figures from scratch. Here’s the top of the README.html file in this archive:

These figures and tables from the book are copyright A.K. Peters Ltd. We have provided these images for use under the United States Fair Use doctrine (or similar laws of other countries), e.g., by professors for use in their classes. Not all figures in the book are included; only those created by the authors (directly, or by use of free demonstration programs, as listed below) or from public sources (e.g., NASA) are available here. Other images in the book may be reused under Fair Use, but are not part of this collection. It is good practice to acknowledge the sources of any images reused – a link to http://www.realtimerendering.com we suspect would be useful to students, and we have listed relevant primary sources below for citation. If you have questions about reuse, please contact A.K. Peters at [email protected].

I’ve added a link to this archive at the top of our main page. I should also mention that Tomas’ PowerPoint slide sets for a course he taught based on the second edition of our book are still available for download. The slides are a bit dated in spots, but are a good place to start. If you have made a relevant teaching aid available, please do comment and let others know.


I’m at I3D 2009; tonight at the dinner Austin Robison of NVIDIA announced NVIRT, which is NVIDIA’s ray-casting engine. I say “casting” as the idea is that you feed it objects, hand it a ray generator, and it gives you back the ray intersections desired. Certainly it can be used for ray-traced rendering, and the constructs presented make it clear they have thought through this aspect: rays can terminate on the first intersection found (useful for shadow rays), or can return the closest intersection point (eye/reflection/refraction rays). Rays can continue on when a fully transparent object is hit. Objects can be put in any efficiency structure you wish, and structures can be contained by other structures (Jim Arvo’s metahierarchies idea). For example, you could put static geometry in a k-d tree, which is highly efficient but expensive to update, while placing dynamic objects in a bounding volume hierarchy, which usually can be updated more easily (though it loses efficiency over time) by growing bounds. You have control over what efficiency methods are used.
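
To make the two query styles concrete, here’s a toy sketch – emphatically not the NVIRT API (it hasn’t shipped yet), with a brute-force sphere list standing in for any real efficiency structure – of the difference between a shadow-style query, which can stop at the first hit found, and an eye/reflection-style query, which must track the nearest one:

```cpp
#include <cmath>
#include <optional>
#include <vector>

struct Ray    { float ox, oy, oz, dx, dy, dz; };  // direction assumed normalized
struct Sphere { float cx, cy, cz, r; };

// Smallest positive t at which the ray hits the sphere, if any.
static std::optional<float> intersect(const Ray& ray, const Sphere& s) {
    float ox = ray.ox - s.cx, oy = ray.oy - s.cy, oz = ray.oz - s.cz;
    float b = ox * ray.dx + oy * ray.dy + oz * ray.dz;
    float c = ox * ox + oy * oy + oz * oz - s.r * s.r;
    float disc = b * b - c;
    if (disc < 0.0f) return std::nullopt;
    float root = std::sqrt(disc);
    float t = -b - root;
    if (t < 0.0f) t = -b + root;  // ray origin inside the sphere
    if (t < 0.0f) return std::nullopt;
    return t;
}

// Shadow-style query: any intersection will do, so stop at the first found.
bool anyHit(const Ray& ray, const std::vector<Sphere>& scene) {
    for (const Sphere& s : scene)
        if (intersect(ray, s)) return true;  // early out
    return false;
}

// Eye/reflection/refraction-style query: must return the nearest intersection.
std::optional<float> closestHit(const Ray& ray, const std::vector<Sphere>& scene) {
    std::optional<float> best;
    for (const Sphere& s : scene)
        if (auto t = intersect(ray, s))
            if (!best || *t < *best) best = t;
    return best;
}
```

For shadow rays you don’t care which occluder you hit, only that one exists, which is exactly why the any-hit form can bail out early.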

They’re thinking of this SDK in more general ray-casting terms: collision detection, AI queries, and baking illumination or other characteristics onto surfaces. I can certainly imagine uses for engineering simulation. It runs on CUDA, but hides CUDA programming from the user. By the way, the switching time between CUDA and the graphics API will someday soon be a lot less than it is now.

This SDK will be released sometime this spring (it will also be incorporated into NVIDIA’s NVSG scene graph SDK, as a separate release). The SDK will come with lots of samples, including source for a basic ray-tracing renderer. All in all, an interesting development. The catch is, of course, that CUDA does not run on anything but NVIDIA hardware. Nonetheless, this is a fascinating first step. Austin says this effort is a serious attempt by NVIDIA to put this sort of engine in the hands of developers, not some “let’s see if this research sticks” half-baked release. Hearing him talk about the bits of inside information their group learnt about the operation of the GPU, and the corresponding boosts in performance, makes me wonder if other GPU-based ray tracers out there will be able to get near their performance.

I have a bunch of links saved up, which I’ll dump here someday soon, as well as more about I3D 2009 (see Jeremy Schopf’s blog in the meantime). For now I’ll just mention one quick link: Morgan McGuire’s twitter blog. No, it’s not a “I’m drinking a latte and using my iPhone” twitter blog. I like the idea a lot: it’s where he simply puts any great links he’s run across, with a quick description for each. Low maintenance, minimal effort, and useful & interesting, at least to me. It’s about game design and related topics (and unrelated ones) as much as graphics. This is one of those “everyone who finds cool stuff on the internet should do this” concepts, as far as I’m concerned. Sure, there’s del.icio.us and similar social bookmarking sites, but a blog lets me know when there’s something new from someone I respect.

Morgan is one of those uncommon people who has considerable industry experience (e.g. “Titan Quest”) while also being in the academic world. He’s a coauthor of the new book Creating Games, which I had been jumping around inside, sampling snippets, and am now sitting down and reading for real. It is aimed at being a book for teaching a college course on making games, both board- and video-, giving a number of schedules for 3- to 4-week projects and worksheets for these. However, these are appendices; the focus of the book is well-informed surveys of a wide range of game design and creation practices. The first chapter has a great startup project for small groups in a class: “here are some dice and pieces of different colors, some paper – go, make a game in 7 minutes.” Anyway, it’s not graphics related per se, but there’s certainly a lot about the computer games industry inside, much of it technical and practical. My favorite illustration so far is the dependence graph amongst the art assets for Spiderman 3, Figure 3.8 – daunting. You can look inside at Amazon. Me, I’m an avid boardgamer (I was up too late last night playing Dominion with Morgan and Naty Hoffman – consider me entirely biased), so I’m enjoying reading it and thinking maybe I should try to design a game…


Graphics Gems Excerpts

I noticed (or maybe re-noticed) recently that 4 of the 5 Graphics Gems books are excerpted on Google Books. I’ve added links to the excerpts from the Graphics Gems repository. Which made me wonder, can you look inside these books on Amazon? Indeed you can. So I’ve also just added links to Amazon’s Look Inside pages. Between these two resources you can now pretty much read any article from these books online, one way or the other. Handy.


Time to clear the collection of links and tidbits.

First, two new graphics books have come to my attention: Essentials of Interactive Computer Graphics and Computer Facial Animation, Second Edition. The first is an introductory textbook for teaching, well, just that. Real-Time Rendering was never meant as an introduction to the field of interactive graphics, we’ve always seen it as the book to hit after you know the basics. The Essentials book is squarely focused on these basics, and is more event-oriented and application-driven: GUIs and MFC, instancing and scene graphs, the transformation pipeline. It’s truly aimed at computer graphics in general, not 3D lit scenes. Shading is barely mentioned, for example. The book comes with a CD of software libraries developed in the latter half of the book. See the book’s website for much more information and supplemental materials (e.g. Powerpoint slidesets for teaching from the book!).

Computer Facial Animation is an area I know little about. Which makes this book intriguing to page through – how much there is to know! The first few chapters are dedicated to anatomy and early ways of recording facial expressions. The rest covers all sorts of areas: speech synchronization, hair modeling, face tracking, muscle simulation, skin textures, even photographic lighting techniques. This is one I’ll leave on my desk and hope to pick up at lunch now and again (along with those other books on my desk that beg to be read, like Color Imaging – I need more lunches).

Which reminds me of this nice talk by Kevin Bjorke: Beautiful Women of the Future. The first half is more aesthetic with some interesting fact nuggets, the last half is a worthwhile overview of interactive skin and hair rendering techniques.

It’s worth noting that there are many computer graphics books excerpted on Google Books. Our portal page, item #6, lists a few good ones.

Game Developer Magazine’s Front-Line Award winners have been announced. Our book was nominated, but to be honest I’m not terribly upset it didn’t win (our second edition won it before); instead, a new book on (video)game design got the honors in the book category, The Art of Game Design. The rest of the award winners are (almost) no doubt deserving, but the winner list provides little new information. It’s the usual suspects: Photoshop CS3, Havok, Torque, Visual Studio 2008 (really? I’d go with Visual Assist X, which adds a bunch of useful bits to VS 2008 to make it more usable). I haven’t seen the Game Developer article itself, which should be more interesting for its list of runners-up.

Update: it’s a day later, and the Front Line awards article is available online. Good deal!

I just noticed that Jeremy Birn has been running lighting contests for synthetic scenes. Though meant more for the mental ray users of the world, I like them just because there are some nice models to load up in my test applications.

We mentioned SIGGRAPH Asia before; see the papers collection here and some GPU-specific presentations here.

A fair bit going on in the blogosphere:

  • Christer Ericson has an article on optimizing particle system display. I hadn’t considered some of these techniques before.
  • Bill Mill has a worthwhile rant on publishing code along with research results. This often isn’t done, because there’s little benefit to the author. Some researchers will do it anyway, for various reasons (altruism, fame, etc.), but I wish the research system were structured to require such code. It’s certainly encouraged for the journal of graphics tools, for example, but even then the frequency is not that high.
  • Wolfgang Engel has lots of posts about programming for the iPhone & Touch; I was more interested in his comments about caching shadow maps.

Everyone should know about the Steam Hardware Survey. The cool thing is that they recently started adding a history for some stats and, dare I dream it?, pie charts to the site. Much easier to grok at a glance.

Tutorials galore:

Need a huge (or medium, or small), free texture of the whole earth? Go here.

Google’s knol project collects short articles on various topics. Here’s a reasonable sample: a short history of theories of vision. To be honest, though, the site overall seems a bit of a dumping ground. This sort of lameness is proof of why editorial supervision (either by a single person or a wiki community) is a good thing.

DirectX 10 corrects a long-standing “feature” of previous versions of DirectX: the half-pixel offset. OpenGL’s always had it right (and there really is a right answer, as far as I’m concerned). I was happy to find this full explanation of the DirectX 9 problem on Microsoft’s website.
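
For the record, here’s a hedged sketch of the usual DirectX 9 workaround when texels must map 1:1 to pixels, as for a full-screen quad (the function is my own illustration, not from the linked explanation): nudge clip-space positions by half a pixel. Half a pixel is 0.5/width in [0,1] viewport space, which is 1.0/width in [−1,1] clip space:

```cpp
// Shift a post-projection position (assume w = 1 for a full-screen quad)
// by half a pixel so D3D9 pixel centers line up with texel centers.
// DirectX 10 and OpenGL do not need this correction.
void applyHalfPixelOffset(float& clipX, float& clipY, int width, int height) {
    clipX -= 1.0f / width;   // half a pixel left
    clipY += 1.0f / height;  // half a pixel up (clip-space y points up)
}
```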

Our book had a little review in the February 2009 issue of PC-Gamer, by Logan Decker, executive editor, on page 80. I liked the first sentence: “I don’t know why I didn’t immediately set fire to this reference for graphics professionals the moment I saw all the equations. But I actually read it, and if you skip the math bits as I did, you’ll get brilliantly lucid explanations of concepts like vertex morphing and variance shadow mapping—as well as a new respect for the incredible craftsmanship that goes into today’s PC games.”

This one’s made the rounds, but just in case: the Mona Lisa with 50 semi-transparent polygons, evolved (sort of). Here’s a little eye candy (two links). Plus, panoramas galore.

Finally, guard your dreams.


So I had such plans for all the things I’d get done during the holiday break. Well, at least I fixed our bathtub faucet, and kept the world safe from/for zombies in Left4Dead versus mode.

In contrast, Wolfgang Engel, Jack Hoxley, Ralf Kornmann, Niko Suni, and Jason Zink did something nice for the world: they gave us a DirectX 10 book for free online. There’s more information about it at the site hosting it, gamedev.net. To quote Jack Hoxley, “It’s more of a hands-on guide to the API at a fairly introductory/intermediate level so doesn’t really break any new ground or introduce any never-seen-before cool new tricks, but it should bump up the amount of D3D10 content [that] is available for free online.”

There are some great topics covered, including a thorough treatment of shading models, lots about post-processing effects, and an SSAO implementation (I disagree a bit with their specific implementation in theory – convex objects shouldn’t ever have self-shadowing, which is why you usually start by ignoring the half of the samples that fall below the surface; see the sketch below – but SSAO is so hacky that it should be considered an artistic effect as much as a photorealistic one). Lots of chewy stuff here.
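
To illustrate the “ignore the samples below the surface” point, here’s a minimal CPU-side sketch of hemisphere sampling (the names and the occlusion callback are mine, not the book’s): any sample direction that lands behind the surface is flipped to the normal’s side, so a flat or convex surface never darkens itself.

```cpp
#include <cmath>
#include <cstdlib>
#include <functional>

struct Vec3 { float x, y, z; };
static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3  operator-(Vec3 a)          { return {-a.x, -a.y, -a.z}; }
static float dot(Vec3 a, Vec3 b)        { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Uniform random direction on the unit sphere, by rejection sampling.
static Vec3 randomUnitVector() {
    for (;;) {
        Vec3 v = { std::rand() / (float)RAND_MAX * 2 - 1,
                   std::rand() / (float)RAND_MAX * 2 - 1,
                   std::rand() / (float)RAND_MAX * 2 - 1 };
        float len2 = dot(v, v);
        if (len2 > 1e-4f && len2 <= 1.0f)
            return v * (1.0f / std::sqrt(len2));
    }
}

// Returns 1 for a fully open point, 0 for a fully blocked one. Samples that
// fall behind the surface are flipped into the hemisphere around the normal,
// so they never count the surface itself as an occluder.
float ambientOpenness(Vec3 pos, Vec3 normal, float radius, int numSamples,
                      const std::function<bool(Vec3)>& isOccluded) {
    int blocked = 0;
    for (int i = 0; i < numSamples; ++i) {
        Vec3 dir = randomUnitVector();
        if (dot(dir, normal) < 0.0f) dir = -dir;  // hemisphere, not full sphere
        if (isOccluded(pos + dir * radius)) ++blocked;
    }
    return 1.0f - blocked / (float)numSamples;
}
```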

Don’t be fooled by the fact that the book is only on the web, by the way. This is a high-quality effort: it’s well-illustrated, the sections I sampled were readable and worthwhile, and there are solid code snippets throughout. The authors didn’t work out a print deal that they liked, so they released the book to the web. You can see its original listing on Amazon. To quote Jack again, “We’re all glad it’s now out … for everyone to use.”

If you find errors or problems in the book, please let the authors know – the whole book is on a wiki, so you can add discussion notes (note: I found the wiki doesn’t work well with Chrome, but Internet Explorer worked fine). As the gamedev.net article notes, this release may form the basis of a book on DirectX 11, so could be considered something of a free beta. Please do reward the authors for their hard work by contributing feedback to them.

Update: Do keep in mind that this is a first draft (i.e., cut them some slack). Reading more bits, I find the quality varies by section. I trust the authors will read and fix each other’s work as time goes on. I do like the wiki element. For example, there are some comments from Greg Ward on the discussion page for the implementation of the Ward shading model that should help improve the text.

