
The recently and sadly departed Game Developer magazine had a great post-mortem article format: a videogame’s creators list five things that went right and five that went wrong. I thought I’d try one myself for the MOOC “Interactive 3D Graphics” that I helped develop. I promise my next posts will not be about MOOCs, really. The payoff, not to be missed, is the demo at the end – click the picture below if you want to skip the words and go straight to dessert.

Good Points

Three.js: This layer on top of WebGL meant I could initially hide details critical to WebGL but overwhelming for beginners, such as shader programming. The massive number of additional resources and libraries available was a huge help: there’s a keyframing library, a collision detection library, a post-processing library, on and on. Documentation: often lacking; stability: sketchy – interfaces change from release to release; usefulness: incredible – it saved me tons of time, and the course wouldn’t have gone a third as far as it did if I had used just vanilla WebGL.
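
To give a sense of how much three.js hides, here’s a minimal sketch of a complete lit scene – not code from the course, just an illustrative example against the standard three.js API:

```js
// A lit sphere in a few lines of three.js, with no hand-written shaders.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 1000);
camera.position.set(0, 0, 10);

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// MeshLambertMaterial quietly supplies a whole vertex/fragment shader pair.
var sphere = new THREE.Mesh(
    new THREE.SphereGeometry(2, 32, 16),
    new THREE.MeshLambertMaterial({ color: 0xcc6633 })
);
scene.add(sphere);

var light = new THREE.PointLight(0xffffff);
light.position.set(10, 10, 10);
scene.add(light);

renderer.render(scene, camera);
```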

Web Stuff: I didn’t have to handle any of the web programming, and I’m still astounded at how much was possible, thanks to Gundega Dekena (the assistant instructor) and the rest of the Udacity web programmers. Being able to show a video, then let a student try out a demo, then ask him or her a question, then provide a programming exercise, all in a near-seamless flow, is stunning to me. Going into this course we didn’t know this system was going to work at all; a year later WebGL is now more stable and accepted, e.g., Internet Explorer is now finally going to support it. The bits that seem peripheral to the course matter a lot: Udacity’s forum is nicely integrated, with students’ postings about particular lessons directly linked from those pages. It’s lovely having a website that lets students download all videos (YouTube is slow or banned in various places), scripts, and code used in the course.

Course Format: Video has some advantages over text. The simple ability to point at things in a figure while talking through them is a huge benefit. Letting the student try out some graphics algorithm and get a sense of what it does is fantastic. Once he or she has some intuition as to what’s going on, we can then dig into details. I wanted to get stuff students could sensibly control (triangles, materials) on the screen early on. Most graphics books and courses instead open with dreary transforms and matrices. I was able to put off these “eat your green beans” lessons until nearly halfway through the course, as three.js gave enough support that the small bits of code relating to lights and cameras could be ignored for a time. Before transforms, students learned a bit about materials, a topic I think is more immediately engaging.

Reviewers and Contributors: I had lots of help from Autodesk co-workers, of course. Outside of that, every person I asked “can I show your cool demo in a lesson?” said yes – I love the graphics community. Most critical of all, I had great reviewers who caught a bunch of problems and contributed some excellent ideas and revisions. Particular kudos to Gundega Dekena, Mauricio Vives, Patrick Cozzi, and at the end, Branislav Ulicny (AlteredQualia). I owe them each like a house or something.

Creative Control: I’m happy with how most of the lessons came out. I overreached with a few lessons (“Frames” comes to mind), and a few lines I delivered in some videos make me groan when I hear them. However, many of the recordings contain the best explanations of some topics I’ve ever given, definite improvements on Real-Time Rendering. That book is good, but is not meant as an introductory text. I think of this course as the prequel to that volume, sort of like the Star Wars prequels, only good. The scripts for all the lessons add up to about 850 full-sized sheets of paper, about 145,000 words. It’s a book, and I’m happy with it overall.

Some Bad Points

Automatic Grading: A huge boon on one level, since grading individual projects would have been a never-ending treadmill for us humans. Quick stats: the course has well over 30,000 enrollments, with about 1500 people active in any given week, 71% outside the U.S. But, it meant that some of the fun of computer graphics – making cool projects such as Rube Goldberg devices or little games or you name it – couldn’t really be part of the core course. We made up for this to some extent by creating contests for students. Some entries from the first contest are quite nice. Some from the second are just plain cool. But, the contests are over now, with no new ones on the horizon. My consolation is that anyone who is self-motivated enough to work their way through this course is probably going to go off and do interesting things anyway, not just say, “Computer graphics, check, now I know that – on to basket weaving” (though I guess that’s fine, too).

Difficulty in Debugging: The cool thing about JavaScript is that you can debug simple programs in the browser, e.g. in Chrome just hit F12. The bad news is that this debugger doesn’t work well with the in-browser code development system Udacity made. The workarounds are to run JSHint on any code in the browser, which catches simple typos, and to provide the course code on Github; developing the code locally on your machine means you can use the debugger. Still, a fully in-browser solution with debugging available would have been better.

Videos: Some people like Salman Khan can give a lecture and draw at the same time, in a single take. That’s not my skill set, and thankfully the video editors did a lot to clean up my recordings and fix mistakes as they were found. However, a few bugs still slipped through or were difficult to correct without me re-recording the lesson. We point these out in the Instructor Notes, but re-recording takes a lot of time and effort on all our parts, and involves cross-country travel for me. Text or code is easy to fix and rearrange; videos are not. I expect this limitation is something our kids will someday laugh or scratch their heads about. As far as the format itself goes, it seems like a pain to me to watch a video and later scrub through it to find some code bit needed in an upcoming exercise. I think it’s important to have the PDF scripts of the videos available to students, though I suspect most students don’t use them or even know about them. I believe students cope by having two browser windows open side-by-side, one with the paused video, one with the exercise they’re working on.

Out of Time: Towards the end of the course some of the lessons become (relatively) long lectures and are less interactive; I’m looking at you, Unit 8. This happened mostly because I was running out of time – it was quicker for me to just talk than to think up interesting questions or program up worthwhile exercises. Also, the nature of the material was more general, less feature-oriented, which made for more traditional lectures that were tougher to simply quiz about. Still, having a deadline focused my efforts (even if I did miss the deadline by a month or so), and it’s good there was a deadline; otherwise I’d still be endlessly fiddling with improving bits of the course. I think my presentation style improved overall as the lessons went on; the flip side is that the earlier lessons are rougher in some ways, which may have put students off. Looking back on the first unit, I see a bunch of things I’d love to redo. I’d make more in-browser demos, for starters – at the beginning I didn’t realize that was even possible.

Hollow Halls: MOOCs can be divided into two types by how they’re offered. One approach is self-paced, such as this MOOC. The other has a limited duration, often mirroring a real-world class’s progression. The self-paced approach has a bunch of obvious advantages for students: no waiting to start, take it at your own speed, skip over lessons you don’t care about, etc. The advantages of a launched course are community and a deadline. On the forum you’re all at the same lesson, so study groups form and discussions take place. Community and a fixed pace can help motivate students to stick it through until the end (though of course they can lose other students entirely, who then never finish). The other downside of self-pacing is that, for the instructor(s), the course is always on – there’s no break! I’m pretty responsible and like answering forum posts, but it’s about a half hour out of my day, every day, and the time piles up if I’m on vacation for a week. Looking this morning, there are nine forum posts to check out… gotta go!

“But it all works out, I’m a little freaked out.” For some reason that song went through my head a lot while recording, and it gave this post its title.

Below is one of the contest entries for the course. Click on the image to run the demo; there’s more about the project on the Udacity forums. You may need to refresh to get things in sync; a more reliable fix is to pick another song, which almost always gets the music and animation back in sync. See other winners here; the chess game is also one I enjoyed.

Musical Turk

 


Short version: the Interactive 3D Graphics course is now entirely out; the last five units – Lights, Cameras, Texturing, Shader Programming, Animation – have been added. Massive (22K people registered so far), worldwide (around 128 countries, with more than 70% of students from outside the U.S.). Uses three.js atop WebGL. Start at any time, work at your own pace, only basic programming skills needed. Free.

That’s the elevator talk, Twitterized (well, maybe three tweets’ worth). I won’t blab on and on about it, just a few things.

First, it’s so cool to be able to show a student a video, then give a quiz, then let them interact with a demo, then have them write some code for an exercise, all in the browser. Udacity rocketh, both the web programmers and video editors.

Second, I’m very happy about how a whole bunch of lessons turned out. The tough part in all this is trying to not lose your audience. I think I push a bit hard at times, but some of my explanations I like a lot. Mipmapping, antialiasing, gamma correction – a number of the later lectures in particular felt quite good to me, and I thought things hung together well. Shhh, don’t tell me otherwise. Really, it’s not pride so much; I’m just happy to have figured out good ways to explain some things simply.

Third, I wrote a book, basically: it’s about 850 full-sized pages and about 145,000 words. It’s free to download, along with the videos and code. I think of this course as the precursor to Real-Time Rendering, sort of like “Star Wars: Episode 1”, except it’s good. I should really say “we wrote a book”: Gundega Dekena, Patrick Cozzi, Mauricio Vives, and near the end Branislav Ulicny (AlteredQualia) offered a huge amount of help in reviewing, catching various mistakes and suggesting numerous improvements. Many others kindly helped with video clips, interviews, permission to show demos, on and on it goes. Thanks all of you!

Fourth, I love that the demos from the course are online for anyone to point at and click on. Some of these demos are not absolutely fascinating, but each (once you know what you’re looking at) is handy in its own way for explaining some graphics phenomenon. The code’s all downloadable, so others can use them as a basis to make better ones. I’ve wanted this sort of thing for 16 years – it took a while to arrive, but now it’s finally here.

Fifth, working with students from around the world is wonderful! I love helping people on the forums with just a bit of effort on my end. Also, I just noticed a study group starting up. I’ve also enjoyed seeing contest entries, e.g., here are the drinking bird entries; click a pic to see it in WebGL:

 

What’s making a MOOC actually like? See John Owens’ excellent article – my experience is pretty much the same.

A close-up in the recording studio, my little world for a few weeks:


RTR in 3D

All of Google Books are in 3D today, even the excerpts from our second edition:

RTR in 3D


A demo of the game Just Cause 2 is available on Steam today. What’s interesting is that this is the third DirectX 10-only game to be released. There have been any number of DirectX 10 enhanced games, but until a few months ago there was just one DirectX 10-only game release, Stormrise, a mediocre game released in March 2009. Shattered Horizon then came out in November from Futuremark, who are known more for their graphics benchmarks. Just Cause 2 is a sequel, and distributed by a well-known publisher. Humus describes the logic in going DirectX 10-only.

I’m looking forward to seeing how DirectX 11’s DirectCompute gets used in commercial applications. Perhaps the day there’s a DirectX 11-only game of any significance is the day we need to start writing a fourth edition. Let’s see: DirectX 10 was released November 2006 with Vista, so it took about three and a quarter years for an anticipated game to be released that was DirectX 10-only (and even now it’s considered dangerous by many to do so). DirectX 11 was released in October 2009, so if the same rule holds, then we’ll need to start writing in February 2013. Pre-order today!

Even now, 13% of Steam gamers have only SM 2.0. Games like World of Warcraft and Left 4 Dead 2 don’t require more, for example. So what’s the magic percentage where the AAA games decide to set the minimum level to the next shader model? I don’t recall it being much of a deal between shader model 2.0 and 3.0 games; there was a little hype, but I think this was because going from SM 2.0 to 3.0 involved just a card upgrade, vs. the OS upgrade that SM 4.0 demands. Which is funny, in that an OS upgrade is usually cheaper than a new GPU, but I think it’s also because it’s more critical, like a heart transplant vs. a cornea transplant.

Poking around, I found the interesting graphs below. I’m sure games have been left off, and some are miscategorized, e.g. Cryostasis is the only one under SM 4.0, and it doesn’t require DirectX 10. But, let’s assume this data is semi-reasonable; I’m guessing the games are categorized more by a “recommended configuration” than a minimum. So Shader Model 1.x game releases (and remember, 1.x was pretty darn limited) peaked in 2006; 2.0 peaked in 2007 but outnumbered 3.0 until 2009. SM 3.0 hasn’t peaked yet, I’d say (ignore 2010 and 2011 graph values at this point, of course). Remember that SM 2.0 hardware came out around 2002, so it peaked 5-6 years later and still was strong 7 years later (and perhaps longer, we’ll see). SM 3.0 came out in 2004, and seems likely to continue to be strong through 2010 and into 2011. 4.0 came out in 2006, so I’d go with it peaking in 2011-2012 from just staring at these charts. Which entirely ignores the swirl of other data – Vista and Windows 7, Xbox trends, GPU trends, blah-di-blah – but it’ll be interesting to see if this prediction is about right. (Click on a graph for the lists of games for that shader model.)

Shader Model 1.x

Shader Model 2.0

Shader Model 3.0


Some great bits have accumulated. Here they are:

  • I3D 2010 paper titles are up! Most “how would that work?!” type of title: “Stochastic Transparency”.
  • Eurographics 2010 paper titles are up! Most intriguing title: “Printed Patterns for Enhanced Shape Perception of Papercraft Models”.
  • An article in The Economist discusses how consumer technologies are being used by military forces. There are minor examples, like Xbox controllers being used to control robotic reconnaissance vehicles. I was interested to see BAE Systems (a company that isn’t NVIDIA) talk about how using GPUs can replace other computing equipment for simulation at 1/100th the price. Of course, Iraq knew this 9 years ago.
  • I wish I had noticed this page a week ago, in time for Xmas (where X equals, nevermind): Christer Ericson’s recommended book page. I know of many of the titles, but hadn’t heard of The New Turing Omnibus before – this sounds like the perfect holiday gift for any budding computer science nerd, and something I think I’d enjoy, too. Aha, hmmm, wait, Amazon has two-day shipping… done!
  • A problem with the z-buffer, when used with a perspective view, is that the z-depths do not linearly correspond to actual world distances along the camera’s view direction. This article and this one (oh, and this is related) give ways to get back to this linear space; see the sketch after this list. Why get the linear view-space depth? Two reasons immediately come to mind: proper computation of atmospheric effects, and edge detection due to z-depth changes for non-photorealistic rendering.
  • Wolfgang Engel (along with comments by others) has a great summary of order-independent transparency algorithms to date. I wonder when the day will come that we can store some number of layers per pixel without any concern about memory costs and access methods. Transparency is what kills algorithms like deferred shading, because not all the layers are there when shading is resolved. Larrabee could have handled that… ah, well, someday.
  • Morgan McGuire has a paper on Ambient Occlusion Volumes (motto: shadow volumes for ambient light). I’ll be interested to see how this compares with Volumetric Obscurance in I3D 2010 (not up yet for download).
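
As a concrete instance of what those linearization articles derive, here’s a minimal sketch, assuming a standard perspective projection with near and far planes and a depth value d stored in [0,1] (conventions vary, so check your projection matrix before trusting the formula):

```js
// Recover linear view-space distance from a [0,1] depth-buffer value.
// d = 0 maps to the near plane, d = 1 to the far plane; in between, the
// stored depth is distributed hyperbolically, which is the whole problem.
function linearizeDepth(d, near, far) {
    return (near * far) / (far - d * (far - near));
}
```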

Amazon Stock Market update: one nice thing about having an Amazon Associates account is that prices at various dates are visible. The random walk that is Amazon’s pricing structure becomes apparent for our book: December 1st: $71.20, December 11-14: $75.65, December 18-22: $61.68. Discounted for the holidays? If so, Amazon’s marketing is aiming at a much different family demographic than I’m used to. “Oh, daddy, Principia Mathematica? How did you know? I’ve been wanting it for ever so long!”


… at least judging from an email received by Phil Dutre, which he passed on. The key excerpt follows:

Dear Amazon.com Customer,

As someone who has purchased or rated Real-Time Rendering by Tomas Moller, you might like to know that Online Interviews in Real Time will be released on December 1, 2009.  You can pre-order yours by following the link below.

With a title-finding algorithm of this quality, Amazon appears to be in need of more CS majors.

Don’t fret, by the way, I’ll be back to pointing out resources come the holidays; things are just a bit busy right now. In the meantime, you can contemplate Morgan McGuire’s gallery of real photos that appear to have rendering artifacts or look like computer graphics. It’s small right now – send him contributions!


A professor contacted us about whether we had digital copies of our figures available for use on her course web pages for students. Well, we certainly should (and our publisher agrees), and would have done this a while ago if we had thought of it. So, after a few hours of copying and saving with MWSnap, I’ve made an archive of most of the figures in Real-Time Rendering, 3rd edition. It’s a 34 MB download:

http://www.realtimerendering.com/downloads/RTR3figures.zip

This archive should make preparation a lot more pleasant and less time-consuming for instructors, vs. scanning in pages of our book or redrawing figures from scratch. Here’s the top of the README.html file in this archive:

These figures and tables from the book are copyright A.K. Peters Ltd. We have provided these images for use under United States Fair Use doctrine (or similar laws of other countries), e.g., by professors for use in their classes. Not all figures in the book are included; only those created by the authors (directly, or by use of free demonstration programs, as listed below) or from public sources (e.g., NASA) are available here. Other images in the book may be reused under Fair Use, but are not part of this collection. It is good practice to acknowledge the sources of any images reused – we suspect a link to http://www.realtimerendering.com would be useful to students, and we have listed relevant primary sources below for citation. If you have questions about reuse, please contact A.K. Peters at [email protected].

I’ve added a link to this archive at the top of our main page. I should also mention that Tomas’ Powerpoint slidesets for a course he taught based on the second edition of our book are still available for download. The slides are a bit dated in spots, but are a good place to start. If you have made a relevant teaching aid available, please do comment and let others know.


I’m at I3D 2009; tonight at the dinner Austin Robison at NVIDIA announced NVIRT, which is NVIDIA’s ray-casting engine. I say “casting” as the idea is that you feed it objects, hand it a ray generator, and it gives you back the ray intersections desired. Certainly it can be used for ray traced rendering, and the constructs presented make it clear they have thought through this aspect: rays can terminate on the first intersection found (useful for shadow rays), or can return the closest intersection point (eye/reflection/refraction rays). Rays can continue on when a fully transparent object is hit. Objects can be put in any efficiency structure you wish, and structures could be contained by other structures (Jim Arvo’s metahierarchies idea). For example, you could put static geometry in a k-d tree, which is highly efficient but expensive to update, while placing dynamic objects in a bounding volume hierarchy, which usually can be updated more easily (though losing efficiency over time) by growing bounds. You have control over what efficiency methods are used.
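
To make the two query flavors concrete, here’s a hypothetical sketch of their semantics – not NVIRT’s actual API, and with a flat object list standing in for the efficiency structures a real engine would traverse (the `intersect` method returning a hit record with a ray parameter `t` is purely an assumption of this sketch):

```js
// Shadow-ray style query: any intersection at all ends the search.
function anyHit(ray, objects) {
    for (var i = 0; i < objects.length; i++) {
        if (objects[i].intersect(ray) !== null) {
            return true;  // any occluder will do; stop immediately
        }
    }
    return false;
}

// Eye/reflection/refraction-ray style query: find the nearest intersection.
function closestHit(ray, objects) {
    var best = null;
    for (var i = 0; i < objects.length; i++) {
        var hit = objects[i].intersect(ray);  // null if the ray misses
        if (hit !== null && (best === null || hit.t < best.t)) {
            best = hit;  // keep the intersection closest along the ray
        }
    }
    return best;
}
```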

They’re thinking of this SDK in more general ray-casting terms: collision detection, AI queries, and baking illumination or other characteristics onto surfaces. I can certainly imagine uses for engineering simulation. It runs on CUDA, but hides CUDA programming from the user. By the way, the switching time between CUDA and the graphics API will someday soon be a lot less than it is now.

This SDK will be released sometime this spring (it will also be incorporated with NVIDIA’s NVSG scene graph SDK, as a separate release). The SDK will come with lots of samples, including source for a basic ray-tracing renderer. All in all, an interesting development. The catch is, of course, that CUDA does not run on anything but NVIDIA hardware. Nonetheless, this is a fascinating first step. Austin says this effort is a serious attempt by NVIDIA to put this sort of engine in the hands of developers, not some “let’s see if this research sticks” half-baked release. Hearing him talk about the bits of inside information their group learnt about the operation of the GPU, and the corresponding boosts in performance, makes me wonder if other GPU-based ray tracers out there will be able to get near their performance.

I have a bunch of links saved up, which I’ll dump here someday soon, as well as more about I3D 2009 (see Jeremy Schopf’s blog in the meantime). For now I’ll just mention one quick link: Morgan McGuire’s twitter blog. No, it’s not an “I’m drinking a latte and using my iPhone” twitter blog. I like the idea a lot: it’s where he simply puts any great links he’s run across, with a quick description for each. Low maintenance, minimal effort, and useful & interesting, at least to me. It’s about game design and related topics (and unrelated ones) as much as graphics. This is one of those “everyone who finds cool stuff on the internet should do this” concepts, as far as I’m concerned. Sure, there’s del.icio.us and similar social bookmarking sites, but a blog lets me know when there’s something new from someone I respect.

Morgan is one of those uncommon people who has considerable industry experience (e.g. “Titan Quest”) while also being in the academic world. He’s a coauthor of the new book Creating Games, which I had been jumping around inside, sampling snippets; now I’m sitting down and reading it for real. It is aimed at being a book for teaching a college course on making games, both board- and video-, giving a number of schedules for 3 to 4 week projects and worksheets for these. However, these are appendices; the focus of the book is well-informed surveys of a wide range of game design and creation practices. The first chapter has a great startup project for small groups in a class: “here are some dice and pieces of different colors, some paper – go, make a game in 7 minutes.” Anyway, not graphics related per se, but there’s certainly a lot about the computer games industry inside, much of it technical and practical. My favorite illustration so far is the dependence graph amongst the art assets for Spiderman 3, Figure 3.8 – daunting. You can look inside at Amazon. Me, I’m an avid boardgamer (I was up too late last night playing Dominion with Morgan and Naty Hoffman – consider me entirely biased), so I’m enjoying reading it and thinking maybe I should try to design a game…


I noticed (or maybe re-noticed) recently that 4 of the 5 Graphics Gems books are excerpted on Google Books. I’ve added links to the excerpts from the Graphics Gems repository. Which made me wonder, can you look inside these books on Amazon? Indeed you can. So I’ve also just added links to Amazon’s Look Inside pages. Between these two resources you can now pretty much read any article from these books online, one way or the other. Handy.


Time to clear the collection of links and tidbits.

First, two new graphics books have come to my attention: Essentials of Interactive Computer Graphics and Computer Facial Animation, Second Edition. The first is an introductory textbook for teaching, well, just that. Real-Time Rendering was never meant as an introduction to the field of interactive graphics, we’ve always seen it as the book to hit after you know the basics. The Essentials book is squarely focused on these basics, and is more event-oriented and application-driven: GUIs and MFC, instancing and scene graphs, the transformation pipeline. It’s truly aimed at computer graphics in general, not 3D lit scenes. Shading is barely mentioned, for example. The book comes with a CD of software libraries developed in the latter half of the book. See the book’s website for much more information and supplemental materials (e.g. Powerpoint slidesets for teaching from the book!).

Computer Facial Animation is an area I know little about. Which makes this book intriguing to page through – how much there is to know! The first few chapters are dedicated to anatomy and early ways of recording facial expressions. The rest covers all sorts of areas: speech synchronization, hair modeling, face tracking, muscle simulation, skin textures, even photographic lighting techniques. This is one I’ll leave on my desk and hope to pick up at lunch now and again (along with those other books on my desk that beg to be read, like Color Imaging – I need more lunches).

Which reminds me of this nice talk by Kevin Bjorke: Beautiful Women of the Future. The first half is more aesthetic with some interesting fact nuggets, the last half is a worthwhile overview of interactive skin and hair rendering techniques.

It’s worth noting that there are many computer graphics books excerpted on Google Books. Our portal page, item #6, lists a few good ones.

Game Developer Magazine’s Front-Line Award Winners have been announced. Our book was nominated, but to be honest I’m not terribly upset it didn’t win (our second edition won it before); instead, a new book on (video)game design got the honors in the book category, The Art of Game Design. The rest of the award winners are (almost) no doubt deserving, but the winner list provides little new information. It’s the usual suspects: Photoshop CS3, Havok, Torque, Visual Studio 2008 (really? I’d go with Visual Assist X, which adds a bunch of useful bits to VS 2008 to make it more usable). I haven’t seen the Game Developer article itself, which should be more interesting, as it includes the list of runners-up.

Update: it’s a day later, and the Front Line awards article is available online. Good deal!

I just noticed that Jeremy Birn has been having lighting contests for synthetic scenes. Meant more for the mental ray users of the world, I like it just because there are some nice models to load up in my test applications.

We mentioned SIGGRAPH Asia before; see the papers collection here and some GPU-specific presentations here.

A fair bit going on in the blogosphere:

  • Christer Ericson has an article on optimizing particle system display. I hadn’t considered some of these techniques before.
  • Bill Mill has a worthwhile rant on publishing code along with research results. This often isn’t done, because there’s little benefit to the author. Some researchers will do it anyway, for various reasons (altruism, fame, etc.), but I wish the research system was structured to require such code. It’s certainly encouraged for the journal of graphics tools, for example, but even then the frequency is not that high.
  • Wolfgang Engel has lots of posts about programming for the iPhone & Touch; I was more interested in his comments about caching shadow maps.

Everyone should know about the Steam Hardware Survey. The cool thing is that they recently started adding a history for some stats and – dare I dream it? – pie charts to the site. Much easier to grok at a glance.

Tutorials galore:

Need a huge (or medium, or small), free texture of the whole earth? Go here.

Google’s knol project collects short articles on various topics. Here’s a reasonable sample: a short history of theories of vision. To be honest, though, the site overall seems a bit of a dumping ground. This sort of lameness is proof of why editorial supervision (either a single person or a wiki community) is a good thing.

DirectX 10 corrects a long-standing “feature” of previous versions of DirectX: the half-pixel offset. OpenGL’s always had it right (and there really is a right answer, as far as I’m concerned). I was happy to find this full explanation of the DirectX 9 problem on Microsoft’s website.
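
In short, D3D9 puts pixel centers at integer screen coordinates while texel centers sit at half-integers, so a “1:1” full-screen quad samples halfway between texels unless you compensate. Here’s a minimal sketch of the usual workarounds (hypothetical helper names, just to show the arithmetic):

```js
// D3D9 half-pixel fix: shift screen-space vertex positions by half a pixel
// so pixel centers line up with texel centers again.
function d3d9ScreenVertex(x, y) {
    return { x: x - 0.5, y: y - 0.5 };
}

// Equivalent fix in texture space instead: nudge UVs by half a texel.
function halfTexelOffset(u, v, texWidth, texHeight) {
    return { u: u + 0.5 / texWidth, v: v + 0.5 / texHeight };
}
```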

Our book had a little review in the February 2009 issue of PC-Gamer, by Logan Decker, executive editor, on page 80. I liked the first sentence: “I don’t know why I didn’t immediately set fire to this reference for graphics professionals the moment I saw all the equations. But I actually read it, and if you skip the math bits as I did, you’ll get brilliantly lucid explanations of concepts like vertex morphing and variance shadow mapping—as well as a new respect for the incredible craftsmanship that goes into today’s PC games.”

This one’s made the rounds, but just in case: the Mona Lisa with 50 semi-transparent polygons, evolved (sort-of). Here’s a little eye candy (two links). Plus, panoramas galore.

Finally, guard your dreams.

