Monthly Archives: January 2010

I3D 2010 Papers

The Symposium on Interactive 3D Graphics and Games (I3D) has been a great little conference since its genesis in the mid-80s, featuring many influential papers over this period.  You can think of it as a much smaller SIGGRAPH, focused on topics of interest to readers of this blog.  This year, the I3D papers program is especially strong.

Most of the papers have online preprints (accessible from Ke-Sen Huang’s I3D 2010 paper page), so I can now do a proper survey.  Unfortunately, I was able to read two of the papers only under condition of non-disclosure (Stochastic Transparency and LEAN Mapping).  Both papers are very good; I look forward to being able to discuss them publicly (at the latest, when I3D starts on February 19th).

Other papers of interest:

  • Fourier Opacity Mapping riffs off the basic concept of Variance Shadow Maps, Exponential Shadow Maps (see also here) and Convolution Shadow Maps.  These techniques store a compact statistical depth distribution at each texel of a shadow map; here, the quantity stored is opacity as a function of depth, similarly to the Deep Shadow Maps technique commonly used in film rendering.  This is applied to shadows from volumetric effects (such as smoke), including self-shadowing.  This paper is particularly notable in that the technique it describes has been used in a highly regarded game (Batman: Arkham Asylum).
  • Volumetric Obscurance improves upon the SSAO technique by making better use of each depth buffer sample; instead of treating each one as a point sample (a simple binary comparison between the depth buffer and the sampled depth), each is treated as a line sample (taking full account of the difference between the two values).  It is similar to a concurrently developed paper (Volumetric Ambient Occlusion); the techniques from either of these papers can be applied to most SSAO implementations to improve quality or increase performance.  The Volumetric Obscurance paper also includes the option to extend the idea further and perform area samples; this can produce a simple crease shading effect with a single sample, but does not scale well to multiple samples.  (The point-vs-line distinction is sketched just after this list.)
  • Spatio-Temporal Upsampling on the GPU – games commonly use cross-bilateral filtering to upsample quantities computed at low spatial resolutions.  There have also been several recent papers about temporal reprojection (reprojecting values from previous frames for reuse in the current frame); Gears of War 2 used this technique to improve the quality of its ambient occlusion effects.  This paper combines the two, filtering samples across both space and time (the spatial weighting is sketched after this list).
  • Efficient Irradiance Normal Mapping – at GDC 2004, Valve introduced their “Irradiance Normal Mapping” technique for combining a low-resolution precomputed lightmap with a higher-resolution normal map.  Similar techniques are now common in games, e.g. spherical harmonics (used in Halo 3), and directional lightmaps (used in Far Cry).  Efficient Irradiance Normal Mapping proposes a new basis, similar to spherical harmonics (SH) but covering the hemisphere rather than the entire sphere.  The authors show that the new basis produces superior results to previous “hemispherical harmonics” work.  Is it better than plain spherical harmonics?  The answer depends on the desired quality level; with four coefficients, both produce similar results.  However, with six coefficients the new basis performs almost as well as quadratic SH (nine coefficients), making it a good choice for high-frequency lighting data.
  • Interactive Volume Caustics in Single-Scattering Media – I see real-time caustics as more of an item to check off a laundry list of optical phenomena than something that games really need, but they may be important for other real-time applications.  This paper handles the even more exotic combination of caustics with participating media (I do think participating media in themselves are important for games).  From a brief scan of the technique, it seems to involve drawing lines in screen space to render the volumetric caustics.  They do show one practical application for caustics in participating media – underwater rendering.  If this case is important to your application, by all means give this paper a read.
  • Parallel Banding Algorithm to Compute Exact Distance Transform with the GPU – I’m a big fan of Valve’s work on using signed distance fields to improve font rendering and alpha testing.  These distance fields are typically computed offline (a process referred to as “computing a distance transform”, sometimes “a Euclidean distance transform”).  For this reason, brute-force methods are commonly employed (a baseline of this sort is sketched after this list), though there has been a lot of work on more efficient algorithms.  This paper gives a GPU-accelerated method which could be useful if you are looking to speed up your offline tools (or if you need to compute alpha silhouettes on the fly for some reason).  Distance fields have other uses (e.g. collision detection), so there may very well be other applications for this paper.  Notably, the paper project page includes links to source code.
  • A Programmable, Parallel Rendering Architecture for Efficient Multi-Fragment Effects – one of the touted advantages of Larrabee was the promise of flexible graphics pipelines supporting multi-fragment effects (A-buffer-like techniques such as order-independent transparency and rendering to deep shadow maps).  Despite a massive software engineering effort (and an instruction set tailored to help), Larrabee has not yet been able to demonstrate software rasterization and blending running at speeds comparable to dedicated hardware.  The authors of this paper attempt to do the same on off-the-shelf NVIDIA hardware using CUDA – a very aggressive target!  Do they succeed?  It’s hard to say.  They do show performance pretty close to that of the same scene rendered through OpenGL on the same hardware, but until I have time to read the paper more carefully (with an eye on caveats and limitations) I reserve judgment.  I’d be curious to hear what other people have to say on this one.
  • On-the-Fly Decompression and Rendering of Multiresolution Terrain (link is to an earlier version of the paper) – the title pretty much says it all.  They get compression ratios between 3:1 and 12:1, which isn’t bad for on-the-fly GPU decompression.  A lot of water has gone under the terrain rendering bridge since I last worked on one, so it’s hard for me to judge how it compares to previous work; if you’re into terrain rendering give it a read.
  • Radiance Scaling for Versatile Surface Enhancement – this could be thought of as an NPR technique, but it’s a lot more subtle than painterly techniques.  It’s more like a “hyper-real” or “enhanced reality” technique, like ambient occlusion (which darkens creases a lot more than a correct global illumination solution, but often looks better; 3D Unsharp Masking achieves a more extreme version of this look).  Radiance Scaling for Versatile Surface Enhancement is a follow-on to a similar paper by the same authors, Light Warping for Enhanced Surface Depiction.  Light warping changes illumination directions based on curvature, while radiance scaling scales the illumination instead, which enables cheaper implementations and increased flexibility.  With some simplifications and optimizations, the technique should be fast enough for most games, making this paper useful to game developers trying to give their game a slightly stylized or “hyper-real” look.
  • Cascaded Light Propagation Volumes for Real-time Indirect Illumination – this appears to be an updated (and hopefully extended) version of the CryEngine 3 technique presented by Crytek at a SIGGRAPH 2009 course (see slides and course notes).  This technique, which computes dynamic approximate global illumination by propagating spherical harmonics coefficients through a 3D grid, was very well-received, and I look forward to reading the paper when it is available.
  • Efficient Sparse Voxel Octrees – there has been a lot of excited speculation around raycasting sparse voxel octrees since John Carmack first hinted that the next version of id Software’s rendering engine might be based on this technology.  A SIGGRAPH 2008 presentation by Jon Olick (then at id) raised the excitement further (demo video with unfortunate soundtrack here).  The Gigavoxels paper is another example of recent work in this area.  Efficient Sparse Voxel Octrees promises to extend this work in interesting directions (according to the abstract – no preprint yet, unfortunately).
  • Assisted Texture Assignment – the highly labor-intensive (and thus expensive) nature of art asset creation is one of the primary problems facing game development.  According to its abstract (no preprint yet), this paper proposes a solution to part of this problem – assigning textures to surfaces.  There is also a teaser posted by one of the authors, which looks promising.
  • Epipolar Sampling for Shadows and Crepuscular Rays in Participating Media with Single Scattering – volumetric effects such as smoke, shafts of light (also called “god rays” or crepuscular rays) and volumetric shadows are important in film rendering, but usually missing (or coarsely approximated) in games.  Unfortunately, nothing is known about this paper except its title and the identities of its authors.  I’ll read it (and pass judgment on whether the technique seems practical) when a preprint becomes available (hopefully soon).
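
To make that point-vs-line distinction concrete, here is my own minimal sketch of the two sample tests (not code from the paper; depth z is assumed to increase away from the camera, and solid geometry is assumed to lie behind the visible surface):

```python
def point_sample_occlusion(z_buffer, z_sample):
    # Classic SSAO: a binary test -- is the sample point behind the visible surface?
    return 1.0 if z_buffer < z_sample else 0.0

def line_sample_occlusion(z_buffer, z_center, half_len):
    # Volumetric-obscurance-style line sample: the sample is the depth interval
    # [z_center - half_len, z_center + half_len], a chord through the sampling
    # sphere; return the fraction of that interval behind the visible surface.
    occluded = (z_center + half_len) - max(z_buffer, z_center - half_len)
    return min(max(occluded / (2.0 * half_len), 0.0), 1.0)
```

Averaging line samples over a disk of screen-space offsets estimates the unoccluded volume of the sampling sphere rather than just counting unoccluded points, which is why each sample carries more information.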
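
Similarly, since cross-bilateral filtering is the spatial half of the spatio-temporal idea, here is a minimal CPU sketch of 2x cross-bilateral upsampling (mine, not the paper’s; sigma_z is a made-up depth-similarity parameter, and a real implementation would run in a pixel shader and add the temporal reprojection term):

```python
import numpy as np

def cross_bilateral_upsample_2x(low_val, low_depth, high_depth, sigma_z=0.05):
    """Upsample low_val (H/2 x W/2, same shape as low_depth) to the size of
    high_depth (H x W), weighting the four nearest low-res samples by bilinear
    distance *and* depth similarity, so values computed at low resolution
    don't bleed across depth discontinuities."""
    hh, hw = high_depth.shape
    lh, lw = low_depth.shape
    out = np.zeros((hh, hw))
    for y in range(hh):
        for x in range(hw):
            fy, fx = y / 2.0, x / 2.0            # position in the low-res grid
            y0 = min(int(fy), lh - 2)
            x0 = min(int(fx), lw - 2)
            wsum, vsum = 1e-6, 0.0               # tiny epsilon avoids divide-by-zero
            for sy in (y0, y0 + 1):
                for sx in (x0, x0 + 1):
                    w = max(0.0, 1.0 - abs(fy - sy)) * max(0.0, 1.0 - abs(fx - sx))
                    dz = high_depth[y, x] - low_depth[sy, sx]
                    w *= np.exp(-(dz / sigma_z) ** 2)   # depth-similarity term
                    wsum += w
                    vsum += w * low_val[sy, sx]
            out[y, x] = vsum / wsum
    return out
```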
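
Finally, for scale on the distance transform paper: the brute-force baseline that faster algorithms compete with is only a few lines, but costs O(pixels x opposite-side pixels). A rough sketch of mine (it assumes the mask contains both inside and outside pixels):

```python
import numpy as np

def signed_distance_field(mask):
    """Brute-force signed Euclidean distance transform of a boolean mask:
    positive outside the shape, negative inside, measured to the nearest
    pixel on the other side of the boundary."""
    inside, outside = np.argwhere(mask), np.argwhere(~mask)
    sdf = np.empty(mask.shape)
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            targets = outside if mask[y, x] else inside
            d2 = ((targets - (y, x)) ** 2).sum(axis=1).min()  # nearest squared distance
            sdf[y, x] = -np.sqrt(d2) if mask[y, x] else np.sqrt(d2)
    return sdf
```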

The remaining papers are outside my area of expertise, so it’s hard for me to judge their usefulness.

Tools for Teaching

Today’s question: what tools are there for teaching about computer graphics and/or computer games? I don’t have a definitive answer, but I have a little experience with a few resources and know of others. That said, I haven’t sat down with more than two of them for any serious amount of time. Comments are most welcome, especially for pointers to better overviews than this!

I’ll list these roughly from the more basic to those for budding programmers and indies. That said, I think some languages, e.g. Processing, are easier to get into than the UI-driven systems, but some understanding of programming is needed. I’ve also ignored some famous examples like Logo, as it feels a bit crusty and limited to me. But, that’s me—let me know if you have a great counterexample.

Game Maker: My experience with this one is trying it with my younger son, and later seeing Cornell use it in a 20-hour digital game design workshop for high-school students. It’s much more about games than graphics; graphical elements are 2D sprites and backdrops. Most of the focus is on events and constraints, as you might guess. Very UI oriented – no programming language is involved per se, though the UI controls are essentially a programming language of a sort. I have The Game Maker’s Apprentice, which is good in that it gets the person involved quickly, but bad in that little understanding gets transmitted early on (“now set this UI control to do this, now load that file there, now hit ‘run’; if it doesn’t run, walk through all the instructions carefully again”). From Amazon reviews, the book Getting Started in Game Maker might be better. There are lots of example games and plenty of web support. Cost: free Lite version and trial Pro version; the Pro version (which you want) is $20.

Multimedia Fusion 2: this is an expanded and rebranded version of the company’s “Games Factory 2” product that includes all its functionality. It looks similar to Game Maker, but is targeted at a somewhat more professional audience (and a higher price) overall. That said, there is a book for it, Game Creation for Teens. Cost: free trial download from 2008; the lowest street price I saw for Multimedia Fusion 2 was $83.

Flash: Adobe’s popular animation & game programming system, famously unsupported by the iPad. Some years ago the local science center used Macromedia Flash to teach grade-school kids basic Flash animation; now Adobe Flash Professional CS4 is used to create Flash. From what little I’ve read, the interface is daunting, but it’s actually pretty easy for beginners to get going. Animation can be done purely with the graphical tools; ActionScript 3 is the language for serious interactive applications. There are of course a huge number of books, forums, and other sources of online support. Cost: as low as $200.

Flixel: I’ve heard this mentioned twice as a worthwhile development resource for Flash. It provides some useful base classes in ActionScript 3 for making games, along with tutorials, a forum, etc. Cost: free.

Pygame: This is the first of a few language-oriented resources. Pygame is a set of modules to help in writing games in Python. One friend said he got a simple game running in less than two hours from download (but that’s him…); another acquaintance wasn’t so wild about it, as he hit its limitations. It’s 2D oriented, though there are a few 3D experiments. This sounds like a pretty good, and super-cheap (free!), way to introduce kids to programming. I recall an article in CACM or IEEE Computer a year or so ago that made an excellent point: Python is one of those great languages well suited to people who have never programmed before. Like Perl, it lets you program and get something done quickly without a lot of clutter, making it a better candidate than Java for the first programming language taught. Let the programmer have fun getting things done, then teach them more elaborate computer science concepts and why these are useful. Python is pretty easy to learn: Andrew Glassner has a great introductory page – in a few hours you’ll know the basics. But I digress… Anyway, Pygame has lots of users, many of the projects are open source, and there’s even a second edition of a free professionally-produced book about Pygame. Cost: free.
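
To give a sense of how little code a first Pygame program takes, here is a generic bouncing-square sketch of my own (not from any of the books or tutorials mentioned above):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

pos = [100.0, 100.0]
vel = [180.0, 140.0]                         # pixels per second

running = True
while running:
    dt = clock.tick(60) / 1000.0             # cap at 60 fps; dt in seconds
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    for axis, limit in ((0, 640 - 40), (1, 480 - 40)):
        pos[axis] += vel[axis] * dt
        if pos[axis] < 0 or pos[axis] > limit:   # bounce off the window edges
            vel[axis] = -vel[axis]
    screen.fill((30, 30, 30))
    pygame.draw.rect(screen, (200, 80, 80), (int(pos[0]), int(pos[1]), 40, 40))
    pygame.display.flip()

pygame.quit()
```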

Processing: OK, if you’re sick of hearing about game programming resources, here you go. This is a little language suited for making cool 2D images. Key basic graphics concepts—transforms, curves, image manipulation—are encapsulated. Basic interaction is also a snap. It has a 3D component, but that part is fairly weak. I’ve played with it, it’s fun. Not meant for games; nonetheless, it’s possible. Much online support, and there are two popular books, here and here (the second has lots of additional material online). Cost: free.

Blitz Basic: I’ve heard of this one from a few people but have no direct experience. Definitely about programming, it uses BASIC, but enhanced with functions and types. It looks pretty full-featured, e.g. there’s joystick and networking support. BlitzPlus is the Windows 2D version, BlitzMax also runs on Macs and Linux, Blitz3D adds 3D support: camera, lighting, texturing (including bumps), CLOD terrain, etc. There’s an SDK for interfacing the 3D engine with C++, C#, etc.  Manuals are online, and there looks to be a healthy user community. There’s a German version of the website. Cost: free trials of all, prices range from $60 to $100 per product, with more for add-ons.

XNA: This is something of the 800-pound gorilla of programming-related graphics education. As an example, some teen programmers in the Cornell workshop were using XNA as the next step beyond Game Maker. A Microsoft initiative, it’s partially aimed at students and hobbyists. The base language is C# (Java on steroids, if you haven’t used it). There are lots of elements and audiences for XNA; Wikipedia also covers the area well. There are many XNA resource sites and books out there (though I was sad to see Ziggyware is no more). Cost: free for the most part, depending on what you’re doing.

Ogre: Begun about a decade ago, this is from all reports a pretty nice graphics engine. It supports a huge number of effects and areas (I know developers who have consulted the codebase for ideas). There are lots of add-on libraries. It’s great for deep-down serious graphics education; Ogre is entirely open source (MIT license), so everything can be examined. Cost: free, and free licensing.

Unity: You’ve now definitely entered the Indie game developer zone. A 3D games development system I’ve heard mentioned by others as a way of learning. Multiplatform, now including the iPhone. Cost: free base version and trial of Pro version, which costs $1499; other pricing for iPhone version. There’s a book. Licensing for Indies has become free.

UDK: The Unreal Development Kit is the most popular engine used for game development. Cost: free to download the full version, 530 MB of fun. Licensing cost: if you have to ask, you can’t afford it.

Torque: Torque is the second most popular development platform for game creators. It comes in two versions, 2D and 3D, and three books are available on this engine. Cost: $250 for the 2D version, $1000 for the 3D version, but educational pricing can be arranged.

Whew! OK, what did I forget? (Make sure to read the comments—some excellent additions there.)

Update 2/6/2010: Kodu, from Microsoft. For grade schoolers, it uses a visual language. Surprisingly, it’s in 3D, with a funky chiclet terrain system. Another interesting graphics programming tool is NodeBox 2, now in beta. It uses a node-graph-based approach; see some examples here.

You May Want to Own Your Own Images

Now that the SIGGRAPH 2010 paper deadline is over, I thought it worth mentioning ways in which you can retain full use of your own images, should you be fortunate enough to have your work accepted for publication. This isn’t meant as an “ACM’s copyright policy is bad” article, rather it presents some possible workarounds while waiting for the policy to be improved. Think of these ideas as code patches.

A number of graphics people were talking about the ACM’s copyright policy. James O’Brien wrote:

I also am bothered by the fact that ACM claims to own images used in a publication. For example, if I render an image and use it to illustrate a paper, ACM now claims to own the copyright on the image and I am limited in what I can use that image for in the future. I’d like included images and other non-text content to be treated similarly to how 3rd party images are currently treated so that the authors retain copyright to the images and only grant ACM unlimited permission to use.

Larry Gritz replied:

James, why are you more bothered by “I painted the image, now they claim ownership” than “I wrote the words, now they claim ownership”? Aren’t they essentially the same situation?

James responded:

Not really, at least not to me. The images often represent a huge amount of work to demonstrate some algorithm. The words I wrote in an afternoon and I can always write some more words that say roughly the same thing if I had to. The images also have uses beyond the paper. For example, if “Time” magazine writes an article about me, they will want to run the images, or if a textbook author decides to talk about my algorithms s/he may want the images to illustrate the book. I also don’t see the argument for why ACM would benefit by owning the images. It’s a case where it costs the author something but gains ACM nothing, so why not change the policy to maximize everyone’s benefit?

In further discussions, we identified a few different ways to be able to use your own images. Mine is one that was first mentioned in the Ray Tracing News in 2005:

My advice (worth exactly nothing in court) to anyone publishing nowadays is to make two images of any image to be published, one from a fairly different view, etc. In this way you can reuse and grant rights to the second, unpublished image as you wish. That said, there’s an area of law where you compare one photo with another and if they match by 80% (by some eyeballing metric), then they’re considered the same photo for purposes of copyright. Usually this is meant to protect one photographer’s composition from being reused by another. What it means to 3D computer graphics, where it’s easy to change the view, etc., remains to be seen. Still, ACM’s rights to your work are less clear for a new, different image. This sort of thing is small potatoes, but taking action so that you have images and videos you fully own then removes the hassle-factor of granting permission to others wanting to use your work.

James O’Brien said the following:

I’ve bumped into this copyright issue with images a few times. The first was when a book author wanted to use an image of mine in her text. I said yes, but she was subsequently told by ACM that she needed ACM’s permissions and she had to pay a fee and include a notice crediting ACM rather than me.

If you are willing to be persistent, you can keep ownership of your copyright for your whole paper and just grant ACM unlimited permission. I did this in 2005 and if you download “Animating Gases with Hybrid Meshes,” SIGGRAPH 2005, from the DL you will see the copyright notice says “copyright held by author”. That was inserted by them instead of the regular notice after several days of discussion on the phone. It was very unclear what the motivation was for the ACM to insist on owning the images.

If the images are owned by a 3rd party they can only ask you to get permission. After 2005, I did a few papers where I included a note that the images were all copyright by UC Berkeley and used with permission. It’s not clear if that sort of note means anything.

The latest version of the ACM copyright form I’ve seen requires you to fill out an addendum listing 3rd-party-owned components and you have to get a separate permission form for them. My paper in SCA this summer required this form (images owned by Lucas Arts). It was a hassle to get Lucas to sign off on the permissions. But that’s not ACM’s fault… in fact Stephen Spencer was very flexible.

An anonymous person wrote:

Another option would be for people concerned about this to set up an organization, call it Digital Images LLC, that you assign the copyright to as soon as you generate the image. (That will likely require the permission of your university or employer, since the image is arguably a work-made-for-hire under the copyright law and therefore owned by the employer.)

Digital Images LLC then licenses its copyright in the images so that you can use it in papers, books, or other works. As far as ACM is concerned, it’s just like if you used a figure from another source with permission. The ACM policy makes that clear:

The author’s copyright transfer applies only to the work as a whole, and not to any embedded objects owned by third parties. An author who embeds an object, such as an art image that is copyrighted by a third party, must obtain that party’s permission to include the object, with the understanding that the entire work may be distributed as a unit in any medium.

So, there are at least three ways you can retain full rights to your own images. Mine is “make another”, James’ is “request an exception”, and the third is “create an LLC”. If you have another, have information about the use of any of these, or just plain have an opinion, please comment.

7 things for January 22

There’s been some great stuff lately:

  • Gustavo Oliveira has an article in Gamasutra about writing an efficient cross-platform SIMD vector library and the tradeoffs involved. The last page was of particular interest, as I had wondered how effective the Intel C++ Compiler (ICC) was vs. Microsoft’s. He also provides downloadable source code and in-depth statistics.
  • NVIDIA has given some information about Fermi, their next GPU. Warning: their page will automatically start some audio – annoying. You could just skip to the white paper. One big deal about Fermi is its support of doubles, which means it can be used for more science & engineering number-crunching. The Tech Report has a good overview article of other interesting features, and also presents benchmarking results.
  • Tests of OpenCL, the platform-independent parallel programming standard, have started to appear for AMD and NVIDIA GPUs.
  • Speaking of NVIDIA, their PhysX engine is getting some attention. The first video clip in this article gives a sense of the sorts of effects it can add. Pretty stuff, but the funny thing about PhysX is that it must accelerate computations that do not actually affect gameplay (i.e. it should not move any objects in the scene differently than they would move on non-PhysX machines). This limits its use to particle systems and other eye candy. Not a diss—heck, most game graphics are about eye candy—but something to keep in mind.
  • Naty pointed out an article about how increasing the number of megapixels in a camera is just salesmanship and gains no actual benefit. The author later gives more explanation of his argument, which is that diffraction puts a physical limit on the useful size of a pixel for a given camera size (a worked example follows this list).
  • Sony Pictures Imageworks has released a draft describing their Open Shading Language (OSL). While aimed at high-end rendering for films, it’s interesting to see what is built-in (e.g. deferred ray tracing) and what they consider important. Read the introduction for more information, or the draft itself.
  • My favorite infographic of the week: Avatar vs. Modern Warfare 2. Ignore the weird chartjunk concentric circles; focus on the numbers. The most amazing stat to me is the $200M advertising budget for MW2.
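
Regarding the megapixel item: here is a quick back-of-the-envelope version of the diffraction argument, using my own assumed numbers (a typical 1/2.5-inch compact sensor at about 10 megapixels), not the author’s:

```python
# The Airy disk's first-null diameter is roughly 2.44 * wavelength * f-number.
wavelength_um = 0.55      # green light, in micrometers
sensor_width_mm = 5.76    # assumed 1/2.5-inch compact sensor
width_px = 3648           # assumed ~10 megapixels at a 4:3 aspect ratio

pixel_pitch_um = sensor_width_mm * 1000.0 / width_px
for f_number in (2.8, 8.0):
    airy_um = 2.44 * wavelength_um * f_number
    print(f"f/{f_number}: Airy disk {airy_um:.1f} um "
          f"= {airy_um / pixel_pitch_um:.1f} pixels at {pixel_pitch_um:.2f} um pitch")
```

At f/8 the blur disk spans roughly seven pixels on such a sensor, so adding more pixels past that point resolves no extra detail.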

… and that’s seven; more later.

2009 Academy Sci & Tech Awards

Oops – I forgot to include Christophe Hery in the point-based color bleeding award below.  This has now been fixed; apologies and congratulations to Christophe.  Many thanks to Margarita Bratkova for pointing out the error!

Last week, the Academy of Motion Picture Arts and Sciences (best known for its annual Academy Awards, or “Oscars”) announced the winners of its 2009 Scientific & Technical Awards.  No Awards of Merit (the highest award level) were given this year – those are the ones that come with an “Oscar” statuette and are shown in the Academy Awards telecast (Renderman and Maya have won Awards of Merit in previous years).

Two computer graphics-related Scientific and Engineering Awards were given this year; these are the second-highest award level and come with a bronze tablet:

  • Per Christensen, Michael Bunnell and Christophe Hery for point-based indirect illumination; in an interesting inversion of the usual practice, this fast approximate global illumination / ambient occlusion technique started out as a real-time GPU technique and ended up as an offline rendering CPU technique (first used in Pirates of the Caribbean: Dead Man’s Chest, it is now a standard part of Pixar’s Renderman).  A recent SIGGRAPH Asia paper describes a closely related technique.
  • Paul Debevec, Tim Hawkins, John Monos and Mark Sagar for Light Stage and image-based character relighting.  The work done by Paul Debevec and his team at USC’s Institute for Creative Technologies on image-based capture and lighting has been hugely influential, resulting in widespread adoption of light probes, multi-exposure HDR image capture, and many other techniques commonly used in games as well as film.

One of the Technical Achievement Awards (the third level, which comes with a certificate) is also of interest to readers of this blog:

  • Hayden Landis, Ken McGaugh and Hilmar Koch for ambient occlusion.  The pioneering work on ambient occlusion for film production was done by these guys at ILM; first publication was at the Renderman in Production course at SIGGRAPH 2002 (the relevant chapter of the course notes can be found here).  Of course, ambient occlusion is heavily used in real-time applications as well.

In an interesting related development, eight separate Scientific and Engineering Awards and two Technical Achievement Awards were given for achievements related to the digital intermediate process (digital scanning and processing of film data), many of them for look-up-table (LUT) based color correction (LUTs have also been used for color correction in games).  The Academy tends to batch up awards in this way for technologies whose “time has come” (two years ago there were a lot of fluid simulation awards).  Given that another of the Technical Achievement Awards was for a motion capture system, we can see how quickly digital technology has come to dominate the film industry.  As recently as 2005, most of the awards were for things like camera systems; this year only one of the awards (for a lens motor system) was for non-digital technology.

Congratulations to all the winners!

Sony Pictures Imageworks open source projects

In my HPG 2009 report, I mentioned that Sony Pictures Imageworks was releasing several of their projects as open source, most notably a shading language, OSL, tailored to ray-tracing. For a long time, there was no actual information available on OSL, but now (tipped off by a recent ompf post) I see that some has appeared.

OSL is hosted on Google Code, the main page is here, and an introductory document can be found here. The language has several features that seem well-designed for ray-tracing; someone with more knowledge in this area will have to weigh in on its usefulness.

Some Actual Larrabee Information

Tom Forsyth, one of the many programmers and engineers on Larrabee, passed on this link to a lecture he gave at Stanford on January 6 for their weekly Computer System Colloquium class. At the beginning he gives a bit about Intel’s view of Larrabee and the effect of “cancellation”, i.e., it’s not cancelled, just the first hardware release is off. He notes that the day-to-day work of most Larrabee developers is unaffected. I appreciate him walking through the Intel position, as I haven’t been able to find any hard information (press releases, etc.) on their site. In retrospect, rumor-mill articles like this one (which we passed on earlier, lacking any sound data) appear to bear extremely little resemblance to reality.

The rest of his lecture is about Larrabee itself. Early on he talks about the new instructions in Larrabee, something like Abrash’s article but more entertaining. Around minute 37 he gets more into graphics rendering per se. I’ve been listening to it in bits, in the background.

I3D 2010 Registration Open

I3D 2010 is located just north of Washington, DC this year, during the weekend of February 19-21 (Friday through Sunday). It will be at the Bethesda Hyatt Regency, which is conveniently located right on the Metro Red Line.

The early registration deadline for I3D itself is January 20th; hotel registration at the conference discount rate of $115 is available until January 19th.

Ke-Sen Huang has added a few paper links since we last mentioned his page, though the majority are still not available from authors’ pages. Somewhat surprising, given that December 14 was the camera-ready deadline, but perhaps some people are still returning to their universities & colleges and haven’t gotten around to putting theirs up. That said, conferences like I3D are only partly about the papers and posters themselves. They also offer a unique and wonderful opportunity to meet and talk with leading and up-and-coming researchers and practitioners. It’s a fantastic feeling to be in an area for a few days where just about everyone there is working on ideas that are of interest to you. Anyone you meet knows something you don’t, and vice versa, and most people talk freely about what works and what doesn’t. Energizing and useful. Plus, they’re just fun people to be around, at least for this nerd.

7 things for January 4th

First day of work, so here are a few from coworkers and others:

  • Naty passed on this blog post about RGBD, a compact way of storing HDR environment map colors (a small encode/decode sketch appears at the end of this post).
  • Gamasutra has an excerpt from Game Engine Architecture, a book we’ve mentioned before. Added bonus info on the author, Jason Gregory: he was a lead programmer on Uncharted 2 (which my older son loves, as do many others).
  • Manny Ko mentioned the free program Mendeley, which he swears by for organizing his PDF collection of graphics papers. I’ll look into it once I’ve reloaded everything after my Windows 7 upgrade.
  • Physics in graphics? Here’s one person’s extensive collection of abstracts through 2005.
  • From Nicholas Wilt, interesting to hear how one brokerage firm is now using GPUs to run complex simulations for bond prices. That GPU Gems chapter on options pricing was prescient.
  • Speaking of brokers and lots of GPUs, there’s this article. I’m a little skeptical of a GPU cloud for graphics (vs. running OpenCL), since graphics cards are not quite interchangeable parts at this point. Also, CPUs don’t normally need driver updates, GPUs do. OTOY I’m super-skeptical about, I have to admit, though I’d love to see them pull it off. Anyway, fun to think about situations where network bandwidth > graphics compute power and cloud cost < local cost.
  • One more from the demoscene, Farbrausch’s The Cube – interesting effects, what looks like procedural clips and procedural surfaces using interior mapping. At least, that’s my guess. I wish they would spend a little time explaining what they did, though maybe that would ruin the magic.
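
As promised in the first item, here is the flavor of an RGBD-style encoding: a per-pixel scale factor stored in alpha lets HDR values fit into four [0,1] channels. This is one common variant, written from memory; the linked post’s exact formulation (and its maximum-range constant) may differ:

```python
import numpy as np

MAX_RANGE = 8.0   # assumed brightest representable value, in multiples of white

def rgbd_encode(rgb):
    """Pack an HDR rgb triple (floats, possibly > 1) into four [0,1] channels."""
    max_c = max(rgb.max(), 1e-6)
    a = np.clip(1.0 / max_c, 1.0 / MAX_RANGE, 1.0)   # scale so rgb * a fits in [0,1]
    return np.append(np.clip(rgb * a, 0.0, 1.0), a)

def rgbd_decode(rgbd):
    return rgbd[:3] / rgbd[3]   # divide the scale back out

print(rgbd_decode(rgbd_encode(np.array([4.0, 1.5, 0.25]))))  # [4.   1.5  0.25]
```

Values brighter than MAX_RANGE clip, which is the usual tradeoff of these fixed-range HDR encodings.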