
So what am I missing?

My schedule for SIGGRAPH so far (sans social gatherings), using this technology where you can put everything on an incredibly lightweight portable screen with extremely long battery life (though the erase feature sucks if you use the high-contrast “ink” display mode):

[Image: my SIGGRAPH 2012 schedule, on paper]

I’ve tried various apps over the years and this is what works for me. The back has plenty of room for quick notes on things to follow up on after SIGGRAPH, if I write small enough.

Oh, and yes, Emil Persson’s talk is going to happen twice (not his fault, and I consider this A Very Good Thing), as, apparently, is the Processing 2.0 talk. Ah, wait, I just heard back from Andres, and the second Processing talk (on Tuesday) is cancelled.

Edits: added Fast Forward (thanks, Hanspeter). Also, I entirely forgot to look at the Exhibitor Talks, which have a few things of interest.

Oh, and here’s a neat Google Calendar thingy for SIGGRAPH 2012 that Dan Wexler pointed me at: http://skitten.org/2012/07/siggraph-2012-google-calendars/

Seven Things for June 22nd

Here goes:

  • The Journal of Computer Graphics Techniques has published its first accepted article: “Importance Sampling of Reflection from Hair Fibers”. Free for download, of course.
  • Mauricio Vives pointed out that the thirteenth article in a series going back to 2010 is now up: Fluid Simulation for Video Games. Not my particular interest, but my gosh, this collection is impressive – it’s practically book-length at this point, and includes code snippets and a demo with source (you’ll need to install TBB to compile).
  • A new book has been announced, coming out in October: The CUDA Handbook. The author, Nicholas Wilt, was a software architect at NVIDIA who worked on CUDA from its inception.
  • John Owens pointed out an interesting way to search Google Scholar, by publications with the word “graphics”. This gives an interesting weighting of influence – no real surprises, though note that some conferences are missing (I highly doubt that EGSR, I3D, and HPG truly fail to make the cut).
  • Sebastien Lagarde gives an in-depth analysis showing how the Phong and Blinn specular highlight models are related by a factor of 4 in the power (a back-of-the-envelope sketch of why follows this list). This is an old result from Fisher & Woo in Graphics Gems IV, but it’s nice to see an independent verification and analysis.
  • Thinking of Graphics Gems, I wanted to mention this old piece of news now: Jim Arvo (editor of Graphics Gems II) passed away back on October 19th of last year. He did seminal work in ray tracing, Monte Carlo sampling, light transport, and many other areas, and was also just a great guy. See his homepage while it’s still there.
  • There are mutant women living amongst us with a fourth type of cone cell – tetrachromats. There’s more information on Wikipedia.
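
For the curious, here’s a back-of-the-envelope sketch of where that factor of 4 comes from. This is my own simplified version (assuming the light, normal, and view vectors are coplanar, and a small highlight angle), not necessarily how Lagarde or Fisher & Woo derive it:

% Phong peaks on the angle alpha between the reflection vector R and the view
% vector V; Blinn-Phong peaks on the angle beta between the normal N and the
% half vector H. In the coplanar case, beta = alpha/2. Using the small-angle
% approximation cos(t) ~= exp(-t^2/2):
\[
(\mathbf{R}\cdot\mathbf{V})^n = \cos^n\alpha \approx e^{-n\alpha^2/2},
\qquad
(\mathbf{N}\cdot\mathbf{H})^m = \cos^m(\alpha/2) \approx e^{-m\alpha^2/8}.
\]
% Matching the widths of the two lobes gives m/8 = n/2, i.e. m = 4n: the
% Blinn-Phong exponent must be about four times the Phong exponent to give a
% highlight of roughly the same size.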

SIGGRAPH 2012 early registration deadline is today

If you’re reading this after June 18th, oh well…

Registration page is here.

Me, I wouldn’t rate SIGGRAPH as the premier interactive rendering research conference any more: I3D and HPG publish far more relevant results overall. SIGGRAPH still has a lot of other great stuff going on, and there are enough things of interest to me this year that I’m happy to be attending:

  • I guess the courses are the main draw for me right now, and some of these have become informal venues for interactive rendering R&D presentations (e.g. the Advances course).
  • SIGGRAPH Mobile could be interesting. Given the huge profit margins of GPUs for mobile vs. PCs, it’s where the market has moved. It feels a little “back to the future”, with mobile GPU speeds reset back about a decade compared to PC performance, but there’s some interesting research being done, e.g. this paper (not at SIGGRAPH but at HPG; I noticed it today on Morgan McGuire’s Twitter feed and thought it was fascinating).
  • I was thinking of arriving Sunday afternoon, but then noticed some interesting talks in the Game Worlds session on Sunday, 2–3:30 pm.
  • Other talks will be of interest; I’ll need to wade through the list.
  • Emerging Technologies and the Exhibition Floor usually have something that grabs my attention (if nothing else, I can browse through new books), and maybe I should give Real-Time Live! a visit.
  • And, meeting people, of course – it’s inspiring and fun to hear what others are up to. Sometimes a little chance conversation will later have great value.

Why submit when you can blog?

I was cleaning up the RTR portal page today. Of all the links on this page, the ones I use most often are in the first three items. I used to have about 30 blogs listed. Trying them all today, 5 have disappeared forever (replaced by junk like this and this), and 10 more are essentially dead (no postings in more than a year). Understandable: blogs usually don’t live that long. One survey gives an average lifespan of 126 days for a typical blog. Another notes that even the top 100 blogs last an average of less than three years.

Seeing good blogs disappear forever is sad for me. If I’m desperate, I can try finding them using the Wayback Machine, but sometimes I will find only bits and pieces, if that. This goes for websites, too. If I see some article I like, I try to save a copy locally. Even then, such pages are hard to find later – I’m not that organized. Other people are entirely out of luck, of course.

My takeaway: feel free to start a blog, sure. But if you have some useful programming technique you’ve explained, and you want people to know about it for some time to come, then also submit it to a journal. One blog I mentioned last post, Morten Mikkelsen’s, shows one way to mix the two: he shows new results and experiments on his blog, and submits solid ideas to a journal. I of course strongly suggest the (new, yet old) Journal of Computer Graphics Techniques (JCGT), the spiritual successor to the journal of graphics tools (as noted earlier, all the editors have left the old journal). Papers on concise, practical techniques and ideas are what it’s for, just the sorts of things I see on many graphics blogs. Now that this journal is able to publish ideas quickly, I dearly want to see more short, Graphics Gems-like papers. If and when you decide to quit blogging/get hit by an asteroid/have a kid, and you’ve also submitted your work to a journal and had it accepted, you’ll have something permanent to show for it all, something that others can benefit from years later. It’s not that hard, honestly – just do it. JCGT prides itself on working with authors to help polish their work and bring out the best, but there are plenty of other venues, ranging from SIGGRAPH talks, Gamasutra articles, and GPU Pro submissions to full-blown ACM TOG papers.

Oh, I should also note that JCGT is fine with work that is not necessarily new, but fills a gap in the literature, explains an improved way of performing an algorithm, gives implementation advice, etc. Citing sources is important – don’t claim work that isn’t your own – but otherwise the goal is simple: present techniques useful for computer graphics programmers.

By the way, if you do run a website of any sort, here are my top three pet peeves, so please don’t do them:

  • Moving a page and leaving no forwarding page at its old location (I’m looking at you, NVIDIA and AMD) – you don’t care if someone directs traffic to your site?
  • Giving no contact email address or other feedback mechanism on your web pages – you don’t want to know when something’s broken?
  • Giving no “last updated on” date on your web pages – you don’t want people to know how fresh the info is?

Seven Things for June 7th

I’ll be gone this weekend, so my dream of catching up on resources by posting every day is slowed a bit. Here’s today’s seven:

  • The free Process Explorer has a lot more functionality than its name implies. One very cool feature is that it actually shows GPU usage. Run it, right-click a process that’s running and select Properties, then go to the GPU Graph tab to watch memory use and GPU load.
  • If you are seriously involved in implementing bump maps, parallax occlusion maps, etc., Morten Mikkelsen’s blog has a lot of chewy information, along with demos and source. He’s doing a lot of interesting work on autogenerating and blending mappings.
  • The game itself is no great shakes, but Google’s Cube has some lovely 3D rendering going on via JavaScript.
  • Another “3D in the browser” experiment (with WebGL) is sketchPatch. It’s not as simple as advertised, but I like the idea of an interpreted language you just type and see in the same window.
  • There are lots of reasons Unreal Engine 3 is the most popular commercial 3D engine for games. Here’s some nice eye candy from their tutorial on image-based reflections, which is also just plain educational.
  • Some cool results here using cone tracing for global illumination effects. Seeing these effects for dynamic objects at interactive rates is great stuff, especially since they’re having to update octrees on the fly.
  • I love the colored Japanese woodcuts of classic videogames that Jed Henry has been making:

Seven Things for June 6th

It’s D-Day and it’s been a while, so let’s get going. This is a LIFO of the 486 backlogged links I’ve collected for this blog:

  • GPUView looks like an interesting profiling tool from some students at Stanford (done as interns at Microsoft, which has a more official page), though I’ve heard it’s a bit of work to set up. If you’ve used it, how did you find it?
  • Open source code for a fast and scalable GLSL GPU implementation of Perlin noise, using functions rather than textures.
  • NV Path Rendering is not what you might think: it’s about rendering text and 2D paths, with quite a bit of elaboration available (think SVG or other 2D vector descriptions). GTC presentation here.
  • The book “Physically Based Rendering” is now in eBook form, including PDF (so I assume no DRM?). Annoyingly, it costs considerably more than the physical book on Amazon, but that’s the publisher’s doing.
  • Proland looked intriguing – a procedural terrain generator that creates terrain based on the view. It appears fairly elaborate, and looks like a quick way to get some plausible-looking terrain data.
  • Geekbench is a cross-platform benchmarking system; from what I’ve heard, mobile platforms set the clock back a fair number of years in terms of performance. Still, 3D is doable (it certainly was in 2002); here’s a starter list of 3D CAD apps for Android (many are on the iPad, too). I need to search out more; I’m interested in what’s out there.
  • Finally, in the category “this looks like a painting but is reality”, a photo taken in Namibia:

Author-Izer, and what do publishers provide?

There’s a new service provided by ACM’s Digital Library: Author-Izer. Short version: if you have published something with the ACM, and you have a preprint of the paper on your own or your company’s website, you can provide the ACM DL with a link to it and they’ll put that link with the article reference. This is fairly sporting of the ACM. If you’re an author, it’s worth this bit of effort to give your work wider dissemination. Linking can also provide the ACM with download statistics from your site and so give a better sense of the impact of your paper (or at least inflate your statistics compared to people not using Author-Izer).

For a reader without an ACM DL subscription, it’s still better to go to Ke-Sen Huang’s site or Google Scholar, where these external author links have been collected without each author’s effort. For example, free preprints of 95% of SIGGRAPH 2011 papers are linked on Ke-Sen’s page. In a perfect world, the ACM would simply hire Ke-Sen for a few days and have him add all his external links to authors’ sites to their database. I’d personally toss in $20 towards that effort. I suspect there are 18 reasons given why this would not be OK – “we want individual authors to control their links” (but why not give a default link if the author has not provided one?), “we’re not comfortable having a third party provide this information” (so it’s better to have no information than potentially incorrect information leading you, at worst, to a dead link?), or the catch-all “that’s not how we do things” (clearly).

As Bernie Rous discusses, there’s a tension at the ACM between researchers, who want the widest dissemination of their work, and professional staff, who are concerned about the financial health of the organization. Author-Izer helps researchers, but there’s little direct benefit to the ACM’s bottom line. Unfortunately, the Author-Izer service currently seems to be virtually unknown. For example, the SIGGRAPH 2011 table of contents appears to have no Author-Izer links, though perhaps I’m missing them. I hope this post will help publicize this service a bit.

It’s nice that the ACM allows authors to self-archive, where they can provide preprints of their own work on their website or their institution’s. Most scholarly journals allow this archiving of preprints – more than 90%, according to one writer (and more than 60% allow self-archiving of the refereed final draft, which the ACM does not allow). For authors at academic institutions with such archives, great, easily done; for authors at games companies, film companies, the self-employed, etc., it’s catch-as-catch-can. If the author hosts his own work and is hit by a meteor, or just loses interest, his website eventually fades away and the article is then available only behind a paywall. One understandable reason for ACM’s “must be hosted by the author or his institution” clause is that it disallows lower-cost paywalls from competing. But why not just specify that? I can see a non-compete clause like “the author will not host his preprint behind a paywall” (a restriction the ACM doesn’t currently have), but otherwise who cares where the article is hosted, as long as it’s free to download?

This restriction feels like a business model founded on being a PITA: instead of uploading his article to some central free access site and never thinking about it again, each author needs to keep track of access and deal with job changes, server reorganizations and redirects, company bankruptcy or purchase, Author-Izer updates, and anything else that can make his website go off the radar. Pose this problem to a thousand authors and the free system will be inherently weak and ineffectual, making the pay version more desirable.

I believe that many people in the ACM have their hearts in the right place; there’s no conspiracy here. However, the tension of running a paywall service like the Digital Library gives a “one hand tied behind my back” feel to efforts at more open access. If there were no economic constraints, clearly the ACM DL would be free and there would be no real point to Author-Izer. Right now these financial concerns still exist, and they are very real.

A journal publisher used to offer:

  • Physical journal printing and binding
  • Copy editing, illustrations, and layout
  • Peer review and professional editors
  • Archiving
  • Distribution to subscribers and institutions
  • Reputation

The physical artifact of the journal itself is becoming rarer, and authors now do the copy editing, illustrations, and most, if not all, of the layout. The technical editors and reviewers are all unpaid, so their contributions are separate from the publisher itself – entire editorial boards have abandoned their publishers, as the recent Elsevier boycott has highlighted. So what is left that publishers provide?

Another way to look at it: what if publishers suddenly disappeared? Different systems would supplant their services, for good or ill. Google, for instance, might provide archiving for free (they already do this for magazines like Popular Science). Distribution is as simple as “get on the mailing list.” Reputation is probably the service with the most long-term value. I don’t think I’d want a reddit-style up/down vote system instead, given the various problems such systems have. CiteSeer and Google Scholar are pretty good at determining reputation by citation count, and you can even check your own citation count for free. There are also ways of determining a paper’s impact beyond simple citation counts; lots of people think about this problem.

I can imagine a few answers for why publishers matter – the disconnected solutions I just mentioned are not necessarily the best ones. However, the burden of proof is on the publisher, both commercial and non-profit, to justify its continued existence. It will be interesting to see how the various open access initiatives play out and how they affect publishers.

SIGGRAPH 2012 Hotel Registration Open

Go get your hotel reservation for SIGGRAPH 2012. Reservations can be cancelled at no cost up to July 19th, so if you think there’s the slightest chance you’ll go, grab a room now.

The HQ hotel and the Figueroa are already gone. The Ritz Milner is certainly cheap and pretty close, but some TripAdvisor reviews eventually scared me off. I then switched to the Luxe City Center (it used to be a Holiday Inn), as I prefer being close to the convention center. But I recall complaints about things like the Wi-Fi stinking at the old Holiday Inn, and the place is even pricier now. So I rebooked yet again, into the Sheraton, as a pretty good compromise among the factors of cost, quality, and location – being more downtown can be good, as you’re closer to restaurants and other nighttime activities.