
I mentioned in a post last week that I expected interest in ray-tracing to increase. Looking at Google Trends, there does indeed appear to have been an uptick in Google searches on the term “ray-tracing.” The last time there was this much interest was March 2010 (though other months in between have come close).

It’s a funny area to explore: South Korea seems the most interested, by far. Under “Related topics” is “NVIDIA – Company,” which is not surprising. What’s funny is that if you click that topic, you find that NVIDIA is of strongest interest in Romania, followed by Czechia, Estonia, Hungary, then Russia. I assumed the explanation was “Bitcoin,” but that’s not quite right. According to NVIDIA’s CEO, it’s actually Ethereum mining, as Bitcoins are most profitably mined by custom ASICs at this point. Such a world.

Given the recent DXR announcements, Tomas Akenine-Möller and I are coediting a book called Ray Tracing Gems, to come out at GDC 2019. See the Call for Participation, which pretty much says it all. The book is in the spirit of the Graphics Gems series and journals such as JCGT. Articles certainly do not have to be about DXR itself, as the focus is techniques that can be applied to interactive ray tracing. The key date is October 15th, 2018, when submissions are due.

To self-criticize a tiny bit, the first sentence of the CFP:

Real-time ray tracing – the holy grail of graphics, considered unattainable for decades – is now possible for video games.

would probably be more factual as “Real-time ray tracing for video games – … – is now possible.” But the book is not meant to focus only on video game techniques (though video games are certainly likely to be the major user). I can see ray tracing becoming more of a standard part of all sorts of graphics programs, e.g., much faster previewing for Blender, Maya, and the rest.

As far as “considered unattainable for decades” goes, interactive ray tracing was attained long ago, just not for (non-trivial) video games or other interactive applications. My first encounter with an interactive ray tracer was AT&T’s Pixel Machine back in 1987. I had put out the Standard Procedural Databases on Usenet the week before SIGGRAPH, and was amazed to see that they had grabbed them and were rendering some in just a few seconds. But the real excitement was a little postage-stamp-sized (well, maybe six stamps) rendering, where you could interactively use a mouse to control a shiny sphere’s position atop a Mandrill plane texture.

The demoscene has had real-time ray tracers since 1995, including my favorite, a 252-byte program (well, 256, but the last four bytes are a signature, “BA2E”) from 2001 called Tube by 3SC/Baze. Enemy Territory: Quake Wars was rendered using ray tracing on a 20-machine system by Daniel Pohl at Intel a decade ago. OptiX for NVIDIA GPUs has been around a long time. Shadertoy programs usually perform ray marching. Imagination Technologies developed ray tracing support for mobile some years back. There are tons more examples, but this time it feels different – DXR looks here to stay, with lots of momentum.
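For readers who haven’t run into it, the ray marching that Shadertoy programs typically perform is “sphere tracing” against a signed-distance field. Here is a toy sketch in Python (the scene, function names, and parameters are all made up for this illustration; real Shadertoy versions live in GLSL fragment shaders):

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Signed distance from point p to a sphere: negative inside, zero on the surface."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def ray_march(origin, direction, max_steps=64, eps=1e-4, max_dist=100.0):
    """Sphere tracing: the SDF value is a safe step size, so advance the ray
    by that amount each iteration until we are within eps of a surface (hit)
    or have marched past max_dist (miss)."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sphere_sdf(p)
        if d < eps:
            return t  # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None  # miss

# A ray down +z from the origin hits the unit sphere centered at z=3 at t=2.
print(ray_march((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```

The appeal for tiny demos is that the whole “scene” is just one distance function, and interesting shapes come from combining such functions with min, max, and domain repetition.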

Ray tracing is, in my opinion, more easily adopted by computer-aided design and modeling programs, as users are willing to put up with slower frame rates and able to wait a few seconds every now and then for a better result. Systems such as KeyShot have for some years used only ray tracing, performing progressive rendering to update the screen on mouse up. Modelers such as Fusion 360 allow easy switching to progressive ray tracing locally, or for finished results can render at higher speeds on the cloud. I think DXR will make these few seconds into a handful of milliseconds, and near-interactive into real-time.
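Progressive rendering of the sort these systems do boils down to averaging successive noisy sample frames, so the image refines on screen over time. A minimal sketch of the running average for a single pixel value (hypothetical function name, no relation to KeyShot’s or Fusion 360’s actual implementations):

```python
def progressive_average(samples):
    """Running mean of per-frame sample values for one pixel: after n frames
    the displayed value is the average of the first n samples, so noise
    falls off as more frames accumulate."""
    total = 0.0
    averages = []
    for n, s in enumerate(samples, start=1):
        total += s
        averages.append(total / n)  # what would be shown after frame n
    return averages

# Three noisy samples of a pixel whose true value is 3.0:
print(progressive_average([1.0, 3.0, 5.0]))  # [1.0, 2.0, 3.0]
```

In practice renderers keep the accumulation buffer on the GPU and reset it whenever the camera or scene changes, which is why these systems restart refinement on mouse up.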

In a sense, this history misses the point: for interactive rendering we use whatever gives us the best quality in an allotted amount of time. We usually don’t, and probably shouldn’t, trace rays everywhere, just for the purity of it. Rasterization works rapidly because of coherence exploited by the GPU. Ray tracing via DXR is a new piece of functionality, one that looks general enough and with support enough that it has the potential to improve quality, simplify engine design, and reduce the time spent by artists in creating and revising content (often the largest expense in a video game).

Long and short, DXR is the start of an exciting new chapter in interactive rendering, and we look forward to your submissions!


The article collection GPU Zen was a ridiculously good deal at $10 for the electronic version of the book. A call for participation for GPU Zen 2 is now out. First important date: March 30th for submitting proposals (i.e., not the first draft, which is due August 3rd).

Just because I wanted a title with a series of three-letter bits, I wrote out the “Two.” I recently read a tidbit about an old book passage with the longest-known (at least, to the author of the piece) string of three-letter words in a row, which someone found by analyzing a huge pile of Project Gutenberg texts or similar. I can’t find the article now; I thought it was at the Futility Closet site, but maybe not. Which is my roundabout way of saying that site is sometimes entertaining, with a bent toward historical oddities and mathematical recreations.

To continue to ramble, in memory of the first anniversary of his death (and LAA), I’ll end with this quote from the wonderful Raymond Smullyan: “I understand that a computer has been invented that is so remarkably intelligent that if you put it into communication with either a computer or a human, it can’t tell the difference!”

I mostly wanted to pass on the word that High-Performance Graphics 2018 has their call for participation up. Due date for papers is April 12th. HPG 2018 is co-located with SIGGRAPH 2018 in Vancouver in August.

Also, let’s talk about hyphens. See Rule 1: Generally, hyphenate two or more words when they come before a noun they modify and act as a single idea. This is called a compound adjective.

Update: John Owens wrote and said “Go read Rule 3,” which is: “An often overlooked rule for hyphens: The adverb very and adverbs ending in -ly are not hyphenated.”

So, he’s right! The hyphen is indeed NOT needed; my mistake. I hadn’t done all the work of reading through all eleven rules and noting that “physically” is indeed an adverb.

Here’s the rest of my incorrect post, for the record. I guess I’m in good company – about a quarter of authors get this wrong, judging from the list of publications below.

The phrase “High-Performance Graphics” is good to go; “Real-Time Rendering” is also fine. Writing “Physically Based Rendering,” as seen on Wikipedia and elsewhere, is not quite right [I’m wrong]. The world doesn’t end if the hyphen’s not there, especially in a title consisting of just the phrase itself. Adding the hyphen just helps the reader know what to expect: is the word “based” going to be a noun or part of a compound adjective? If you read the rest of Rule 1, note that you don’t normally add the hyphen if the adjective comes after the noun. So:

“Physically-based [that’s wrong] rendering is better than rendering that is spiritually based.”

is correct; “spiritually based” should not be hyphenated. Google came up with no direct hits for “spiritually-based rendering” that I could find – it’s an untapped field.

Not a big deal by any stretch, but we definitely noticed that “no hyphen” was the norm for a lot of authors for this particular phrase [and rightfully so], to the point that when the hyphen actually is used, as in a presentation by Burley, the course description leaves it out.

In no particular scientific sample, here are some titles found without the hyphen:

  • SIGGRAPH Physically Based Shading in Theory and Practice course
  • Graceful Degradation of Collision Handling in Physically Based Animation
  • Physically Based Area Lights
  • Antialiasing Physically Based Shading with LEADR Mapping
  • Distance Fields for Rapid Collision Detection in Physically Based Modeling
  • Beyond a Simple Physically Based Blinn-Phong Model in Real-Time
  • SIGGRAPH Real-time Rendering of Physically Based Optical Effect in Theory and Practice course
  • Physically Based Lens Flare
  • Implementation Notes: Physically Based Lens Flares
  • Physically Based Sky, Atmosphere and Cloud Rendering in Frostbite
  • Approximate Models for Physically Based Rendering
  • Physically Based Hair Shading in Unreal
  • Revisiting Physically Based Shading at Imageworks
  • Moving Frostbite to Physically Based Rendering
  • An Inexpensive BRDF Model for Physically based Rendering
  • Physically Based Lighting Calculations for Computer Graphics
  • Physically Based Deferred Shading on Mobile
  • SIGGRAPH Practical Physically Based Shading in Film and Game Production course
  • SIGGRAPH Physically Based Modeling course
  • Physically Based Shading at DreamWorks Animation

Titles found with:

  • Physically-Based Shading at Disney
  • Physically-based and Unified Volumetric Rendering in Frostbite
  • Fast, Flexible, Physically-Based Volumetric Light Scattering
  • Physically-Based Real-Time Lens Flare Rendering
  • Physically-based lighting in Call of Duty: Black Ops
  • Theory and Algorithms for Efficient Physically-Based Illumination
  • Faster Photorealism in Wonderland: Physically-Based Shading and Lighting at Sony Pictures Imageworks
  • Physically-Based Glare Effects for Digital Images

I suspect some authors just picked what earlier authors did. The hyphen’s better, go with it [no, don’t].

Now, don’t get me started on capitalization… Well, it’s easy: the word after the hyphen should be capitalized. There’s an online tool for testing titles, in fact, if you have any doubts – I use Chicago style.
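To illustrate just that one rule, the capitalize-after-the-hyphen fix is mechanical. A toy Python sketch (hypothetical helper, handling only this rule; it is nowhere near a full Chicago-style title-caser, which would also lowercase articles, prepositions, and so on):

```python
def capitalize_hyphenated(title):
    """Capitalize the first letter of each hyphen-separated part of every
    word in a title, e.g. 'Real-time' -> 'Real-Time'. Only this one rule;
    not a complete title-case implementation."""
    def fix_word(word):
        # Split on hyphens, uppercase each part's first letter, rejoin.
        return "-".join(part[:1].upper() + part[1:] for part in word.split("-"))
    return " ".join(fix_word(w) for w in title.split(" "))

print(capitalize_hyphenated("Real-time Rendering"))  # Real-Time Rendering
```

So “Physically-based Rendering” in a title would come out as “Physically-Based Rendering” – assuming, of course, you wanted the hyphen there in the first place, which per the update above you don’t.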

But I digress. Submit to HPG 2018.

In my self-inflicted weekly reports for Autodesk I always included a “link for the week,” some graphics-related or -unrelated tidbit I found of interest. Did you pick up on the “d” in “included”? Out of the blue I was laid off from Autodesk three weeks ago (along with ~1149 others, 13% of the workforce), and it’s fine, no worries.

But, it meant that I had collected a bunch of links I was never going to use. So, here’s the curated dump, something to click on during the holidays. Not a sterling collection of the best of the internet, just things that caught my eye. Enjoy! Or not!

Seven links for today:

  • Prof. Min Chen has assembled a page of all the STAR (State of the Art), review, and survey papers in Computer Graphics Forum. Such articles are great for getting up to speed on a topic.
  • Jendrik Illner has been writing a weekly roundup of recent blog posts and other online resources for computer graphics. Some good stuff in there, articles I missed, and I’m happy to see someone filtering through and summing up what’s out there. I hope he doesn’t burn out anytime soon.
  • ACM TOG is now encouraging submitting code with articles, so as to be able to reproduce results and build off previous work. I’m happy to see it.
  • There is now a Monument to an Anonymous Peer Reviewer at Moscow’s Higher School of Economics (more pics here, and Kickstarter page). I liked, “Researchers from across the world will visit to touch the “Accept” side in the hope that the gods of peer review will smile down upon them.”
  • Some ARKit apps in development look like wonderful magic. Which is often how demos look, vs. reality, but let me have my dreams for now.
  • One more AI post: the jobs of people who name colors are not yet at risk. Though I do like the computer’s new color name “Snowbonk” and some of the others. Certainly “Stanky Bean” is descriptive, no worse than puce.
  • I should have reposted months ago, but many others already have. Just in case you missed it, Stephen Hill’s SIGGRAPH 2017 link collection is wonderfully useful, as usual.

Machine learning, and especially deep learning, is all the rage, so here are some (vaguely) graphics-related tie-ins:


I ran across this article from 2014, which is a worthwhile read about IKEA’s transition from real-world photography to virtual. It had an interesting quote:

…the real turning point for us was when, in 2009, they called us and said, “You have to stop using CG. I’ve got 200 product images and they’re just terrible. You guys need to practise more.” So we looked at all the images they said weren’t good enough and the two or three they said were great, and the ones they didn’t like were photography and the good ones were all CG! Now, we only talk about a good or a bad image – not what technique created it.


The book GPU Zen is out. Go get it. This is a book edited by Wolfgang Engel and is essentially the successor to the GPU Pro series. GitHub code is here.

(Update: there’s a call for participation for GPU Zen 2.)

Full disclosure: I edited one small section, on VR. I forget if I am getting paid in some way. I probably get a free copy if I ask nicely. But, the e-book version is $9.99! So I simply bought one. Not $89.99, as books of this type usually are (even for the electronic edition), but rather the price of two coffees.

It’s a Kindle book. Unlike the GPU Pro and ShaderX books, it’s self-published. It’s mostly meant as an e-book, though you can buy a paperback edition if you insist.

So what’s in the book? Seventeen articles on interactive rendering techniques, in good part by practitioners, and nine have associated code. The book’s 360 pages. As you probably know from similar books, for any given person, most of the articles will be of mild interest at best. There will be a few that are worth knowing about. Then there will be one or two that are gold, that help with something you didn’t know about, or didn’t know how to do, or heard of but didn’t want to reinvent.

For example, Graham Wihlidal’s article “Optimizing the Graphics Pipeline with Compute” is a much-expanded and in-depth explanation of work that was presented at GDC 2016. Trying to figure out his work from the slideset is, I guess, theoretically possible. Now you don’t have to, as Graham lays it all out, along with other insights since his presentation, in a 44 page article. At $89.99, I might want to read it but would think twice before getting it (and I have done so in the past – some books of this type I still haven’t bought, if only one article is of interest).

The detailed explanation of XPerf, ETW, and GPUView in the article by James Hughes et al. on VR performance tuning might instead be the one you find invaluable. Or the two articles on SSAO, or the one on bokeh depth-of-field, or – well, you get the idea. Or possibly none of them, in which case you’re out a small bit of cash.

For the whole table of contents, just go to the book’s listing and click on the cover to “Look Inside.”

Me, I usually prefer books made of atoms. But for technical works, if I have to choose, I’m happier overall with electronic versions. For these “collections of articles” books in particular, the big advantage of the e-book is searchability. No more “I vaguely recall it’s in this book somewhere, or maybe one of these three books…” and spending a half-hour flipping through them all. Just search for a term or some particular word and go. Oh, one other cute thing: you can loan the e-book to someone else for 14 days (just once, total, I think).

At $9.99, it’s a minimal-brainer. Order by midnight tomorrow and you’ll get the Ginsu knife set with it. I’ll try to avoid being too much of a huckster here, but honestly, so cheap – you’d have money left for Pete Shirley’s ray tracing e-books, along with Morgan McGuire’s Graphics Codex. I like low-cost and worthwhile. Addendum: if you do buy the paperback, the Kindle “Matchbook” price for the e-book is $2.99. Which is how I think reality should be: buy the expensive atoms one, get the e-book version for a little more, vs. paying full price for each.

Haven’t done any “seven things” for a year, so it’s time. This one will just be stuff from the wonderful site This Is Colossal, dedicated to odd ways of making art.

No deep “man’s inhumanity to man” art-with-a-capital-A here, but rather some lovely samples from this wonderful site.


