
HPG 2018; oh, and a hyphen for “Physically-Based” (don’t!)

I mostly wanted to pass on the word that High-Performance Graphics 2018 has their call for participation up. Due date for papers is April 12th. HPG 2018 is co-located with SIGGRAPH 2018 in Vancouver in August.

Also, let’s talk about hyphens. See Rule 1: Generally, hyphenate two or more words when they come before a noun they modify and act as a single idea. This is called a compound adjective.

Update: John Owens wrote and said “Go read Rule 3,” which is: An often overlooked rule for hyphens: The adverb very and adverbs ending in ly are not hyphenated.

So, he’s right! The hyphen is indeed NOT needed, my mistake! I didn’t do all the work, reading through all eleven rules and noting that “physically” is indeed an adverb.

Here’s the rest of my incorrect post, for the record. I guess I’m in good company – about a quarter of authors get this wrong, judging from the list of publications below.

The phrase “High-Performance Graphics” is good to go; “Real-Time Rendering” is also fine. Writing “Physically Based Rendering,” as seen on Wikipedia and elsewhere, is not quite right [I’m wrong]. The world doesn’t end if the hyphen’s not there, especially in a title consisting of just the phrase itself. Adding the hyphen just helps the reader know what to expect: is the word “based” going to be a noun or part of a compound adjective? If you read the rest of Rule 1, note you don’t normally add the hyphen if the adjective comes after the noun. So:

“Physically-based [that’s wrong] rendering is better than rendering that is spiritually based.”

is correct, “spiritually based” should not be hyphenated. Google came up with no direct hits for “spiritually-based rendering” that I could find – it’s an untapped field.

Not a big deal by any stretch, but we definitely noticed that “no hyphen” was the norm for a lot of authors for this particular phrase [and rightfully so], to the point where when the hyphen actually exists, as in a presentation by Burley, the course description leaves it out.

In no particular scientific sample, here are some titles found without the hyphen:

  • SIGGRAPH Physically Based Shading in Theory and Practice course
  • Graceful Degradation of Collision Handling in Physically Based Animation
  • Physically Based Area Lights
  • Antialiasing Physically Based Shading with LEADR Mapping
  • Distance Fields for Rapid Collision Detection in Physically Based Modeling
  • Beyond a Simple Physically Based Blinn-Phong Model in Real-Time
  • SIGGRAPH Real-time Rendering of Physically Based Optical Effect in Theory and Practice course
  • Physically Based Lens Flare
  • Implementation Notes: Physically Based Lens Flares
  • Physically Based Sky, Atmosphere and Cloud Rendering in Frostbite
  • Approximate Models for Physically Based Rendering
  • Physically Based Hair Shading in Unreal
  • Revisiting Physically Based Shading at Imageworks
  • Moving Frostbite to Physically Based Rendering
  • An Inexpensive BRDF Model for Physically based Rendering
  • Physically Based Lighting Calculations for Computer Graphics
  • Physically Based Deferred Shading on Mobile
  • SIGGRAPH Practical Physically Based Shading in Film and Game Production course
  • SIGGRAPH Physically Based Modeling course
  • Physically Based Shading at DreamWorks Animation

Titles found with:

  • Physically-Based Shading at Disney
  • Physically-based and Unified Volumetric Rendering in Frostbite
  • Fast, Flexible, Physically-Based Volumetric Light Scattering
  • Physically-Based Real-Time Lens Flare Rendering
  • Physically-based lighting in Call of Duty: Black Ops
  • Theory and Algorithms for Efficient Physically-Based Illumination
  • Faster Photorealism in Wonderland: Physically-Based Shading and Lighting at Sony Pictures Imageworks
  • Physically-Based Glare Effects for Digital Images

I suspect some authors just picked what earlier authors did. The hyphen’s better, go with it [no, don’t].

Now, don’t get me started on capitalization… Well, it’s easy: the word after the hyphen should be capitalized. There’s an online tool for testing titles, in fact, if you have any doubts – I use Chicago style.
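The capitalize-after-the-hyphen rule is mechanical enough to automate. Here’s a rough sketch – the function name and the tiny small-word list are my own, and a real Chicago-style implementation has far more exceptions than this:

```python
# Sketch of title casing with the "capitalize the word after a hyphen" rule.
# The small-word list is illustrative only, not full Chicago style.
SMALL_WORDS = {"a", "an", "the", "and", "or", "for", "in", "of", "on", "to"}

def title_case(title: str) -> str:
    words = title.split()
    out = []
    for i, word in enumerate(words):
        # Capitalize each hyphen-separated part: "real-time" -> "Real-Time"
        parts = [p[:1].upper() + p[1:] for p in word.split("-")]
        cased = "-".join(parts)
        # Small words stay lowercase, except at the start or end of the title
        if 0 < i < len(words) - 1 and word.lower() in SMALL_WORDS:
            cased = word.lower()
        out.append(cased)
    return " ".join(out)

print(title_case("physically based rendering in real-time"))
# -> Physically Based Rendering in Real-Time
```

Note that whether you hyphenate at all (Rule 3’s -ly adverb exception) is a separate question from how to capitalize once the hyphen is there.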

But I digress. Submit to HPG 2018.

Links for the holidays

In my self-inflicted weekly reports for Autodesk I always included a “link for the week,” some graphics-related or -unrelated tidbit I found of interest. Did you pick up on the “d” in “included”? Out of the blue I was laid off from Autodesk three weeks ago (along with ~1149 others, 13% of the workforce), and it’s fine, no worries.

But, it meant that I had collected a bunch of links I was never going to use. So, here’s the curated dump, something to click on during the holidays. Not a sterling collection of the best of the internet, just things that caught my eye. Enjoy! Or not!

Seven Things for October 25, 2017

Seven links for today:

  • Prof. Min Chen has assembled a page of all the STAR (State of the Art), review, and survey papers in Computer Graphics Forum. Such articles are great for getting up to speed on a topic.
  • Jendrik Illner has been writing a weekly roundup of recent blog posts and other online resources for computer graphics. Some good stuff in there, articles I missed, and I’m happy to see someone filtering through and summing up what’s out there. I hope he doesn’t burn out anytime soon.
  • ACM TOG is now encouraging submitting code with articles, so as to be able to reproduce results and build off previous work. I’m happy to see it.
  • There is now a Monument to an Anonymous Peer Reviewer at Moscow’s Higher School of Economics (more pics here, and Kickstarter page). I liked, “Researchers from across the world will visit to touch the “Accept” side in the hope that the gods of peer review will smile down upon them.”
  • Some ARKit apps in development look like wonderful magic. Which is often how demos look, vs. reality, but let me have my dreams for now.
  • One more AI post: the jobs of people who name colors are not yet at risk. Though I do like the computer’s new color name “Snowbonk” and some of the others. Certainly “Stanky Bean” is descriptive, no worse than puce.
  • I should have reposted months ago, but many others already have. Just in case you missed it, Stephen Hill’s SIGGRAPH 2017 link collection is wonderfully useful, as usual.

Seven Things for October 24, 2017

Machine learning, and especially deep learning, is all the rage, so here are some (vaguely) graphics-related tie-ins:

IKEA Reality

I ran across this article from 2014, which is a worthwhile read about IKEA’s transition from real-world photography to virtual. It had an interesting quote:

…the real turning point for us was when, in 2009, they called us and said, “You have to stop using CG. I’ve got 200 product images and they’re just terrible. You guys need to practise more.” So we looked at all the images they said weren’t good enough and the two or three they said were great, and the ones they didn’t like were photography and the good ones were all CG! Now, we only talk about a good or a bad image – not what technique created it.

GPU Zen == Two Cups of Joe

The book GPU Zen is out. Go get it. This is a book edited by Wolfgang Engel and is essentially the successor to the GPU Pro series. Github code is here.

(Update: there’s a call for participation for GPU Zen 2.)

Full disclosure: I edited one small section, on VR. I forget if I am getting paid in some way. I probably get a free copy if I ask nicely. But, the e-book version is $9.99! So I simply bought one. Not $89.99, as books of this type usually are (even for the electronic edition), but rather the price of two coffees.

It’s a Kindle book. Unlike the GPU Pro and ShaderX books, it’s self-published. It’s mostly meant as an e-book, though you can buy a paperback edition if you insist.

So what’s in the book? Seventeen articles on interactive rendering techniques, in good part by practitioners, and nine have associated code. The book’s 360 pages. As you probably know from similar books, for any given person, most of the articles will be of mild interest at best. There will be a few that are worth knowing about. Then there will be one or two that are gold, that help with something you didn’t know about, or didn’t know how to do, or heard of but didn’t want to reinvent.

For example, Graham Wihlidal’s article “Optimizing the Graphics Pipeline with Compute” is a much-expanded and in-depth explanation of work that was presented at GDC 2016. Trying to figure out his work from the slide set is, I guess, theoretically possible. Now you don’t have to, as Graham lays it all out, along with other insights since his presentation, in a 44-page article. At $89.99, I might want to read it but would think twice before getting it (and I have done so in the past – some books of this type I still haven’t bought, if only one article is of interest).

The detailed explanation of XPerf, ETW, and GPUView in the article by James Hughes et alia on VR performance tuning might instead be the one you find invaluable. Or the two articles on SSAO, or one on bokeh depth-of-field, or – well, you get the idea. Or possibly none of them, in which case you’re out a small bit of cash.

For the whole table of contents, just go to the book’s listing and click on the cover to “Look Inside.”

Me, I usually prefer books made of atoms. But for technical works, if I have to choose, I’m happier overall with electronic versions. For these “collections of articles” books in particular, the big advantage of the e-book is searchability. No more “I vaguely recall it’s in this book somewhere, or maybe one of these three books…” and spending a half-hour flipping through them all. Just search for a term or some particular word and go. Oh, one other cute thing: you can loan the e-book to someone else for 14 days (just once, total, I think).

At $9.99, it’s a minimal-brainer. Order by midnight tomorrow and you’ll get the Ginsu knife set with it. I’ll try to avoid being too much of a huckster here, but honestly, so cheap – you’d have money left for Pete Shirley’s ray tracing e-books, along with Morgan McGuire’s Graphics Codex. I like low-cost and worthwhile. Addendum: if you do buy the paperback, the Kindle “Matchbook” price for the e-book is $2.99. Which is how I think reality should be: buy the expensive atoms one, get the e-book version for a little more, vs. paying full price for each.

7 Colossal Things for May 15, 2017

Haven’t done any “seven things” for a year, so it’s time. This one will just be stuff from the wonderful site This Is Colossal, dedicated to odd ways of making art.

No deep “man’s inhumanity to man” art-with-a-capital-A here, but rather some lovely samples from this wonderful site.

 

Everything is Triangles

I was entertained to see that the new NVIDIA HQ is triangle inspired. Great quote from an interesting article about new technology company offices:

 

“At this point I’m kind of over the triangle shape, because we took that theme and beat it to death,” admits John O’Brien, the company’s head of real estate, who pointedly vetoed a colleague’s recent suggestion to offer triangle-shaped water bottles in the cafeteria.

High Performance Graphics 2017 Call for Participation

The High-Performance Graphics 2017 conference call for participation is here.

Summary: deadline for papers is Friday, April 21st. The conference itself is Friday-Sunday, July 28-30, co-located with SIGGRAPH in Los Angeles.

For me, this is one of the two great conferences each year for interactive rendering related papers (SIGGRAPH’s papers selection, for whatever reasons, seems to have mostly moved on to other things).

Real-World Sampling “Artifact”

[A repost, due to WordPress weirdness – sorry about that. Note to self: don’t paste images into WordPress, always upload and insert them.]

I’m seeing this more and more in my neighborhood in the evening:

[image: a tree’s shadow on pavement, superimposed three times]

It’s the shadow of a tree on pavement, superimposed 3 times. It’s because they’ve been installing new LED streetlights with 3 bulbs.

Hard for me to photograph the light source well, but a reflection of some sort in the camera shows the three bulbs in the upper right:

[image: a reflection in the camera showing the streetlight’s three bulbs in the upper right]

It’s like the artifacts you see when anyone tries to approximate an area light with point lights. So with the advent of LEDs, I guess we won’t need light area sampling algorithms as much?
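The banding is exactly what happens when a soft-shadow estimator uses too few light samples. A minimal 2D sketch (the scene setup and names are mine, purely for illustration): estimate the fraction of a linear light that’s visible from a receiver point by sampling N points along the light, with one occluder segment in between. Three samples quantize the penumbra into hard steps, just like the streetlight’s three bulbs; many samples converge to a smooth gradient:

```python
def visible_fraction(rx: float, n_samples: int) -> float:
    """Fraction of a linear light visible from a receiver point (rx, 0).

    Toy scene: light is the segment y = 10, x in [-1, 1];
    occluder is the segment y = 5, x in [-0.5, 0.5].
    With n_samples = 3 the result is quantized to thirds (stepped
    shadows, like the 3-bulb LED streetlight); a large n_samples
    approaches the true smooth penumbra.
    """
    unblocked = 0
    for i in range(n_samples):
        # Evenly spaced sample points along the light segment
        sx = -1.0 + 2.0 * (i + 0.5) / n_samples
        # The receiver-to-sample ray crosses the occluder's plane (y = 5)
        # halfway up, at x = rx + (sx - rx) * 0.5
        hit_x = rx + (sx - rx) * 0.5
        if not (-0.5 <= hit_x <= 0.5):
            unblocked += 1
    return unblocked / n_samples

# Directly under the occluder: all three "bulbs" are blocked -> full umbra
print(visible_fraction(0.0, 3))   # -> 0.0
# In the penumbra: 3 samples give a coarse step, 1000 give the smooth answer
print(visible_fraction(0.9, 3), visible_fraction(0.9, 1000))
```

The three overlapping tree shadows are the real-world version of the second case: each bulb is one “sample” of what would ideally be one wide area light.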

Maybe one for the Real Artifacts gallery.