SIGGRAPH 2018 Stuff

First, the links you need:

  • Stephen Hill’s wonderful SIGGRAPH 2018 links page. Note the fair number of presentations recorded and quickly put up.
  • Material from the High Performance Graphics conference preceding SIGGRAPH is all online. Hats off to them for doing this.
  • Matt Pharr guest-edited a special section of the latest issue (vol. 37, no. 3, June 2018) of ACM TOG, full of papers on how production renderers work.
  • Bonus link: this GDC 2018 link collection by Krzysztof Narkowicz; also, GDC 2014 and earlier by Javier “Jare” Arevalo (thanks to Stephen Hill for the tip-off).

Beyond all the deep learning and ray tracing, which I’ve noted in other tweets and posts, the one technology on the show floor that got “you should go check it out” buzz was the Looking Glass Kickstarter, a good-looking and semi-affordable (starting in the $600 range, not thousands or higher) “holographic” display. It runs at 60 FPS in color, with 4 and 8 megapixel versions, but those pixels are divided up among the 45 views that must be generated each frame. Still, it looked lovely, and vaguely magical (and it has Sketchfab support).

I mostly went to courses and talked with people. Here are a few tidbits of general interest:

  • Shadertoy is one of my favorite websites, but it takes too long to load and looks like it’s locked up. I learned that you can avoid this problem! Sign in, go to your profile, then Config, and check “Use previews (avoid compilation times)”. The site is so much more usable now. Too bad it’s not the default; I expect that’s because they want to show you cool things without your having to click, but then the site ends up not showing anything for a while, e.g., 22 seconds for the popular page to compile on my fast workstation. Now this page shows up immediately, and I don’t mind that the animations don’t run when my mouse hovers over an image. (Thanks, David Hart!)
  • Colin Barré-Brisebois mentioned in NVIDIA’s real-time ray tracing course that Schlick’s Fresnel approximation could not be used in their work, but I didn’t quite catch why. The notes don’t mention this – I need to follow up with him… message sent. Aha, he replies, “It was because of total internal reflection. The Schlick approx doesn’t handle it.” (There’s a small sketch of the issue after this list.)
  • One of the speakers in Path Tracing in Production mentioned in passing that some film frames took 300 hours of compute each, at 1600 rays per pixel. Aha, it was for “Transformers: The Last Knight.” I recall Jim Blinn had a rule of thumb long ago about how a frame will take 20 minutes no matter how much faster computers get and how much algorithms improve efficiency. I think that amount needs updating, maybe based on cost (after all, the computers he was using for computation back then weren’t cheap).
  • The PDF notes for this course were extensive, though the course lectures were not recorded (heaven forbid anyone capture an unauthorized bunny or chimp). It’ll be interesting once consumer body cams become a thing. Anyway, the notes capture all sorts of bits of wisdom, such as ways of finding structure to help denoise images (yes, film rendering companies use denoising, too), e.g., “we used the tangent vector of the fur to provide the denoiser with ‘normals’ as it proved to have higher contrast than either the true normals of the fur, or the normals of the skin that the fur was spawned from.” (A toy illustration of this kind of feature-guided filtering also follows the list.)
  • You know quaternions. David Hart noted a different algebra, octonions – an eight-dimensional, non-associative extension of the quaternions – which I’d never heard of before. There are a bunch of videos about them on YouTube.
  • Regrets: I missed the “Future Artificial Intelligence and Deep Learning Tools for VFX” panel, and there’s no video AFAIK. I wanted to go, because after Glassner’s intro to deep learning course (which was recorded, and well-attended & well-received), Doug Roble from Digital Domain showed me a little bit of their markerless facial mocap system, which looked great. He writes, “We’ll probably have some YouTube videos… soon.”
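A quick aside on that Schlick comment: here’s a minimal sketch (my own illustration, not code from the course) of the problem. Plain Schlick, evaluated with the incident cosine, never reaches a reflectance of 1.0, but when a ray exits a denser medium past the critical angle the reflectance physically must be 1.0. The usual fix – check Snell’s law for total internal reflection, and otherwise evaluate Schlick with the transmitted cosine – is what the second function below assumes; the function names are mine.

```cpp
#include <cmath>

// Plain Schlick approximation: fine when going from a less dense to a
// denser medium (e.g., air -> glass), where TIR cannot occur.
float schlick(float cosTheta, float n1, float n2)
{
    float r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;
    float m = 1.0f - cosTheta;
    return r0 + (1.0f - r0) * m * m * m * m * m;
}

// Variant (hypothetical name) handling the dense-to-sparse case:
// if Snell's law admits no transmitted direction, reflectance is 1
// (total internal reflection); otherwise use the transmitted cosine.
float schlickWithTIR(float cosThetaI, float n1, float n2)
{
    float r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;
    float cosTheta = cosThetaI;
    if (n1 > n2)
    {
        float eta = n1 / n2;
        float sin2T = eta * eta * (1.0f - cosThetaI * cosThetaI);
        if (sin2T > 1.0f)
            return 1.0f;                     // total internal reflection
        cosTheta = std::sqrt(1.0f - sin2T);  // transmitted cosine
    }
    float m = 1.0f - cosTheta;
    return r0 + (1.0f - r0) * m * m * m * m * m;
}
```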
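And to make the fur-denoising trick a bit more concrete, here’s a toy sketch (again my own, not the production code) of the general idea: a cross-bilateral filter weight steered by an auxiliary guide buffer. Feeding the fur’s tangent vectors into that guide slot, instead of the true normals, gives the filter a higher-contrast signal to respect, so it blurs less across strand boundaries.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Toy cross-bilateral weight for one neighbor pixel: falls off with
// image-space distance and with disagreement between the unit-length
// guide vectors (true normals, or fur tangents standing in for them).
float crossBilateralWeight(float pixelDist,
                           const Vec3& guideCenter, const Vec3& guideNeighbor,
                           float sigmaSpatial, float sigmaGuide)
{
    float spatial = std::exp(-(pixelDist * pixelDist) /
                             (2.0f * sigmaSpatial * sigmaSpatial));
    float d = 1.0f - dot(guideCenter, guideNeighbor); // 0 when aligned
    float guide = std::exp(-(d * d) / (2.0f * sigmaGuide * sigmaGuide));
    return spatial * guide; // final pixel = weighted average of neighbors
}
```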

Finally, this: a clever photo booth at NVIDIA. The “glass” sphere looks rasterized, since it shows little refraction. Then again, making it solid, or even filling it with water, would have been massively heavy and unworkable. Sadly, gases don’t have much of a refractive index, though maybe that’s just as well – a chloroform-filled sphere might not be the safest thing. Anyway, it’s best as-is: people want to see their undistorted faces through the sphere. If you want realism, fix it in post.

2 thoughts on “SIGGRAPH 2018 Stuff”

  1. robpieke

    While it wasn’t recorded, I suspect most/all of the “Future of AI Tools” presenters would be happy to share slides/notes with you. Same for the “Path Tracing in Production” session if there’s anything you feel we omitted from the notes (I know I rushed my section to hit the submission deadline).

  2. Eric (Post author)

    As I say, the notes for the Path Tracing in Production course are a great resource – thanks for making them. They certainly make up for the (somewhat silly, in my opinion, but, lawyers) “no photography” restriction during the course itself. I’m more sympathetic to the panel “Future Artificial Intelligence and Deep Learning Tools for VFX” not having a video – it’s a panel, and they want to show off bits that are unannounced and unofficial, without any real press coverage. Personally, I think the various “don’t photograph this session” slides shown during the conference serve no real purpose – people may not see them or may not obey them, so why bother? – but if that’s what it takes to make some higher-up happy, so be it.
