Ray Tracing at GDC (and beyond)

One reason I love interactive graphics is that every now and then something happens in the field – programmable shaders, powerful mobile devices, DX12/Vulkan/Metal, VR, AR, and now this – that changes what’s possible and how we think about interactive rendering. New algorithms arise to exploit new and different functionality. It’s a fun world!

Microsoft added ray tracing support to its DirectX API. And this time it’s not an April Fools’ Day spoof, like a decade ago. Called DirectX Raytracing, DXR for short, it adds the ability to cast rays as shader invocations. There are already a bunch of articles and blog posts.
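
To give a feel for the programming model – and this is just a toy C++ analogy, not the actual API, which is HLSL shaders plus D3D12 host code – casting a ray works like a function call whose “return” arrives through closest-hit or miss shader callbacks:

```cpp
// Toy C++ analogy for the DXR programming model, not the real API:
// tracing a ray invokes shader-like callbacks (closest-hit or miss),
// much as DXR dispatches HLSL shader invocations per ray.
#include <cmath>
#include <functional>
#include <optional>

struct Vec3 { float x, y, z; };

struct Ray {
    Vec3  origin, direction;       // direction assumed normalized
    float tMin = 1e-4f, tMax = 1e30f;
};

struct Hit { float t; Vec3 normal; };

// Stand-in for the acceleration-structure traversal DXR does for you:
// here, just one unit sphere at the origin.
std::optional<Hit> intersectScene(const Ray& r) {
    const Vec3 o = r.origin, d = r.direction;
    float b = o.x * d.x + o.y * d.y + o.z * d.z;          // dot(o, d)
    float c = o.x * o.x + o.y * o.y + o.z * o.z - 1.0f;   // dot(o, o) - 1
    float disc = b * b - c;
    if (disc < 0.0f) return std::nullopt;
    float t = -b - std::sqrt(disc);
    if (t < r.tMin || t > r.tMax) return std::nullopt;
    Vec3 p{o.x + t * d.x, o.y + t * d.y, o.z + t * d.z};
    return Hit{t, p};  // unit sphere at origin: hit point is its own normal
}

// Rough shape of a TraceRay-style call: cast the ray, then run the
// appropriate "shader" depending on whether anything was hit.
Vec3 traceRay(const Ray& ray,
              const std::function<Vec3(const Ray&, const Hit&)>& closestHit,
              const std::function<Vec3(const Ray&)>& miss) {
    if (auto hit = intersectScene(ray)) return closestHit(ray, *hit);
    return miss(ray);
}
```

In real DXR the traversal runs against driver-built acceleration structures and the hit and miss shaders are HLSL entry points bound through shader tables, but the control flow is recognizably this shape.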

Here are the resources I’ve noticed so far (updated as I see new ones – let me know):

It will be interesting to see if there’s any spike of interest in ray tracing in Google’s analytics. While I doubt having DXR functionality will change everything – it still has to be performant compared to other specialized techniques – it’s great seeing another tool in the toolbox, especially one so general. Even if a renderer in development does no ray tracing at runtime, it will now be much easier to get a ground-truth image for comparison when testing other techniques, since shader evaluations and all the rest now fit within a ray tracing framework. Ray and path tracing, run long enough (or smart enough), converge to the correct answer; screen-space techniques don’t.
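
To make the “done long enough” point concrete, here’s a minimal C++ sketch of the accumulation loop a ground-truth render uses. The `radiance` callable is a placeholder you’d supply (one unbiased path sample per call); everything else is illustrative:

```cpp
// Minimal sketch of a ground-truth reference render: average enough
// independent Monte Carlo samples per pixel and the estimate converges
// to the correct answer.
#include <functional>
#include <random>
#include <vector>

struct Color { float r = 0, g = 0, b = 0; };

// One full stochastic path sample through pixel (x, y); any unbiased
// estimator will do - this callable is the placeholder you supply.
using PathSample = std::function<Color(int x, int y, std::mt19937& rng)>;

std::vector<Color> renderReference(int width, int height, int samplesPerPixel,
                                   const PathSample& radiance) {
    std::vector<Color> image(width * height);
    std::mt19937 rng(12345);  // fixed seed, so reference renders are reproducible
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Color sum;
            for (int s = 0; s < samplesPerPixel; ++s) {
                Color c = radiance(x, y, rng);
                sum.r += c.r; sum.g += c.g; sum.b += c.b;
            }
            float inv = 1.0f / float(samplesPerPixel);
            image[y * width + x] = {sum.r * inv, sum.g * inv, sum.b * inv};
        }
    }
    return image;
}
```

With an unbiased estimator, the per-pixel error falls off roughly as 1/sqrt(samplesPerPixel), so you can crank the sample count until the image stops changing and use that as the reference.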

Making ray and path tracing fast enough is the challenge, and denoisers and other filtering techniques (just as used today with rasterized-buffer-based algorithms) will see a lot of use in the coming months and years. I’m going to go out on a limb here, but I’m guessing GPUs will also get faster. Now if we can just get people to stop upping the resolution of screens and stop adding more content to scenes, it’ll all work out.
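
As a sketch of the kind of filtering involved – real denoisers are far more elaborate, with temporal accumulation, normal and albedo guides, or learned kernels – here’s a simple joint bilateral blur in C++ that averages noisy samples while using a depth buffer to avoid blurring across edges (all names and parameters are illustrative):

```cpp
// Joint/cross bilateral filter: the backbone idea under many denoisers.
// Neighbors are weighted by spatial distance and by how closely their
// depth matches the center pixel, which preserves geometric edges.
#include <cmath>
#include <vector>

std::vector<float> bilateralDenoise(const std::vector<float>& noisy,
                                    const std::vector<float>& depth,
                                    int width, int height,
                                    int radius = 3,
                                    float sigmaSpatial = 2.0f,
                                    float sigmaDepth = 0.1f) {
    std::vector<float> out(noisy.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float centerDepth = depth[y * width + x];
            float sum = 0.0f, weightSum = 0.0f;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
                    float dd = depth[ny * width + nx] - centerDepth;
                    // Spatial falloff times an edge-stopping depth term.
                    float w = std::exp(-(dx * dx + dy * dy) / (2 * sigmaSpatial * sigmaSpatial))
                            * std::exp(-(dd * dd) / (2 * sigmaDepth * sigmaDepth));
                    sum += w * noisy[ny * width + nx];
                    weightSum += w;
                }
            }
            out[y * width + x] = weightSum > 0 ? sum / weightSum : noisy[y * width + x];
        }
    }
    return out;
}
```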

Even within the Remedy talk, we see ray tracing blending with other techniques more appropriate for diffuse global illumination effects. Ambient occlusion is of course a hack, but a lovely one, and ray tracing can stand in for screen-space methods and so avoid some of their artifacts. I think getting away from screen-space techniques is potentially a big win, as game artists and engineers won’t have to hack models or lighting to work around the major artifacts these methods show in some situations, saving time and money.
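
For a concrete comparison, here’s a rough C++ sketch of ray-traced AO: cosine-weighted rays are cast over the hemisphere around the normal and we count how many escape. The `occluded` query is a hypothetical stand-in for a shadow/any-hit ray through the scene; the key difference from screen-space AO is that this query sees the whole scene, not just what survived into the depth buffer:

```cpp
// Ray-traced ambient occlusion estimator (sketch). Returns AO in [0,1]:
// 1 = fully open sky above the point, 0 = fully occluded.
#include <cmath>
#include <functional>
#include <random>

struct Vec3 { float x, y, z; };

// occluded(origin, direction, maxDist) -> true if something is hit.
// Hypothetical stand-in for a shadow ray through an acceleration structure.
using OcclusionQuery = std::function<bool(const Vec3&, const Vec3&, float)>;

float rayTracedAO(const Vec3& p, const Vec3& n, float maxDist, int numRays,
                  const OcclusionQuery& occluded, std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    int unoccluded = 0;
    for (int i = 0; i < numRays; ++i) {
        // Cosine-weighted direction in the hemisphere around n,
        // built via a crude local frame (fine for a sketch).
        float u1 = uni(rng), u2 = uni(rng);
        float r = std::sqrt(u1), phi = 6.2831853f * u2;
        Vec3 t = std::fabs(n.x) > 0.1f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
        // tangent = normalize(cross(t, n)), bitangent = cross(n, tangent)
        Vec3 tan{t.y * n.z - t.z * n.y, t.z * n.x - t.x * n.z, t.x * n.y - t.y * n.x};
        float len = std::sqrt(tan.x * tan.x + tan.y * tan.y + tan.z * tan.z);
        tan = {tan.x / len, tan.y / len, tan.z / len};
        Vec3 bit{n.y * tan.z - n.z * tan.y, n.z * tan.x - n.x * tan.z,
                 n.x * tan.y - n.y * tan.x};
        float z = std::sqrt(1.0f - u1);
        Vec3 dir{r * std::cos(phi) * tan.x + r * std::sin(phi) * bit.x + z * n.x,
                 r * std::cos(phi) * tan.y + r * std::sin(phi) * bit.y + z * n.y,
                 r * std::cos(phi) * tan.z + r * std::sin(phi) * bit.z + z * n.z};
        if (!occluded(p, dir, maxDist)) ++unoccluded;
    }
    return float(unoccluded) / float(numRays);
}
```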

I’m also interested to see if this functionality gets used in other applications, as there are plenty of areas – all sorts of audio design applications, various other types of engineering analyses – that could benefit from faster turnaround on computations.

Enjoy exploring! I look forward to what we all find.

Some of the eye-candy videos: