Tag Archives: ray tracing

Back of the Business Card Ray Tracers

Basic ray tracing is a simple algorithm, and as proof of sorts there are two business card ray tracers I know of: Paul Heckbert’s and Andrew Kensler’s. You can see Andrew’s explanation here (as well as links to ports), the leet code here, and an involved revisit, optimization, and CUDA-ization of this code by Fabien Sanglard here.

Andrew’s dates from somewhere during 2005-2009, when he was avoiding writing his thesis. Paul’s dates from 1987, including tricks he learned from Darwyn Peachey and Joe Cychosz. His links are here, the code’s here and unminimized here, and the whole thing is written up in Graphics Gems IV, which I wish were free on the (legal) web at this point… But through the miracle of Google Books, you can find most of the article here, and the missing page 379 (just the code listing) here.
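If you want a feel for why it fits on a card: here’s a minimal sketch of the shape of any basic ray tracer (my own illustrative code, not Paul’s or Andrew’s) – one sphere, one light, one ray per pixel, written out as an ASCII PGM:

```cpp
// Minimal sketch of a basic ray tracer (illustrative, not the card code):
// for each pixel, cast a ray and shade the nearest sphere hit by the cosine
// of the angle between the surface normal and a fixed light direction.
#include <cmath>
#include <cstdio>

struct Vec { float x, y, z; };
static Vec   sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

int main() {
    const int n = 512;
    Vec center = {0, 0, 3};                 // one sphere, radius 1
    Vec light  = {0.577f, 0.577f, -0.577f}; // unit vector toward the light
    std::printf("P2\n%d %d\n255\n", n, n);  // ASCII grayscale PGM header
    for (int y = 0; y < n; ++y) {
        for (int x = 0; x < n; ++x) {
            // Ray from the origin through this pixel; normalize direction.
            Vec d = {(x - n/2) / float(n), (n/2 - y) / float(n), 1.0f};
            float invLen = 1.0f / std::sqrt(dot(d, d));
            d = {d.x*invLen, d.y*invLen, d.z*invLen};
            // Ray-sphere intersection: t^2 - 2t(d.c) + |c|^2 - 1 = 0.
            float b = dot(d, center);
            float disc = b*b - dot(center, center) + 1.0f;
            int shade = 40;                 // background value
            if (disc > 0.0f) {
                float t = b - std::sqrt(disc);
                if (t > 0.0f) {
                    Vec hit = {d.x*t, d.y*t, d.z*t};
                    Vec nrm = sub(hit, center);   // unit: radius is 1
                    float diff = dot(nrm, light); // Lambertian term
                    if (diff > 0.0f) shade = int(40 + 215 * diff);
                }
            }
            std::printf("%d ", shade);
        }
        std::printf("\n");
    }
    return 0;
}
```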

Yesterday I realized I likely had never seen the output of Paul’s ray tracer. I installed Ghostscript to look at his file, but it was pretty beat:

Yeah, definitely ray tracing, or something

So I compiled Paul’s minimized code, fixing a few small syntax errors that VS 2019 didn’t like so much. After adding a PPM header to the image file it dumped, I got this:

32 x 32

So, better, I can see spheres. I upped the resolution – it took a whole 10 seconds to run on a single CPU:

1024 x 1024

Not stunning, but now I know!
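About that PPM header: it is just a few bytes of text in front of the raw RGB data. Here’s a minimal sketch of the fix-up step, assuming 8-bit RGB output; the file names and resolution are my assumptions, not Paul’s:

```cpp
// Minimal sketch, with assumed file names: prepend a binary PPM (P6) header
// to a raw RGB dump so that ordinary image viewers can open it.
#include <cstdio>
#include <vector>

int main() {
    const int width = 1024, height = 1024;         // must match the render
    std::FILE* in  = std::fopen("raw.rgb", "rb");  // raw 8-bit RGB triples
    std::FILE* out = std::fopen("card.ppm", "wb");
    if (!in || !out) return 1;
    std::fprintf(out, "P6\n%d %d\n255\n", width, height);
    std::vector<unsigned char> buf(3u * width * height);
    std::size_t got = std::fread(buf.data(), 1, buf.size(), in);
    std::fwrite(buf.data(), 1, got, out);
    std::fclose(in);
    std::fclose(out);
    return 0;
}
```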

And today I ran across a set of lecture slides from the University of Utah which show both of these business-card ray tracers’ results, from page 22 on. Oddly, the image shown there has different colors – maybe a gamma correction difference or something?
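If it is gamma, the difference is just a power curve applied per channel on output. A minimal sketch, assuming the common 2.2 display gamma (the original code may do something else entirely):

```cpp
// Minimal sketch, assuming a display gamma of 2.2: converting a linear color
// channel to a gamma-encoded one and back. A mismatch in this step between
// two renderers would explain color differences like these.
#include <cmath>

float linearToDisplay(float c) { return std::pow(c, 1.0f / 2.2f); }
float displayToLinear(float c) { return std::pow(c, 2.2f); }
```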

Anyway, now you can say you’ve seen it, too. And here’s the back of my copy of Paul’s business card, in the flesh, as it were:

Me, I was thrilled that 34-year-old code basically still ran, with minimal messing about.

Seven Things for September 23, 2019

Seven things:

Please go with “DXR” or “DirectX Raytracing” (secret agenda: ray-traced Minecraft)

This is a post in which I sneak in an announcement made at Gamescom by Microsoft and NVIDIA under the guise of an engaging post about terminology. I tell you this now to spare you any anxiety or stress from surprise, and to keep your heart healthy. The announcement is that official ray tracing support is coming to the Windows 10 Bedrock edition of Minecraft. Video here; it’s lovely:

Now the gripping terminology post:

I’ve seen “DirectX Raytracing” and “DXR” used for Microsoft’s DirectX 12 API extension – perfect. My concern with today’s announcement is seeing “DirectX R” and “DirectX R raytracing” getting used as terms. Please don’t.

OK, I feel better now. Go enjoy the video! It’s nice stuff, and I say that as an entirely unbiased source, other than being employed by NVIDIA and loving ray tracing and Minecraft. I particularly enjoy the beams o’ light effects, having played with these long ago.

Minecraft fans: It will work only on the Windows 10 Bedrock Edition, not the Java Edition, so Sonic Ether’s Unbelievable Shaders project (which offers some ray tracing, but currently does not use RTX hardware) is unaffected. There are some technical details on Polygon and, at the other end of the spectrum, various musings on Reddit.

SIGGRAPH 2019 – my plan

This is my guaranteed-biased view of what I think is likely to be exciting at SIGGRAPH 2019, i.e., what I’ll be attending.

First, there are way too many ray tracing events, around 50 I’ve found so far (and that’s counting each of the eight SIGGRAPH courses having to do with ray tracing as a single event). List at http://bit.ly/rtrt2019, which took me way longer to collate than I expected. Additions appreciated.

Of these, here are ones I won’t miss:

There are a bunch of other courses and talks at other times I’ll be at, but these are the ones I’m particularly interested in and can attend.

Here is the “hmmm, many things are going on at once, which do I choose?” part of the conference:


Here are the ones I can’t miss, since I’m involved:

  • Emerging Technologies, Matching Visual Acuity and Prescription: Towards AR for Humans – Some next steps in lightweight AR. I didn’t work on this, but I’m helping out in the booth Sunday 1-5:30, so stop on by.
  • Tuesday 10am – 10:25am, NVIDIA booth #1313, Booth Talk: A Fast Forward through Ray Tracing Gems – 32 papers in 25 minutes, so if I speak at 400 WPM I’ll be fine.
  • Tuesday 11am – 12 noon, Room 507, Los Angeles Convention Center, Birds of a Feather: Ray Tracing Roundtable – I’m the “organizer.” This will be in a 60-person room with whoever shows up first and wants to talk informally about ray tracing R&D. No presentations or other planned activities – I give an intro, we quickly introduce ourselves, then real-time parallel processing happens. That is, it’s a cocktail party without the cocktails, or the party – just the talk, with whoever shows up.
  • Wednesday 2pm – 5:15pm, Room 501AB, NVIDIA Presents: Ray Tracing Gems 1.1 – I’m chairing, and am looking forward to hearing these talks, which will include progress since the book was published (e.g., the new work presented at HPG).
  • Wednesday 5:30pm – 6pm, SIGGRAPH bookseller, outside Room 403, Book Signing: Ray Tracing Gems – meet some of the contributors; you will be a sad panda if you miss it.

As far as evening activities go, as usual SIGGRAPH needs to have twice as many nights as it provides. For everyone, Sunday’s Fast Forward; Monday’s the sake party, Electronic Theater, SIGGRAPH Reception, and Chapters Party; Tuesday’s Real-Time Live; Wednesday’s the Khronos reception (and I don’t want to think of all the good Khronos presentations I’m missing that day). Plus all the parties I’m not invited to.

So, what cool things do I not know about and shouldn’t miss?

One week to go… Submit!

The Ray Tracing Gems early proposals deadline is June 21, a week away (the final deadline is October 15th). Submit a one-page proposal by June 21 and there’s an extra incentive offered by NVIDIA: a Titan V graphics card for each of the top five proposals (I finally looked up the card – if you don’t want it, trade it in for a nice used car). Anyway, the call for proposals for the book is here.

While some of the initial impetus for making such a book is the new DXR/VKRT APIs, we want the book to be broader than just this area, e.g., ray tracing methods using various hardware platforms and software, summaries of the state of the art, best practices, etc. In the spirit of Graphics Gems, GPU Gems, and the Journal of Computer Graphics Techniques, I see our book as a way to inform readers about implementation details and other elements that normally don’t make it into papers. For example, if you have a technique that was not long enough, or too technically involved, to publish in a journal article, now is your chance. Mathematics journals publish short results all the time – computer graphics journals, not so much.

I would also like to see summaries for various facets of the field of ray tracing. For example, I think of Larry Gritz’s article “The Importance of Being Linear” from GPU Gems 3 as a great example of this type of article. It is about gamma correction – not a new topic by any stretch – but its wonderful and thoughtful exposition reached many readers and did a great service for our field. I still point it out to this day, especially since it is open access (a goal for Ray Tracing Gems, too).

You can submit more than one proposal – the more the better, and short proposals are fine (encouraged, in fact). That said, no “Efficient Radiosity for Daylight Simulation in Closed Environments” papers, please; that’s been done (if that paper doesn’t ring a bell, you owe it to yourself to read the classic WARNING: Beware of VIDEA! page). In return, we promise fair reviewing and not to roll the die.

Update: a proposal is just a summary, one page or less, of some idea for a paper, and can be written in any format you like: Word, PDF, plain text, etc. Proposals are not required, either by June 21 or after. They’re useful to us, though, as a way to see what’s coming, to let each prospective contributor know if a topic is a good fit, and possibly to connect like-minded writers. Also, a proposal that “wins” on June 21 does not mean the paper itself will automatically be accepted – each article submitted will be judged on its merits. The main thing is the paper itself, due October 15th. Send proposals to raytracinggems@nvidia.com – we look forward to what you all contribute!

Monument to the Anonymous Peer Reviewer

“Ray Tracing Gems” Book Call for Participation

Given the recent DXR announcements, Tomas Akenine-Möller and I are coediting a book called Ray Tracing Gems, to come out at GDC 2019. See the Call for Participation, which pretty much says it all. The book is in the spirit of the Graphics Gems series and journals such as JCGT. Articles certainly do not have to be about DXR itself, as the focus is techniques that can be applied to interactive ray tracing. The key date is October 15th, 2018, when submissions are due.

To self-criticize a tiny bit, the first sentence of the CFP:

Real-time ray tracing – the holy grail of graphics, considered unattainable for decades – is now possible for video games.

would probably be more factual as “Real-time ray tracing for video games – … – is now possible.” But the book is not meant to focus on just video game techniques (though video games are certainly likely to be the major user). I can see ray tracing becoming more of a standard part of all sorts of graphics programs, e.g., much faster previewing for Blender, Maya, and the rest.

As far as “considered unattainable for decades” goes, interactive ray tracing was attained long ago, just not for (non-trivial) video games or other interactive applications. My first encounter with an interactive ray tracer was AT&T’s Pixel Machine back in 1987. I had put out the Standard Procedural Databases on Usenet the week before SIGGRAPH, and was amazed to see that they had grabbed them and were rendering some in just a few seconds. But the real excitement was a little postage-stamp-sized (well, maybe six stamps) rendering, where you could interactively use a mouse to control a shiny sphere’s position atop a Mandrill plane texture.

The demoscene has had real-time ray tracers since 1995, including my favorite, a 252 byte program (well, 256, but the last four bytes are a signature, “BA2E”) from 2001 called Tube by 3SC/Baze. Enemy Territory: Quake Wars was rendered using ray tracing on a 20-machine system by Daniel Pohl at Intel a decade ago. OptiX for NVIDIA GPUs has been around a long time. Shadertoy programs usually perform ray marching. Imagination Technologies developed ray tracing support for mobile some years back. There are tons more examples, but this time it feels different – DXR looks here to stay, with lots of momentum.
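Since ray marching just came up: the sphere tracing variant that Shadertoy-style programs typically use is simple enough to sketch. Here’s a minimal illustration in C++ against a single signed distance function; the scene, step limits, and epsilons are all my own illustrative choices:

```cpp
// Minimal sphere tracing sketch: march along a ray, stepping by the scene's
// signed distance each time, until we are close enough to call it a hit.
#include <cmath>

struct Vec { float x, y, z; };

static float length(Vec v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }
static Vec   at(Vec o, Vec d, float t) {
    return {o.x + d.x*t, o.y + d.y*t, o.z + d.z*t};
}

// Signed distance to a unit-radius sphere at the origin (the whole "scene").
static float sceneDistance(Vec p) { return length(p) - 1.0f; }

// Returns the hit distance t along the ray, or -1.0f on a miss.
static float march(Vec origin, Vec dir) {   // dir assumed unit length
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        float d = sceneDistance(at(origin, dir, t));
        if (d < 1e-4f) return t;            // close enough: call it a hit
        t += d;                             // safe step: nothing is nearer than d
        if (t > 100.0f) break;              // past the far limit
    }
    return -1.0f;
}
```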

Ray tracing is, in my opinion, more easily adopted by computer-aided design and modeling programs, as their users are willing to put up with slower frame rates and are able to wait a few seconds every now and then for a better result. Systems such as KeyShot have for some years used only ray tracing, performing progressive rendering to update the screen on mouse up. Modelers such as Fusion 360 allow easy switching to progressive ray tracing locally, or, for finished results, can render at higher speeds on the cloud. I think DXR will turn these few seconds into a handful of milliseconds, and near-interactive into real-time.
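A minimal sketch of the progressive part, under my own assumptions about the renderer’s structure: each frame contributes one sample per pixel, the running mean is what gets displayed, and the noise falls off as frames accumulate:

```cpp
// Minimal progressive-accumulation sketch: keep a running per-pixel sum of
// samples and display the mean, so the image refines while the camera is idle.
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

class Accumulator {
public:
    explicit Accumulator(std::size_t pixels) : sum_(pixels, {0, 0, 0}) {}

    // Call once per frame with that frame's one-sample-per-pixel render.
    void add(const std::vector<Color>& frame) {
        for (std::size_t i = 0; i < sum_.size(); ++i) {
            sum_[i].r += frame[i].r;
            sum_[i].g += frame[i].g;
            sum_[i].b += frame[i].b;
        }
        ++frames_;
    }

    // Displayed value: mean of all samples so far (valid once add() has run);
    // the error falls off roughly as 1/sqrt(frames).
    Color resolve(std::size_t i) const {
        float inv = 1.0f / float(frames_);
        return {sum_[i].r * inv, sum_[i].g * inv, sum_[i].b * inv};
    }

    // On camera move, start over.
    void reset() { sum_.assign(sum_.size(), {0, 0, 0}); frames_ = 0; }

private:
    std::vector<Color> sum_;
    int frames_ = 0;
};
```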

In a sense, this history misses the point: for interactive rendering we use whatever gives us the best quality in an allotted amount of time. We usually don’t, and probably shouldn’t, trace rays everywhere, just for the purity of it. Rasterization works rapidly because of coherence exploited by the GPU. Ray tracing via DXR is a new piece of functionality, one that looks general enough and with support enough that it has the potential to improve quality, simplify engine design, and reduce the time spent by artists in creating and revising content (often the largest expense in a video game).

Long and short, DXR is the start of an exciting new chapter in interactive rendering, and we look forward to your submissions!

Ray Tracing at GDC (and beyond)

One reason I love interactive graphics is that every now and then something happens in the field – programmable shaders, powerful mobile devices, DX12/Vulkan/Metal, VR, AR, and now this – that changes what’s possible and how we think about interactive rendering. New algorithms arise to exploit new and different functionality. It’s a fun world!

Microsoft added ray tracing support to its DirectX API. And this time it’s not an April Fool’s Day spoof, like a decade ago. Called DirectX Raytracing, DXR for short, it adds the ability to cast rays as shader invocations. There are already a bunch of articles and blog posts.

Here are the resources I’ve noticed so far (updated as I see new ones – let me know):

It will be interesting to see if there’s any spike of interest in ray tracing on Google’s analytics. While I doubt having DXR functionality will change everything – it still has to be performant compared to other specialized techniques – it’s great seeing another tool in the toolbox, especially one so general. Even if no ray tracing is done in an interactive renderer that is in development, it will now be much easier to get a ground-truth image for comparison when testing other techniques, since shader evaluations and all the rest now fit within a ray tracing framework. Ray and path tracing, done long enough (or smart enough), give the correct answer, versus screen-based techniques.
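As a sketch of what that comparison might look like (my own construction, not from any particular engine): once you have a ray-traced ground truth, a per-channel RMSE against the test render is a simple first metric, though perceptual metrics do better in practice:

```cpp
// Minimal sketch: per-channel root-mean-square error between a test render
// and a ray-traced ground-truth image, both stored as flat float arrays.
#include <cmath>
#include <cstddef>
#include <vector>

double rmse(const std::vector<float>& test, const std::vector<float>& truth) {
    double sum = 0.0;
    for (std::size_t i = 0; i < test.size(); ++i) {
        double d = double(test[i]) - double(truth[i]);
        sum += d * d;   // accumulate squared error
    }
    return std::sqrt(sum / double(test.size()));
}
```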

Doing ray and path tracing fast enough is the challenge, and denoisers and other filtering techniques (just as done today with rasterized-buffer-based algorithms) will see a lot of use in the coming months and years. I’m going to go out on a limb here, but I’m guessing GPUs will also get faster. Now if we can just get people to stop upping the resolution of screens and stop adding more content to scenes, it’ll all work out.

Even within the Remedy talk, we see ray tracing blending with other techniques more appropriate for diffuse global illumination effects. Ambient occlusion is of course a hack, but a lovely one, and ray tracing can stand in for screen-space methods and so avoid some of their artifacts. I think getting away from screen-space techniques is potentially a big win, as game artists and engineers won’t have to hack models or lighting to work around major artifacts seen in some situations, saving time and money.
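For the ambient occlusion case, a hedged sketch of the ray-traced variant (the occluded() query below is a stand-in hard-coded sphere, not any particular engine’s API): cast a handful of random rays over the hemisphere above the shading point and use the unoccluded fraction as the occlusion term:

```cpp
// Hedged ray-traced ambient occlusion sketch. A real renderer would trace
// the rays against its scene's acceleration structure; here the "scene" is
// one hard-coded sphere so the example stays self-contained.
#include <cmath>
#include <random>

struct Vec { float x, y, z; };

static float dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Stand-in occluder: does the ray from p along unit direction d hit a
// unit-radius sphere centered at (0, 2, 0) within maxDist?
static bool occluded(Vec p, Vec d, float maxDist) {
    Vec oc = {p.x, p.y - 2.0f, p.z};
    float b = dot(oc, d);
    float c = dot(oc, oc) - 1.0f;
    float disc = b * b - c;
    if (disc < 0.0f) return false;
    float t = -b - std::sqrt(disc);
    return t > 1e-4f && t < maxDist;
}

// Fraction of the hemisphere above normal n that is open to the "sky":
// 1 = fully open, 0 = fully blocked.
static float ambientOcclusion(Vec p, Vec n, int samples = 64) {
    std::mt19937 rng(1234);
    std::uniform_real_distribution<float> uni(-1.0f, 1.0f);
    int open = 0;
    for (int i = 0; i < samples; ++i) {
        // Crude rejection sampling of a direction in the hemisphere of n.
        Vec d; float len2;
        do {
            d = {uni(rng), uni(rng), uni(rng)};
            len2 = dot(d, d);
        } while (len2 > 1.0f || len2 < 1e-6f || dot(d, n) <= 0.0f);
        float inv = 1.0f / std::sqrt(len2);
        d = {d.x * inv, d.y * inv, d.z * inv};
        if (!occluded(p, d, 10.0f)) ++open;
    }
    return float(open) / float(samples);
}
```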

I’m also interested to see if this functionality gets used in other applications, as there are plenty of areas – all sorts of audio design applications, various other types of engineering analyses – that could benefit from faster turnaround on computations.

Enjoy exploring! I look forward to what we all find.

Some of the eye-candy videos:

Seven Things for April 4, 2016

Next in the continuing series. In this episode Jaimie finds that the world is an illusion and she’s a butterfly’s dream, while Wilson works out his plumbing problems.

512 and counting

I noticed I reached a milestone number of postings today, 512 answers posted to the online Intro to 3D Graphics course. Admittedly, some are replies to questions such as “how is your voice so dull?” However, most of the questions are ones that I can chew into. For example, I enjoyed answering this one today, about how diffuse surfaces work. I then start to ramble on about area light sources and how they work, which I think is a really worthwhile way to think about radiance and what’s happening at a pixel. I also like this recent one, about z-fighting, as I talk about the giant headache (and a common solution) that occurs in ray tracing when two transparent materials touch each other.

So the takeaway is that if you ever want to ask me a question and I’m not replying to email, act like you’re a student, find a relevant lesson, and post a question there. Honestly, I’m thoroughly enjoying answering questions on these forums; I get to help people, and for the most part the questions are ones I can actually answer, which is always a nice feeling. Sometimes others will give even better answers and I get to learn something. So go ahead, find some dumb answer of mine and give a better one.

By the way, I finally annotated the syllabus for the class. Now it’s possible to cherry-pick lessons; in particular, I mark all lessons that are specifically about three.js syntax and methodology, for those who already know graphics.


Seven Things for 10/13/2011

  • Fairly new book: Practical Rendering and Computation with Direct3D 11, by Jason Zink, Matt Pettineo, and Jack Hoxley, A.K.Peters/CRC Press, July 2011 (more info). It’s meant for people who already know DirectX 10 and want to learn just the new stuff. I found the first half pretty abstract; the second half was more useful, as it gives in-depth explanation of practical examples that show how the new functionality can be used.
  • Two nice little Moore’s Law-related articles appeared recently in The Economist. This one is about how the law looks to have legs for a number of years more, and presents a graph showing how various breakthroughs have kept the law going over the past decades. Moore himself thought the law might hold for ten years. This one talks about how computational energy efficiency is doubling every 18 months, which is great news for mobile devices.
  • I used to use MWSnap for screen captures, but it doesn’t work well with two monitors and it hangs at times. I finally found a replacement that does all the things I want, with a mostly-good UI: FastStone Capture. The downside is that it actually costs money ($19.95), but I’m happy to have purchased it.
  • Ray tracing vs. rasterization, part XIV: Gavan Woolery thinks RT is the future, DEADC0DE argues both will always have a place, and gives a deeper analysis of the strengths and weaknesses of each (though the PITA that transparency causes rasterization is not called out) – I mostly agree with his stance. Both posts have lots of followup comments.
  • This shows exactly how far behind we are in blogging about SIGGRAPH: find the Beyond Programmable Shading course notes here – a mere two months overdue.
  • Tantalizing SIGGRAPH Talk demo: KinectFusion from Microsoft Research and many others. Watch around 3:11 on for the great reconstruction, and the last minute for fun stuff. Newer demo here.
  • OnLive – you should check it out; it’ll take ten minutes. Sign up for a free account and visit the Arena, if nothing else: it’s like being in a sci-fi movie, with a bunch of games being played by others before your eyes; you can scroll through them and click on one to watch that player. I admit to being skeptical of the whole cloud-gaming idea originally, but in trying it out, it’s surprisingly fast and the video quality is not bad. Not good enough to satisfy hardcore FPS players – I’ve seen my teenage boys pick out targets that cover like two pixels, which would be invisible with OnLive – but otherwise quite usable. The “no download, no GPU upgrade, just play immediately” aspect is brilliant and lends itself extremely well to game trials.

OnLive Arena