One more day for (optionally) submitting a proposal for Ray Tracing Gems for a shot at also winning a Titan V GPU. You can find the details, along with an update on what (the heck) a proposal is and what we’re looking for, on this page. A proposal is optional – the article is the main thing – but we hope this promotion gets you thinking about it. We’re happy to hear from you after tomorrow, of course, so please feel free to bounce ideas off of us.

Being the last one in the world to contribute to this meme, maybe our cat Ezra will inspire you:


The Ray Tracing Gems early proposals deadline is June 21, a week away (the final deadline is October 15th). Submit a one-page proposal by June 21 and you’re in the running for the extra incentive offered by NVIDIA: a Titan V graphics card for each of the top five proposals (which I finally looked up – if you don’t want it, trade it in for a nice used car). Anyway, the call for proposals for the book is here.

While some of the initial impetus for making such a book is the new DXR/VKRT APIs, we want the book to be broader than just this area, e.g., ray tracing methods using various hardware platforms and software, summaries of the state of the art, best practices, etc. In the spirit of Graphics Gems, GPU Gems, and the Journal of Computer Graphics Techniques, I see our book as a way to inform readers about implementation details and other elements that normally don’t make it into papers. For example, if you have a technique that was too short, or too technically involved, to publish as a journal article, now is your chance. Mathematics journals publish short results all the time – computer graphics journals, not so much.

I would also like to see summaries for various facets of the field of ray tracing. For example, I think of Larry Gritz’s article “The Importance of Being Linear” from GPU Gems 3 as a great example of this type of article. It is about gamma correction – not a new topic by any stretch – but its wonderful and thoughtful exposition reached many readers and did a great service for our field. I still point it out to this day, especially since it is open access (a goal for Ray Tracing Gems, too).

You can submit more than one proposal – the more the better, and short proposals are fine (encouraged, in fact). That said, no “Efficient Radiosity for Daylight Simulation in Closed Environments” papers, please; that’s been done (if that paper doesn’t ring a bell, you owe it to yourself to read the classic WARNING: Beware of VIDEA! page). In return, we promise fair reviewing and not to roll the die.

Update: a proposal is just a summary, one page or less, of some idea for a paper, and can be written in any format you like: Word, PDF, plain text, etc. Proposals are not required, either by June 21 or after. They’re useful to us, though, as a way to see what’s coming, to let each prospective contributor know if their topic is a good fit, and possibly to connect like-minded writers. Also, a proposal that “wins” on June 21 does not mean the paper itself will automatically be accepted – each article submitted will be judged on its merits. The main thing is the paper itself, due October 15th. Send proposals to [email protected] – we look forward to what you all contribute!

Monument to the Anonymous Peer Reviewer



I’m passing on this tweet from Tomas:

Titan V competition w/ Ray Tracing Gems.

Submit a one-page abstract to [email protected]
The five best article proposals will receive a Titan V graphics card. Submit before the end of June 21st.
More info:

I also wanted to note that the Ray Tracing Gems CFP has been updated with some significant new bits of information:

The book will be published by Apress, a subsidiary of Springer Nature, and the e-book will be available in PDF, EPUB, and Mobi (Kindle) formats. We are working on getting open access for the e-book, which means that it will be free for all, and that authors may post a draft version to other sites; however, we ask that they include a link to the final version once published. The printed book will cost approximately $60.


Executive summary: use the Perl script at

I have been fiddling with this Perl script for a few editions of Real-Time Rendering. It’s handy enough now that I thought I’d put it up in a repository, since it might help others out. There are other LaTeX linters out there, but I’ve found them fussy to set up and use (“just download the babbleTeX distribution, use the GNU C compiler to make the files, be sure to use tippyShell for the command line, and define three paths…”). Frankly, I’ve never been able to get any of them to work – maybe I just haven’t found the right one, and please do point me at any (and make sure the links are not dead).

Anyway, this script runs over 300 tests on your .tex files, returning warnings. I’ve tried to keep it simple and not over-spew (if you would like more spew, use the “-ps” command line options to look for additional stylistic glitches). I haven’t tried to put in every rule under the sun. Most of the tests exist because we ran into the problem in the book. The script is also graphics-friendly, in that common misspellings such as “tesselate” are flagged. It finds awkward phrases and weak writing. For example, you’ll rarely find the word “very” in the new edition of our book, as I took Mark Twain’s advice to heart: “Substitute ‘damn’ every time you’re inclined to write ‘very.’ Your editor will delete it and the writing will be just as it should be.” So the word “very” gets flagged. You could also find a substitute (and that website is also in the comments in the Perl script itself, along with other explanations of the sometimes-terse warnings).

Maybe you love to use “very” – that’s fine; just comment out or delete that rule in the Perl script, which is trivial to do. Or put “% chex_latex” as a comment at the end of the line using it, so the warning is no longer flagged. The script is just a text file, nothing to compile. Maybe you delete everything in the script but the one line that finds doubled words such as “the the” or “in in”. In testing the script on five student theses kindly provided by John Owens, I was surprised by how many doubled words were found, along with a bunch of other true errors.
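If you’re curious how a doubled-word check works, the core of it is one regular expression with a backreference. Here’s a hedged sketch of the idea in Python – the actual rule in the Perl script may differ, and the function name here is my own:

```python
import re

def find_doubled_words(text):
    """Report consecutive repeated words, e.g. "the the" or "in in".

    A rough approximation of this kind of lint check: it ignores case
    and spans line breaks (the \s+ matches newlines too), which is
    where doubled words most often hide.
    """
    pattern = re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE)
    return [match.group(1) for match in pattern.finditer(text)]

# Example: two doubled words, one of them split across a line break.
print(find_doubled_words("We ran the the test in\nin the morning."))
```

The backreference `\1` combined with `re.IGNORECASE` also catches pairs like “The the” at the start of a sentence, which plain string comparison would miss.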

Oh, and even if you do not use this tool at all, consider at least tossing your titles through this website’s tester. It checks that all the words in a title are properly capitalized or lowercase.
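The underlying rule set for this kind of title check is small enough to sketch. This is not the website’s actual algorithm, just a minimal Python illustration of the common convention: capitalize every word except short “minor” words, which stay lowercase unless they begin or end the title (the word list here is my own guess):

```python
def check_title_case(title, minor_words=None):
    """Return the words in a title that break simple title-case rules."""
    if minor_words is None:
        # Articles, conjunctions, and short prepositions stay lowercase.
        minor_words = {"a", "an", "the", "and", "but", "or", "nor",
                       "as", "at", "by", "for", "in", "of", "on", "to", "up"}
    words = title.split()
    problems = []
    for i, word in enumerate(words):
        first_or_last = i == 0 or i == len(words) - 1
        if word.lower() in minor_words and not first_or_last:
            if word != word.lower():
                problems.append(word)   # minor word should be lowercase
        elif not word[0].isupper():
            problems.append(word)       # major word should be capitalized
    return problems

print(check_title_case("The Importance of Being Linear"))  # no problems
```

Real style guides disagree on the details (length cutoffs for prepositions, hyphenated compounds), which is exactly why running titles through a dedicated tester is worth the few seconds.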

A few minutes of additional work with various tools will make your presentation look more professional (and so, more trustworthy), so “just do it”. And, do you see the error in that previous sentence (hint: I wrote it in the U.S.)?

Update: I also added a little Perl script for “batch spell checking,” which for large documents is much more efficient (for me) than most interactive spell checkers. See the bottom of the repo page for details.


I mentioned in a post last week that I expected interest in ray tracing to increase. Sure enough, looking at Google Trends, there does appear to have been an uptick in Google searches on the term “ray tracing.” The last time there was this much interest was March 2010 (though other months in between have come close).

It’s a funny area to explore: South Korea seems the most interested, by far. Under “Related topics” is “NVIDIA – Company,” which is not surprising. What’s funny is that if you click that topic, you find that NVIDIA is of strongest interest in Romania, followed by Czechia, Estonia, Hungary, then Russia. I assumed the explanation is “Bitcoin,” but that’s not quite right. According to NVIDIA’s CEO, it’s actually Ethereum mining, as Bitcoins are most profitably mined by custom ASICs at this point. Such a world.

As announced today at the Game Developers Conference by CRC Press / Taylor & Francis Group (booth 2104, South Hall – I’m told there’s a discount code to be had), we’re indeed finally putting out a new edition of Real-Time Rendering. It should be out by SIGGRAPH if all goes well. Tomas, Naty, and I have been working on this edition since August 2016. We realized that, given how much has changed in area lighting, global illumination, and volume rendering, we could use help, so we asked Angelo Pesce, Michał Iwanicki, and Sébastien Hillaire to join us, which they all kindly and eagerly did. Their contributions both considerably improved the book and got it done.

If you want me to just shut up and tell you where to pre-order, go here. You’ll note the lack of cover, and lack of the new three authors. Those’ll get fixed once there’s a more official launch, and official pricing. I suspect the price won’t go down (which is a hint, and you can cancel later if I’m wrong; which reminds me, you should also book a room now for SIGGRAPH if you have the slightest chance of going, since you can also cancel up until July 22 without penalty).

One reason for no cover is that we’re still evaluating them. At the GDC booth you’ll see this artwork used:

fish cover candidate

This is a lovely, colorful model by Elinor Quittner. You can see the interactive model here, and definitely check out the Model Inspector feature on that page by pressing the “I” key (or clicking the “layers” icon in the lower right) once the model’s loaded. I love this feature of Sketchfab, that you can examine the various elements. All that said, we’re still examining a number of other cover possibilities. Me, I’m happy we get to show off this potential design here now.

Back to the book itself. Let’s look at page count:

  • First edition, published 1999, 482 pages
  • Second edition, published 2002, 864 pages
  • Third edition, published 2008, 1045 pages
  • Fourth edition, to be published 2018, 1269? pages (1356?, including online)

This new edition is probably a worst-kept secret, in that anyone searching “Real-Time Rendering, 4th edition” on Amazon would have found the entry months ago, and CRC put it on their site some time before March 11. Also, doing a quick count just now, not including the editorial staff, 178 people helped us out in some way: reviewing sections or chapters, providing images, or clarifying concepts. The kind and generous support we’ve received is one of the reasons I love this field. There’s competition between companies, between research teams, and all the rest; it’s part of the landscape. But, underlying this “red in tooth and claw” veneer of competition, most everyone we asked genuinely wanted to share their knowledge and labor to help others understand how things work. I hope it’s the same in other fields, but I know it’s true for this one.

The progression of 3 years between 1st and 2nd, 6 between 2nd and 3rd, and 10 between 3rd and 4th is a reflection not so much of the length of time it takes to make each new edition (which has indeed steadily increased), but rather of how long it takes us to forget all the stress and pain involved in making a new edition. As a data point, our Google Doc of new references since the last edition is around 170 pages long, and does not include references we could easily dismiss, nor those we ran into later when more closely reading and writing. Each page has about 20 references on it (some duplicated among chapters), about 3200 in all. In the fourth edition we added “only” 1151 new references, and deleted 508 older ones, for a final total of 2059 references (this does not include references on collision detection – more on that in a minute).

We could have added all 3200 and more, but instead focused on work that sees use in applications, or that is newest and presents a good overview of the state of the art in its area. The field has simply become far too large for us to cover every piece of research, and doing so would have been a disservice to most readers. On the other end of the spectrum, we have continued to avoid API-specific information and code, as there are plenty of books, repositories, and articles describing these – this website points to many of them (and will be updated in the coming months). We aim to be a guide to algorithms for practitioners.

To conclude, here’s the list of chapters:

1 Introduction
2 The Graphics Rendering Pipeline
3 The Graphics Processing Unit
4 Transforms
5 Shading Basics
6 Texturing
7 Shadows
8 Light and Color
9 Physically-Based Shading
10 Local Illumination
11 Global Illumination
12 Image-Space Effects
13 Beyond Polygons
14 Volumetric and Translucency Rendering
15 Non-Photorealistic Rendering
16 Polygonal Techniques
17 Curves and Curved Surfaces
18 Pipeline Optimization
19 Acceleration Algorithms
20 Efficient Shading
21 Virtual and Augmented Reality
22 Intersection Test Methods
23 Graphics Hardware
24 The Future

If you have a great memory, you’ll notice that the “Collision Detection” chapter from the 3rd edition is missing. We have a fully updated chapter on this subject for the 4th edition. However, the page count was such that we decided to distribute it, along with the two math-related appendices from the 3rd edition, as online chapters, free to download. (Collision detection is not strictly a part of real-time rendering, but it is an area we think is fascinating and where a fair bit of change has occurred – about 40% of the chapter is new material.) We’ll be formatting all of these resources into PDF files nearer to release.

Because I have an addiction to text manipulation and analysis programs (more on that in a future blog post), I did some measurements of how much the fourth edition differs from the third. The highly precise, but who knows how accurate, number I computed was 59.81% new material, by lines changed. Weighting by character count instead gives a value of 68.99% new. These are probably high – if you change a word in a sentence, or even just join two lines into one, the whole line is considered new – but the takeaway is that a lot has changed in the past decade. We’ve learned a huge amount from writing the book, and by SIGGRAPH we look forward to sharing it with you all.
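For the curious, this kind of “percent new” measure can be approximated with Python’s standard difflib. This is a rough sketch of the idea, not the actual script I used, and it shares the same caveat: a line with any change at all counts as entirely new.

```python
import difflib

def percent_new(old_lines, new_lines, weight_by_chars=False):
    """Estimate what fraction of new_lines has no verbatim match in old_lines.

    Lines that survive unchanged (as found by SequenceMatcher) count as
    kept; everything else counts as new, so even a one-word edit, or
    re-joining two lines into one, marks the whole line as new.
    """
    def weight(line):
        return len(line) if weight_by_chars else 1

    matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
    kept = sum(weight(line)
               for block in matcher.get_matching_blocks()
               for line in new_lines[block.b:block.b + block.size])
    total = sum(weight(line) for line in new_lines)
    return 100.0 * (total - kept) / total if total else 0.0

old = ["alpha", "beta", "gamma", "delta"]
new = ["alpha", "beta changed", "gamma", "epsilon"]
print(percent_new(old, new))  # two of four lines changed: 50.0
```

Weighting by character count (the `weight_by_chars` flag) shifts the number when the changed lines are longer or shorter than average, which is why the two percentages I quote above differ.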


Given the recent DXR announcements, Tomas Akenine-Möller and I are coediting a book called Ray Tracing Gems, to come out at GDC 2019. See the Call for Participation, which pretty much says it all. The book is in the spirit of the Graphics Gems series and journals such as JCGT. Articles certainly do not have to be about DXR itself, as the focus is techniques that can be applied to interactive ray tracing. The key date is October 15th, 2018, when submissions are due.

To self-criticize a tiny bit, the first sentence of the CFP:

Real-time ray tracing – the holy grail of graphics, considered unattainable for decades – is now possible for video games.

would probably be more factual as “Real-time ray tracing for video games – … – is now possible.” But the book is not meant to be focused on just video game techniques (though video games are certainly likely to be the major user). I can see ray tracing becoming more of a standard part of all sorts of graphics programs, e.g., much faster previewing for Blender, Maya, and the rest.

As far as “considered unattainable for decades” goes, interactive ray tracing was attained long ago, just not for (non-trivial) video games or other demanding interactive applications. My first encounter with an interactive ray tracer was AT&T’s Pixel Machine back in 1987. I had put out the Standard Procedural Databases on Usenet the week before SIGGRAPH, and was amazed to see that they had grabbed them and were rendering some in just a few seconds. But the real excitement was a little postage-stamp (well, maybe 6 stamps) sized rendering, where you could interactively use a mouse to control a shiny sphere’s position atop a Mandrill plane texture.

The demoscene has had real-time ray tracers since 1995, including my favorite, a 252 byte program (well, 256, but the last four bytes are a signature, “BA2E”) from 2001 called Tube by 3SC/Baze. Enemy Territory: Quake Wars was rendered using ray tracing on a 20-machine system by Daniel Pohl at Intel a decade ago. OptiX for NVIDIA GPUs has been around a long time. Shadertoy programs usually perform ray marching. Imagination Technologies developed ray tracing support for mobile some years back. There are tons more examples, but this time it feels different – DXR looks here to stay, with lots of momentum.

Ray tracing is, in my opinion, more easily adopted by computer-aided design and modeling programs, as users are willing to put up with slower frame rates and able to wait a few seconds every now and then for a better result. Systems such as KeyShot have for some years used only ray tracing, performing progressive rendering to update the screen on mouse up. Modelers such as Fusion 360 allow easy switching to progressive ray tracing locally, or for finished results can render at higher speeds on the cloud. I think DXR will make these few seconds into a handful of milliseconds, and near-interactive into real-time.

In a sense, this history misses the point: for interactive rendering we use whatever gives us the best quality in an allotted amount of time. We usually don’t, and probably shouldn’t, trace rays everywhere, just for the purity of it. Rasterization works rapidly because of coherence exploited by the GPU. Ray tracing via DXR is a new piece of functionality, one that looks general enough and with support enough that it has the potential to improve quality, simplify engine design, and reduce the time spent by artists in creating and revising content (often the largest expense in a video game).

Long and short, DXR is the start of an exciting new chapter in interactive rendering, and we look forward to your submissions!


One reason I love interactive graphics is that every now and then something happens in the field – programmable shaders, powerful mobile devices, DX12/Vulkan/Metal, VR, AR, and now this – that changes what’s possible and how we think about interactive rendering. New algorithms arise to exploit new and different functionality. It’s a fun world!

Microsoft added ray tracing support to its DirectX API. And this time it’s not an April Fool’s Day spoof, like a decade ago. Called DirectX Raytracing, DXR for short, it adds the ability to cast rays as shader invocations. There are already a bunch of articles and blog posts.

Here are the resources I’ve noticed so far (updated as I see new ones – let me know):

It will be interesting to see if there’s any spike of interest in ray tracing on Google’s search analytics. While I doubt having DXR functionality will change everything – it still has to be performant compared to other specialized techniques – it’s great seeing another tool in the toolbox, especially one so general. Even if no ray tracing is done in an interactive renderer that is in development, it will now be much easier to get a ground-truth image for comparison when testing other techniques, since shader evaluations and all the rest now fit within a ray tracing framework. Ray and path tracing, done long enough (or smart enough), give the correct answer, versus screen-space techniques.

Doing these fast enough is the challenge, and denoisers and other filtering techniques (just as done today with rasterized-buffer-based algorithms) will see a lot of use in the coming months and years. I’m going to go out on a limb here, but I’m guessing GPUs will also get faster. Now if we can just get people to stop upping the resolution of screens and stop adding more content to scenes, it’ll all work out.

Even within the Remedy talk, we see ray tracing blending with other techniques more appropriate for diffuse global illumination effects. Ambient occlusion is of course a hack, but a lovely one, and ray tracing can stand in for screen-space methods and so avoid some artifacts. I think getting away from screen-space techniques is potentially a big win, as game artists and engineers won’t have to hack models or lighting to work around major artifacts seen in some situations, so saving time and money.

I’m also interested to see if this functionality gets used in other applications, as there are plenty of areas – all sorts of audio design applications, various other types of engineering analyses – that could benefit from faster turnaround on computations.

Enjoy exploring! I look forward to what we all find.

Some of the eye-candy videos:


Andrew Glassner has written another book, Deep Learning: From Basics to Practice. It’s two volumes; find it on Amazon here and here. It is meant as a full introduction to the topic: 1650 pages of text, with an additional 90-page glossary at the end. It uses about 1000 figures to build up mental models of how the various algorithms and processes work, and explains how to use the popular Keras neural net API with Python. There’s a free sample chapter, on backpropagation, at his site. I’ve read about a quarter of the book and look forward to getting to “the meat” – Glassner lays the groundwork with chapters on probability, test data and analysis, information theory, and other relevant topics before plunging into deep learning itself. He aims to be accessible to math-averse readers, but does not dumb down the material. While the writing style is informal and approachable, it sometimes takes a bit of work to absorb, which is as it should be.

Full disclosure: I’m friends with Andrew and helped review a portion of the book. I’ve received no pay, and bought the books for my own education, as they look to be useful. I’m impressed by his dedication in writing such a tome, 20 months of labor, working through a large number of academic papers (each chapter ends with a set of references, along with URLs). From past works, I feel confident that what I’m going to read is factually correct and written in a clear fashion.

If you already know about the topic and are lecturing on the subject, he’s made all the figures free to download and use under Fair Use, along with his Python/Jupyter notebooks for all examples. Here’s a figure from the style transfer section of Chapter 28.

Style Transfer

My only regret is that there’s no back cover (e-books don’t need them) for relevant quotes from famous people. I even suggested a few:

  • “With artificial intelligence we are summoning the demon.” – Elon Musk (source)
  • “I think the development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking (source)
  • “Artificial intelligence is the future, not only for Russia but for all of mankind… Whoever becomes the leader in this sphere will become the ruler of the world.” – Vladimir Putin (source)

Wouldn’t you want to read a book explaining the methods that will bring about the downfall of our civilization? Of course, they mean general intelligence, not the specialized tasks deep learning is aimed at. Books such as Incognito show how little we know of our own internal workings, how consciousness is just a small part of what the brain’s about. It’s hard to imagine we’re going to suddenly crack the problem of creating general intelligence any time soon, let alone create a runaway paperclip maximizer.

This existential threat feels way overblown, something that makes for great movies, sort of like how elevators go into free fall in Hollywood but never in real life (the problem was essentially solved a century ago). I saw Steven Pinker give a talk last night (his new book seems cheery; nice review here), and he noted that nuclear war and climate-change catastrophes are much more real and important than fictitious runaway AIs. (Fun fact: Pinker was once an assembly language programmer.) His opinion piece is a great read, pointing out the dangers of apocalyptic thought. But I digress…

So, whether you’re waiting for the end of the world or for the Singularity (or both), Glassner’s book looks to be a good one to read in the meantime to get a grounding in this old-yet-new field and learn how to use deep learning systems available (for free!). Oh, and the two volumes are ridiculously cheap, and I find I can even read them on my cell phone.

The article collection GPU Zen was a ridiculously good deal at $10 for the electronic version of the book. A call for participation for GPU Zen 2 is now out. First important date: March 30th for submitting proposals (i.e., not the first draft, which is due August 3rd).

Just because I wanted a title with a series of three-letter bits, I wrote out the Two. I recently read some little tidbit about an old book passage with the longest-known (at least, to the finder) string of three-letter words in a row, which someone discovered by analyzing a huge pile of Project Gutenberg texts or similar. Can’t find the article now – thought it was at the Futility Closet site, but maybe not. Which is my roundabout way of saying that site is sometimes entertaining; it has a historical-oddities-and-mathematical-recreations bent to it.

To continue to ramble, in memory of the first anniversary of his death (and LAA), I’ll end with this quote from the wonderful Raymond Smullyan: “I understand that a computer has been invented that is so remarkably intelligent that if you put it into communication with either a computer or a human, it can’t tell the difference!”
