Category Archives: Book

One Last Link – RTR4 Reference Page “Done”

I finally finished the Sisyphus-like task of putting up useful links for RTR4’s references. For this brief moment I think all the links on that page work – enjoy it for the few minutes it lasts (and feel free to send me fixes, though I may blithely ignore them for a bit, as I’m sick to death of this task – no mas!). At the top of the page I note some handy tools, such as the Google Scholar search button extension, which saved me a lot of copying and pasting of titles.

Oh, also, the free, online Collision Detection chapter now has its hyperlinked bibliography up, too, along with the appendices.

I’m writing a post mostly because I found this oddity: The classic paper

Torrance, K., and E. Sparrow, “Theory for Off-Specular Reflection from Roughened Surfaces,” Journal of the Optical Society of America, vol. 57, no. 9, pp. 1105-1114, Sept. 1967

is one Google Scholar doesn’t know about in English. It turns up a version in Japanese, which was a surprise. Searching on Google as a whole, it turns out Steve Westin still has a copy squirreled away. Paper archiving is a house of cards, I tells ya.

Next task: work on our main page of resources.

One week to go… Submit!

The Ray Tracing Gems early proposals deadline is June 21, a week away (the final deadline is October 15th). Submit a one-page proposal by June 21 and there’s an extra incentive offered by NVIDIA: a Titan V graphics card for each of the top five proposals (which I finally looked up – if you don’t want it, trade it in for a nice used car). Anyway, the call for proposals for the book is here.

While some initial impetus for making such a book is the new DXR/VKRT APIs, we want the book to be broader than just this area, e.g., ray tracing methods using various hardware platforms and software, summaries of the state of the art, best practices, etc. In the spirit of Graphics Gems, GPU Gems, and the Journal of Computer Graphics Techniques, I see our book as a way to inform readers about implementation details and other elements that normally don’t make it into papers. For example, if you have a technique that was too short, or too technically involved, to publish as a journal article, now is your chance. Mathematics journals publish short results all the time – computer graphics journals, not so much.

I would also like to see summaries of various facets of the field of ray tracing. For example, I think of Larry Gritz’s article “The Importance of Being Linear” from GPU Gems 3 as a great example of this type of article. It is about gamma correction – not a new topic by any stretch – but its wonderful and thoughtful exposition reached many readers and did a great service for our field. I still point people to it to this day, especially since it is open access (a goal for Ray Tracing Gems, too).

You can submit more than one proposal – the more the better, and short proposals are fine (encouraged, in fact). That said, no “Efficient Radiosity for Daylight Simulation in Closed Environments” papers, please; that’s been done (if that paper doesn’t ring a bell, you owe it to yourself to read the classic WARNING: Beware of VIDEA! page). In return, we promise fair reviewing and not to roll the die.

Update: a proposal is just a summary, a page or less, of some idea for a paper, and can be written in any format you like: Word, PDF, plain text, etc. Proposals are not required, either by June 21 or after. They’re useful to us, though, as a way to see what’s coming, to let each prospective contributor know if a topic is a good fit, and possibly to connect like-minded writers. Also, a proposal that “wins” on June 21 does not mean the paper itself will automatically be accepted – each article submitted will be judged on its merits. The main thing is the paper itself, due October 15th. Send proposals to raytracinggems@nvidia.com – we look forward to what you all contribute!

Monument to the Anonymous Peer Reviewer

“Real-Time Rendering, 4th Edition” available in August 2018

As announced today at the Game Developers Conference by CRC Press / Taylor & Francis Group (booth 2104, South Hall – I’m told there’s a discount code to be had), we’re indeed finally putting out a new edition of Real-Time Rendering. It should be out by SIGGRAPH if all goes well. Tomas, Naty, and I have been working on this edition since August 2016. We realized that, given how much has changed in area lighting, global illumination, and volume rendering, we could use help, so we asked Angelo Pesce, Michał Iwanicki, and Sébastien Hillaire to join us, which they all kindly and eagerly did. Their contributions both considerably improved the book and got it done.

If you want me to just shut up and tell you where to pre-order, go here. You’ll note the lack of cover, and lack of the new three authors. Those’ll get fixed once there’s a more official launch, and official pricing. I suspect the price won’t go down (which is a hint, and you can cancel later if I’m wrong; which reminds me, you should also book a room now for SIGGRAPH if you have the slightest chance of going, since you can also cancel up until July 22 without penalty).

One reason for no cover is that we’re still evaluating them. At the GDC booth you’ll see this artwork used:

fish cover candidate

This is a lovely, colorful model by Elinor Quittner. You can see the interactive model here; definitely check out the Model Inspector feature on that page by pressing the “I” key (or clicking the “layers”-looking icon in the lower right) once the model’s loaded. I love this feature of Sketchfab, being able to examine the various elements of a model. All that said, we’re still examining a number of other cover possibilities. Me, I’m happy we get to show off this potential design here now.

Back to the book itself. Let’s look at page count:

  • First edition, published 1999, 482 pages
  • Second edition, published 2002, 864 pages
  • Third edition, published 2008, 1045 pages
  • Fourth edition, to be published 2018, 1269? pages (1356?, including online)

This new edition is probably the worst-kept secret around, in that anyone searching “Real-Time Rendering, 4th edition” on Amazon would have found the entry months ago, and CRC put it on their site some time before March 11. Also, doing a quick count just now, not including the editorial staff, 178 people helped us out in some way: reviewing sections or chapters, providing images, or clarifying concepts. The kind and generous support we’ve received is one of the reasons I love this field. There’s competition between companies, between research teams, and all the rest – it’s part of the landscape. But, underlying this “red in tooth and claw” veneer of competition, most everyone we asked genuinely wanted to share their knowledge and labor to help others understand how things work. I hope it’s the same in other fields, but I know it’s true for this one.

The progression of 3 years between 1st and 2nd, 6 between 2nd and 3rd, and 10 between 3rd and 4th is a reflection not so much of the length of time it takes to write each new edition (which has indeed steadily increased), but rather of how long it takes us to forget all the stress and pain involved in making a new edition. As a data point, our Google Doc of new references since the last edition is around 170 pages long, and does not include references we could easily dismiss, nor those we ran into later while reading and writing more closely. Each page has about 20 references on it (some duplicated among chapters), about 3200 in all. In the fourth edition we added “only” 1151 new references, and deleted 508 older ones, for a final total of 2059 references (this does not include references on collision detection – more on that in a minute).

We could have added all 3200 and more, but instead focused on that which sees use in applications, or is newest and presents a good overview of the state of the art in its area. The field has simply become far too large for us to cover every piece of research, and doing so would have been a disservice to most readers. On the other end of the spectrum, we have continued to avoid API-specific information and code, as there are plenty of books, repositories, and articles describing these – this website points to many of them (and will be updated in the coming months). We aim to be a guide to algorithms for practitioners.

To conclude, here’s the list of chapters:

1 Introduction
2 The Graphics Rendering Pipeline
3 The Graphics Processing Unit
4 Transforms
5 Shading Basics
6 Texturing
7 Shadows
8 Light and Color
9 Physically-Based Shading
10 Local Illumination
11 Global Illumination
12 Image-Space Effects
13 Beyond Polygons
14 Volumetric and Translucency Rendering
15 Non-Photorealistic Rendering
16 Polygonal Techniques
17 Curves and Curved Surfaces
18 Pipeline Optimization
19 Acceleration Algorithms
20 Efficient Shading
21 Virtual and Augmented Reality
22 Intersection Test Methods
23 Graphics Hardware
24 The Future

If you have a great memory, you’ll notice that the “Collision Detection” chapter from the 3rd edition is missing. We have a fully updated chapter on this subject for the 4th edition. However, the page count was such that we decided to distribute it, along with the two math-related appendices from the 3rd edition, as online chapters, free to download. (Collision detection is not strictly a part of real-time rendering, but it is an area we think is fascinating and where a fair bit of change has occurred – about 40% of the chapter is new material.) We’ll be formatting all of these resources into PDF files nearer to release.

Because I have an addiction to text manipulation and analysis programs (more on that in a future blog post), I did some measurements of how much the fourth edition differs from the third. The highly precise, but who knows how accurate, number I computed was 59.81% new material, by lines changed. Weighting further by character count, I get a value of 68.99% new. These are probably high – if you change a word in a sentence, or even just join two lines into one, the whole line is counted as new – but the takeaway is that a lot has changed in the past decade. We’ve learned a huge amount from writing the book, and by SIGGRAPH we look forward to sharing it with you all.
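For the curious, here’s a rough sketch in Python of that kind of measurement, using difflib – my guess at the flavor of comparison described above, not the actual script I used, and the file names in the usage comment are placeholders:

```python
import difflib

def percent_new(old_lines, new_lines):
    """Fraction of the new edition's text with no matching line in the old edition,
    both as a share of lines and weighted by character count."""
    matcher = difflib.SequenceMatcher(None, old_lines, new_lines, autojunk=False)
    blocks = matcher.get_matching_blocks()
    matched_lines = sum(b.size for b in blocks)
    matched_chars = sum(len(line) for b in blocks
                        for line in new_lines[b.b:b.b + b.size])
    total_chars = sum(len(line) for line in new_lines)
    by_lines = 100.0 * (1.0 - matched_lines / len(new_lines))
    by_chars = 100.0 * (1.0 - matched_chars / total_chars)
    return by_lines, by_chars

# Usage, with each edition's text dumped to a plain-text file (placeholder names):
# old = open("rtr3.txt", encoding="utf-8").read().splitlines()
# new = open("rtr4.txt", encoding="utf-8").read().splitlines()
# print("%.2f%% new by lines, %.2f%% weighted by characters" % percent_new(old, new))
```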

“Ray Tracing Gems” Book Call for Participation

Given the recent DXR announcements, Tomas Akenine-Möller and I are coediting a book called Ray Tracing Gems, to come out at GDC 2019. See the Call for Participation, which pretty much says it all. The book is in the spirit of the Graphics Gems series and journals such as JCGT. Articles certainly do not have to be about DXR itself, as the focus is techniques that can be applied to interactive ray tracing. The key date is October 15th, 2018, when submissions are due.

To self-criticize a tiny bit, the first sentence of the CFP:

Real-time ray tracing – the holy grail of graphics, considered unattainable for decades – is now possible for video games.

would probably be more factual as “Real-time ray tracing for video games – … – is now possible.” But the book is not meant to focus only on video game techniques (though video games are certainly likely to be the major user). I can see ray tracing becoming more of a standard part of all sorts of graphics programs, e.g., much faster previewing for Blender, Maya, and the rest.

As far as “considered unattainable for decades” goes, interactive ray tracing was attained long ago, just not for (non-trivial) video games or other demanding interactive applications. My first encounter with an interactive ray tracer was AT&T’s Pixel Machine back in 1987. I had put out the Standard Procedural Databases on Usenet the week before SIGGRAPH, and was amazed to see that they had grabbed them and were rendering some in just a few seconds. But the real excitement was a little postage-stamp-sized (well, maybe six stamps) rendering, where you could interactively use a mouse to control a shiny sphere’s position atop a Mandrill plane texture.

The demoscene has had real-time ray tracers since 1995, including my favorite, a 252 byte program (well, 256, but the last four bytes are a signature, “BA2E”) from 2001 called Tube by 3SC/Baze. Enemy Territory: Quake Wars was rendered using ray tracing on a 20-machine system by Daniel Pohl at Intel a decade ago. OptiX for NVIDIA GPUs has been around a long time. Shadertoy programs usually perform ray marching. Imagination Technologies developed ray tracing support for mobile some years back. There are tons more examples, but this time it feels different – DXR looks here to stay, with lots of momentum.

Ray tracing is, in my opinion, more easily adopted by computer-aided design and modeling programs, as users there are willing to put up with slower frame rates and are able to wait a few seconds every now and then for a better result. Systems such as KeyShot have for some years used only ray tracing, performing progressive rendering to update the screen on mouse up. Modelers such as Fusion 360 allow easy switching to progressive ray tracing locally, or, for finished results, rendering at higher speeds on the cloud. I think DXR will turn these few seconds into a handful of milliseconds, and near-interactive into real-time.

In a sense, this history misses the point: for interactive rendering we use whatever gives us the best quality in an allotted amount of time. We usually don’t, and probably shouldn’t, trace rays everywhere, just for the purity of it. Rasterization works rapidly because of coherence exploited by the GPU. Ray tracing via DXR is a new piece of functionality, one that looks general enough and with support enough that it has the potential to improve quality, simplify engine design, and reduce the time spent by artists in creating and revising content (often the largest expense in a video game).

Long and short, DXR is the start of an exciting new chapter in interactive rendering, and we look forward to your submissions!

Deep Learning: From Basics to Practice

Andrew Glassner wrote another book, Deep Learning: From Basics to Practice. It’s two volumes; find them on Amazon here and here. It is meant as a full introduction to the topic, 1650 pages of text (with an additional 90-page glossary at the end). It uses about 1000 figures to build up mental models of how the various algorithms and processes work, and explains how to use the popular Keras neural net API with Python. There’s a free sample chapter, on backpropagation, at his site. I’ve read about a quarter of the book and look forward to getting to “the meat” – Glassner lays the groundwork with chapters on probability, test data and analysis, information theory, and other relevant topics before plunging into deep learning itself. He aims to be accessible to math-averse readers, but does not dumb down the material. While the writing style is informal and approachable, it sometimes takes a bit of work to absorb, which is as it should be.
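To give a flavor of the Keras level the book works at, here’s a minimal sketch of my own – a toy fully-connected classifier, not an example from the book:

```python
# A tiny Keras model; my own sketch, not Glassner's code.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(784,)),              # e.g., a flattened 28x28 image
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"), # ten-class output
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=5, batch_size=32)  # train with your own data
```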

Full disclosure: I’m friends with Andrew and helped review a portion of the book. I’ve received no pay, and bought the books for my own education, as they look to be useful. I’m impressed by his dedication in writing such a tome, 20 months of labor, working through a large number of academic papers (each chapter ends with a set of references, along with URLs). From past works, I feel confident that what I’m going to read is factually correct and written in a clear fashion.

If you already know about the topic and are lecturing on the subject, he’s made all the figures free to download and use under Fair Use, along with his Python/Jupyter notebooks for all examples. Here’s a figure from the style transfer section of Chapter 28.

Style Transfer

My only regret is that there’s no back cover (e-books don’t need them) for relevant quotes from famous people. I even suggested a few:

  • “With artificial intelligence we are summoning the demon.” – Elon Musk (source)
  • “I think the development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking (source)
  • “Artificial intelligence is the future, not only for Russia but for all of mankind… Whoever becomes the leader in this sphere will become the ruler of the world.” – Vladimir Putin (source)

Wouldn’t you want to read a book explaining the methods that will bring about the downfall of our civilization? Of course, they mean general intelligence, not the specialized tasks deep learning is aimed at. Books such as Incognito show how little we know of our own internal workings, how consciousness is just a small part of what the brain’s about. It’s hard to imagine we’re going to suddenly crack the problem of creating general intelligence any time soon, let alone create a runaway paperclip maximizer.

This existential threat feels way overblown, something that makes for great movies, sort of like how elevators go into free fall in Hollywood but never in real life (the problem was essentially solved a century ago). I saw Steven Pinker give a talk last night (his new book seems cheery; nice review here), and he noted that nuclear war and climate change catastrophes are much more real and important than fictitious runaway AIs. (Fun fact: Pinker was once an assembly language programmer.) His opinion piece is a great read, pointing out the dangers of apocalyptic thought. But I digress…

So, whether you’re waiting for the end of the world or for the Singularity (or both), Glassner’s book looks to be a good one to read in the meantime, to get a grounding in this old-yet-new field and learn how to use the deep learning systems available (for free!). Oh, and the two volumes are ridiculously cheap, and I find I can even read them on my cell phone.

Seven Things for August 19, 2015

More stuff:

  • New interactive 3D graphics books at SIGGRAPH 2015: WebGL Insights, GPU Pro 6 (Kindle right now, hardcover in September). Let me know if I missed anything (see full list here, which also includes links to Google Books previews for these new books).
  • Updated book: 7th edition of the OpenGL SuperBible. I would guess that, with Vulkan coming down the pike, and Apple going with Metal and no longer developing OpenGL (it’s stuck at OpenGL 4.1, a 2010 release, as of Mavericks), this will be the final edition. Future students having to learn Vulkan or DirectX 12 – well, that won’t be much fun at all…
  • I mentioned yesterday how you can download the SIGGRAPH 2015 Proceedings for free this week. There’s more, in theory. Some of the links there have nothing as of right now. The Posters are worth a skim, especially since I didn’t see them at SIGGRAPH. I also liked the Studio PDF. It starts with a bunch of single-page talks that are fun to snack on, followed by a few random slidesets. Emerging Tech also has longer descriptions than on the ETech page (which has more pics and videos, however). If you gotta catch ’em all, there’s also a PDF for Panels.
  • There have been many news articles recently about not viewing screens at bedtime. Right, sure. Michael Herf (former CTO at Picasa) is the president of f.lux, a company whose software varies a screen’s overall spectrum during the day to ameliorate the problem. He pointed me at a useful-to-researchers bit: their fluxometer site, with spectra for many different displays, all downloadable.
  • Oh, and related, a tip from Michael: Pantone stickers that show differing colors (through metameric failure) under lights of different color temperatures, so you can ensure you’re showing work under consistent lighting conditions.
  • I was impressed by HALIDE, an MIT-licensed open source project for writing high-performance image processing code (including GPU versions) from scratch. Most impressive is their case study of local Laplacian filters (p. 28), showing great performance with considerably less code and coding time than Adobe Photoshop’s efforts. Google and others use it extensively (p. 32).
  • Path tracing is all the rage in the film industry; the Arnold renderer started it (AFAIK) and others have followed suit. Here’s an entertaining path trace of interior lighting for a Minecraft scene using the free Chunky path tracer. SPP is samples per pixel; see the little sketch after the image below for why more samples mean less noise:

Chunky progressive render
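To make the SPP idea concrete, here’s a tiny toy sketch of my own (nothing to do with Chunky’s actual code) of how a progressive renderer keeps a running per-pixel average, so the image gets less noisy as samples accumulate:

```python
import random

def shade_sample(x, y):
    # Stand-in for tracing one path through the pixel; here just a noisy constant.
    return 0.5 + random.uniform(-0.25, 0.25)

width, height, target_spp = 4, 4, 64
image = [[0.0] * width for _ in range(height)]

for spp in range(1, target_spp + 1):
    for y in range(height):
        for x in range(width):
            # Running average: nudge the old estimate toward the new sample by 1/spp.
            image[y][x] += (shade_sample(x, y) - image[y][x]) / spp

print("pixel (0,0) estimate after %d spp: %.3f" % (target_spp, image[0][0]))
```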

GPU Pro 5 is out

Really, the title says it all: the book GPU Pro 5 is shipping. Sadly, there’s no “Look Inside” for the book on Amazon; I hope they at least put the Table of Contents there. You can find a rough Table of Contents on the CRC site; rough in that you can’t see the number of pages for each article. A few articles are quite lengthy: “Physically Based Area Lights” is 34 pages long, “Hi-Z Screen-Space Cone-Traced Reflections” an incredible 44. The rest are in the 10-20 page range.

You can get a taste of the book at the GPU Pro blog, which has previews of a large number of the articles. At $70 this is not a casual purchase, but if you’re a practitioner and just one article saves you two hours, the book’s more than paid for itself.

Me, I was amused to see the following, a model from Morgan McGuire’s high-quality model repository – hey, that’s from our world! (And you thought I was done with Minecraft references here.)

VoxeliaMC

3rd Edition now on Google Books

Naty just noticed that our latest edition is up on Google Books. It’s the usual deal: about 20% of the book is excerpted. Between this and Amazon’s Look Inside, a fair bit of the book is at your fingertips.

By the way, if you are the author of an out-of-print book, please do get it 100% up on Google Books, if you can. Even if it’s dated, it captures where the field was at a particular time – at the least you’re helping future archaeologists. The first step is to get the rights back. Contact your publisher and ask. It’s not a high priority for any of them, but they usually have no reason to hold onto the rights and will freely return them, or so I’m told. After that, well, I’ve personally never done step two, but I’d hope it’s not an arduous process to get Google Books to list it. If anyone has experience in this area, please do speak up.

In other news, the Amazon Stock Market for our book had a sudden uptick. Interestingly, Barnes and Noble kicked its price up the same week. Just a coincidence, I’m sure. The May 10th uptick was no doubt due to Mother’s Day and the busy summer reading season; our book is a chick magnet when casually left out on your beach blanket.