Monthly Archives: January 2024

Two classic radiosity books now free

John Wallace, with Michael Cohen’s help, just finished the hard work of freeing a book they coauthored on radiosity, putting it under the least-restrictive Creative Commons license. They wrote the first computer graphics book on the subject, Radiosity and Realistic Image Synthesis, in 1993. It is now free for download. Skimming through the plates at the beginning is a walk down memory lane for me.

Ian Ashdown wrote the book Radiosity: A Programmer’s Perspective a year later, in 1994. Ian passed away last June. His book is available free for download on ResearchGate.

Michael Herf mentioned Ian’s book to me in Nov. 2022, asking if I knew of anything better on the subject of lighting units at this point. We included Ian in the conversation about resources (I had just finished my own summary – I wish I had recalled Ian had one before wading into this topic!).

Ian wrote: “Being self-taught in photometry and radiometry beginning in 1980, I struggled mightily at first with such concepts as illuminance and luminance with only the IES Lighting Handbook to guide me. I remembered this when writing my book, and focused on explaining the topics intuitively with a minimal amount of mathematics.”
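For the curious, here is the standard relationship between the two terms Ian mentions (textbook definitions, mine, not excerpted from either book): illuminance is the cosine-weighted hemispherical integral of incoming luminance, and luminance itself is spectral radiance weighted by the eye’s photopic response V(λ):

```latex
% Illuminance E (lux = lm/m^2) from incoming luminance L over the hemisphere:
E = \int_{\Omega} L(\omega)\,\cos\theta \, d\omega

% Luminance (cd/m^2 = lm/(sr \cdot m^2)) from spectral radiance L_{e,\lambda},
% weighted by the CIE photopic luminosity function V(\lambda):
L = 683\,\tfrac{\mathrm{lm}}{\mathrm{W}} \int_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} L_{e,\lambda}(\lambda)\, V(\lambda)\, d\lambda
```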

Consider reading each book’s early chapters if you want a solid introduction to the terminology and equations of lighting. And, if you’d like to free up a book you have authored, please make the effort! In case it’s a help, I recorded some notes about the process Andrew Glassner went through when freeing up An Introduction to Ray Tracing in 2019.

Seven Things for January 1, 2024

Time to look both forward and back!

  1. It’s Public Domain Day, when various old works become legal to share and draw upon for new creative endeavors. The original Mickey Mouse, Lady Chatterley’s Lover, Escher’s Tower of Babel, and much else are now free, at least in the US. (Sadly, Canada’s gone the other direction, along with New Zealand and Japan.) Reuse has already begun.
  2. Speaking of copying, “3D prints” of paintings, where a robot uses brushes to reproduce a work, are now a commercial venture.
  3. Speaking of free works, the authors have happily put the new, 4th edition of Physically Based Rendering, published in March 2023, up for free on the web. Our list of all the free graphics books we know of is here.
  4. Speaking of books, Jendrik Illner started a page describing books and resources for game engine development. His name should be familiar; he’s the person who compiles the wonderful Graphics Programming weekly posts. I admit to hearing about the PBR 4th edition being up for free from his latest issue, #320 (well, it’s been free since November 1st, but I forgot to mark my calendar). This issue is not openly online as of today, being sent first to Patreon subscribers. Totally worth a dollar a month for me (actually, I pay $5, because he deserves it).
  5. ChatGPT was, of course, hot in 2023, but isn’t quite ready to replace graphics programmers. Pretty funny, and now I want someone to add a control called Photon Confabulation to Arnold (or to every renderer). Make it so, please.
  6. The other good news is that our future AI overlords can be defeated by somersaults, hiding in cardboard boxes, or dressing up as a fir tree.
  7. What was the new graphics thing of 2023? NeRFs are so 2020. This year the cool kids started using 3D Gaussian splatting to represent and render models. Lots and lots of papers and open source implementations came out (and will come out) after the initial paper presentation at SIGGRAPH 2023. Aras has a good primer on the basic ideas of this stuff, at least on the rendering end. If you just want to look at the pretty, this (not open source) viewer page is nicely done. Me, I like both NeRFs and gsplats – non-polygonal representation is fun stuff. I think part of the appeal of Gaussian splatting is that it’s mostly old school. Using spherical harmonics to store direction-dependent colors is an old idea (see the sketch after this list). Splatting is a relatively old rendering technique that can work well with rasterization (no ray casting needed). Forming a set of splats does not invoke neural anything – there’s no AI magic to decode (though, as Aras notes, they form the set of splats “using gradient descent and ‘differentiable rendering’ and all the other things that are way over my head”). I do like that someone created a conspiracy post – that’s how you know you’ve made it.
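To make the spherical harmonics point concrete, here is a minimal sketch (mine, not taken from the 3D Gaussian splatting code) of evaluating a splat’s view-dependent color from SH coefficients for bands 0 and 1; the sign convention and the 0.5 offset are assumptions borrowed from common practice rather than anything specified above.

```python
import numpy as np

# Real spherical harmonic constants for bands l = 0 and l = 1
# (a common graphics convention; higher bands work the same way).
SH_C0 = 0.28209479177387814   # 1 / (2 * sqrt(pi))
SH_C1 = 0.4886025119029199    # sqrt(3) / (2 * sqrt(pi))

def sh_to_rgb(sh_coeffs, view_dir):
    """Evaluate a per-splat, view-dependent RGB color from SH coefficients.

    sh_coeffs: (4, 3) array -- one RGB coefficient per basis function
               (1 for band 0, 3 for band 1).
    view_dir:  (3,) unit vector from the camera toward the splat.
    """
    x, y, z = view_dir
    color = SH_C0 * sh_coeffs[0]                 # band 0: constant term
    color += SH_C1 * (-y * sh_coeffs[1]          # band 1: varies linearly
                      + z * sh_coeffs[2]         #         with view direction
                      - x * sh_coeffs[3])
    # Many implementations add 0.5 and clamp so that all-zero coefficients
    # map to mid gray; treat this as an assumption of this sketch.
    return np.clip(color + 0.5, 0.0, 1.0)

# Example: a splat that is reddish overall and brighter when viewed along +z.
coeffs = np.zeros((4, 3))
coeffs[0] = [1.0, 0.2, 0.2]    # band-0 (average) color
coeffs[2] = [0.3, 0.0, 0.0]    # band-1 z term
print(sh_to_rgb(coeffs, np.array([0.0, 0.0, 1.0])))
```

Higher bands add finer directional detail in exactly the same way, and none of this evaluation needs a neural network – it is just a weighted sum per splat.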