Normals Gone Bad

Short version: check out my free, open-source tool that tests whether normals textures (those bluish things) are well-formed and fixes them when they’re not. The code is for Windows, but it’s command-line driven and should not be too hard to translate to Linux. The page explains the problem, solutions, and proper formulas. Let me know of any goofs.

I’ve been finding over the years that lots of normals textures out there are wrong. Sometimes just a little bit, sometimes quite a lot. Part of the problem is tools that don’t properly normalize the output. Another source is that these textures are blithely resized or otherwise filtered, which doesn’t entirely work when you’re storing normals – averaging unit-length vectors gives a result that’s shorter than unit length unless you renormalize afterward. My long-term goal is to check over any tools that generate these textures and make sure they work well, and if they don’t, try to get them fixed.
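To make the problem concrete, here’s a minimal sketch (not my tool itself, and assuming a plain 8-bit RGB tangent-space normal map; the file name is just a placeholder) of how you might flag texels whose decoded normals stray from unit length:

```python
# Sketch: flag texels in an 8-bit RGB normals texture whose decoded vectors
# are noticeably shorter or longer than unit length.
import numpy as np
from PIL import Image

def check_normals_texture(path, tolerance=0.02):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    # A common decode: value/255 maps to [0,1], then *2-1 maps to [-1,1].
    n = img / 255.0 * 2.0 - 1.0
    lengths = np.linalg.norm(n, axis=-1)
    bad = np.abs(lengths - 1.0) > tolerance
    print(f"{bad.sum()} of {bad.size} texels are off by more than {tolerance}")
    return bad

check_normals_texture("some_normal_map.png")  # hypothetical file name
```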

So, please help! If you generate normals textures, what tools do you use? I’ve listed the few I know near the bottom of the repo’s page. Write me at erich@acm.org, or reply on my Twitter post.

Also, if you know a good way to programmatically tell DirectX-style from OpenGL-style normals textures (yes, there’s a difference: the Y normal component is negated for DirectX), please clue me in. I’ve tried a few ideas; no winner yet. I think converting the texture back to a heightfield twice, reversing Y for one attempt, and seeing which result is more reasonable, would do it, but I haven’t seriously tried that yet.
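The conversion itself between the two conventions is easy, at least – it’s just a flip of the green channel. A minimal sketch, assuming 8-bit RGB input (the file names are made up):

```python
# Sketch: convert between OpenGL-style and DirectX-style tangent-space normal
# maps by inverting the green channel, which negates the stored Y component.
import numpy as np
from PIL import Image

def flip_green(in_path, out_path):
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.uint8).copy()
    img[..., 1] = 255 - img[..., 1]  # 255 - g encodes -Y for the usual decode
    Image.fromarray(img).save(out_path)

flip_green("directx_normals.png", "opengl_normals.png")  # hypothetical names
```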

Which one is poorly formed? I can’t (easily) tell without a tool, which is why I wrote one.

EGSR, HPG, and I3D

SIGGRAPH rejection notices went out yesterday, so time to note a few other graphics conference venues to consider.

I3D 2024: May 8-10, Cesium, Philadelphia, PA, USA. Well, you’re too late for the Call for Papers, but you should consider going.

EGSR 2024: July 3-5, Imperial College, London, UK. On July 2nd is a material appearance workshop at the same location. Call for Papers. I’m biased towards this one, since I’m a program co-chair this year.

HPG 2024: July 26-28, co-located with SIGGRAPH, Denver, CO, USA. Call for Participation.

I like these small conferences. There are usually around 100 attendees, mostly researchers or interested people from academia or industry. Everyone in the room is someone you likely have a fair bit in common with, graphics-wise, compared to the vast, wide-ranging set of people at SIGGRAPH.

Two classic radiosity books now free

John Wallace, with Michael Cohen’s help, just finished the hard work of freeing a book they coauthored on radiosity, putting it under the least-restrictive Creative Commons license. They wrote the first computer graphics book on the subject, Radiosity and Realistic Image Synthesis, in 1993. It is now free for download. Skimming through the plates at the beginning is a walk down memory lane for me.

Ian Ashdown wrote the book Radiosity: A Programmer’s Perspective a year later, in 1994. Ian passed away last June. His book is available free for download on ResearchGate.

Michael Herf mentioned Ian’s book to me in Nov. 2022, asking if I knew of anything better on the subject of lighting units at this point. We included Ian in the conversation about resources (I had just finished my own summary – I wish I had recalled Ian had one before wading into this topic!).

Ian wrote: “Being self-taught in photometry and radiometry beginning in 1980, I struggled mightily at first with such concepts as illuminance and luminance with only the IES Lighting Handbook to guide me. I remembered this when writing my book, and focused on explaining the topics intuitively with a minimal amount of mathematics.”

Consider reading each book’s early chapters if you want a solid introduction to the terminology and equations of lighting. And, if you’d like to free up a book you have authored, please make the effort! In case it’s a help, I recorded some notes about the process Andrew Glassner went through when freeing up An Introduction to Ray Tracing in 2019.

Seven Things for January 1, 2024

Time to look both forward and back!

  1. It’s Public Domain Day, when various old works become legal to share and draw upon for new creative endeavors. The original Mickey Mouse, Lady Chatterley’s Lover, Escher’s Tower of Babel, and much else are now free, at least in the US. (Sadly, Canada’s gone the other direction, along with New Zealand and Japan.) Reuse has already begun.
  2. Speaking of copying, making “3D prints” of paintings, where a robot uses brushes to reproduce a work, is now a commercial venture.
  3. Speaking of free works, happily the authors have put the new, 4th edition of Physically Based Rendering, published in March 2023, up for free on the web. Our list of all free graphics books (that we know of) is here.
  4. Speaking of books, Jendrik Illner started a page describing books and resources for game engine development. His name should be familiar; he’s the person that compiles the wonderful Graphics Programming weekly posts. I admit to hearing about the PBR 4th edition being up for free from his latest issue, #320 (well, it’s been free since November 1st, but I forgot to mark my calendar). This issue is not openly online as of today, being sent first to Patreon subscribers. Totally worth a dollar a month for me (actually, I pay $5, because he deserves it).
  5. ChatGPT was, of course, hot in 2023, but isn’t quite ready to replace graphics programmers. Pretty funny, and now I want someone to add a control called Photon Confabulation to Arnold (or to every renderer). Make it so, please.
  6. The other good news is that our future AI overlords can be defeated by somersaults, hiding in cardboard boxes, or dressing up as a fir tree.
  7. What’s the new graphics thing in 2023? NeRFs are so 2020. This year the cool kids started using 3D Gaussian splatting to represent and render models. Lots and lots of papers and open source implementations came out (and will come out) after the initial paper presentation at SIGGRAPH 2023. Aras has a good primer on the basic ideas of this stuff, at least on the rendering end. If you just want to look at the pretty, this (not open source) viewer page is nicely done. Me, I like both NeRFs and gsplats – non-polygonal representation is fun stuff. I think part of the appeal of Gaussian splatting is that it’s mostly old school. Using spherical harmonics to store direction-dependent colors is an old idea (see the short sketch after this list). Splatting is a relatively old rendering technique that can work well with rasterization (no ray casting needed). Forming a set of splats does not invoke neural anything – there’s no AI magic to decode (though, as Aras notes, they form the set of splats “using gradient descent and ‘differentiable rendering’ and all the other things that are way over my head”). I do like that someone created a conspiracy post – that’s how you know you’ve made it.
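Since I brought it up, here’s a toy sketch of that spherical-harmonics idea: store a color as a few SH coefficients and evaluate them for a view direction. Splatting implementations typically go up to degree 3; degree 1 is shown here, and the coefficient values below are made up purely for illustration.

```python
# Sketch: evaluate low-order real spherical harmonics to get a
# direction-dependent RGB color, as used for view-dependent splat colors.
import numpy as np

SH_C0 = 0.28209479177387814   # l=0 constant, 1/(2*sqrt(pi))
SH_C1 = 0.4886025119029199    # l=1 constant, sqrt(3)/(2*sqrt(pi))

def eval_sh_degree1(coeffs, direction):
    """coeffs: (4, 3) array of RGB coefficients; direction: unit 3-vector."""
    x, y, z = direction
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    return basis @ coeffs     # RGB color (may still need clamping to [0,1])

coeffs = np.array([[0.8, 0.5, 0.3],   # DC term: the base color
                   [0.1, 0.0, 0.0],   # the rest tilt the color with direction
                   [0.0, 0.1, 0.0],
                   [0.0, 0.0, 0.1]])
print(eval_sh_degree1(coeffs, np.array([0.0, 0.0, 1.0])))
```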

OBJ lives on

Yesterday I accidentally drag-and-dropped an OBJ file onto Visual Studio 2022 (I was aiming for G3D’s viewer, which I love for OBJs, despite one annoying bug – hoping it’s fixed someday). To my surprise, VS2022 opened and displayed it! It even looked fairly reasonable. Googling a bit, it seems to also support FBX and DAE (who uses DAE nowadays?). Sadly, this feature doesn’t work for glTF or USD files (yet…).

The controls are a little odd, e.g., spin the mouse wheel and the model moves a bit up and down. But control-mousewheel does what you’d expect, moving in and out. Shift-mousewheel moves right and left. Control-leftmouse pans, alt-leftmouse rotates. Lots of other UI buttons around the viewport to explore. Fun!

VS2022, you surprised me. G3D’s prettier view follows.
G3D, with customized background. However, there’s a bug where the R of RGBA is interpreted as the A.

USD and glTF Resources Page

Here’s my new page about USD and glTF resources: https://www.realtimerendering.com/usd_gltf.html – if you see anything missing or wrong, let me know. I’m no expert, still learning. Which is in part why I put this page up, to get feedback.

This past year I made two minor contributions to the USD Working Group Assets repo. Being involved in the ASWF’s activities and Slack channel has been a great help in learning about USD (and also a bit about glTF along the way). Fun thing: if you have an iPhone, click here – I like that Apple’s AR Quick Look is simply built in. Here’s one Minecrafty test model on my office floor.

Lights on, no one quite home

I gave a talk about ray tracing yesterday. For my entertainment, before the talk I asked ChatGPT what ray tracing is. Its full answer was pretty good overall. However, it stumbled at the start – I’ll let you figure it out (hint: Plato). There are some other small oddities (e.g., “follow similar paths”) and “I wouldn’t have put it exactly like that” bits. Here’s the question and its answer.

What is ray tracing?

ChatGPT:
Ray tracing is a rendering technique used in computer graphics and computer-generated imagery (CGI) to simulate the way light interacts with objects in a virtual environment. It is a method for generating highly realistic images by tracing the path of individual rays of light as they interact with the objects in a scene. Ray tracing can simulate complex optical effects, such as reflections, refractions, shadows, and global illumination, with a high degree of accuracy and realism.

Here’s a simplified explanation of how ray tracing works:

1. Ray Generation: In a ray tracing system, a virtual camera is placed within a 3D scene. Rays of light are generated from the camera’s viewpoint, with one ray typically cast for each pixel on the screen.

2. Ray Intersection: Each ray is traced into the scene, and its path is followed as it interacts with objects. When a ray encounters an object, the software calculates how it intersects with that object’s geometry.

3. Shading and Lighting: After determining the intersection point, the renderer calculates how the object at that point interacts with light. This involves considering factors like material properties (e.g., reflectivity, transparency), surface normals, and lighting conditions (e.g., direct and indirect lighting).

4. Secondary Rays: In addition to primary rays cast from the camera, secondary rays may be generated to simulate effects like reflections, refractions, and shadows. These rays follow similar paths as primary rays but may be traced in different directions based on the materials and lighting in the scene.

5. Recursive Ray Tracing: To simulate complex lighting interactions, ray tracing often involves recursion. For example, when a ray hits a reflective surface, a new ray is generated to simulate the reflection, and this new ray undergoes the same process of intersection and shading. This recursion continues until a termination condition is met.

6. Image Formation: As the rays interact with objects and lights in the scene, they carry information about the color and intensity of light at each pixel. This information is used to generate the final image that is displayed on the screen.

Ray tracing is known for its ability to produce photorealistic images, making it a popular choice in fields like computer gaming, visual effects in film and television, architectural visualization, and more. However, it is computationally intensive and can require significant processing power, so real-time ray tracing in complex scenes is still a demanding task that often requires powerful hardware, such as high-end graphics cards. Advances in hardware and software continue to make ray tracing more accessible and practical for a wider range of applications.
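For what it’s worth, those six steps map pretty directly onto the classic Whitted-style loop. Here’s a tiny, deliberately crude sketch of that structure – one hard-coded sphere, one directional light, a couple of bounces – just to ground the terminology; it’s an illustration, not a real renderer:

```python
# Toy Whitted-style ray tracer: generate a ray per pixel, intersect,
# shade with one light, recurse for reflection. Everything is hard-coded.
import numpy as np
from PIL import Image

SPHERE_C, SPHERE_R = np.array([0.0, 0.0, -3.0]), 1.0   # one sphere
LIGHT_DIR = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # one directional light

def intersect_sphere(origin, direction):
    # Solve |o + t*d - c|^2 = r^2 for the nearest positive t, or return None.
    oc = origin - SPHERE_C
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - SPHERE_R * SPHERE_R
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(origin, direction, depth=0):
    t = intersect_sphere(origin, direction)
    if t is None:
        return np.array([0.2, 0.3, 0.5])              # background color
    hit = origin + t * direction
    normal = (hit - SPHERE_C) / SPHERE_R
    # Shading: simple diffuse term from the directional light.
    color = np.array([0.8, 0.1, 0.1]) * max(np.dot(normal, LIGHT_DIR), 0.0)
    # Secondary ray: recurse for a mirror reflection, with a depth cutoff.
    if depth < 2:
        refl = direction - 2.0 * np.dot(direction, normal) * normal
        color += 0.3 * trace(hit, refl, depth + 1)
    return color

WIDTH, HEIGHT = 64, 48
image = np.zeros((HEIGHT, WIDTH, 3))
for j in range(HEIGHT):
    for i in range(WIDTH):
        # Ray generation: one ray through the center of each pixel.
        u = (i + 0.5) / WIDTH * 2.0 - 1.0
        v = 1.0 - (j + 0.5) / HEIGHT * 2.0
        d = np.array([u * WIDTH / HEIGHT, v, -1.0])
        d /= np.linalg.norm(d)
        image[j, i] = np.clip(trace(np.zeros(3), d), 0.0, 1.0)

Image.fromarray((image * 255).astype(np.uint8)).save("toy_render.png")
```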