Monthly Archives: November 2022

Early Paper “Ray Tracing News” Editions Unearthed!

I asked Andrew Glassner if he had the original paper issues of the Ray Tracing News available. He replied, “About an hour ago I entered the vault, filled with nitrogen to prevent decay, put on the fiber-free white gloves, and was allowed to view the original manuscripts.” In other words, he found them in some box (I suspect I have them in some different box, too, somewhere…). He kindly scanned all four and they’re now available as PDFs, hosted here.

Andrew started this informal journal for us ray tracing researchers immediately after SIGGRAPH 1987, where he had organized the first “ray-tracing roundtable.” It was no mean feat to gather us together; check the email list at the end of the first issue. Tip: I’m no longer at hpfcla!hpfcrs!eye!erich@hplabs.HP.COM. Delivery was like the Pony Express back then.

Having these (virtually) in hand makes the Ray Tracing News collection online complete, sort of like catching all the Pokémon. It’s nice to have these publications available on the web now. I enjoy seeing ideas such as Jim Arvo’s simulated annealing and Paul Heckbert’s minimal ray tracer where they first appeared.

And, the issues have filler cartoons, made by Andrew – these follow. Hey, I enjoyed them. Ray tracing is not a rich vein of comedy gold; there isn’t exactly an abundance of comics on the subject (I know of this, this, and this one, at most – xkcd and SMBC, step up your game. Well, SMBC at least had this, and xkcd this).

We finally, a mere 30-odd years later, have tracing tablets (if you view some Shadertoys on an iPad).

When Edges and Vertices Were Discovered

I just finished the book Euler’s Gem. Chapter 7 starts off with this astounding statement:

On November 14, 1750, the newspaper headlines should have read “Mathematician discovers edge of polyhedron!”
On that day Euler wrote from Berlin to his friend Christian Goldbach in St. Petersburg. In a phrase seemingly devoid of interesting mathematics, Euler described “the junctures where two faces come together along their sides, which, for lack of an accepted term, I call ‘edges.'”

The book uses Euler’s polyhedron formula, V – E + F = 2, as its focus. I agree with the author that this thing should be taught in grade schools, it’s so simple and beautiful and visual. I also agree that it’s amazing that the ancient Greeks, or anyone else before Euler, didn’t figure this out (well, maybe Descartes did – read the book, p. 84, or see here).
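If you want to see the formula in action, here’s a quick check (my own, not from the book) over the five Platonic solids, using their standard vertex/edge/face counts:

```python
# Euler's polyhedron formula: V - E + F = 2.
# Counts below are the standard ones for the five Platonic solids.
solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
for name, (v, e, f) in solids.items():
    print(f"{name}: {v} - {e} + {f} = {v - e + f}")
```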

He continues some pages later:

Amazingly, until he gave them a name, no one had explicitly referred to the edges of a polyhedron. Euler, writing in Latin, used the word acies to mean edge. In “everyday Latin” acies is used for the sharp edge of a weapon, a beam of light, or an army lined up for battle. Giving a name to this obvious feature may seem to be a trivial point, but it is not. It was a crucial recognition that the 1-dimensional edge of a polyhedron is an essential concept.

Even though Euler came up with the formula (he was not able to prove it – that came later), the next mind-blowing thing was reading that he didn’t call vertices vertices, but rather:

Euler referred to a vertex of a polyhedron as an angulus solidus, or solid angle.

In 1794 – 44 years after edges – the mathematician Legendre renamed them:

We often use the word angle, in common discourse, to designate the point situated at its vertex; this expression is faulty. It would be more clear and more exact to denote by a particular name, as that of vertices, the points situated at the vertices of the angles of a polygon, or of a polyhedron.

Me, I found this passage a little confusing and circular, “the points at the vertices of the angles of a polygon.” Sounds like “vertices” existed as a term before then? Anyway, the word wasn’t applied as a name for these points until then. If someone has access to an Oxford English Dictionary, speak up!

Addendum: Erik Demaine kindly sent on the OED’s “vertex” entry. It appears “vertex” (Latin for “whirl,” related to “vortex”) was first used for geometry back in 1570 by J. Dee in H. Billingsley’s translation of Euclid’s Elements Geom. “From the vertex, to the Circumference of the base of the Cone.” From this and the other three entries through 1672, “vertex” seems to get used as meaning the tip of a pyramid. (This is further backed up by this entry in Etymonline). In 1715 the term is then used in “Two half Parabolas’s [sic] whose Vertex’s are C c.” Not sure what that means – parabolas have vertices? Maybe he means the foci? (Update: David Richeson, author of Euler’s Gem, and Ari Blenkhorn both wrote and noted the “vertex of a parabola” is the point where the parabola intersects its axis of symmetry. David also was a good sport about my comments later in this post, noting his mother didn’t finish it. Ari says in class she illustrates how you get each of the conic sections from a cone by slicing up ice-cream cones, dipping the cut edges in chocolate syrup, and using them to print the shapes. Me, I learned a new term, “latus rectum” – literally, “right side.”)

It’s not until 1840 that D. Lardner says “These lines are called the sides of the angle, and the point C where the sides unite, is called its vertex.” So, I think I buy Euler’s Gem‘s explanation: Euler called the corners of a polyhedron “solid angles” and Legendre renamed them to a term already used for points in other contexts, “vertices.” OK, I think we’ve beat that to death…

So, that’s it: “edges” will be 272 years old as of next Monday (let’s have a party), and “vertices” as we know them are only 228 years old.

By the way, I thought the book Euler’s Gem was pretty good. Lots of math history and some nice proofs along the way. The proofs sometimes (for me) need a bit of pencil and paper to fully understand, which I appreciate – they’re not utterly dumbed down. However, I found I lost my mojo around chapter 17 of 23. The author tries to quickly bring the reader up to the present day about modern topology. More and more terms and concepts are introduced and quickly become word salad for me. But I hope to go back to these last chapters someday, with notebook and pencil in hand – they look rewarding. Or if there’s another topology book that’s readable by non-mathematicians, let me know. I’ve already read The Shape of Space, though the first edition, decades ago, so maybe I should (re-)read the newest edition.

On the strength of the author’s writing I bought his new book, Tales of Impossibility, which I plan to start soon. I found out about Euler’s Gem through a book by another author, called Shape. Also pretty good, more a collection of articles that in some way relate to geometry. His earlier book, a NY Times bestseller, is also a fairly nice collection of math-related articles. I’d give them each 4 out of 5 stars – a few uneven bits, but definitely worth my while. They’re no Humble Pi, which is nothing deep but I just love; all these books have something to offer.

Oh, and while I’m here, if you did read and like Humble Pi, or even if you didn’t, my summer walking-around-town podcast of choice was A Podcast of Unnecessary Detail, where Matt Parker is a third of the team. Silly stuff, maybe educational. I hope they make more soon.

Bonus test: if you feel like you’re on top of Euler’s polyhedral formula, go check out question one (from a 2003 lecture, “Subtle Tools”), and you might enjoy the rest of the test, too.

Oh, and the 272nd birthday of the term “edge” was celebrated with this virtual cake.

Low Discrepancy Color Sequences, Part Deux

“Done puttering.” Ha, I’m a liar. Here’s a follow-up to the first article, one that just about no one wants. Short version: you can compute such groups of colors other ways. They all start to look a bit the same after a while. Plus, important information on what color is named “lime.”

So, I received some feedback from some readers. (Thanks, all!)

Peter-Pike Sloan gave my technique the proper name: Farthest First Traversal. Great! “Low discrepancy sequences” didn’t really feel right, as I associate that technique more with quasirandom sampling. He writes: “I think it is generally called farthest point sampling, it is common for clustering, but best with small K (or sub-sampling in some fashion).”

Alan Wolfe said, “You are nearly doing Mitchell’s best candidate for blue noise points :). For MBC, instead of looking through all triplets, you generate N*k of them randomly & keep the one with the best score. N is the number of points you have already. k is a constant (I use 1 for bn points).” – He nailed it, that’s in fact the inspiration for the method I used. But I of course just look through all the triplets, since the time to test them all is reasonable and I just need to do so once. Or more than once; read on.
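For the record, here’s a toy sketch of the farthest-first idea (my own, not the actual perl program): at each step, keep the candidate whose nearest already-chosen color is farthest away. The grid here is deliberately coarse so it runs instantly; the real search walks a much finer grid.

```python
import itertools

def farthest_first_colors(n, step=64):
    """Greedy farthest-first traversal over a coarse RGB grid.
    step=64 keeps the candidate set tiny, purely for illustration."""
    levels = list(range(0, 256, step)) + [255]
    candidates = list(itertools.product(levels, repeat=3))
    chosen = [(255, 255, 255)]          # start with white
    while len(chosen) < n:
        def score(c):                   # squared distance to nearest chosen color
            return min((c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2 + (c[2] - p[2]) ** 2
                       for p in chosen)
        chosen.append(max(candidates, key=score))
    return chosen
```

With white as the seed, the second pick is of course black, the opposite corner of the cube; after that, the cube’s other corners start getting picked.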

Matt Pharr says he uses a low discrepancy 3D Halton sequence of points in a cube:

Matt’s pattern

I should have thought of trying those, it makes sense! My naive algorithm’s a bit different and doesn’t have the nice feature that adjacent colors are noticeably different, if that’s important. If I’d had this sequence in hand, I would never have delved. But then I would never have learned about the supposed popularity of lime.
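The Halton idea is easy to sketch (my own toy version, not Matt’s code): compute the radical inverse of the sample index in bases 2, 3, and 5, and treat the resulting 3D point as an RGB triplet.

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse: mirror the base-b digits of i
    about the radix point, giving a well-distributed value in [0,1)."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def halton_color(i):
    """Map the i-th 3D Halton point (bases 2, 3, 5) to an RGB triplet."""
    return tuple(int(255 * radical_inverse(i, b) + 0.5) for b in (2, 3, 5))

palette = [halton_color(i) for i in range(1, 9)]
```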

Bart Wronski points out that you could use low-discrepancy normalized spherical surface coordinates:

Bart’s pattern

Since they’re on a sphere, you get only those colors at a “constant” distance from the center of the color cube. These, similarly, have the nice “neighbors differ” feature. He used this sequence, noting there’s an improved R2 sequence (this page is worth a visit, for the animations alone!), which he suspects won’t make much difference.

Veedrac wrote: “Here’s a quicker version if you don’t want to wait all day.” He implemented the whole shebang in the last 24 hours! It’s in Python using numpy, includes skipping dark colors and grays, plus a control to adjust for blues looking dark. So, if you want to experiment with Python code, go get his. It takes 129 seconds to generate a sequence of 256 colors. Maybe there’s something to this Python stuff after all. I also like that he does a clever output trick: he writes swatch colors to SVG, instead of laboriously filling in an image, like my program does. Here’s his pattern, starting with gray (the only gray), with these constraints:

Veedrac’s pattern, RGB metric, no darks, adjust for dark blues, no grays (except the first)

Towaki Takikawa also made a compact Python/numpy version of my original color-cube tester, one that also properly converts from sRGB instead of my old-school gamma==2.2. It runs on my machine in 19 seconds, vs. my original running overnight. The results are about the same as mine, just differing towards the end of the sequence. This cheers me up – I don’t have to feel too guilty about my quick gamma hack. I’ve put his code here for download.

Andrew Helmer wrote: “I had a very similar idea using Faure (0,3)-sequences rather than maximizing neighbor distances! This has really nice progressive ‘stratification’ properties.”

Andrew Helmer’s Faure (0,3) pattern, generated in RGB (I assume he means sRGB)

John Kaniarz wrote: “When I was reading your post on color sequences it reminded me of an on the fly solution I read years ago. I hunted it down only to discover that it only solved the problem in one dimension and the post has been updated to recommend a technique similar to yours. However, it’s still a neat trick you may be interested in. The algorithm is nextHue = (hue + 1/phi) % 1.0; (for hue in the range 0 to 1). It never repeats the same color twice and slowly fills in the space fairly evenly. Perhaps if instead of hue it looped over a 3-D space filling curve (Morton perhaps?), it could generate increasingly large palettes. Aras has a good post on gradients that use the Oklab perceptual color space that may also be useful to your original solution.”
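John’s one-liner is easy to try. Here’s a sketch (my own, with saturation and value pinned at 1.0 just so the hues are visible): stepping the hue by 1/phi each time never lands on the same hue twice and fills the hue circle fairly evenly.

```python
import colorsys

PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def hue_sequence(n, hue=0.0):
    """Golden-ratio hue stepping: nextHue = (hue + 1/phi) % 1.0.
    Saturation and value are fixed at 1.0 here -- my own choice,
    purely for display."""
    colors = []
    for _ in range(n):
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
        colors.append((int(r * 255), int(g * 255), int(b * 255)))
        hue = (hue + 1 / PHI) % 1.0
    return colors
```

Starting at hue 0 gives pure red first, then hops roughly a third of the way around the hue circle each step.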

Looking at that StackOverflow post John notes, the second answer down has some nice tidbits in it. The link in that post to “Paint Inspired Color Compositing” is dead, but you can find that paper here, though I disagree that this paper is relevant to the question. But, there’s a cool tool that post points at: I Want Hue. It’s got a slick interface, with all sorts of things you can vary (including optimized for color blindness) and lots of output formats. However, it doesn’t give an optimized sequence, just an optimized palette for a fixed number of colors. And, to be honest, I’m not loving the palettes it produces; I’m not sure why. Which speaks to how this whole area is a fun puzzle: tastes definitely vary, so there’s no one right answer.

Josef Spjut noted this related article, which has a number of alternate manual approaches to choosing colors, discussing reasons for picking and avoiding colors and some ways to pick a quasirandom order.

Nicolas Bonneel wrote: “You can generate LDS sequences with arbitrary constraints on projection with our sampler :P” and pointed to their SIGGRAPH 2022 paper. Cool, and correct, except for the “you” part ;). I’m joking, but I don’t plan to make a third post here to complete the trilogy. If anyone wants to further experiment, comment, or read more comments, please do! Just respond to my original twitter post.

Pontus Andersson pointed out this colour-science Python library for converting to a more perceptually uniform colorspace. He notes that CAM16-UCS is one of the most recent but that the original perceptually uniform colorspace, CIELAB, though less accurate, is an easier option to implement. There are several other options in between those two as well, where increased accuracy often requires more advanced models. Once in a perceptually uniform colorspace, you can estimate the perceived distance between colors by computing the Euclidean distances between them.
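The pipeline Pontus describes can be sketched in plain Python. This is my own sketch using the standard sRGB → linear RGB → XYZ → CIELAB formulas with the D65 white point (the matrix entries are the usual four-digit values); a tested library like colour-science is the safer bet for real work.

```python
def srgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB triplet to CIELAB (D65 white point)."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # linear RGB -> XYZ, standard sRGB/D65 matrix
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):  # CIE nonlinearity, with the linear segment near black
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_distance(c1, c2):
    """Euclidean distance in CIELAB -- meaningful mainly for small differences."""
    l1, l2 = srgb_to_lab(*c1), srgb_to_lab(*c2)
    return sum((a - b) ** 2 for a, b in zip(l1, l2)) ** 0.5
```

A quick sanity check: white comes out at L = 100, black at L = 0, and (as noted below) pure green and pure blue end up farther apart in CIELAB than black is from white.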

Andrew Glassner asked the same, “why not run in a perceptual color space like Lab?” Andrew Helmer did, too, noting the Oklab colorspace. Three, maybe four people said to try a perceptual color space? I of course then had to try it out.

Tomas Akenine-Möller pointed me at this code for converting from sRGB to CIELab. It’s now yet another option in my (now updated) perl program. Here’s using 100 divisions (i.e., 0.00, 0.01, 0.02…, 1.00 – 101 levels on each color axis) of the color cube, since this doesn’t take all night to run – just an hour or two – and I truly want to be done messing with this stuff. Here’s CIELab starting with white as the first color, then gray as the first:

CIELab metric, 100 divisions tested, initial colors white and gray

Get the data files here. Notice the second color in both is blue, not black. If you’re paying attention, you’ll now exclaim, “What?!” Yes, blue (0,0,255) is farther away from white (255,255,255) than black (0,0,52) is from white, according to CIELab metrics. And, if you read that last sentence carefully, you’ll note that I listed the black as (0,0,52), not (0,0,0). That’s what the CIELab metric said is farthest from the colors that precede it, vs. full black (0,0,0).

I thought I had screwed up their CIELab conversion code, but I think this is how it truly is. I asked, Tomas replied, “Euclidean distance is ‘correct’ only for smaller distances.” He also pointed out that, in CIELab, green (0,255,0) and blue (0,0,255) are the most distant colors from one another! So, it’s all a bit suspect to use CIELab at this scale. I should also note there are other CIELab conversion code bits out there, like this site’s. It was pretty similar to the XYZ->CIELab code Tomas points at (not sure why there are differences), so, wheels within wheels? Here’s my stop; I’m getting off the tilt-a-whirl at this point.

Here are the original RGB distance “white” and “gray” sequences, for comparison (data files here):

Linear RGB metric, 100 divisions tested, initial colors white and gray

Interesting that the RGB sets look brighter overall than the CIELab results. Might be a bug, but I don’t think so. Bart Wronski’s tweet and Aras’s post, “Gradients in linear space are not better,” mentioned earlier, may apply. Must… resist… urge to simply interpolate in sRGB. Well, actually, that’s how I started out, in the original post, and convinced myself that linear should be better. There are other oddities, like how the black swatches in the CIELab are actually (0,52,0) not (0,0,0). Why? Well…

At this point I go, “any of these look fine to me, as I would like my life back now.” Honestly, it’s been educational, and CIELab seems perhaps a bit better, but after a certain number of colors I just want “these are different enough, not exactly the same.” I was pretty happy with what I posted yesterday, so am sticking with those for now.

Tomas also noted that color vision deficiency is another thing that could be factored in, and Pontus pointed to the matrices and related publication here. I truly will leave that for someone else who wants to experiment.

Mark Kilgard had an interesting idea of using the CSS Color Module Level 4 names and making a sequence using just them. That way, you could use the “official” color name when talking about it. This of course lured me into spending too much time trying this out. The program’s almost instantaneous to run, since there are only 139 different colors to choose from, vs. 16.7 million. Here’s the ordered name list computed using RGB and CIELab distances:

139 CSS named colors, using RGB vs. CIELab metrics

Ignore the lower right corner – there are 139 colors, which doesn’t divide nicely (it’s prime). Clearly there are a lot of beiges in the CSS list, and in both solutions these get shoved to the bottom of the list, though CIELab seems to shove them further down – look at the bottom two rows on the right. Code is here.

The two closest colors on the whole list are, in both cases, chartreuse (127, 255, 0) and lawngreen (124, 252, 0) – quite similar! RGB chose chartreuse last; CIELab chose lawngreen last. I guess picking one over the other depends if you prefer liqueurs or mowing.
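If you want to check just how similar, the plain RGB Euclidean distance between the two is tiny:

```python
# The two nearest CSS named colors, by plain RGB Euclidean distance.
chartreuse = (127, 255, 0)
lawngreen = (124, 252, 0)
dist = sum((a - b) ** 2 for a, b in zip(chartreuse, lawngreen)) ** 0.5
print(round(dist, 2))  # sqrt(3^2 + 3^2 + 0^2) = sqrt(18), about 4.24
```

Out of a possible maximum distance of about 441 (black to white), that’s about as close as two named colors could reasonably get.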

Looking at these color names, I noticed one new color was added going from version 3 to 4: Rebecca Purple, which has a sad origin story.

Since you made it this far, here’s some bonus trivia on color names. In the CSS names, there is a “red,” “green,” and “blue.” Red is as you might guess: (255,0,0). Blue is, too: (0,0,255). Green is, well, (0,128,0). What name is used for (0,255,0)? “Lime.”

In their defense, they say these names are pretty bad. Here’s their whole bit, with other fun facts:

My response: “Lime?! Who the heck has been using ‘lime’ for (0,255,0) for decades?” I suspect the spec writers had too much lime (and rum) in the coconut when they named these things. Follow up: Michael Chock responds, “Paul Heckbert.”