Author Archives: Eric

Low Discrepancy Color Sequences, Part Deux

“Done puttering.” Ha, I’m a liar. Here’s a follow-up to the first article, one that just about no one wants. Short version: you can compute such groups of colors other ways. They all start to look a bit the same after a while. Plus, important information on what color is named “lime.”

So, I received some feedback from some readers. (Thanks, all!)

Peter-Pike Sloan gave my technique the proper name: Farthest First Traversal. Great! “Low discrepancy sequences” didn’t really feel right, as I associate that technique more with quasirandom sampling. He writes: “I think it is generally called farthest point sampling, it is common for clustering, but best with small K (or sub-sampling in some fashion).”

Alan Wolfe said, “You are nearly doing Mitchell’s best candidate for blue noise points :). For MBC, instead of looking through all triplets, you generate N*k of them randomly & keep the one with the best score. N is the number of points you have already. k is a constant (I use 1 for bn points).” – He nailed it, that’s in fact the inspiration for the method I used. But I of course just look through all the triplets, since the time to test them all is reasonable and I just need to do so once. Or more than once; read on.
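For the curious, here’s roughly what Mitchell’s best candidate looks like for this problem – a sketch of Alan’s description, not his code, with a plain squared RGB distance standing in as the placeholder metric:

sub add_best_candidate {
    # @$colors holds the colors chosen so far (each an [r,g,b] in 0..1); $k is Alan's constant.
    my ($colors, $k) = @_;
    my $num_candidates = (scalar(@$colors) * $k) || 1;
    my ($best, $best_score) = (undef, -1);
    for (1 .. $num_candidates) {
        my @cand = (rand(), rand(), rand());    # random candidate color
        # score the candidate by its distance to the nearest already-chosen color
        my $score = 1e30;
        for my $c (@$colors) {
            my $d = ($cand[0]-$c->[0])**2 + ($cand[1]-$c->[1])**2 + ($cand[2]-$c->[2])**2;
            $score = $d if $d < $score;
        }
        ($best, $best_score) = ([@cand], $score) if $score > $best_score;
    }
    push @$colors, $best;    # keep the candidate farthest from its nearest neighbor
}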

Matt Pharr says he uses a low discrepancy 3D Halton sequence of points in a cube:

Matt’s pattern

I should have thought of trying those, it makes sense! It’s a bit different from my naive algorithm and doesn’t have the nice feature that adjacent colors are noticeably different, if that’s important. If I had had this sequence in hand, I would never have delved. But then I would never have learned about the supposed popularity of lime.

Bart Wronski points out that you could use low-discrepancy normalized spherical surface coordinates:

Bart’s pattern

Since they’re on a sphere, you get only those colors at a “constant” distance from the center of the color cube. These, similarly, have the nice “neighbors differ” feature. He used this sequence, noting there’s an improved R2 sequence (this page is worth a visit, for the animations alone!), which he suspects won’t make much difference.

Veedrac wrote: “Here’s a quicker version if you don’t want to wait all day.” He implemented the whole shebang in the last 24 hours! It’s in python using numpy, includes skipping dark colors and grays, plus a control to adjust for blues looking dark. So, if you want to experiment with Python code, go get his. It takes 129 seconds to generate a sequence of 256 colors. Maybe there’s something to this Python stuff after all. I also like that he does a clever output trick: he writes swatch colors to SVG, instead of laborious filling in an image, like my program does. Here’s his pattern, starting with gray (the only gray), with these constraints:

Veedrac’s pattern, RGB metric, no darks, adjust for dark blues, no grays (except the first)

Towaki Takikawa also made a compact python/numpy version of my original color-cube tester, one that also properly converts from sRGB instead of my old-school gamma==2.2. It runs on my machine in 19 seconds, vs. my original running overnight. The results are about the same as mine, just differing towards the end of the sequence. This cheers me up – I don’t have to feel too guilty about my quick gamma hack. I’ve put his code here for download.

Andrew Helmer wrote: “I had a very similar idea using Faure (0,3)-sequences rather than maximizing neighbor distances! This has really nice progressive ‘stratification’ properties.”

Andrew Helmer’s Faure (0,3) pattern, generated in RGB (I assume he means sRGB)

John Kaniarz wrote: “When I was reading your post on color sequences it reminded me of an on the fly solution I read years ago. I hunted it down only to discover that it only solved the problem in one dimension and the post has been updated to recommend a technique similar to yours. However, it’s still a neat trick you may be interested in. The algorithm is nextHue = (hue + 1/phi) % 1.0; (for hue in the range 0 to 1). It never repeats the same color twice and slowly fills in the space fairly evenly. Perhaps if instead of hue it looped over a 3-D space filling curve (Morton perhaps?), it could generate increasingly large palettes. Aras has a good post on gradients that use the Oklab perceptual color space that may also be useful to your original solution.”
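That hue trick is tiny; here’s a sketch of it (1/phi is the only magic number):

my $inv_phi = (sqrt(5) - 1) / 2;    # 1/phi, about 0.618
my $hue = 0.0;
for my $i (0 .. 9) {
    printf "Hue %d: %.3f\n", $i, $hue;
    $hue += $inv_phi;
    $hue -= 1.0 if $hue >= 1.0;     # i.e., (hue + 1/phi) mod 1
}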

Looking at that StackOverflow post John notes, the second answer down has some nice tidbits in it. The link in that post to “Paint Inspired Color Compositing” is dead, but you can find that paper here, though I disagree that this paper is relevant to the question. But, there’s a cool tool that post points at: I Want Hue. It’s got a slick interface, with all sorts of things you can vary (including optimized for color blindness) and lots of output formats. However, it doesn’t give an optimized sequence, just an optimized palette for a fixed number of colors. And, to be honest, I’m not loving the palettes it produces, I’m not sure why. Which speaks to how this whole area is a fun puzzle: tastes definitely vary, so there’s no one right answer.

Josef Spjut noted this related article, which has a number of alternate manual approaches to choosing colors, discussing reasons for picking and avoiding colors and some ways to pick a quasirandom order.

Nicolas Bonneel wrote: “You can generate LDS sequences with arbitrary constraints on projection with our sampler :P” and pointed to their SIGGRAPH 2022 paper. Cool, and correct, except for the “you” part ;). I’m joking, but I don’t plan to make a third post here to complete the trilogy. If anyone wants to further experiment, comment, or read more comments, please do! Just respond to my original twitter post.

Pontus Andersson pointed out this colour-science Python library for converting to a more perceptually uniform colorspace. He notes that CAM16-UCS is one of the most recent but that the original perceptually uniform colorspace, CIELAB, though less accurate, is an easier option to implement. There are several other options in between those two as well, where increased accuracy often requires more advanced models. Once in a perceptually uniform colorspace, you can estimate the perceived distance between colors by computing the Euclidean distances between them.
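For reference, here’s a condensed sketch of that route – sRGB to linear RGB to XYZ to CIELAB (D65), then a plain Euclidean distance – just the standard formulas, not Pontus’s library:

sub srgb_to_lab {
    # input: r, g, b in 0..255; output: (L, a, b)
    my @lin = map {
        my $c = $_ / 255.0;
        $c <= 0.04045 ? $c / 12.92 : (($c + 0.055) / 1.055) ** 2.4;    # undo the sRGB transfer curve
    } @_;
    # linear RGB -> XYZ, sRGB primaries, D65 white point
    my $x = 0.4124*$lin[0] + 0.3576*$lin[1] + 0.1805*$lin[2];
    my $y = 0.2126*$lin[0] + 0.7152*$lin[1] + 0.0722*$lin[2];
    my $z = 0.0193*$lin[0] + 0.1192*$lin[1] + 0.9505*$lin[2];
    # XYZ -> CIELAB
    my @f = map {
        $_ > 216.0/24389.0 ? $_ ** (1.0/3.0) : (24389.0/27.0 * $_ + 16.0) / 116.0;
    } ($x/0.95047, $y/1.0, $z/1.08883);
    return (116.0*$f[1] - 16.0, 500.0*($f[0] - $f[1]), 200.0*($f[1] - $f[2]));
}

sub lab_distance {
    my ($l1,$a1,$b1, $l2,$a2,$b2) = @_;
    return sqrt(($l1-$l2)**2 + ($a1-$a2)**2 + ($b1-$b2)**2);
}

This is the sort of delta E 1976 distance used in the CIELab experiments below; plugging extreme colors into it is also a quick way to see the oddities Tomas mentions later.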

Andrew Glassner asked the same, “why not run in a perceptual color space like Lab?” Andrew Helmer did, too, noting the Oklab colorspace. Three, maybe four people said to try a perceptual color space? I of course then had to try it out.

Tomas Akenine-Möller pointed me at this code for converting from sRGB to CIELab. It’s now yet another option in my (now updated) perl program. Here I’m using 100 divisions (i.e., 0.00, 0.01, 0.02…, 1.00 – 101 levels on each color axis) of the color cube, since this doesn’t take all night to run – just an hour or two – and I truly want to be done messing with this stuff. Here’s CIELab starting with white as the first color, then gray as the first:

CIELab metric, 100 divisions tested, initial colors white and gray

Get the data files here. Notice the second color in both is blue, not black. If you’re paying attention, you’ll now exclaim, “What?!” Yes, blue (0,0,255) is farther away from white (255,255,255) than black (0,0,52) is from white, according to CIELab metrics. And, if you read that last sentence carefully, you’ll note that I listed the black as (0,0,52), not (0,0,0). That’s what the CIELab metric said is farthest from the colors that precede it, vs. full black (0,0,0).

I thought I had screwed up their CIELab conversion code, but I think this is how it truly is. I asked, Tomas replied, “Euclidean distance is ‘correct’ only for smaller distances.” He also pointed out that, in CIELab, green (0,255,0) and blue (0,0,255) are the most distant colors from one another! So, it’s all a bit suspect to use CIELab at this scale. I should also note there are other CIELab conversion code bits out there, like this site’s. It was pretty similar to the XYZ->CIELab code Tomas points at (not sure why there are differences), so, wheels within wheels? Here’s my stop; I’m getting off the tilt-a-whirl at this point.

Here are the original RGB distance “white” and “gray” sequences, for comparison (data files here):

Linear RGB metric, 100 divisions tested, initial colors white and gray

Interesting that the RGB sets look brighter overall than the CIELab results. Might be a bug, but I don’t think so. Bart Wronski’s tweet and Aras’s post, “Gradients in linear space are not better,” mentioned earlier, may apply. Must… resist… urge to simply interpolate in sRGB. Well, actually, that’s how I started out, in the original post, and convinced myself that linear should be better. There are other oddities, like how the black swatches in the CIELab are actually (0,52,0) not (0,0,0). Why? Well…

At this point I go, “any of these look fine to me, as I would like my life back now.” Honestly, it’s been educational, and CIELab seems perhaps a bit better, but after a certain number of colors I just want “these are different enough, not exactly the same.” I was pretty happy with what I posted yesterday, so am sticking with those for now.

Tomas also noted that color vision deficiency is another thing that could be factored in, and Pontus pointed to the matrices and related publication here. I truly will leave that for someone else who wants to experiment.

Mark Kilgard had an interesting idea of using the CSS Color Module Level 4 names and making a sequence using just them. That way, you could use the “official” color name when talking about it. This of course lured me into spending too much time trying this out. The program’s almost instantaneous to run, since there are only 139 different colors to choose from, vs. 16.7 million. Here’s the ordered name list computed using RGB and CIELab distances:

139 CSS named colors, using RGB vs. CIELab metrics

Ignore the lower right corner – there are 139 colors, which doesn’t divide nicely (it’s prime). Clearly there are a lot of beiges in the CSS list, and in both solutions these get shoved to the bottom of the list, though CIELab feels like it shoves them further down – look at the bottom two rows on the right. Code is here.

The two closest colors on the whole list are, in both cases, chartreuse (127, 255, 0) and lawngreen (124, 252, 0) – quite similar! RGB chose chartreuse last; CIELab chose lawngreen last. I guess picking one over the other depends if you prefer liqueurs or mowing.

Looking at these color names, I noticed one new color was added going from version 3 to 4: Rebecca Purple, which has a sad origin story.

Since you made it this far, here’s some bonus trivia on color names. In the CSS names, there is a “red,” “green,” and “blue.” Red is as you might guess: (255,0,0). Blue is, too: (0,0,255). Green is, well, (0,128,0). What name is used for (0,255,0)? “Lime.”

In their defense, they say these names are pretty bad. Here’s their whole bit, with other fun facts:

My response: “Lime?! Who the heck has been using ‘lime’ for (0,255,0) for decades?” I suspect the spec writers had too much lime (and rum) in the coconut when they named these things. Follow up: Michael Chock responds, “Paul Heckbert.”

Low Discrepancy Color Sequences

I have been working on a project where there are a bunch of objects next to each other and I want different colors for each, so that I can tell where one ends and another starts. In the past I’ve simply hacked this sort of palette:

for (my $i=0; $i<8; $i++){
    my $r = $i % 2;
    my $g = (int($i/2)) % 2;
    my $b = (int($i/4)) % 2;
    print "Color $i is ($r, $g, $b)\n";
}

varying the red, green, and blue channels between their min and max values. (Yes, I’m using Perl; I imprinted on it before Python existed. It’s easy enough to understand.)

The 8 colors produced:

Color 0 is (0, 0, 0)
Color 1 is (1, 0, 0)
Color 2 is (0, 1, 0)
Color 3 is (1, 1, 0)
Color 4 is (0, 0, 1)
Color 5 is (1, 0, 1)
Color 6 is (0, 1, 1)
Color 7 is (1, 1, 1)

which gives:

Color cube colors, in “ascending” order

Good enough, when all I needed was up to 8 colors. But, I was finding I needed 30 or more different colors to help differentiate the set of objects. The four-color map theorem says we just need four distinct colors, but figuring out that coloring is often not easy, and doesn’t animate. Say you’re debugging particles displayed as squares. Giving each a unique color helps solve the problem of two of them blending together and looking like one.

To make more colors, I first tried something like this, cycling each channel between 0, 0.5, and 1:

my $n = 3;    # number of subdivisions along each color axis
my (@r, @g, @b);    # store the colors so they can be shuffled later
for (my $i=0; $i<$n*$n*$n; $i++){
    $r[$i] = ($i % $n)/($n-1);
    $g[$i] = ((int($i/$n)) % $n)/($n-1);
    $b[$i] = ((int($i/($n*$n))) % $n)/($n-1);
    print "Color $i is ($r[$i], $g[$i], $b[$i])\n";
}

Which looks like:

3x3x3 colors, in “ascending” order

These are OK, I guess, but you can see the blues are left out until the later colors. The colors also start out pretty dark, building up and becoming mostly light at the end of the set.

And it gets worse the more you subdivide. Say I use $n = 5. We’re then just stepping through variants where the red channel increases by 0.25 each time. Here are the first 10, to show what I mean:

Color 0 is (0, 0, 0)
Color 1 is (0.25, 0, 0)
Color 2 is (0.5, 0, 0)
Color 3 is (0.75, 0, 0)
Color 4 is (1, 0, 0)
Color 5 is (0, 0.25, 0)
Color 6 is (0.25, 0.25, 0)
Color 7 is (0.5, 0.25, 0)
Color 8 is (0.75, 0.25, 0)
Color 9 is (1, 0.25, 0)

The result for 125 colors:

5x5x5 colors, in “ascending” order

These might be OK if I was picking out a random color, and that would actually be the easiest way: just shuffle the order. After calculating a set of colors and putting them in arrays, go through each color and swap it with some other random location in the array (here the colors are now in arrays $r[], $g[], $b[]):

for (my $i=($n*$n*$n)-1; $i>=1; $i--){
    my $idx = int(rand($i+1));    # pick random index from remaining colors, [0,$i]
    my @tc = ($r[$i],$g[$i],$b[$i]);    # save color so we can swap to its location
    $r[$i] = $r[$idx]; $g[$i] = $g[$idx]; $b[$i] = $b[$idx];    # swap
    $r[$idx] = $tc[0]; $g[$idx] = $tc[1]; $b[$idx] = $tc[2]; 
}
Shuffled 5x5x5 colors

Some colors don’t look all that different, and the palette tends to be dark. This can be improved with simple gamma correction:

my $gamma = 1/2.2;
$r[$i] = (($i % $n)/($n-1))**$gamma;
$g[$i] = (((int($i/$n)) % $n)/($n-1))**$gamma;
$b[$i] = (((int($i/($n*$n))) % $n)/($n-1))**$gamma;

That’s a little better, I think:

Gamma corrected shuffled 5x5x5 colors

The bigger problem is that these are just random colors over a fixed range, 125 colors in this case. Sometimes I’m displaying 4 objects, sometimes 15, sometimes 33. With this sequence, the first four colors have two oranges that are not all that different from each other – much worse than my original 8-color palette. This was just (bad) luck, but doing another random roll of the dice isn’t the solution. Any random swizzle will almost always give colors that are close to each other in some sets of the first N colors, missing out on colors that would have been more distinctive.

I’d like them to all look as different as possible, as the number grows, and I’d like to have one table. This goal reminded me of low-discrepancy sequences, commonly used for progressively sampling a pixel for ray tracing, for example. Nice run-through of that topic here by Alan Wolfe.

The idea is simple: start with a color. To add a second color, look at every possible pixel RGB triplet and see how far it is from that first color. Whichever is the farthest is the color you use. For your third color, look at every possible pixel triplet and find which color has the largest “closest distance” to one of the first two. Lather, rinse, repeat, choosing the next color as that which maximizes the distance to its nearest neighbor.
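In Perl-ish pseudocode, the greedy loop is about this simple – a stripped-down sketch using a coarse candidate grid so it runs quickly; the full program linked at the end grinds through all 256^3 triplets and handles the gamma and color-skipping details:

my $levels = 32;                    # candidate levels per channel (the real run uses 256)
my @chosen = ([1.0, 1.0, 1.0]);     # start with white
for my $count (1 .. 99) {
    my ($best, $best_dist) = (undef, -1);
    for my $ri (0 .. $levels-1) {
    for my $gi (0 .. $levels-1) {
    for my $bi (0 .. $levels-1) {
        my @c = map { $_ / ($levels-1) } ($ri, $gi, $bi);
        # find this candidate's distance to its nearest already-chosen color
        my $nearest = 1e30;
        for my $p (@chosen) {
            my $d = ($c[0]-$p->[0])**2 + ($c[1]-$p->[1])**2 + ($c[2]-$p->[2])**2;
            $nearest = $d if $d < $nearest;
        }
        # keep the candidate whose nearest neighbor is farthest away
        ($best, $best_dist) = ([@c], $nearest) if $nearest > $best_dist;
    }}}
    push @chosen, $best;
}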

Long and short, it works pretty well! Here are 100 colors, in order:

Maximum closest distance to previous colors, starting at white

I started with white. No surprise, the farthest color from white is black. For the next color, the program happened to pick a blue, (0,128,255), which is (0,186,255) after gamma correction. At first, I thought this third color was a bug. But thinking about it, it makes sense: the midpoints of the six edges of the color cube that touch neither the white nor the black corner (these midpoints form a hexagon) are all equally far from both of those corners (the other RGB cube corners are not).

The other colors distribute themselves nicely enough after that. At a certain point, some colors start to look a bit the same, but I at least know they’re all different, as best as can be, given the constraints.

In Perl it took an overnight run on a CPU to get this sequence, as I test all 16.7 million (256^3) triplets against all the previous colors found for the largest of the closest approach distances computed. But, who cares. Computers are fast. Once I have the sequence, I’m done. Here’s the sequence in a text file, if of interest.

This is a sequence, meant to optimize all groups of the first N colors for any given N. If you know you’ll always need, say, 27 colors, the colors on a 3x3x3 subdivided color cube (in sRGB) are going to be better, because you’re globally optimizing for exactly 27 colors. Here I did not want to find some optimal set of colors for every number N from 1 to 100, but just wanted a single table I could store and reasonably use for a group of any size.

What’s surprising is that none of the other color cube corner colors – red (255,0,0), cyan (0,255,255), etc. – appear in this sequence. If you start with another color than white, you get a different sequence. Starting with a different RGB cube corner results in some rotation or flip of the color sequence above, e.g., start with black and your next color is white, then the rest are (or can be; depends on tie breaks) the same. Start with red, cyan is next, and then some swapping and flipping of the RGB values in the original sequence. But, start with “middle gray” and the next eight colors are the corners of the color cube, followed by a different sequence. Here are the first twenty:

Start at middle gray

I tried some other ideas, such as limiting the colors searched to those that aren’t too dark. If I want to, for example, display black wireframes around my objects, black and other dark colors are worth avoiding:

Avoid dark colors

This uses a rule of “if red + green + blue < 0.2, don’t use the color.” Gets rid of black, though that dark blue is still pretty low contrast, so maybe I should kick that number up. But dark greens and reds are not so bad, so maybe balance by the perceptual brightness of each channel… Endless fiddling is possible.

I also tried “avoid grays, they’re boring” by having a similar rule that if a test color’s three differences among the three RGB channel values were all less than 0.15, don’t use that color. I started with the green corner of the color cube, to avoid white. Here’s that rule:

Avoid grays (well, some grays)

Still some pretty gray-looking swatches in there – maybe increase the difference value? One downside is that these types of rules remove colors from the selection set, forcing the remaining colors to be closer to one another.
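Both rules are just quick rejection tests applied to each candidate before it’s scored; something like this, with the thresholds mentioned above (colors in 0..1):

sub too_dark {
    my ($r, $g, $b) = @_;
    return ($r + $g + $b) < 0.2;    # skip near-black candidates
}
sub too_gray {
    my ($r, $g, $b) = @_;
    # skip candidates whose three channels are all within 0.15 of each other
    return abs($r-$g) < 0.15 && abs($g-$b) < 0.15 && abs($r-$b) < 0.15;
}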

I could have made this process much faster by simply choosing from fewer colors, e.g., looking at only the color levels (0.0, 0.25, 0.5, 0.75, 1.0), which would give me 125 colors to choose from, instead of 16.7 million. But it’s fun to run the program overnight and have that warm and fuzzy feeling that I’m finding the very best colors, each most distant from the previous colors in the sequence.

I should probably consider better perceptual measures of “different colors”; there’s a lot of work in this area. And 100 colors is arbitrary – above this number, I just repeat. I could probably get away with a smaller array (useful if I was including this in a shader program), as the 100-color list has some entries that look pretty similar. Alternately, a longer table is fine for most applications; it does not take a lot of space. Computing the full 16.7 million entry table might take quite a while.

There’s lots of other things to tinker with… But, good enough – done puttering! Here’s my perl program. If you make or know of a better, perceptually based, low-discrepancy color sequence, great, I’d be happy to use it.

Addendum: Can’t get enough? See what other people say and more things I try here, in a follow-up blog post.

Some SIGGRAPH 2022 Info

I’ve started looking over the schedule for SIGGRAPH 2022, for online and in person attendance. Some things I’ve found (the course links below may be the most valuable):

  • If you’re traveling to Canada (from the US or any country), you must use ArriveCAN before you travel and must be fully vaccinated. You have to fill out ArriveCAN up to 72 hours (but not longer) before your arrival. The app needs to show you a V, I or A after you’ve filled it out. Otherwise you did something wrong and they won’t let you into the country (unless you are Canadian). SIGGRAPH notes this and more info here.
  • Beyond the full program, also take a look at the “at a glance” schedule. That said, there’s no way to dig into any event (no links) from that page. The online “for attendees” scheduler (if you’ve registered) is your best bet for in-depth schedule building.
  • Probably the most important thing on any of the public SIGGRAPH schedule pages is the “spinning person with a black background” vs. the “person sledding or sit-skiing on a blue background” icons – see image at bottom. These are in person vs. virtual. So, for example, all Birds of a Feather sessions are virtual, even those actually during the week of in-person SIGGRAPH days (which I think is a mistake, but let’s go no farther down that path…).
  • The in-person course descriptions on the SIGGRAPH site don’t include schedules, just speaker lists, at best. Here are two courses’ schedules and other information, online elsewhere:
  • There are not a lot of in-person courses. Like, six (some with a few sessions), not including the roundtables (whatever those are – they look to each be 15 minutes long). However, if you go to the full program, select courses, and scroll all the way to the bottom, then click on “Courses” under “On Demand,” you’ll suddenly see 14 more virtual courses revealed.
  • There’s no SIGGRAPH app for the iPhone, etc., this year. I was told “it will be a mobile version of the platform. And it will be available July 25 when the virtual platform launches. Once you log into the virtual platform you will get instructions on how to access the mobile version of the platform.” True. And, so far, the online “for attendees” scheduler is much better than the public site, e.g., you can actually see where Appy Hour will be located (West Building, Exhibit Hall A, near registration).
  • Worried about ventilation and COVID? This account plans to post live-tweet measures of CO2 (which correlates with COVID risk) at SIGGRAPH (and DigiPro).
  • And, yes, you should be a little worried: this informal poll for CVPR in June shows a third of people attending catching it, and this poll for CVPR shows half catching it, borne out by this other poll. It also spread at GDC 2022, despite precautions.
  • My guess is attendance is down (duh), so the odds of running into people I know are higher, though I might not recognize them with masks (so I recommend we all wear sashimono).
  • For off-the-beaten-path things to do in Vancouver, see Atlas Obscura. Gassy Jack sounds like a character from Borderlands. Also, consider this art exhibit. And, there’s a free Vancouver Murals app – it’s mural festival week, e.g., at 437 Hornby a mural is in progress.

I’ll update this post as I learn more tidbits – feel free to write me at erich@acm.org (yes, there, I did it, put my email address)

Keep your eye on these!

Physical Units for Lights

I expect most of us have a passing knowledge of physical units for lights. We have some sense of what lumens and candelas are about, we’ve maybe heard of nits with regard to our monitors, and maybe have a vague sense of what lux is about. This was, at least, my level of understanding for the past, lessee, 38 years I’ve been in the field of computer graphics. My usual attitude with lights was (and still is, most of the time), “make them brighter” if the scene is too dim. That’s all most of us normally need, to be honest.

These past months I’ve been learning a fair bit about this area, as proper specification of lights is critical if you want to, for example, move a fully modeled scene from one application to another, or are merging real-world data with synthetic. APIs and programs with “0.7” or “90% brightness” or other relative units don’t hack it, as they are not anchored to any physical meaning. So, here’s my summary of the four main light units, with others mentioned along the way. My focus is on the practical, real-world use of these units. Some of this knowledge was hard won, for me. Lux, in particular, is a term where I have been misled by many pages on the internet that attempt to define it. My thanks to Luca Fascione and Anders Langlands in particular for correcting me along the way. I may still have a bug or two in this post (though am trying hard not to), so tell me if I do and I’ll fix it: erich@acm.org.

The PBR book gives the textbook basics, starting with radiometric units and going on to photometric. Here’s their useful table:

You’ll see similar tables in many other places, including page 272 of our own book. I like theirs better: more columns. Radiometry is concerned with any electromagnetic radiation – radio, microwaves, x-rays, etc.; photometry factors in how our eyes respond to light, described by the luminous efficiency function (well, functions: there’s the photopic function, for brightly lit conditions, and the scotopic, for dim). I’m focusing on photometric units.

If you know all this, you’re done here. Well, some of the links below may inform and entertain.

Radiant energy: like it says, energy, some total amount of radiation, basically. Luminous energy: same, modified by the eye’s response. I say “forget them” as far as graphics goes – I’ve never seen energy (vs. power) get used for light in the field of computer graphics. All the units that follow are in terms of “per second,” and those are the ones you’ll see used in describing lights in the real world and computer graphics. Begone, Talbots!

Luminous flux: measured in lumens, this is what you’ll see on the box for most light bulbs you buy nowadays. It’s the power of the light. Think of it as the number of photons emitted per second, again modulated by the luminous efficiency function. Incandescent bulbs are (were) sold as “60 Watts” or similar. This rating refers to the incoming amount of power, not what the bulb itself produces. With LED bulbs being 6x or more efficient at converting power to light than incandescents, you’ll see “60W equivalent – 800 lumens” on packaging, since such LEDs actually draw around 9 Watts. This ratio of lumens to Watts is the luminous efficacy, not to be confused with luminous efficiency, noted earlier. Welcome to the first of many instances where the English language mostly runs out of words for describing light, resulting in a lot of similar-sounding terms (and I won’t even begin to discuss “exitant radiance” vs. “radiant exitance” – see PBR). That’s about all you need to know about basic light bulbs.
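To put numbers on that: 800 lumens from roughly 9 Watts is about 89 lm/W of luminous efficacy, versus 800/60, or about 13 lm/W, for the incandescent it replaces – which is where that “6x or more” comes from.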

If you’re curious about how manufacturers measure lumens in bulbs they produce, go down the integrating sphere rabbit hole. Invented around 1900, you put a bulb in one part and a detector at another. To eliminate any directionality from the light source, its photons usually do 10-25 scatters inside the sphere before being detected. This geometric series of scatters converges to a simple formula and gives a resulting lumens value. These devices can also be used to measure reflectivity of materials. Buy one now or build your own (no, don’t do either of those).

Luminous intensity: measured in candelas (abbreviated as cd). You might know a bulb gives 800 lumens, but that’s the total power. Even a bulb has a base where it screws into the socket, so no light is going that direction. What you’d more likely want to know is how many photons per second (again, modulated by the luminous efficiency function for our eyes) are going in a particular direction. It’s a more precise measure of how much light a surface is receiving, dividing up the lumens over the “actually emitting” part of the bulb.

From our table, you can see candelas are lumens divided by steradians, sr, a measure of solid angle. For a perfect point light, emitting equally in all directions, we get the luminous intensity by dividing the luminous flux by the solid angle of a sphere, which is 4*pi steradians, so 800 lumens/12.57 sr = 63.6 candelas. However, a real bulb has a base that blocks emission. For example, this bulb says it emits over 150 degrees (out of 180). Using this calculator, putting in 300 degrees (150 * 2), the effective intensity of the bulb is 68.2 cd, a bit higher than our 63.6 “isotropic emitter” estimate.
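What the calculator is doing is just dividing the flux by the solid angle of a cone; here’s a sketch that reproduces the numbers above (the flashlight figures in the next paragraph fall out of the same subroutine):

use constant PI => 4 * atan2(1, 1);

sub cone_candelas {
    # lumens spread uniformly over a cone with the given full beam angle, in degrees
    my ($lumens, $beam_angle_deg) = @_;
    my $half = ($beam_angle_deg / 2) * PI / 180;
    my $solid_angle = 2 * PI * (1 - cos($half));    # steradians
    return $lumens / $solid_angle;
}

printf "Isotropic 800 lm bulb: %.1f cd\n", 800 / (4 * PI);            # ~63.6 cd
printf "800 lm over 300 degrees: %.1f cd\n", cone_candelas(800, 300); # ~68.2 cd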

Spotlights, flashlights, laser pointers, and other directed light sources are most sensibly described using candelas, sometimes as “maximum beam candlepower” (MBCP). Imagine we have a flashlight that is described as providing 100 lumens over a 20 degree wide beam. By the calculator, this would give 1047 cd – pretty bright. Oddly, most consumer flashlights and similar are marketed by lumens, not candelas. I expect this is because we’re just getting used to lumens on our lightbulb packaging and have no idea what candelas are. But, the beam angle matters: if a 100 lumen flashlight instead has a 10 degree uniform beam, the intensity goes up to 4182 cd. Here’s, in fact, a flashlight along these lines, one that is listed as 100 lumens and 4200 cd, so I’m guessing its beam angle is indeed 10 degrees. Note you’ll also see absolute lies out there, such as this million lumen flashlight. For comparison, the brightest DIY flashlight I know of is this amusing 1.4 million lumen monster.

A more elaborate and accurate way to describe a light’s emission is to provide an IES profile (another IES collection here). A profile is a simple text file describing candelas emitted in a latitude-longitude type of mapping. Find more format information here, here, and here, for starters. Or just skip to here (thanks to BellaRender for the tipoff).

RenderMan’s free, public domain “artistic” IES profiles

A candela, by the way, is indeed related to a candle. It has a fancy physics definition nowadays, but used to be things like “one candlepower is the light produced by a pure spermaceti candle weighing one sixth of a pound and burning at a rate of 120 grains per hour.”

Luckily for all involved, we no longer need whales to figure out what a candela is. Wikimedia

Illuminance: measured in lux. Here’s where the internet is a morass of poorly written, confused, or just plain misleading information. This unit is the main reason I’m writing this piece. Illuminance depends purely on three things: the detector’s position and orientation, and its shape (but not size). I’ve seen way too many pages saying things like illuminance being “about how much light an area receives” – no, the area is irrelevant. There are other mentions relating the unit to lux and candelas, with a light (somehow) shining on (only) a square meter a meter away – technically correct, but useless for understanding.

I’m not sure if this will help or hurt: Imagine you have a flat little piece of whatever that detects visible light, sitting on some surface. It merrily records some number of photons per second, weighted by the luminous efficiency function, as usual. Divide this weighted value by the number of square meters your detector covers and you have the illuminance (in some form – you’d actually need some constant to convert to lux). You’re dividing by the detector’s size, so all that’s left is its location and which direction it’s pointing. In photography, this is an “incident light meter,” a separate device you put in your scene and point in a direction, vs. a “reflected light meter,” which your in-camera light meter is.

Wikipedia has a particularly good, detailed page on light meters, which are used to detect lux in a scene. Highlights: a hemispherical receptor shape is more useful in photography than a flat detector – your subjects are likely curved surfaces pointing in a bunch of directions, not flat and pointing in a single direction. This hemisphere shape leads to a cardioid falloff with angle (instead of a cosine, for a flat detector). There’s a “constant C” that varies for incident meters, a matter of taste in exposure value. Me, for yuks I bought a cheap illuminance meter that gives lux values, though it has a dubious receptor shape (hemisphere recessed inside a black bowl – what?). Still, point it at the sun and it gives a reasonable value. Update: there are apps for measuring lux with your phone – no idea how good these are.

While lumens and candelas are directly associated with the light and nothing else, lux is associated with the scene being lit. The incoming photons are from wherever. Various environments have different typical lux ranges; here’s a pretty reasonable typical table on Wikipedia. Note the range of illuminance is incredible: 0.3 lux for the full moon to 100,000 lux for direct sunlight. Beware, however, of the internet, as this similar table, also on Wikipedia, says full daylight is only 10,000 lux – a factor of 10 difference! In this case I think this second table is just plain wrong. (Update: I fixed this table on Wikipedia, adding “Sunlight” to it, and directly referenced this original source.)

But, also recall that the direction the incident meter is pointed matters. Straight up will give a different reading than pointing it directly at the sun, for example (unless the sun’s directly overhead, of course). For designing an interior space, you’ll see terms like “horizontal illuminance”, e.g., for a desk or other work surface and “vertical illuminance,” for what illumination a wall or similar receives.

Part of the variance I see in these tables I believe depends on what you’re measuring. For example, this table, yet another on Wikipedia, lists a full moon on a clear night as 0.25 lux (well, 25 decilux) and moonlight as 1 lux. I assume the latter includes reflections off the surroundings, including clouds, i.e., direct vs. global illumination. Like the moon, lights are sometimes rated by lux. A filmmaker, photographer, or set designer may not care so much about a light in terms of lumens or candelas, but rather cares how much light is reaching the subject. Light panels, for example, are described in terms such as “the Lume Cube 2.0 puts out 750 lux at 1m.” Lux is also handy for the sun and moon, where no other unit makes a heck of a lot of sense (or has a lot more zeros), and where the distance from the source is essentially constant and so can be ignored. Conversion for local lights is easy: divide candelas by the square of the distance and you get lux for the light (assuming the receptor is facing the light). For area lights, the five-times rule is useful.

To confuse things a bit, you can also describe an area light’s output in terms of lumens per square meter. Note that this same SI unit description – lumens per meters-squared (lm/m^2) – describes lux, but isn’t called lux when used for emission. In this case, the area is emitting a certain amount of visible light, again divided by area. When emitted, this is called luminous exitance or luminous emittance. That said, this sort of area emission is often better described by our last physical unit…

Luminance: measured in nits (well, candelas per square meter is the SI name – “nits” are not an official part of the SI system, but this unit name is commonly used; there are many obscure units for luminance that I’ve rarely seen employed). Luminance is a measure of light along a given direction. When you take a picture, you’re capturing luminance (well, after converting to grayscale). Your camera, of course, has its exposure adjusted so that we can see something reasonable, but it’s taking in luminance at each pixel and remapping this value in some way. When you let your camera use its through-the-lens reflected light meter to figure out how much to expose a shot, you’re depending on some average, weighted, or spot luminance reading it detects. As an aside, measuring luminance is not always a great way to shoot photos. This article explains – and gives practical examples – why using an incident light meter to capture the illuminance instead can be better.

“Nits” is from the Latin nitere, to shine. This unit name is often used to characterize the brightness of flat screens (though I won’t stop you if you use it for any surface, e.g., a reflector). For example, monitors, laptops, and mobile devices are typically 200-300 nits, though a 13-inch MacBook can max out at 548 nits. Televisions can be as bright as 1000 nits or more. The sun is about 1.6 billion nits (which I see quoted a lot, but I’m not sure where this number comes from). The filament (tiny area) of a clear incandescent bulb is 7 million nits (video where I saw this number forgotten – update: the book Vision says on p. 54 that it’s a million mL, which is about 3 million nits). This unit makes sense as a measure for area light sources. That said, be careful that you don’t assume angle doesn’t matter. As this page shows, at a 70-degree angle from perpendicular, “lightness” on real displays is between 50% and 75% of the maximum nit value. “Nits” defines the peak luminance when facing the emitter straight on.

Luminance from a surface is constant along a direction, no matter the distance. Think about looking at a blue screen of death (BSOD, to fans) in a darkened room – really, assume the screen is all blue. You walk closer to the monitor. While you yourself are more illuminated by it (since you’re nearer the light source), any location on the screen itself is not brighter. The blue stays constant; there’s just more of it in your field of view as you approach it.

Search “blue screen” on Wikimedia Commons and you get this, and a lot of weird hits on old-timey photos

The radiometric equivalent of “luminance” is “radiance,” a term you should likely know, as this is what a physically based renderer typically uses for computation under the hood (or spectral radiance, but I’m trying to keep away from spectra and color in this already overlong post). It’s a key quantity for us, since it is independent of distance. When we shoot a ray from the eye, our goal is to compute the luminance for that ray, for display. If we’re rendering a BSOD monitor, being closer just means its emission covers more pixels of the image we form, not that any pixel on its screen or our image is brighter. This of course works for any object viewed (ignoring atmospheric effects): you look at the walls of the Green Monster (hey, I live next to Boston), it doesn’t matter whether you’re behind home plate or in the cheap seats, it’s the same amount of green along any given direction.

[Addendum: Not formal enough? I like the explanation here, which points to where to get more detail, so I’ll quote it: “radiance is defined in terms of the photons flowing through a given patch of surface, with directions within a given cone, and then taking a limit as the surface patch and cone sizes go to zero.”]

Last unit – done! And, my plea: please try to avoid using the word “intensity” in your UI and documentation if you don’t mean luminous intensity. I’m likely fighting the tide here (I tried, once; it was too late), but “intensity” has a real-world physical meaning: luminous intensity, measured in candelas. I’d use “brightness” or perhaps “multiplier” if you are using a non-physical light and just want to adjust its effect. Think of the children!

Your homework assignment: if you were building a physically based rendering system, what units would you use for light sources? Point, spotlight, directional (at infinity) light, environment (dome or image-based lighting, aka IBL), and area lights of various sorts (subdivide into flat and other, if you prefer). Some are pretty obvious, at least to me, some are tricky, some might allow multiple representations, holding some value constant. For example, think what happens if you change the shape or size of an area light. Did it do what you expected? Anyway, a thing for a future blog post.

And we describe this light source how?

More info: A reasonable free book, Light Measurement Handbook, on the subject of light and physical units from 1998 is found here and here. Note it is in the camp of “correct but misleading” with what “lux” means, e.g., page 32 implies lux depends on sensor area – it doesn’t.

Sebastien Lagarde’s discussion of lights in Frostbite is worthwhile. It delves deeply into representations of various light source types for their game engine, including various formulae for evaluating area lights.

For a serious book on the subject, see Introduction to Radiometry and Photometry. The first 52 pages of this book are online at Google Books (click the link). It even has a chapter on ray tracing, which you can get a glimpse of here.

Update: check the tweet replies for even more physical-light-unit goodness.

One more late addition, since the PDF is now available: the PhysLight system, used by Wētā and others. See the “/docs” area for the PDF. I spent a fair bit of time verifying that the equations there work for naive users like me. Magic: you really can go from a lux reading, a camera with given exposure settings, and a standard 18% gray reference and get out just about what the pixel value is. For some reason, my results matched more closely inside than outside – my guess is the lux meter’s weird sensor shape. Maybe if I get a (self-assigned) budget higher than $40 for equipment I might get better results…

Seven Things for June 21, 2022

The Beyond demo; the dragon made of dragons at the vertices, by Jacco Bikker

Seven Things for June 2, 2022

  • Now that Aaron Hertzmann’s pointed it out, I’m noticing all the time that my perception of a scene doesn’t much match what a camera captures. Starts out lightweight; by the end he delves into some current research.
  • For good or ill, reading the ancient Wavefront OBJ model format (now 32 years old) is still a thing. Aras Pranckevičius compares the speeds of various popular OBJ readers (and an earlier one on optimizing such code). The readers vary considerably: a range of 70x in speed! He admits it’s a fair bit apples vs. oranges – each reader has different goals and two are multithreaded – but it’s worth a look if you read in such models. He also put all the test code in a repo for testing. Fun for me to see that the Minecraft Rungholt model created with my Mineways software gets used as a benchmark.
  • Large models? Try the coronavirus. The article’s just some visualizations presented in the NY Times, in the billion atoms range (sorry, not interactive, and your browser may appear to lock up – be patient).
  • Make meshes smaller and faster? I need to find some time someday to poke at meshoptimizer, an open-source project I’ve heard good things about and that has a lot of features.
  • AI-generated imagery has been rapidly evolving. If you missed DALL-E 2, here’s a pleasant, long video about it (or see their short marketing video) – worth the time. Midjourney is a related effort from another group with an emphasis on styles; you can sign up for the beta (but your GPU time is limited, so spend it wisely, perhaps building on others’ work). Video about both, with a bit more explanation. But wait, there’s more! So very much more. Among those many links, I found this midjourney artist’s style dump worth a skim.
  • I’m thinking I should always make the sixth element in these lists something goofy. This time it’s Coca-Cola Byte (two cans for just $14.77, plus $6.95 min. shipping).
  • This new illusion is fantastic – I did a screen cap of it (below) just to make sure they weren’t cheating with a GIF. That said, you might not see it – 14% of people don’t. And, for me, it’s much stronger on my PC than my phone. The authors note that the larger the better; here’s their big one.
Viewing the abyss, by Laeng, Nabil, and Kitaoka

Seven Things for June 1, 2022

Meandering Musings on the Metaverse

Well, that’s maybe not the most tantalizing title, but it’s about right (and I did mis-title it originally – see the URL).

[And, before I start: to give a virtual space or two a try next week, register for I3D 2022 – happening next week, May 3-5, 2022 – for free. Due to COVID, they continue to experiment with ways to connect attendees through various means. Virtually see you there – say “hi” if you see me.]

I had a chat with Patrick Cozzi on the podcast “Building the Open Metaverse.” Our episode came out on Tuesday. It was fun, I rambled on about various topics. But after it was over I thought of other things I had neglected to mention or clarify, plus pointers to resources, plus… So, I thought I’d add a few tidbits here, with links as possible.

The negative themes I’ve been seeing lately in some articles about “the metaverse” are along these lines:

  • The metaverse is a long time off from being fully realized,
  • It’s a lot of hype,
  • I wouldn’t want to spend time there, anyway, and
  • The metaverse is already here.

Some examples are this PC Gamer article and this one from Wired.

I agree with all of these to some large extent! Short of direct brain interfacing, having a full-featured “you feel like you’re there, you can walk around and touch things” holodeck-like experience looks way unlikely. “The metaverse” is definitely peaking on the hype cycle, even though Gartner says it’s further than 8 years out (which is as far out as they ever go). The Economist says cell phone sales are declining, so investors are looking for, hoping for, the Next Big Thing. So, yes, lots of hype and people floating technologies, 90% of which will fail. That’s nothing new, happens with all new tech.

The “who’d want to go there?” question is the more interesting one, on a few levels. Do we truly want to visit Chipotle in the metaverse? Is Powerpoint more compelling? Going to see a concert “live” via VR could be fun once or twice for the novelty and simplicity, but ultimately seems a bit of a hollow experience. If we value a live experience, say seeing a play vs. watching a film, we like to get all four dimensions aligned, close as possible in XYZ real space and time. Shift any of those, even if it’s “well, that famous person was in the room next door giving a speech and we all saw them on a giant TV screen outside” and there’s a loss of immediacy (and loss of bragging rights). Or even this, which I’ve certainly experienced.

At “holodeck-level” support, you could indeed have all sorts of experiences become available. Sail the seven seas as a pirate with others (oh, wait, there’s already Sea of Thieves), see if you can survive a zombie apocalypse (OMG where do I start?), or just chill out alone under the ocean (ABZÛ). I don’t think we want lots more realism, e.g., I truly don’t want to feel a bullet hit me or what falling off a skyscraper is really like, a la “the artwork formerly known as PainStation” (and if you do want that, go get this and don’t let me know how it works out).

Which gets to the last point, of the metaverse already being here. I admit to entirely losing myself in Valheim for hours when playing with my two (grown, each living elsewhere) sons. Making “actual” persistent changes with people you know in a virtual world, one where there’s no real save and restore system, is compelling – 10 million people agree with me (and the game is still not officially released).

Minecraft is the ultimate example of how making changes is great. It’s the best-selling game of all time. For a long time there was no actual game goal, beyond “survive the creepers” and other monsters. Even with that, it’s still not much of an adventure, more just virtual Legos. But what Legos they are! I think a major part of its success was an accident, that it was written in Java, which could be easily decompiled – Minecraft has over 100,000 mods for it.

Just being social is fine, too. I remember around 2005 wanting to quit the grind of playing the original World of Warcraft (“monster’ll be back in 15 minutes – see you then”). I had been tapering off, mostly been playing the auction house for a month at the end, making a pile of gold coins as sort of a mini-game. I ended my time there by going around newbie zones and holding informal trivia quizzes (“what’s the name of Harry Potter’s owl?”) and sending winners money. It was about the most fun I had in the game, interacting with strangers, since I didn’t have a group of people I played with. Other people liked the quizzes, too. I remember one stranger responding to the reward I sent, messaging me back about how they had been feeling down when they joined that night, but simply winning a little unexpected prize with their real-world knowledge had lifted their spirits.

All that said, I hardly think “the metaverse is here, game over.” There’s lots we can work on that helps immersion, interconnectivity, and much else. I talk about these a bit in the podcast, such as good data interchange through USD, glTF, or whatever other means evolve. Having objects developers or users could purchase and use in various shared spaces is intriguing (though for games I believe mostly unrealistic, beyond costuming – bringing a stirruped horse, let alone a spaceship, to a game about ancient Rome is going to break the balance). Buying a virtual item easily usable as content, vs. having artists and programmers spend days or weeks making a bespoke version for a single use in a game or film, seems like a huge efficiency and variety win. We’ve seen this “sell a hat” model work (and crash) in single games. This should be doable with a rich enough simulation representation.

That’s another area where I think one element of the metaverse is “here” (wherever “here” is). The idea of digital twins of the world, where you can test and train without fear of serious consequences, is being used to design factories, train robots and autonomous vehicles, and for other industrial uses. BIM, building information modeling, has been around a good long while, and covers similar ground as a digital twin – a virtual model of the building you can use for maintenance or upgrading operations after it’s built. There’s of course tons of other simulations out there – from viruses to stellar evolution – but the ones I like are when the virtual and real overlap, Pokemon Go–style or otherwise.

My sense of the metaverse is of technologies – hardware and software – that extend our senses. Do I need the fully realized 100 meter wide and ridiculously-long Street from Snow Crash? I liked that book, but that place sounds kinda dull and limited. Do I need to have all my senses overridden by the virtual? Doing so opens up a lot of questions, most involving some episode of Black Mirror.

I see extending our senses as more open and organic, where the real world and the virtual connect in diverse and fascinating ways. Ignore the obvious “almost all the world’s knowledge at our fingertips.” We meet with distant friends to play in a virtual space. We scan a QR code in a museum’s room to learn about the art on the walls. We hold up our phone to instantly translate a sign in a foreign language. Our car hits a pothole and registers the jolt (through a cell phone app or from the car itself) with the city; enough jolts from the community and a crew is sent out to fill the hole in. All of these are “obvious” now, but thirty years ago they were barely conceivable. And these, plus those we don’t yet even dream of, will become obvious and seamless in the future.

Now to take a walk outside. It’s a bit cold, but the sun’s out and I need some bananas. The world’s a convincing simulation.

(And hyperlinks are yet another lovely example of new technology quietly layering atop the old, making for a richer world. The unseen just behind the seen. Imagine a world where you can’t use links. If you want to reference something, you write “go to the library and look up this article in The New Yorker from two months ago, if your library has it available.” Welcome to 1990. I’m amazed we got anything done back then.)

[Feel like commenting? I’m interested, but comments on this blog are dicey – we’ve had too much spam. Easier is to respond in the tweet. – Eric]

Seven Things for January 22, 2022

That date has a lot of 2’s in it, so maybe I’ll double (or triple) up on links for each thing.

  • One of the best uses of WebGL: showing old videogame levels in your browser, a lovely open source project. My favorite so far is the Cancer Constellation in Katamari Damacy, which includes animation – even works on your phone.
  • I discovered this podcast, Building the Open Metaverse, through a tweet from Morgan McGuire. As chief scientist at Roblox, he has some interesting things to say.
  • Another podcast about videogame creation is Our Machinery. Episode 9 sounded interesting, interviewing the creator of Teardown, which is a pretty amazing voxel-based simulation game engine. I only just started this podcast, and the first seven minutes were mostly just chit-chat; looks like it gets more technical after that.
  • While we’re in videogame mode, I liked this album of models from Assassin’s Creed: Unity, particularly this and this (but they’re all quite nice).
  • Want a full-featured, battle-tested, physically based camera model? Give Nick Porcino’s a look. He kindly pointed it out to me due to my Twitter poll on “film back”, which has some interesting comments.
  • The New York Times has a number of Github projects, including a US COVID data set. A friend made a short movie of it (spoiler: it doesn’t end well, so far). There are tricky data visualization questions he’s grappling with (and hasn’t landed on answers yet), e.g., if a large area, low population county gets just a case or two, it lights up like it’s a major outbreak. Which reminds me: I recommend the book The Data Detective (which sounded dull to me, but was not – Tim Harford is great at telling tales). And that, in turn, reminds me of Humble Pi, which I gave to a lot of people for Christmas – a great bathroom book. The author, Matt Parker, has a ton of YouTube videos, if that’s your preference.
  • Tesla owner mines crypto-currency with his car. The auto’s computer controls some separate GPUs, plugged directly into the car’s motor. This sort of stuff looks to be the epitaph on our civilization; see the endless stream on Web3 is going just great for more, more, more.

I love the crabs, I admit it (and if you’re a crab lover, check out these robot crabs). The plastic teddy bear driving the horseshoe crab is my favorite: