Let’s get visual. Last in the series, for now.
All linked out yet? Here’s more worthwhile stuff I’ve run across since last SIGGRAPH.
- Guidance and resources galore at this site on GPU debugging for OpenGL.
- Remember Animusic? It got real.
- If the article made me teary, well… I haven’t gotten up the nerve to play the game yet.
- 3D model capture is getting absurdly good.
- It’s been a number of months since the last chapter was released, but it’s worth noting the free & interactive Immersive Linear Algebra book is now up to six chapters.
- Car models made of paper, everyone’s building them. One artist’s effort (her site here), and more corporatish efforts from Nissan and from Toyota.
- Speaking of paper, Halloween’s only a half-year away, so consider making a Wintercroft polygonal mask. The plans and materials are cheap, and I recommend printing directly onto the paper vs. making templates. Post to Twitter or elsewhere; I did.
Next in the continuing series. In this episode Jaimie finds that the world is an illusion and she’s a butterfly’s dream, while Wilson works out his plumbing problems.
- Ray tracing blasts from the future-past: Rayshade is on GitHub, and the IBM 1401 is now rendering. Oh, and at the other end of things, 2D light transport simulation.
- Speaking of ray tracing, here’s a 4K demoscene demo that’s pretty great considering its size. I started it 50 seconds in because it gets nice after that. More info and code’s here.
- The 3D printing hype is dead, thank heavens, and the field has lots going on: a phone-driven 3D printer (color me skeptical), and Mattel plans on releasing a 3D printer for kids (if it doesn’t burn or maim, is it real technology?).
- How did this lovely image fill technique not get discovered long ago? Very clever. Code here.
- $3 million in art and sound assets from Infinity Blade are free for download, for use in Unreal Engine. Find them this way.
- Pixar collaborated with Khan Academy to make a free set of lessons on computer graphics with nice production values for grade/high schoolers. A few questions were mildly buggy when I tried them, and I reported the problems, so these may have been fixed.
- Freaky – click and orbit it a bit.
- If you are involved in videogames and education, or even have just a passing interest, get Mark Deloura’s free weekly Level Up Report.
- Are those cool computer graphics researcher kids at school still mocking you? Read this pronunciation guide (related post of ours here).
- Entrim 4D: I tried this technology long ago at SIGGRAPH (after signing a “don’t sue us if we mess you up” form), it’s super-weird. Best part of the demo was when they handed you a joystick so you could control your perception of gravity.
- Physical, artistic environment map.
- TiltBrush drawing in VR is amazing to watch. This video is fun, too.
- Newell’s Utah teapot sighting over Linz, Austria.
- Effect of changing field of view and distance to subject:
I haven’t made one of these link posts for a while. This one’s recent news; the ones to come will have more fun stuff.
- Forget the dropped mic, Google’s VR announcement yesterday was exciting, almost as good as the Apple Pencil (oh, wait, that one’s real).
- This week I learned that you can embed Sketchfab models in Facebook posts (just post the share link; another example and another), and also set them to automatically spin (though not both at once, yet) and other options.
- Pete Shirley’s last book in his trilogy of “Ray Tracing Minibooks” is free until April 5th. Short & sweet & solid. More information on the series and other stuff on Pete’s blog, and here’s his how-to publish guide – go make one yourself!
- GPU Pro 7 was briefly available on Amazon but then sold out; the rest of us have to wait for the slow boat making its way to Amazon. A source code repository has been started here (it also includes the GPU Pro 6 code).
- In a similar vein, Eric Lengyel’s Game Engine Gems 3 should be out in a few weeks.
- The figures for “Real-Time Rendering, 3rd edition” are now on Flickr for easy viewing. These have been available for a long while; I tossed them up onto Flickr in good part so that I can find a figure easily, any time I want.
- Ray casting gets attention; see the original here.
The deadline for submission is April 5th. See http://s2016.siggraph.org/real-time-live-submissions
If you don’t know, “Real-Time Live!” is an event showcasing cool rendering and interactive techniques over the past year. If you’re working in this area, submit a proposal and let the rest of us enjoy seeing it.
Michael Cohen was looking at John Hable’s useful test image:
He noticed an odd thing. Looking at the image on his monitor (“an oldish Dell”) from across the room, it looked fine, the 187 area matched the side areas:
(yes, ignore the moires and all the rest – honestly, the 187 matches the side bars.)
However, sitting at his computer, the 128 square then matched the side bars:
Pretty surprising! We finally figured it out: it’s the viewing angle. Going off-axis resulted in a considerably different gray appearance. Moving your head left and right didn’t have much effect on the grays, but up and down had a dramatic effect.
Even knowing this, I can’t say I fully understand it. I get the idea that off-axis viewing can affect the brightness, but I would have thought this change would be consistent: grays would be dimmed down by the same factor as whites. That last image shows this isn’t the case: the grays may indeed be dimmed down, but the alternating white lines are clearly getting dimmed down more than twice as much, such that they then match the 128 gray level. It’s as if there’s a different gamma level when moving off-axis. Anyone know the answer?
Addendum: and if you view the first image on an iPhone, you get this sort of thing, depending on your zoom level. Here’s a typical screen shot – I’ve trimmed off the right side so that the blog will show it pixel for pixel (on desktop computers – for mobile devices you’re on your own). This darkening is from bad filtering; see the end of this article.
Follow-up: one person commented that it’s probably a TN panel. Indeed, there’s a video showing the tremendous shift that occurs. The blue turning to brown is particularly impressive. I haven’t yet found a good explanation of what mechanism causes this shift to occur (Wikipedia notes other monitors having a gamma shift, so maybe it is something about gamma varying for some reason). There’s a nice color shift test here, along with other interesting tests.
Even better: check out this amazing video from Microsoft Research. (thanks to Marcel Lancelle for pointing it out)
- Research papers should be free to anyone to access, especially since the authors do not earn royalties and want their papers to be read.
- Publishers deserve to eat. Update: by which I mean, whoever is hosting and maintaining the journal deserves some reasonable amount of money. I don’t subscribe to the “people making buggy whips should have their jobs maintained and the automobile should be outlawed” school of thought.
Last post was too long, covering too much terrain. Here’s a puzzle instead which whittles it all down.
What values do you store in an sRGB PNG to display a perceptually half-gray color, with an alpha of 0.5?
If you’re an absolute expert on PNG and perception and alpha, that’s all the information you need. Just in case, to make sure you don’t break any rules, here are the key bits:
- A perceptually half-gray color on the screen is (187,187,187), not (128,128,128). See the image below to prove this to yourself, which is from John Hable’s lovely article.
- Your PNG is saving values in sRGB space. No extremely-rare gamma = 1.0 PNG for you.
- Alpha is coverage. The PNG spec notes, “The gamma value has no effect on alpha samples, which are always a linear fraction of full opacity.”
- PNG alphas are unassociated, they do not premultiply the color. To display your sRGB PNG color composited against black, you must multiply it by your unassociated alpha value.
So, what do you store in your PNG image to get a half-gray color displayed, with an alpha of 0.5? A few hints, then the answer, follow the image below.
Horizontal fully-black and fully-white lines combine to a half-gray, represented by 187. That’s sRGB in action:
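As a quick numeric check, here’s a sketch using the standard sRGB encode formula (the helper name is mine):

```python
def linear_to_srgb(x):
    """Standard sRGB encode: linear [0,1] -> gamma-encoded [0,1]."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

# Alternating black (0.0) and white (1.0) lines average to 0.5 in linear light.
print(int(linear_to_srgb(0.5) * 255))  # -> 187, the half-gray above
```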
Hint #1: a half-gray color with an alpha of 1.0 (fully opaque) is stored in a PNG by (187,187,187,255).
Hint #2: if a PNG could store a premultiplied color, the answer would be (187,187,187,128).
Hint #3: to turn a premultiplied color into an unassociated color, divide the color by the (fractional) alpha.
And just to have something between you and the answer, here’s this, from I wish I knew where.
The answer is (255,255,255,128), provided by Mike Chock (aka friedlinguini), who commented on my post – see the comments below. My answer was definitely wrong, so I’ll explain why this answer works.
The PNG spec notes, “This computation should be performed with intensity samples (not gamma-encoded samples)”. So, to display an sRGB-encoded PNG, you must do the following:
- Convert the sRGB color to linear space. For (255,255,255,128) this gives (1.0,1.0,1.0).
- Now multiply in the alpha to get a linear premultiplied value. Multiplying by 128/255 ≈ 0.5 gives (0.5,0.5,0.5).
- Convert this value back to sRGB space and display it. This gives (187,187,187) as the color to display.
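Here’s a minimal sketch of those three steps, assuming the standard sRGB transfer functions (the function names are mine):

```python
def srgb_to_linear(s):
    """sRGB decode: gamma-encoded [0,1] -> linear [0,1]."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def linear_to_srgb(x):
    """sRGB encode: linear [0,1] -> gamma-encoded [0,1]."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def display_over_black(srgb_byte, alpha_byte):
    """Display an unassociated sRGB PNG sample over black, per the spec."""
    lin = srgb_to_linear(srgb_byte / 255)    # step 1: sRGB -> linear
    lin *= alpha_byte / 255                  # step 2: apply alpha in linear space
    return round(linear_to_srgb(lin) * 255)  # step 3: linear -> sRGB to display

print(display_over_black(255, 128))  # -> 188, i.e. ~187 (128/255 is just over 0.5)
```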
Me, I thought that PNGs with sRGB values and alphas were displayed by simply multiplying the sRGB by the stored alpha. Wrong! At least, by the spec. How could I think such a crazy thing? Because every viewer and every browser I tested showed this to be how such a PNG was displayed.
So, I’m very happy to find PNG is not broken; it’s simply that no one implements it correctly. If you do know some software that does display this image properly (your browser does not), let me know – it’ll be my example of how things should work.
Update: as usual, Jim Blinn predates my realizations by about 18 years. His article “A Ghost in a Snowstorm” (collected in the book Notation, Notation, Notation; most of this article can be found here) talks about the right way (linearization) and the errors caused by the various wrong ways of encoding alpha and sRGB. Thanks to Sean Barrett for pointing it out.
My conclusion remains the same: if you want fun puzzles and you’re near a big city, check out The Puzzled Pint, a great free social puzzle event each month.
For the record, here’s my original wrong answer:
The answer is (373,373,373,128). To display this RGBA correctly, you multiply by the alpha (and divide by 255, since the value 128 represents 0.5) to get (187,187,187).
And that’s the fatal flaw of sRGB PNGs in a nutshell: you can’t store 373 in 8 bits in a PNG. 16 bits doesn’t help: PNGs store their values as fractions in the range [0.0, 1.0].
No linearization or filtering or order of operations or any such thing involved, just a simple question. Unfortunately, PNG fails.
Wrong answers include:
- (187,187,187,128) – this would work if PNG had a premultiplied mode. It does not, so this color would be multiplied by 0.5 and displayed as (94,94,94). That said, this is a fine way to store the data if you have a closed system and no one else will ever use your PNGs.
- (187,187,187,255) – this will display correctly, but doesn’t keep the alpha around.
- (255,255,255,128) – this gives you a display value of (128,128,128) for the color, which Hable’s image shows is not a perceptual half-gray. If you used the PNG gamma chunk and set gamma to 1.0, this would work. Almost no one uses this gamma setting (it causes banding unless you use 16 bits) and few tools support it.
- (255,255,255,187) – you break the PNG spec by sRGB correcting the alpha. This will actually display correctly, (187,187,187). If you composite this image over some other image with an alpha, this wrong alpha fails.
- (255,255,255,187) again – you decide to “remember” the alpha is sRGB corrected and will uncorrect it before using it as an alpha elsewhere. If you want to break the spec, better to go with storing a premultiplied color, the first wrong answer. This fix is confusing.
- (255,255,255,128) again – you store the correct alpha, but require that you first convert the stored color from sRGB to linear before applying the alpha, then convert the color back to sRGB to display it. This will work, but it defies radiance and alpha theory, it’s convoluted, expensive, super-confusing, not how anyone implements PNG display, and not how the spec reads, as I understand it. Better to just store a premultiplied color.
I wish my conclusion was wrong, but I don’t see any solution short of adding a new chunk to the PNG spec. My preference is adding a chunk that notes the values are stored as premultiplied.
In the meantime, if you want solvable puzzles and you’re near a big city, check out The Puzzled Pint, a great free social puzzle event each month.
Zap Andersson debated this puzzle with me on Facebook, and many thanks to him. He prefers the solution (255,255,255,128), applying the alpha “later.” To clarify, here’s how PNGs are normally interpreted (and I think this follows the spec, though I’d be happy to be proven wrong, as then PNG would still work, even if no viewer or browser I know currently implements it correctly):
To display a PNG RGBA in sRGB: you multiply the RGB color by the alpha (expressed as a fraction).
The “later” solution to display a PNG RGBA in sRGB: you convert the stored sRGB number to a linear value, then apply the alpha, and then convert this linear value back to sRGB for display.
I like this, as convoluted as it is, in that it makes PNG work (I really don’t want to see PNG fail). The problem with this solution is that I don’t think anyone does it this way; browsers certainly don’t.
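To see the disagreement in actual numbers, here’s a small sketch comparing the two interpretations for (255,255,255,128) over black (sRGB conversion helpers as in the earlier sketch):

```python
def to_linear(s): return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4
def to_srgb(x): return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

color, alpha = 255 / 255, 128 / 255  # stored sRGB color and alpha, as fractions

naive = round(color * alpha * 255)                      # multiply stored values directly
later = round(to_srgb(to_linear(color) * alpha) * 255)  # linearize first, per the spec
print(naive, later)  # -> 128 188: what browsers show vs. what the spec asks for
```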
The other interesting thing Zap points out is this interesting page, which points to this even more relevant page. My takeaway is that I shouldn’t talk about 187-gray as the perceptually average gray; 128-gray really does look perceptually more acceptable (which is often how gamma correction is justified: human perception is non-linear, along with the monitor’s response – I forgot). This doesn’t actually change anything above; the “half-covered pixel” example should still get a display level of 187. This is confirmed by alternating full-black and full-white lines averaging out to 187, for example.
Kavita Bala asked, “What is the etymology of ‘tap’ in texture filtering?”
This is a term we use in graphics for taking a sample from a texture map. I didn’t know where it came from, and recall being a bit mystified as to what it even meant when I first encountered it, finally puzzling it out from the context. Searching around now, the earliest reference I could find in 3D graphics literature was in this article, so I asked Dave Luebke, who coauthored that paper.
I think it’s actually very old and references the idea of putting a probe, as in an oscilloscope, to tap a signal (like tapping a pipe, meaning to take water out of it at a particular location, or tapping a maple tree for sap to make syrup from).
Dave asked two other experts.
Lance Williams replied:
It’s traditional filter terminology. For example:
“Filter Coefficients – the set of constants, also called tap weights, used to multiply against delayed signal sample values within a digital filter structure.”
“A direct form discrete-time FIR filter of order N. The top part is an N-stage delay line with N + 1 taps.”
“For FIR filters, there is no denominator in the transfer function and the filter order is merely the number of taps used in the filter structure.”
John Montrym replied:
Follow this trail:
https://en.wikipedia.org/wiki/Finite_impulse_response – see the phrase “tapped delay line” and follow its link.
“Tap” in texture filtering uses the terminology of old-time signal processing. It wouldn’t surprise me if the notion of tapping a delay line takes you back to the 1930s or 1940s, though I don’t have a specific reference for you.
Radar was one of the early drivers for the development of signal processing theory & practice.
And your “tapping a water pipe” analogy is a pretty good one.
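To make the quoted terminology concrete, here’s a tiny FIR sketch (the names are mine): each “tap” reads one delayed sample off the line and scales it by a tap weight, just as an N-tap texture filter sums N weighted texel samples.

```python
def fir_filter(signal, tap_weights):
    """Direct-form FIR filter: each output sums N taps off a delay line."""
    out = []
    for i in range(len(signal)):
        # Tap k reads signal[i - k], a point "tapped" off the delay line.
        out.append(sum(w * (signal[i - k] if i - k >= 0 else 0.0)
                       for k, w in enumerate(tap_weights)))
    return out

# A 3-tap box filter (moving average); a "3-tap" texture filter is analogous.
print(fir_filter([0, 0, 3, 0, 0], [1/3, 1/3, 1/3]))  # -> [0.0, 0.0, 1.0, 1.0, 1.0]
```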
If you know more, pass it on.