Reports


I asked what others did for benchmarking in my last post. Here are the replies on Twitter in a semi-coherent edited form. If I missed any replies, I blame Twitter, whose interface is a magical maze.

First there were some FPS vs. SPF comments:

Richard Mitton: If you’re not measuring in milliseconds then you’re doing it wrong.

Christer Ericson: Yes, ms, not FPS. FPS is not a linear unit for the artists (or anyone).

Marc Olano: FPS isn’t linear. Usual definition of median averages middle 2 for even samples = also wrong. Use ms.

Morgan McGuire notes: FPS *is* a good measure if what you care about is interaction or visual smoothness. SPF is good for computational efficiency.

I replied to Richard & Christer: I’m interested in your reaction to the use of median vs. mean. FPS vs. SPF irrelevant for relative performance.

I also changed the original post to talk about milliseconds instead of frames, to avoid this facet of the discussion.

Christer Ericson: It’s important to catch the spikes, so in the context you’re talking about I would do max. Or mean+variance. Also, don’t think I’ve ever, for profiling reasons, looked at any average. You always look at a specific frame.

Timothy Lottes: I’m personally only interested in worst case ms/frame.

Cass Everitt: Agree with those that concentrate on worst times.

Eric Haines: Right, it depends what you’re looking for, e.g. don’t drop below 60 FPS. I’m mostly warning against using mean.

I added a note to the original post about tracking the max, which makes sense if you’re trying to guarantee a frame rate.

Tobias Berghoff, who benchmarks consoles:

I use min/max/med the most. Averages really only come into play when I need more digits. I spend a significant amount of time below the 0.5% mark when wearing my platform tuning hat. I don’t miss trying to get sensible numbers out of PC h/w. But this also comes into play when measuring very short processes. When something only takes a couple of microseconds, you often end up oscillating between states that make the distribution multi-modal. Median won’t catch small shifts.

cupe: Stacked color-coded graph of nested timings (or a subtree of it). Usually unfiltered for analysis, avg for comparisons. Hierarchy is on the left, tooltip displays e.g. “scene/fluid/poisson”, click to restrict. Horizontal lines are milliseconds, orange line is 16.6 ms.

[Image: cupe1]

E.g. click the big violet bar to see only post (and zoom in to stretch 4ms to screen):

[Image: cupe2]

Javdev: We use a profiler, Adobe Scout, select multiple frames & see which code is most expensive & iterate it to prevent frame drops.

Björn Blissing: One option is to plot a histogram over the captured data. Reveals if your max/min are outliers or more common occurrences.

Michael Marcin: Try always running circular etwtrace and when frame time dips save and examine the trace.

Mikkel Gjoel: We filter in viewer. Options for all mentioned, and vsync (as that is what we are shipping).

[Image: Gjoel]

Fabian Giesen: General order statistics (percentiles etc.) are good. Just a plot of frame durations over frame # is helpful, too! And simply recording all frame durations over a few seconds, sorting them and plotting that is quite handy, too. That gives you all the percentiles (and median etc.) and gives you a feel for the shape of the distribution, which matters. (I’m not very happy with single-value summaries; they lose too much information.)

Jaume Sanchez Elias: I like Chrome FPS meter: current, min, max; over time; frequency graph for each framerate

[Image: Elias]

Krzysztof Narkowicz: Min, max, avg and std dev. Percentiles and med would make a nice addition, but it’s a hassle to compute them.

Anton the Mighty: I always use the standard deviation or standard error and indicate what the sample size n is. Most gfx benchmarks = bad. It’s usually worth also eyeballing the actual data in detail, because repeating patterns show either cycles or error in the timers. Most recently a friend had something with the power manager in Windows causing a cycling load on the CPU. I also visually check out timing for CPU+GPU functions across frames with apitrace etc. – pretty neat.
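To make a couple of these ideas concrete – Fabian’s sort-and-read-off-percentiles and Björn’s histogram – here’s a minimal JavaScript sketch. It assumes frameTimesMs is an array of per-frame durations you’ve already captured (say, performance.now() deltas), and the function names are made up:

    // Sort once, then read off min / median / percentiles / max.
    function frameStats(frameTimesMs) {
      const sorted = frameTimesMs.slice().sort((a, b) => a - b);
      const pct = p => sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
      return { min: sorted[0], median: pct(0.5), p95: pct(0.95), p99: pct(0.99),
               max: sorted[sorted.length - 1] };
    }

    // Bin the same data to see whether the min/max are outliers or common occurrences.
    function histogram(frameTimesMs, binWidthMs = 1) {
      const bins = {};
      for (const t of frameTimesMs) {
        const bin = Math.floor(t / binWidthMs) * binWidthMs;
        bins[bin] = (bins[bin] || 0) + 1;
      }
      return bins;
    }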

All for now – feel free to email or tweet me with anything you want to add.

 


I was just answering a question for the Udacity Interactive Graphics MOOC. I had made a rather confusing lecture, much more involved and less informative than I would have liked, so today I wrote a re-do (sadly, it’s not easy to make a new video, since step 1 is “fly from Boston to San Francisco”). I’m still not thrilled with my description – what do you think? Is there a better way to talk about this subject? Anything I could improve? Surprisingly, this course still gets about 35 sign-ups a day (though I’m guessing maybe one of those actually finishes), so it’d be nice to make this lesson better.

Background: up to this point in the course I’d been showing how you typically write down transforms from right to left (OpenGL-style column-major matrices), e.g. “TR” means “rotate the object, then translate it (in world space) to some location.” In this lesson I wanted to point out that you can also read the transform order from left to right.

=================

You’re at 41 Avenue George V in Paris. Someone comes up and asks “How can I see the Arc de Triomphe?” You tell them, “Go up two blocks and then turn to the left – you can’t miss it.” Indeed, at 101 Avenue des Champs-Élysées he can see L’Arc de Triomphe.

So if you wanted to take this person and apply these two transforms, translation T (walk two blocks) and rotation R (turn about 60 degrees to the left), how would you write that out? Think about it for a minute, then scroll down for the answer. (And I like the disembodied arm to the right from Google’s street view).

[Image: Google Street View screenshot (2016-04-09_153558)]

The order is (right-to-left “application order”): TR. That is, you want to apply the rotation first, so that it doesn’t affect the translation. So you rotate the person 60 degrees to the left, then you translate him two blocks north; the translation is not affected by the rotation.

If you incorrectly used order RT, you would first translate him north two blocks, so far so good. But, as you saw in the snowman lesson, rotating after translation means the object is rotated around the origin, which in this case is the person’s starting location. So performing a translation, then a rotation, would move him two blocks north, then rotate him by 60 degrees along a circle with a 2-block radius, putting him somewhere else in the city (Rue Euler, I guess – a nice coincidence that it’s named for a famous mathematician).
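Here’s the same point as a quick three.js sketch (the library the course uses), assuming the person starts at the origin in the xy plane, north is +y, and the left turn is +60 degrees about z:

    // TR: rotate first (about the origin), then translate two blocks north.
    const R = new THREE.Matrix4().makeRotationZ(60 * Math.PI / 180);
    const T = new THREE.Matrix4().makeTranslation(0, 2, 0);

    const TR = new THREE.Matrix4().multiplyMatrices(T, R); // applied right-to-left
    const RT = new THREE.Matrix4().multiplyMatrices(R, T);

    const start = new THREE.Vector3(0, 0, 0);
    console.log(start.clone().applyMatrix4(TR)); // (0, 2, 0) -- he ends up two blocks north
    console.log(start.clone().applyMatrix4(RT)); // about (-1.73, 1, 0) -- swung around his start point, off toward Rue Euler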

I hope you accept TR is the right order, then. But, to describe the directions, we definitely first said “perform T” – walk two blocks north – “then perform R” – rotate to the left 60 degrees. So we talk about directions in a left-to-right fashion. This may seem odd, as we are then describing first the transform that is applied last, T, if we actually want to position the man in his environment.

The key thing here, and the point of the lesson, is that by specifying T first, we’re saying to the man, change your frame of reference to be 2 blocks north. From this new frame of reference, then rotate 60 degrees around where you’re standing, your new origin. It’s how we talk about directions. We don’t say “when you get to your final position, rotate 60 degrees left. Then, to get to your final position, walk two blocks north.”

The person walking has his own frame of reference, where he’s always the origin, and rotations are done relative to whichever way he’s facing at the time. To specify transforms when talking in these terms, an object-centric way of describing things, we describe “from left to right.” When we’re looking at the world and want to think how to make some other object take on a particular orientation and position, we tend to work from right to left, getting it oriented and then moving it into position.

However, it all depends. Moving a couch up a flight of stairs, down a hall, and next to a wall in a room is a series of transforms, and again we specify them from left to right. We could also shortcut the process if we don’t care about the intermediate steps along the way. Say the couch is facing north, and we know it’ll end up facing east. We could specify the one 90-degree rotation to get it to face east, then the one XYZ translation to move it directly to its desired location – right-to-left order, so that the rotation doesn’t interfere with the translation.

The two approaches – the series of moves, or the direct rotation and translation – have the same final effect. The point is, each way of thinking has its uses.


It’s rainy out, and I’m trying to avoid coding for Mineways and collecting code for JGT, so time to blog a little.

Some years ago I read the book The Public Domain about copyright and learned an interesting tidbit: photos of public domain paintings or photographs are not covered by copyright in the U.S.; they’re free to reuse.

Here’s the relevant bit from Wikipedia:

Reproductions of public domain works

The requirement of originality was also invoked in the 1999 United States District Court case Bridgeman Art Library v. Corel Corp. In the case, Bridgeman Art Library questioned the Corel Corporation’s rights to redistribute their high quality reproductions of old paintings that had already fallen into the public domain due to age, claiming that it infringed on their copyrights. The court ruled that exact or “slavish” reproductions of two-dimensional works such as paintings and photographs that were already in the public domain could not be considered original enough for protection under U.S. law, “a photograph which is no more than a copy of a work of another as exact as science and technology permits lacks originality. That is not to say that such a feat is trivial, simply not original”.[30]

Another court case related to threshold of originality was the 2008 case Meshwerks v. Toyota Motor Sales U.S.A. In this case, the court ruled that wire-frame computer models of Toyota vehicles were not entitled to additional copyright protection since the purpose of the models was to faithfully represent the original objects without any creative additions.[31]

The wire-frame case is obviously relevant to computer graphics. There’s a rundown of other countries’ laws on Wikimedia Commons’ site.

Private collections are within their rights to limit access as they wish, as misguided as I think it is to sell public domain works to the public. I have a problem with any public institution invoking protection of photos of works, since there’s no legal basis for this.

The Public Domain is free to download and worth a read. To be honest, after a bit I skimmed chapter 2, but I particularly enjoyed chapter 7, a case study in which the U.S.’s more permissive rules on what is in the public domain (“sweat of the brow” works are not covered by copyright in the U.S.) are contrasted with Europe’s more restrictive laws.

Oh, and if you like to read about copyright (you weirdo), you might enjoy The Idealist: Aaron Swartz and the Rise of Free Culture on the Internet. The second half is worthwhile, though quite sad, and a story I suspect many of us know to some extent. The first half is about the evolution of copyright laws in the United States, which went from being a haven for piracy of foreign (primarily English) works to an ardent defender of extending copyright almost into perpetuity (despite there being no incentive benefits in extending copyright retroactively, since the law at the time the work was created was found sufficiently appealing to the original author; sorry, I feel a rant coming on…). Ahhh, imagine this alternate universe. <– That’s a great link, by the way, well worth a click.


[TL;DR? Go try the puzzle instead.]

A few questions came out of my blog entry on GPUs preferring premultiplication from various people, including myself. Let’s nail them down one by one, then add these bits up to explain why PNG is not very good at storing antialiased cutout and decal images (images which have an alpha component) that were generated using physically-based rendering. It turns out it’s not PNG’s fault, it’s the implementation used by PNG viewers. I provide two downloadable PNG images to test your own viewer or renderer to determine whether sRGB and compositing are working properly.

If you’re already convinced that you should do filtering (and most every other computation) in linear space, skip the first section. If you already know that you should think of linear values for a pixel as intrinsically premultiplied, since they represent radiance for the pixel, skip two sections. If you know that viewers and browsers don’t blend PNGs with alphas properly, skip to the conclusions at the very end and see if you agree. Me, I’m still learning, so can imagine I made a goof along the way (update: and indeed I did!), though I’ve tried very hard not to do so. I’m honestly surprised how many viewers and browsers (perhaps all?) don’t perform display, filtering, and compositing correctly for this image type.

Don’t Filter in sRGB

This should be one of those things everyone knows by now, but just in case…

So you have three texels and two colors you’ve stored in a PNG, red and green:

[Image: rg_interp]

Interpolating between these two colors equally, what’s the color (that you store in the PNG) of the center texel? The answer is not (128, 128, 0), the average of the two texels on the ends. You can sort-of tell by just looking at the result:

[Image: rg_interp2]

The right answer is:

[Image: rg_interp3]

You shouldn’t interpolate or otherwise filter when in sRGB (essentially, gamma-corrected) space; that’s why it looks bad. You shouldn’t do this because sRGB is non-linear – linear operations such as addition and multiplication don’t work properly in it. Update: see this link, for example – the bus license plate is a good example.

Instead you want to convert from sRGB to linear space, interpolate in linear space, and then convert back to sRGB (equations here). It’s also what you want to do to get good mipmaps, or anything else where you’re using multiple samples to get a new value. My favorite article on this is Larry Gritz’s from GPU Gems 3. There’s also a nice recent article about this workflow on the Renderman Community site, showing how to convert textures to linear space, do lighting there, then convert back for display. If these articles don’t convince you that linearization is necessary, I’m not sure what would.
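If you want to check the numbers yourself, here’s a small JavaScript sketch of that decode–interpolate–re-encode round trip (using the standard sRGB transfer functions; lerpSrgb is just a name I made up):

    // sRGB <-> linear, components in [0, 1].
    function srgbToLinear(c) {
      return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }
    function linearToSrgb(c) {
      return c <= 0.0031308 ? 12.92 * c : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
    }

    // Interpolate two 8-bit sRGB colors: decode, blend in linear space, re-encode.
    function lerpSrgb(a, b, t) {
      return a.map((ca, i) => {
        const lin = (1 - t) * srgbToLinear(ca / 255) + t * srgbToLinear(b[i] / 255);
        return Math.round(255 * linearToSrgb(lin));
      });
    }

    lerpSrgb([255, 0, 0], [0, 255, 0], 0.5); // about [188, 188, 0], not [128, 128, 0]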

Here’s another example, sRGB interpolation vs. the correct linear interpolation over a band of about 4 texels in width:

[Images: rgb_bad, rgb_good]

The sRGB interpolation gives a black band; the correct linear interpolation gives a smooth transition. (Personally I see a more yellowish transition, which makes sense since it’s over a few pixels, but the general brightness is the thing to notice most here; if you back up a bit the yellow goes away, but the black band in the first image is still there. On a phone you may have to zoom in.)

Premultiply before converting to sRGB

Say you’re computing the coverage of a triangle you’re rendering, in linear space. It covers half the area of some pixel, alpha = 0.5. You compute the color of the triangle covering half this pixel, and the color is (1.0, 0.0, 0.0). I’m going to use floating point triplets here for colors in linear space; sRGB maps these values to displayable values we store in, say, a PNG image file.

Normally you take your color, clamp or otherwise map each of the RGB values to [0.0, 1.0] (possibly using tone mapping), and then convert to sRGB for display and storage. The question is: do you first premultiply your color by alpha, then convert to sRGB, or vice versa?

It’s clear you don’t modify the alpha coverage itself by sRGB. Coverage is coverage, it remains the same in any color space. What coverage represents is how much of a surface is visible in a pixel. If you think about it, our half-covered pixel with a (1.0, 0.0, 0.0) surface color on the triangle should emit the same amount of radiance as a fully-covered pixel that has a surface color of (0.5, 0.0, 0.0). The only way to get these to be equivalent is to multiply by the alpha first, then convert the resulting color to sRGB. As Larry Gritz succinctly put it, “radiance is associated,” that is, the area of the emitter in the pixel matters. The radiance is computed by including the area coverage term in the computations.

So, the order is linear space -> premultiply the result to get the radiance -> convert this radiance to sRGB. Taking our triangle’s color of (1.0, 0.0, 0.0) and alpha of 0.5, we get an RGBA result of (0.5, 0.0, 0.0, 0.5), our radiance values with an associated alpha.

To display this antialiased result on the screen we convert to sRGB space (or gamma space, if you’re a bit sloppy about it). Of course, our screen itself doesn’t store an alpha – we can’t see through the screen – so we normally think of such a result as being composited against a black background. Using sRGB conversion, we get (0.7353, 0.0, 0.0). Multiply by 255 for an 8-bit display and the displayed value is then (187,0,0).
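In code, reusing the srgbToLinear/linearToSrgb helpers from the sketch above, the whole chain for this example is only a few lines:

    const alpha = 0.5;
    const surfaceColor = [1.0, 0.0, 0.0];              // linear-space triangle color
    const radiance = surfaceColor.map(c => c * alpha); // premultiply: (0.5, 0.0, 0.0)
    const display = radiance.map(c => Math.round(255 * linearToSrgb(c)));
    // about (187, 0, 0) -- 187 vs. 188 is just rounding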

PNG cannot store all clamped linear values…

I would be a terrible mystery writer, as my chapters would all have titles giving away what happens in the chapter. However, since I’m getting paid by the word (ha, joke), I’m going to walk through each step carefully and slowly, building the suspense (or boring you half to death).

Here’s the strange bit: there are seemingly valid RGBA values that you can’t store in a PNG when fractional alphas are involved.

Update: the following logic is wrong, but it’s what would be needed for your browser to work correctly. Skip to the next “Update:” if you want to skip past this erroneous, but still interesting, information.

To store this sRGB value in a PNG we need to “unassociate” or “un-premultiply” the RGBA value. In other words:

Unassociated RGB = Associated RGB / alpha

We then multiply the resulting RGBA floating point values by 255 to get values we can store in a PNG.

Just to be clear, alpha itself is unchanged for unassociated and associated colors, it’s just the RGBs that can differ. If alpha is 1.0, the unassociated RGB value is identical to the associated one. If alpha is 0.0, we don’t divide; we assume the RGB is (0.0, 0.0, 0.0), since the result has no area, and so, no radiance. It’s only the fractional alphas where the unassociated and associated values differ.

Take our RGBA value of (0.5, 0.0, 0.0, 0.5) from above.

We converted the color to sRGB; the four values were then (0.7353, 0.0, 0.0, 0.5).

Now convert by unmultiplying (a.k.a. dividing) the RGB value by the alpha value, to get the unassociated values that PNG so craves. That is, divide by the alpha of 0.5; in other words, multiply by 2.0. We get (1.4707, 0.0, 0.0, 0.5).

Multiply all four values by 255 to get 8-bit values that we can store. Just to show we haven’t converted to PNG’s 8-bit format yet, let’s leave these as precise floating point values: (375.0, 0.0, 0.0, 127.5). Rounding, that gives us (375, 0, 0, 128).

If we could store premultiplied (associated) values, we could simply store (0.7353, 0.0, 0.0, 0.5) times 255, which is (187, 0, 0, 128), knowing that when we’d convert back to linear space someday the values would go back to about (0.5, 0.0, 0.0, 0.5).

To sum up:

(0.5, 0.0, 0.0, 0.5) the premultiplied result in linear space
(0.7353, 0.0, 0.0, 0.5) converted to sRGB
(1.4707, 0.0, 0.0, 0.5) RGB divided by the alpha of 0.5 to unassociate the alpha
(375, 0, 0, 128) multiplied by 255 and rounded

And that’s the punchline: this value cannot be stored in a PNG properly, since the maximum value in a PNG is 255 and PNG is always unassociated. The best we could do is store (255, 0, 0, 128). But if we then convert this back from sRGB to linear space, we don’t get anything near the original (0.5, 0.0, 0.0, 0.5) result:

(255, 0, 0, 128) stored in PNG
(128, 0, 0, 128) associating (multiplying by) the alpha/255
(0.216, 0.0, 0.0, 0.5) converting from sRGB to linear space

The answer should be (0.5, 0.0, 0.0, 0.5), but the clamping has dimmed the color value down massively. So instead of being able to store a linearized color value of 0.5 when alpha is 0.5, the best we can do is store one that is 0.216. Another way to say this is that our triangle can be no brighter than twice this value, (0.432, 0.0, 0.0), before premultiplication, instead of (1.0, 0.0, 0.0) – quite a drop on the linear side of things.

I don’t know about you, but I found this surprising, that PNG is actually incapable of storing antialiased cutout images computed by a normal renderer working in linearized space.

The complaint often leveled at storing 8-bit premultiplied colors and alphas is that you lose precision: a gray level of 255 and one of 128 will both be represented by a 1 if the alpha itself is 1 (that is, 1/255). The flip side is that, for colors that are perfectly valid when premultiplied and converted to sRGB, unassociated storage as used in a normal PNG cannot properly save these RGBA values. PNG sadly does not have a premultiplied mode for storage, so is stuck; if it had such a mode it could properly store (187, 0, 0, 128) and so properly display (187, 0, 0) on the screen.

If you don’t believe this result, that there’s some misstep, solve this puzzle instead.

Update: in fact, there is a problem! It turns out that PNG says that you need to unmultiply before converting to sRGB. This goes against theory, in that you normally take a premultiplied result and convert that to sRGB for display (composited against a black background). But it turns out that the proper sequence for PNG conversion is to un-premultiply and then convert to sRGB. So the right answer is to store (255, 0, 0, 128). You convert this to linear space, (1.0, 0.0, 0.0), multiply by alpha (0.5, 0.0, 0.0), convert back to sRGB space (187,0,0) and display the result. It’s just that simple. Which is why premultiplication is nicer: none of these conversions is necessary, you’d just ignore the alpha and display the RGB stored, if PNG could store premultiplied values.
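Again reusing those helpers, here’s a sketch of that PNG-style round trip as I understand it from the passage (not code from any particular library):

    // Store: un-premultiplied linear color -> sRGB encode -> 8-bit values.
    const alpha = 0.5;
    const unassociated = [1.0, 0.0, 0.0];
    const storedRGB = unassociated.map(c => Math.round(255 * linearToSrgb(c))); // (255, 0, 0)
    const storedA = Math.round(255 * alpha);                                    // 128

    // Display against black: decode to linear, apply the coverage, re-encode.
    const shown = storedRGB.map(c => {
      const lin = srgbToLinear(c / 255) * (storedA / 255);
      return Math.round(255 * linearToSrgb(lin)); // about (187, 0, 0), give or take rounding
    });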

See the puzzle for more information, and my thanks to friedlinguini for finding the right passage in the spec. I’m happy to see PNG itself is not broken! Based on this new information, let’s see how viewers and browsers view such PNGs with alphas.

Let’s let our viewers at home decide…

Do image manipulation programs, viewers, and browsers implement PNG with alpha correctly? Let’s go grayscale and find out… (hint: the answer’s a pretty resounding “no” – if you find a package that does it right, let me know).

One question is whether PNGs are sRGB by default, or linear by default; that is, if the gamma or sRGB chunks are missing, what’s expected? I poked around through specs, but don’t see a definitive answer, and frankly in my experience 99.98% of all PNGs I see without tags are in sRGB – they’re meant for display.

But, let’s test. Here are two sample images in PNG:

[Images: sampler_raw, sampler_with_gamma_srgb_chunks]

They (probably) look identical on your display: two grayish squares on the left, a dark gray square upper right, and white square lower right. I checked: it won’t work on the iPhone 6 or Samsung Galaxy S3, as you can’t display this image at its native resolution. These devices perform cheap and incorrect filtering on the image (they filter in sRGB space; more on that below).

Both images have the same data:

[Image: sampler_labeled]

The upper left square in each has alternating lines of full white and full black. Blur your eyes and you get a half-gray. The sRGB nature of this gray is shown by how the bottom left matches the top left (on sRGB monitors) when you blur your eyes, a basic gamma test. This shows that both PNGs are treated as storing non-linear sRGB values, as the 187 gray value is the sRGB equivalent of half-gray in linear space, as we’ve seen. There is a gamma chunk in PNG, but it’s rarely used.

The only difference between the two images is that the one on the left does not have gamma or sRGB PNG chunks (generated using LodePNG), the one on the right has both (it was generated by reading the one on the left into paint.net and then writing it out; you can review the chunks using pngcheck in verbose mode). They display identically, so the browser is clearly assuming that if these two chunks are missing, the PNG should be interpreted by default as storing sRGB values. This is indeed the norm: PNGs are usually used for lossless display of images, so the color values naturally are sRGB values that are directly copied to the display. However, this means that the “you could set the gamma to 1.0” option in PNG is extremely unlikely to be honored by most tools. Also, even if possible, storing 8-bit values in linear space can give a banded look when converted to sRGB. PNG does support 16-bit storage, which would solve any banding from using a gamma of 1.0.

Display this image in, say, IrfanView, which composites against a black background for display, and you get this:

[Image: irfanview_view]

Note that the lower right corner is a 128-gray.

If you want to see the test image composited in your browser against a black, white, and gray background in turn, see this page.

Most (all?) browsers and viewers are a bit broken

Now we know PNGs are treated as if they’re in sRGB space by default. However, it turns out most browsers and viewers do not properly interpret or blend PNG colors when alphas are present, or even when they’re not! Here’s the proof.

The two squares on the right each have an alpha of 0.5. The upper square is black, the lower is white. Browsers composite these images against their background color. If the background color is white (as it is on this page), then the upper right square should composite to be half-black, half-white. With a value of (0,0,0,128), it’s saying that the surface is covered with a black color that is half-transparent, so that the white background should contribute only half its emission. If the math is done properly – sRGB to linear, perform blending, then linear to sRGB – then the resulting color should be around (187,187,187) and so match the results on the left. It clearly doesn’t; the browser is simply blending the two colors directly in sRGB space, without any linearization, giving a darker gray than should be displayed.
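Here’s a sketch of the correct compositing math for that upper-right square, again reusing the helpers from the earlier sketch:

    // (0, 0, 0, 128) composited over a white page background, done in linear space.
    const src = [0, 0, 0], srcAlpha = 128 / 255;
    const dst = [255, 255, 255];
    const composited = src.map((s, i) => {
      const lin = srgbToLinear(s / 255) * srcAlpha
                + srgbToLinear(dst[i] / 255) * (1 - srcAlpha);
      return Math.round(255 * linearToSrgb(lin)); // about (187, 187, 187)
    });
    // Blending the raw 8-bit values instead gives roughly (127, 127, 127) --
    // the too-dark gray most browsers display.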

If instead you display these images composited against black, as happens in the popular IrfanView viewer, you get a darker gray for the lower-right square, when again you should get a 187-level gray, as shown above. So, IrfanView (and other viewers I tested) also do not perform linearization when blending.

You can tell that blending is also done improperly even when no alphas are present, by using the “resize” function. Resize the test image to 50% of its original size, i.e., make it 128×128. Use the best filter available (e.g., Lanczos).

Here’s the result for XnView, for example (I had problems getting IrfanView to properly save the alpha channel):

[Image: xnview_50]

It’s wrong; it’s not blending in linear space. You can tell because the alternating lines in the upper left are now a 128-level gray instead of the proper 187. The gray in the upper left is significantly darker than a scaled-down version of the original image should be. If you have an image manipulation program that gives the right answer, let me know. Imagine this is the next level up in a mip-map pyramid and you can see why the norm in interactive 3D graphics is to perform linearization before filtering, and why there’s GPU support for it. Pity we can’t get the 2D guys to adopt the correct algorithms.

Here’s the original image, again, but made smaller (128×128) by your browser by adjusting the HTML image display width and height:

[Image: sampler_with_gamma_srgb_chunks]

I’m betting dollars to donuts you see the wrong result, similar to XnView’s (and every other free image manipulation package I tried). The image is shrunk to half size and so the alternating lines of white and black are incorrectly blurred to a 128-gray.

By the way, the reason the original image alternates lines of white and black, instead of using a white and black checkerboard, is to avoid any level response problem the display might have. This used to be a problem with CRTs, I don’t know if it is with LCDs, but let’s leave it out of the equation.

Right-click on the two test images and save them if you want to experiment; attach as a surface texture to see if you are performing compositing correctly. If neither of the squares on the right looks very close to the matching grays on the left, the software is not performing alpha blending properly. It should premultiply (every viewer and browser does this correctly for PNG conversion), linearize each value, blend with the linearized background value, then convert back to sRGB for display. Instead, most software simply blends in sRGB space, which is wrong.

If the two squares on the left don’t more-or-less match (blur your eyes), then you’re on an ancient Mac, NeXT, SGI, or something else that’s non-sRGB. More likely, you’re on a smartphone or other device that is not showing the test image at one pixel per texel. Its faulty filtering makes the alternating black and white lines average to a gray level of 128 at the limit, when it should be 187.

I suspect the reasons most viewers and all browsers I tried are broken in this way are expediency (all that conversion per pixel is expensive, and fractional alphas in PNGs are rare) and lack of understanding, plus possibly legacy users expecting old behaviors. I certainly didn’t fully understand how to interpret PNG data when I started this post, and have had to revise it!

Now I see why OpenEXR, a floating point format that has an alpha channel and saves premultiplied colors, is preferred by film companies and other industries where proper compositing is critical. It’s simple to display, and premultiplication makes display and compositing much less costly.

Conclusions

  1. Perform interpolation, blending, mipmapping, or other filtering in linear space, not sRGB.
  2. In this linear space, if your computations produce a fractional alpha, make sure the color is premultiplied by this alpha somewhere along the line before converting to sRGB. Update: unless you’re converting to PNG, in which case you want to unmultiply your RGBA before converting to their quasi-sRGB space.
  3. Update: wrong. If you have fractional alphas and you want to store these along with the colors, for later use when compositing, you may get values too high to store in your PNG after unassociating the alpha from the color. Cutouts without partial alphas, or with dim colors, may be storable.
  4. Don’t expect PNG alphas to be used properly for viewing on most viewers or on web browsers. This is not PNG’s fault per se, it’s the browser/viewer’s for not using linearization when compositing.
  5. Test and find out. The PNG test image can help you see what an application does with the data.


Much of my weekend: The MIT Mystery Hunt is a yearly giant weekend puzzle race that has well over a thousand participants. Get a taste here – this year’s was “easier”, in that a team solved it Sunday evening, almost 53 hours after the hunt began. If you know the answers to this or this one (both quite graphics-oriented!), let me know, I got nowhere with them. Yes, that’s all the information you get, and your goal is to find a word or phrase somehow hidden in what you see. Using a supercomputer is entirely fine. I kept saying “Enhance!” but it didn’t help.

There are many other amusing puzzles to poke at in this year’s collection, such as massive tiled sudokus and flag color pie charts. Give it a look, it’s fun to see the sheer scope and warped brilliance of some of these.

I was able to help our “small” team of 35+ to solve the last part of one cool puzzle by using three.js. The puzzle itself is fun, it was a few-hour-long solve for me, then some time writing a three.js program to help find the solution. If you want to fast forward, find the final hidden word if you can… (I couldn’t – a teammate did; I’m an OK puzzler, but sometimes forget the maxim, “look again, and again.”)

More stuff:

  • New interactive 3D graphics books at SIGGRAPH 2015: WebGL Insights, GPU Pro 6 (Kindle right now, hardcover in September). Let me know if I missed anything (see full list here, which also includes links to Google Books previews for these new books).
  • Updated book: 7th edition of the OpenGL SuperBible. I would guess that, with Vulkan coming down the pike, and Apple going with Metal and no longer developing OpenGL (it’s stuck back at 4.1, circa 2010, even in Mavericks), this will be the final edition. Future students having to learn Vulkan or DirectX 12, well, that won’t be much fun at all…
  • I mentioned yesterday how you can download the SIGGRAPH 2015 Proceedings for free this week. There’s more, in theory. Some of the links there have nothing as of right now. The Posters are worth a skim, especially since I didn’t see them at SIGGRAPH. I also liked the Studio PDF. It starts with a bunch of single-page talks that are fun to snack on, followed by a few random slidesets. Emerging Tech also has longer descriptions than on the ETech page (which has more pics and videos, however). If you gotta catch ’em all, there’s also a PDF for Panels.
  • There have been many news articles recently about not viewing screens at bedtime. Right, sure. Michael Herf (former CTO at Picasa) is the president at f.lux, one company that makes screens vary in overall spectra during the day to ameliorate the problem. He pointed me at a useful-to-researchers bit: their fluxometer site, with spectra for many different displays, all downloadable.
  • Oh, and related, a tip from Michael: Pantone stickers with differing colors (using metameric failure) under different temperature lights, so you can ensure you’re showing work under consistent lighting conditions.
  • I was impressed by Halide, an MIT-licensed open source project for writing high-performance image processing code (including GPU versions) from scratch. Most impressive is their case study for local Laplacian filters (p. 28), showing great performance with considerably less code and development time vs. Adobe Photoshop’s efforts. Google and others use it extensively (p. 32).
  • Path tracing is all the rage for the film industry; the Arnold renderer started it (AFAIK) and others have followed suit. Here’s an entertaining path trace of interior lighting for a Minecraft scene using the free Chunky path tracer. SPP is samples per pixel:

[Image: Chunk progressive render]


New CRC Books

Well, newish books, from the past year. By the way, I’ve also updated our books list with all relevant new graphics books I could find. Let me know if I missed any.

This post reviews four books from CRC Press in the past year. Why CRC Press? Because they offered to send me books for review and I asked for these. I’ve listed the four books reviewed in my own order of preference, best first. Writing a book is a ton of work; I admire anyone who takes it on. I honestly dread writing a few of these reviews. Still, at the risk of being disliked, I feel obligated to give my impressions, since I was sent copies specifically for review, and I should not break that trust. These are my opinions, not my cat’s, and they could well differ from yours. Our own book would get four out of five stars by my reckoning, and lower as it ages. I’m a tough critic.

I’m also an unpaid one: I spent a few hours with each book, but certainly did not read each cover to cover (though I hope to find the time to do so with Game Engine Architecture for topics I know nothing about). So, beyond a general skim, I decided to choose a few graphics-related operations in advance and see how well each book covered them. The topics:

  • Antialiasing, since it’s important to modern applications
  • Phong shading vs. lighting, since they’re different
  • Clip coordinates, which is what vertex shaders produce


Limits of Triangles

You’re mapping a latitude-longitude texture on to a sphere. Pretty straightforward, right? Compute the UV coordinates at the vertices and let the rasterizer do the work. The only problem is that you can’t get exactly the right answer at the poles. Part of the problem is that getting a good U value at each pole is problematic: U is essentially undefined. It’s like asking what longitude you’re at when you’re sitting at the north pole – you can take your pick, since none is correct.

One way around this is to look at the other vertices in the triangle (i.e., those not at the pole) and average the U values they have. This gives a tessellation at the north pole something like this:

[Image: marked_monkey]

What’s interesting about this tessellation is that it leaves out half of the horizontal strip of texture near the pole. The sawtooth of triangles will display one half of the texture in this strip, but will not display the triangles covered in red. Even if you form these triangles, adding them to the mesh, they won’t appear. Recall that all points along the upper edge of the texture are located at the same position in world space, the north pole of the model itself. So any red triangles added will collapse; they’ll have zero area, as their two points along the top edge are co-located.

I show a different tessellation along the bottom strip of the texture, a more traditional way to generate the UV coordinates and triangles. Again, all the triangles with edges along the bottom of the texture will collapse to zero area, and half of the texture in the strip will not be rendered. The triangles that are rendered here are somewhat arbitrary – at least the triangles along the top edge have a symmetry to them.

There are two ways I know to improve the rendering. One is to tessellate more: the more lines of latitude you make, the smaller the problem areas at the poles become. The artifacts are still there, but contained in a smaller area. The other, ultimate solution is that you could compute the UV location per pixel, not per vertex.
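For reference, the per-pixel computation is tiny. Here it is as a JavaScript function (the same few lines would go in the pixel shader); the U seam placement and V direction are just conventions:

    // Latitude-longitude UV from a normalized direction on the sphere, computed per sample.
    function latLongUV(d) {                                  // d = {x, y, z}, unit length, y up
      const u = 0.5 + Math.atan2(d.z, d.x) / (2 * Math.PI);  // longitude; seam along -x here
      const v = Math.acos(d.y) / Math.PI;                    // latitude: 0 at the north pole, 1 at the south
      return [u, v];
    }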

That works for texturing, if you can afford to fold the spherical mapping into the pixel shader. However, this problem isn’t limited to spheres, and isn’t limited to textures. Another example is the cone and having a normal per vertex. This is a common object, but is surprisingly messy. For a cone pointing up you typically make a separate vertex for each triangle meeting at the tip, with a separate normal. This is similar to the uppermost strip for the sphere mapping above: the normal points out in a direction somewhere between the two bottom normals for the triangle in the cone.

However, you have the same sort of interpolation problem! Here’s a cone with a tessellation of 40 vertices around the top and bottom edges:

[Image: cone_normals1]

 

From triangle to triangle along the side of the cone, you have normals smoothly interpolating along the bottom of the cone. Moving across a vertical edge near the tip, you have sudden shifts in the normal’s direction as you go from one normal to another. Each zero-area triangle formed by two points touching the tip is what “causes” the discontinuity in shading when crossing an edge.

If you start to add “lines of latitude”, to vertically tessellate the surface, things improve. Here are cones with two strips of triangles, three strips, then ten strips:

[Images: cone_normals2, cone_normals3, cone_normals10]

It’s interesting to me that the first image, two strips of triangles, doesn’t fully improve the bottom part of the cone. Even though there are no zero-area triangles and so the normal is the same for each vertex in this area, the normals change at different rates across the surface and our eyes pick this up as Mach banding of a sort. Three strips gives three bands of poor interpolation, and so on. With ten bands things look good, at least for this lighting and amount of specularity. Increasing the tessellation gets us closer and closer to the ideal per-pixel computation.

Here are some images of the cone mapping to show the discontinuities. The first uses a low tessellation: 10 vertices around, no tessellation vertically.

[Image: cone_lo_res]

Half of the texture is missing on each face of the cone. If you increase the vertical tessellation, like so:

[Image: cone_wireframe]

You get this:

[Image: cone_hi_lat]

It’s better, but there’s still a dropout at the tip (it should say “1,0 | 0,0”), plus you can see the lines on the surface are not straight – the vertical lines on the faces wobble. Each quadrilateral is a trapezoid, so using two triangles for this mapping doesn’t match it all that well.

If you increase the tessellation in both directions, you get closer still to a per pixel mapping, the correct answer:

[Image: cone_hi_both]

This tessellation is 40 points around, 10 points from bottom to top. The very tip still has half its information missing, but otherwise things look reasonable. Cranking the resolution up to 50 around and 50 from bottom to top mostly cleans up the tip (ignoring the lack of high-quality texture sampling):

[Image: cone_super_high_tip]

This is an old phenomenon, but still a surprising one, at least to me. We’re used to tessellating any higher-order surface into triangles for rasterization or ray tracing. Usually I increase tessellation just to capture geometric details, not normal or texture interpolation. You would think it would be possible to easily set up triangles in such a way that relatively simple mappings would not have problems such as these. You basically want something more like a quadrilateral mapping, where the left edge of the triangle is using, say, U = 0.20 along its edge and the right edge uses U = 0.30. The problem is that the point at the tip has a U of say 0.25, so both edges interpolate from there.

I should note that this problem is solvable by messing with the mappings themselves. For example, you could instead use a different texture and a cube map projected onto a sphere to solve that case, which would also decrease distortion and so require less texture resolution overall.

Woo, Pearce, and Ouellette have a lovely old paper about this and other common rendering bugs and their solutions. They reference a paper by Nelson Max from 1989, “Smooth Appearance for Polygonal Surfaces” for using C1 continuity. This doesn’t seem easy to do on the GPU without a lot of extra data. Has anyone tried solving this general problem, of treating two triangles as a quadrilateral mapping? I don’t really need to solve this problem right now, I just think it’s interesting, something that’s doable in a pixel shader in some way. I’m hoping someone’s created an elegant solution.

Update: and I got a solution of sorts. Thanks to Tom Forsyth for getting the ball rolling on Twitter. Nathan Reed’s post on this topic talks about setup; Iñigo Quílez talks about performing bilinear interpolation in the shader and gives a ShaderToy demo. However, these seem a bit apples and oranges: Nathan is working with projective interpolation, Iñigo with equi-spaced bilinear interpolation – they’re different. See page 14 onward of Heckbert’s master’s thesis, for example.
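For what it’s worth, the bilinear evaluation itself is trivial once you have quad parameters (s, t) at the pixel – producing those parameters is the part the posts above are really about. A sketch, with made-up corner names:

    // Equi-spaced bilinear interpolation of four corner UVs over a quad, given (s, t) in [0, 1].
    function bilerpUV(uv00, uv10, uv01, uv11, s, t) {
      const lerp = (a, b, x) => [a[0] + (b[0] - a[0]) * x, a[1] + (b[1] - a[1]) * x];
      return lerp(lerp(uv00, uv10, s), lerp(uv01, uv11, s), t);
    }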


guest post by Patrick Cozzi, @pjcozzi.

This isn’t as crazy as it sounds: WebGL has a chance to become the graphics API of choice for real-time graphics research. Here’s why I think so.

An interactive demo is better than a video.

WebGL allows us to embed demos in a website, like the demo for The Compact YCoCg Frame Buffer by Pavlos Mavridis and Georgios Papaioannou. A demo gives readers a better understanding than a video alone, allows them to reproduce performance results on their hardware, and enables them to experiment with debug views like the demo for WebGL Deferred Shading by Sijie Tian, Yuqin Shao, and me. This is, of course, true for a demo written with any graphics API, but WebGL makes the barrier-to-entry very low; it runs almost everywhere (iOS is still holding back the floodgates) and only requires clicking on a link. Readers and reviewers are much more likely to check it out.

WebGL runs on desktop and mobile.

Android devices now have pretty good support for WebGL. This allows us to write the majority of our demo once and get performance numbers for both desktop and mobile. This is especially useful for algorithms that will have different performance implications due to differences in GPU architectures, e.g., early-z vs. tile-based, or network bandwidth, e.g., streaming massive models.

WebGL is starting to expose modern GPU features.

WebGL is based on OpenGL ES 2.0 so it doesn’t expose features like query timers, compute shaders, uniform buffers, etc. However, with some WebGL 2 (based on ES 3.0) features being exposed as extensions, we are getting access to more GPU features like instancing and multiple render targets. Given that OpenGL ES 3.1 will be released this year with compute shaders, atomics, and image load/store, we can expect WebGL to follow. This will allow compute-shader-based research in WebGL, an area where I expect we’ll continue to see innovation. In addition, with NVIDIA Tegra K1, we see OpenGL 4.4 support on mobile, which could ultimately mean more features exposed by WebGL to keep pace with mobile.
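A quick way to see what your own browser exposes (a minimal sketch using the standard extension names for instancing, multiple render targets, and timer queries):

    // Assumes a browser with WebGL available; each call returns null if the extension is missing.
    const gl = document.createElement('canvas').getContext('webgl');
    console.log({
      instancing: !!gl.getExtension('ANGLE_instanced_arrays'),
      multipleRenderTargets: !!gl.getExtension('WEBGL_draw_buffers'),
      timerQueries: !!gl.getExtension('EXT_disjoint_timer_query'),
    });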

Some graphics research areas, such as animation, don’t always need access to the latest GPU features and instead just need a way to visualize their results. Even many of the latest JCGT papers on rendering can be implemented with WebGL and the extensions it exposes today (e.g., “Weighted Blended Order-Independent Transparency”). On the other hand, some research will explore the latest GPU features or use features only available to languages with pointers, for example, using persistent-mapped buffers in Approaching Zero Driver Overhead by Cass Everitt, Graham Sellers, John McDonald, and Tim Foley.

WebGL is faster to develop with.

Coming from C++, JavaScript takes some getting used to (see An Introduction to JavaScript for Sophisticated Programmers by Morgan McGuire), but it has its benefits: lightning-fast iteration times, lots of open-source third-party libraries, some nice language features such as functions as first-class objects and JSON serialization, and some decent tools. Most people will be more productive in JavaScript than in C++ once up to speed.

JavaScript is not as fast as C++, which is a concern when we are comparing a CPU-bound algorithm to previous work in C++. However, for GPU-bound work, JavaScript and C++ are very similar.

Try it.

Check out the WebGL Report to see what extensions your browser supports. If it meets the needs for your next research project, give it a try!


512 and counting

I noticed I reached a milestone number of postings today, 512 answers posted to the online Intro to 3D Graphics course. Admittedly, some are replies to questions such as “how is your voice so dull?” However, most of the questions are ones that I can chew into. For example, I enjoyed answering this one today, about how diffuse surfaces work. I then start to ramble on about area light sources and how they work, which I think is a really worthwhile way to think about radiance and what’s happening at a pixel. I also like this recent one, about z-fighting, as I talk about the giant headache (and a common solution) that occurs in ray tracing when two transparent materials touch each other.

So the takeaway is that if you ever want to ask me a question and I’m not replying to email, act like you’re a student, find a relevant lesson, and post a question there. Honestly, I’m thoroughly enjoying answering questions on these forums; I get to help people, and for the most part the questions are ones I can actually answer, which is always a nice feeling. Sometimes others will give even better answers and I get to learn something. So go ahead, find some dumb answer of mine and give a better one.

By the way, I finally annotated the syllabus for the class. Now it’s possible to cherry-pick lessons; in particular, I mark all lessons that are specifically about three.js syntax and methodology, if you already know graphics.

[Image: alpha]

