GPUs prefer premultiplication

This one’s important, so read it and grok. You either know it already, great, or it’s news and you may not believe me. Even if you don’t believe, keep it in mind for the day you see dark edges around your cutouts or decals, or mipmap levels that are clearly too dark.

The short version: if you want your renderer to properly handle textures with alphas when using bilinear interpolation or mipmapping, you need to premultiply your PNG color data by their (unassociated) alphas.

If you parsed that long jargon-filled sentence and already know it, then go visit Saturday Morning Breakfast Cereal or Dinosaur Comics and enjoy life, there’s probably not much more for you to learn here. If you parsed it and don’t believe you have to preprocess your PNG RGBA texture, skip to The Argument section. Otherwise, here’s what I mean.

Some textures have alpha values. For simplicity, assume every integer you see in this article is in the range 0-255, an 8-bit channel. The alpha value of a texel could be 255, meaning fully opaque, or 0, meaning fully transparent, or somewhere in between. I use 0-255 just because [0,2,0, 2] is easier on the eyes than [0,0.007843,0, 0.007843] or [0/255,2/255,0/255, 2/255]. Ignore sRGB/gamma issues, ignore precision, we’ll mention them later; assume we interpolate the texture data in a linearized (de-gamma’ed) color space.

PNG textures are always “unassociated,” meaning the color RGB data is entirely independent from the alpha value. For example, a half-transparent red texel in a PNG file is stored as RGBA of [255,0,0, 127] – full red, with an alpha representing it being half-transparent. Premultiplication is where you multiply the stored RGB value by the alpha value, treated as a fraction. So the premultiplied version of our red semitransparent texel is [127,0,0, 127], as we multiply the red channel’s 255 by the alpha of 127/255.
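Concretely, premultiplication is just a per-channel scale. Here it is as a quick Python sketch, using the 0-255 convention from above (with `round` standing in for whatever quantization your pipeline uses):

```python
# Premultiply an unassociated 8-bit RGBA texel: scale RGB by alpha/255.
def premultiply(rgba):
    r, g, b, a = rgba
    fraction = a / 255.0
    return [round(r * fraction), round(g * fraction), round(b * fraction), a]

# Half-transparent red: [255,0,0, 127] becomes [127,0,0, 127].
print(premultiply([255, 0, 0, 127]))  # -> [127, 0, 0, 127]
```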

What I was somewhat surprised to learn is that, for GPUs, you must premultiply the texture’s RGB value by its alpha before a fragment shader (a.k.a. pixel shader) samples it. I used to think that it didn’t matter – surely you could sample the PNG’s RGBA texture and then perform the premultiplication. Not so.

The Argument

Here’s a simple case, bilinear interpolation between two texels, one semitransparent:

interp1

Raw, untouched, unassociated PNG data is stored in these two texels. The left texel is an opaque red, the right texel is almost entirely transparent (alpha of 2) and green. To get the RGBA value at the dot in between, we sample this texture and perform bilinear interpolation, as usual. The answer we’ll get is the average of the two texels: [127.5,127.5,0, 128.5]. Note that this resulting value is wrong. An almost fully transparent green texel has the same effect on the interpolated color as the fully opaque red texel. The alphas combine sensibly, but the colors do not, because they’re not weighted by the alphas. To interpolate correctly, the colors need to be premultiplied.
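Here is that interpolation as a small Python sketch (illustrative only; the GPU’s texture unit performs the same arithmetic):

```python
# Component-wise linear interpolation; bilinear filtering is this lerp
# applied along each axis in turn.
def lerp(texel0, texel1, w=0.5):
    return [a * (1 - w) + b * w for a, b in zip(texel0, texel1)]

opaque_red = [255, 0, 0, 255]
faint_green = [0, 255, 0, 2]  # unassociated: full green color, alpha of 2

# The nearly invisible green pulls the color as hard as the opaque red does.
print(lerp(opaque_red, faint_green))  # -> [127.5, 127.5, 0.0, 128.5]
```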

However, GPUs can’t currently premultiply before they perform bilinear interpolation. They sample by getting the texels surrounding the location of interest, then interpolate between these texels. A software renderer could get this right, by sampling, premultiplying, then interpolating (that said, from surveying a few, some software renderers also don’t do it correctly). In some circumstances this failure can have a serious effect. See this demo. Notice how the fringe of the cutout flower is black. The original PNG texture is like so:

mush1

The checkerboard background shows where the texels are fully transparent with [0,0,0, 0] – there is no black fringe in the texture itself. In the demo you can see a black fringe as these fully transparent texels are interpolated along the edges:

mush_bad

Here’s another example with a flower texture, using an entirely different renderer (that will remain nameless). These are low-resolution textures, but that just exaggerates the effect; it’s present for any cutout texture that is not premultiplied.

g3d_fringing

By the way, I’m not picking on Sketchfab at all – they’re refreshingly open about their design dilemmas. I use their site for the demo because of its ease of use.

The black fringing occurs because of unassociated RGBA values being used for interpolation. Say you have two neighboring texels, [255,0,0, 255] and [0,0,0, 0], red and fully transparent “black” (though of course the color of a fully transparent texel should not matter). The interpolated value is [127.5,0,0, 127.5]. The only correct way to interpret this value is that it’s a premultiplied value: it’s half-transparent, and that alpha value has clearly multiplied the red color so that it’s a dark red. As you get closer and closer to the center of the transparent texel the RGB goes to fully black.

This RGB result is fine if indeed you’re expecting a premultiplied color from your texture sampler – it’s premultiplied, so the “dark red” is really just “regular red multiplied by alpha.” As Larry Gritz notes, “radiance is associated.” Such a sample has a darker red since the red surface’s contribution is less, as noted by the alpha’s lower value. By premultiplying, the fully transparent texels are always “black,” not green or some other color. Going to “black” is exactly what we want, as the more-and-more transparent surface sends out less and less radiance. I put quotes around “black” because the color of the surface is still red, there’s just less surface affecting the sample. A fully transparent texel is “black” because that’s its contribution: it contributes nothing to the final color when “over” compositing is performed. The problems start when we use this color as unassociated from its alpha. Our normal terms for describing texel values don’t work well, which is part of the problem.

Notice how the GPU always returns a premultiplied-looking result, such as [127.5,0,0, 127.5]. I was going to start with this red and fully transparent example, since it explains the fringing problem, but instead used a nearly transparent green to directly show the problem with unassociated interpolation. If you look at the two texel values here, [255,0,0, 255] and [0,0,0, 0], these are the same representation whether you’re using unassociated or premultiplied representations. It’s not clear from this example whether the GPU wants unassociated values as inputs and gives back a premultiplied result, or if premultiplied values are needed for both inputs and outputs. It’s the latter, which the nearly transparent green example shows. (I added this example, as one writer in this thread noted that the black fringing problem’s relationship to my original example isn’t clear; I hope this addition helps.)

The Answer

Because GPUs don’t allow premultiplication before interpolation during sampling, the answer is to premultiply the PNG texture in advance. The RGB color is multiplied by the alpha. We treat the alpha as a fraction from 0.0 to 1.0 by taking the 0-255 alpha value and dividing it by 255, then multiplying each RGB component individually by that fraction. Now the two texels in our example are:

interp2

The interpolated location’s RGBA is [127.5,1,0, 128.5], which is what we’d expect: almost entirely red, a tiny bit of green, and an alpha that’s about half transparent. That’s the whole point: GPUs actually sample and interpolate in such a way that they expect premultiplied colors to be fed in as textures.
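Running the same lerp sketch with premultiplied inputs shows the fix (again, illustrative Python rather than anything GPU-side):

```python
# The same halfway lerp as before, but fed premultiplied texels.
def lerp(texel0, texel1, w=0.5):
    return [a * (1 - w) + b * w for a, b in zip(texel0, texel1)]

opaque_red = [255, 0, 0, 255]  # premultiplying leaves an opaque texel unchanged
premul_green = [0, 2, 0, 2]    # [0,255,0, 2] after multiplying RGB by 2/255

print(lerp(opaque_red, premul_green))  # -> [127.5, 1.0, 0.0, 128.5]
```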

Analysis

Who knew? Well, probably half of you, but I didn’t: this isn’t written down in any textbook I know (including our own), and I recently had to work it out myself. Also, note that it’s not just alpha cutouts affected – any texture, such as a decal, or semitransparent stained glass, or anything else with alphas, must be premultiplied if you want to use the GPU’s native sampling and filtering support.

The tricky part is fixing this bug in your renderer, if you haven’t already. First, if you ever expect semitransparent alphas (between 1 and 254), you have to premultiply the PNG texture before you sample it with the GPU. If you save the resulting premultiplied values at 8 bits per channel, the operation is destructive: you lose precision and can’t unassociate the alpha later. For physically-based or other systems where color correction is applied, this precision loss could be noticeable. So, you may be forced to go to 16 bits per channel when you premultiply. To be honest, for highest quality you’ll want to use 16 bits for texture storage if you’re performing physically-based rendering on the GPU. 8-bit PNG data is normally in non-linear, gamma-encoded form, ready for display. You want to linearize this texture data before sampling it anyway, so that all your lighting and filtering computations are done in linear space. Marc Olano pointed me at Jim Blinn’s old article “A Ghost in a Snowstorm” (collected in this book), which talks about this problem in depth. Throughout this blog post I’ve assumed you’re computing everything in a nice linear space. If not, you’re in trouble anyway, and Blinn’s article talks about some options. Nowadays there’s sRGB sampling support on the GPU, but you still need to premultiply, which will lose you precision for each texel with a semitransparent alpha.
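To put numbers on that precision loss, here’s a quick Python sketch of the 8-bit round trip (the helper names are mine, not from any library):

```python
# Premultiply one 8-bit channel and store the result back in 8 bits.
def premultiply8(channel, alpha):
    return round(channel * alpha / 255)

# Try to recover the original channel value from the stored one.
def unpremultiply8(channel, alpha):
    return round(channel * 255 / alpha)

stored = premultiply8(200, 10)          # round(7.84...) -> 8
recovered = unpremultiply8(stored, 10)  # 8 * 25.5 -> 204, not the original 200
print(stored, recovered)
```

The lower the alpha, the fewer distinct color values survive the round trip, which is why 16-bit storage helps.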

You may have other concerns about incoming PNG data and don’t want to premultiply; see the comments on the demo page to see what I mean. I can relate: the ancient Wavefront OBJ format has multiple interpretations and there’s no one to decide which way it should be interpreted. For example, should a PNG texture assigned as an alpha map be a single channel, RGB, or RGBA? If RGBA, should the color’s red channel or luminance, or the alpha value itself, be interpreted as the alpha channel? Sketchfab allows the user to decide, since there’s no definitive answer and different model exporters do different things.

Assume you indeed premultiply your PNG data in some fashion. The next question is whether your fragment shaders currently return premultiplied or unassociated RGBA values. If your shaders already return premultiplied values, good for you, you’re done – you just have to make sure that you’re treating the incoming texture value as a premultiplied entity.

However, it’s likely you return unassociated values from your fragment shaders. Three.js does, for example. It’s a pretty natural thing to do. For example, you first implement some surface shader, then add semitransparency by modifying the alpha separately. Why bother multiplying the color by the alpha in the fragment shader when the blending unit can do so for you? Changing your code to return a premultiplied RGBA means you have to change the blending mode used. It also means, at least for your own sanity, that all your fragment shaders should return premultiplied values. You don’t want to have to track which shaders return unassociated values and which return premultiplied results. It’s also inefficient to possibly need to switch the blend mode for every transparent object that comes by. If you have external users writing fragment shaders, you have to get them to change over, too.

The alternative is to unassociate the alpha from the texture sample returned by the GPU. That is, the GPU gives you back a premultiplied RGBA when you sample the texture. If the floating-point alpha value is not 0.0 or 1.0, then divide (un-multiply) the RGB value by alpha and use this RGBA throughout the rest of your shader, remembering it’s unassociated. Now you don’t have to change your shader’s output, the blend mode, or all the other shaders so that they return premultiplied values. It’s a bit goofy – in a perfect world we’d premultiply and return premultiplied RGBA values – but often legacy code and a user base work against the right solution.

Weak Solutions

There are other ways to avoid the problem. One is to simply never use bilinear interpolation or mipmapping on such textures. Minecraft can get away with this, since it’s part of its look:

mc_mush

Another solution is to use the alpha test to reject fragments whose floating-point alpha is less than 1.0. This works in that it gets rid of the black fringes, but only for true cutout textures, since all semitransparent texels are discarded. The edges of the texture are trimmed back to the texel centers, which can look “skeletal” and different than how the asset was created. Update: Angelo Pesce notes that, with a tight alpha test, standard mipmapping can cause the area coverage to shrink as the object gets farther away.

A third solution is to rationalize and imagine the black fringing you get is somehow a feature. It does give a toon-line outline to objects, but it’s not something you can really control; you’re relying on an artifact for your rendering.

There is one preprocess that can help ameliorate the black fringing problem, which is to “bleed” the colors along the edges of the cutout so that the same or average colors are put in the fully transparent texels. Since the PNG has unassociated data, you can put whatever you want in the colors for fully transparent texels. Well, you can put such colors in premultiplied texels with alphas of 0, as Zap Andersson and Morgan McGuire mentioned to me. Morgan notes, “in premultiplied alpha, you can have emissive surfaces that also produce no coverage. This is handy for fireballs and lightning.” But, that’s for a different purpose.

Here’s an example of bleeding a texture:

mush_nonbled  mush_bled1

The original cutout mushroom texture is extended by one texel along its cutout edges. The basic idea is when a transparent edge texel is found, assign it some weighted average of the surrounding opaque colors. Now when you interpolate unassociated color channels, you get a neighbor color in the transparent region that is mostly like the actual region.
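A minimal Python sketch of one bleed pass (a simplified version of the idea; real tools typically weight by distance and may run several passes):

```python
# One bleed pass over an RGBA grid (row-major list of [r,g,b,a] texels):
# give each fully transparent texel the average color of its non-transparent
# 4-neighbors, leaving its alpha at 0.
def bleed(texels, w, h):
    out = [t[:] for t in texels]
    for y in range(h):
        for x in range(w):
            if texels[y * w + x][3] != 0:
                continue  # only fully transparent texels get new colors
            neighbors = [texels[ny * w + nx]
                         for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                         if 0 <= nx < w and 0 <= ny < h
                         and texels[ny * w + nx][3] != 0]
            if neighbors:
                for c in range(3):
                    out[y * w + x][c] = round(
                        sum(n[c] for n in neighbors) / len(neighbors))
    return out

# 1x2 texture: opaque red next to transparent "black".
tex = [[255, 0, 0, 255], [0, 0, 0, 0]]
print(bleed(tex, 2, 1))  # -> [[255, 0, 0, 255], [255, 0, 0, 0]]
```

Since the transparent texel now carries red, interpolating the unassociated color channels along the edge no longer drags in black.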

See this demo and compare it to the original situation to see the improvement. Here’s a side by side, untouched vs. bled:

mush_badmush_good

Me, I had to implement this solution in Mineways, my free Minecraft model exporter. Most renderers (which will again remain nameless) have this fringing problem, even in their software implementations. I couldn’t fix the renderers, but could at least massage the data a bit to avoid fringing. I originally added this bleeding process back in 2012 for a particular renderer. After extensive testing on a number of renderers I found the fix to be generally useful so yesterday I released a version which always performs bleeding. One nice feature of bleeding is that if a renderer does later move to a premultiplied solution, the fully transparent texels that have been bled on will not affect the correct algorithm at all.

For the specialized case where your texture has a single solid color and only the alphas vary, filling the whole texture with this color works perfectly. The interpolated color is always the same, and alphas interpolate properly.

In general, bleeding is an imperfect solution at best. For example, if you had a red texel next to a green pixel along a cutout edge, the blend texel might be some yellow color. You’ll get a different result than if you did it the right way, using premultiplied colors. Bleeding is difficult to impossible if the texture has semitransparent texels with different colors, since weighting is so very broken with unassociated values. Also, for mipmapping a simple bleed won’t work, as the “black” fully transparent RGBs that are left will get blended in as you go up the mip pyramid. You then have to somehow extend the bleed to fill all the transparent texels in some way.

Premultiplying the texels avoids all filtering problems by properly weighting the samples and means that artists don’t have to waste time fixing their content to work around a bug in the rendering pipeline. There may be reasons you don’t fix this bug, such as precision issues from premultiplying 8-bit values and storing these in 8 bits, or just the sheer amount of work and testing involved in making the fix, but now at least I hope this bug’s better understood.

But, wait, there’s more!

While researching this blog post I looked at some textbooks and asked Zap Andersson, Morgan McGuire, Marc Olano, and others for input. I followed up on the two Blinn articles Marc pointed out to me. I mentioned “A Ghost in a Snowstorm” earlier; the other was “Fun with Premultiplied Alpha.” This article doesn’t discuss alpha filtering problems directly, but points to an earlier Blinn article, “Compositing–Theory” (online here). This one indeed talks about the problem, wading through a few derivations of the right and wrong ways to filter. That’s yet another reason to avoid unassociated values – they won’t filter correctly, e.g., you won’t properly be able to blur a texture with unassociated alphas, something Morgan mentioned to me. Blinn notes how Gouraud interpolation will also fail on unassociated values at the vertices. Put a “green” at a transparent vertex and you’ll get a different rendering than if you put a “black”; premultiplying makes these both “black”, which is the contribution the vertex has to the total shade. Both of these articles are collected in Blinn’s book Dirty Pixels, worth picking up used for cheap.

So, Blinn described this problem back in 1994, but it certainly didn’t sink in for much of the 3D graphics world, and certainly not for interactive rendering. His treatment was pretty equation-intensive and he didn’t talk about what would happen if we did things the wrong way. We all had enough other problems around then, such as gamma-space computations warping the results of shading equations. The PNG format wouldn’t even exist until two years later, so alphas had to come from TIFFs or cutouts from GIFs. For interactive rendering, DOOM came out in 1993, 3dfx’s Voodoo graphics accelerator wouldn’t appear until 1996, and a 24-bit interactive frame buffer was a far-off dream.

Halfway through writing this post today I searched on “premultiplied alpha opengl” to find this blending page that I linked to earlier (and talk about below – it has a bug). Looking at the list of pages returned, the very first hit is John McDonald’s article from almost three years ago. Amazingly, he presents almost exactly the same example, a red opaque texel next to an almost transparent green texel. It kinda makes sense that we’d hit on the same idea, it’s an excellent “see how wrong things can be” case. Anyway, definitely check out his article for a more visual explanation. He himself points to an article by Shawn Hargreaves from 2009, who notes premultiplying gives the correct result and that cutouts then work properly. Shawn also notes in an earlier post some other drawbacks of the bleeding solution I mention, that some codecs and DXT1 compression won’t work with this solution. It took a solid 15 years after Blinn’s article for this alpha problem to be solved again for interactive rendering; Jim Blinn was right, but we weren’t ready before then to need his article.

So, I guess the takeaway is that someone will rediscover this premultiplication fact every three or four years and write a blog post about it. Jim’s article was equation-heavy and didn’t seem relevant to GPUs, Shawn’s involved GPUs but was pretty technical and had no illustrations, John’s was well-illustrated but focused on mipmapping problems. Honestly, I hope my post drives it home and we’re done here, but I suspect not.

Addenda: A few people pointed out that Tom Forsyth explained this problem, the bleeding hack, and the proper solution in a blog post from 2006. Nice, and it fits in with my theory of “we need to rediscover this every 3 years or so.” I probably even read his article back then (I went through a lot of Tom’s writings for Real-Time Rendering) but black fringes around cutouts were way out of my experience at the time – CAD tends to be about solid objects, not cutouts. I wasn’t at a point where it made sense to me. That is why I beat the issue to death in this post and added lots of images, so that even if you the reader don’t care about cutouts now, you might someday remember seeing the black fringing in some post somewhere and know there’s a solution.

One modern text that discusses this problem is Essential Mathematics for Games and Interactive Applications, 3rd edition, which Jim Van Verth (the first author) kindly let me know about. If you “Look Inside” the book, search on “alpha”, and go to page 417, you’ll see the relevant passage. They also have a website with lots of additional articles and resources. In particular, this presentation discusses the same problem from page 43 on, with almost the same example I used! I swear I didn’t plagiarize – I wish I had known about this phenomenon, it would have saved me some confusion. I think the opaque red, almost transparent green case is “just what you do” for an example – make the first texel opaque and set the first color channel, make the second one mostly transparent and set the second color channel. In my initial example I had four texels, red and transparent “black,” but then realized I could boil it down to two and that a slight green would really show off the effect.

The book Real-Time Volume Graphics also covers this problem, drawing a correlation with Gouraud interpolation of transparent vertices; here’s the passage (thanks to a co-worker who recalled it). Note that this same core problem with interpolating unassociated alphas means you need to use premultiplied vertex color values so that the rasterizer properly interpolates the triangle’s color and feeds the correct RGB to the fragment shader. This is another argument for just fixing your shaders so they expect premultiplied values throughout.

There’s an article on cutouts fading away due to alpha blending problems (backup here, in case that link fails), but I can’t say I understand the rendering settings giving this error. Someday when I run into it I probably will… Feel free to enlighten me more about this if you know exactly what’s happening. I do get that at the top of the pyramid branches could fade out almost entirely, as the empty areas dominate, but unless there’s an alpha test with a high setting (as Angelo Pesce noted, mentioned earlier), it seems like that’s the right answer.

I noticed that the OpenGL wiki’s blending page I link to has an error. It says to use these settings if your source (and destination) is premultiplied:

glBlendEquationSeparate(GL_FUNC_ADD,GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE,GL_ONE_MINUS_SRC_ALPHA,GL_ONE,GL_ZERO); // not correct for the general case

This is not general – it assumes the destination’s alpha is zero (which means the destination’s fully transparent), as it simply sets the final alpha to be the same as the source alpha.

The proper settings are:

glBlendEquationSeparate(GL_FUNC_ADD,GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE,GL_ONE_MINUS_SRC_ALPHA,GL_ONE,GL_ONE_MINUS_SRC_ALPHA);

This computes the final alpha as the source alpha’s area plus the remaining area times the destination alpha’s coverage, classic Porter & Duff. Might as well get it right since it costs nothing extra to compute. I tried to change the entry on the wiki, but it was reverted – discussion has commenced.
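Spelled out as Python (a sketch of the blend arithmetic, not GL code), the two choices of destination-alpha factor differ like so:

```python
# Destination alpha produced by the two blend settings, for a source
# alpha a_src composited "over" a destination alpha a_dst.
def alpha_one_minus_src(a_src, a_dst):
    # GL_ONE, GL_ONE_MINUS_SRC_ALPHA: classic Porter & Duff "over".
    return a_src + (1 - a_src) * a_dst

def alpha_zero(a_src, a_dst):
    # GL_ONE, GL_ZERO: simply overwrites with the source alpha.
    return a_src

# A half-transparent layer over an already-opaque destination:
print(alpha_one_minus_src(0.5, 1.0))  # -> 1.0 (still fully covered)
print(alpha_zero(0.5, 1.0))           # -> 0.5 (wrongly reports half coverage)
```

The two agree only when the destination alpha is zero, which is why the wiki’s version looks plausible when rendering onto an empty buffer.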

Epilogue

It turns out there’s a method in WebGL that is exactly what we want. Oddly, it’s only available in WebGL, not OpenGL. A coworker discovered it and tried it out after I distributed my blog post inside Autodesk. He found it mentioned in WebGL Insights; you can read the passage on page 21 here. The mode is described more here, in section #5, and section #6 describes the blending mode if you change your shader to output premultiplied results instead of unassociated values.

By using this mode it’s a two-line quick fix, if you take the low-impact (and admittedly a bit icky, in that you’re avoiding fixing your shader to use premultiplied RGBA values throughout) route of unassociating (aka “unpremultiplying”, which normal people call “dividing by”) the alpha of the texture unit’s result in your existing shader. Specifically:

gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true); // now your PNG texture will be read as a premultiplied value

and then, immediately after you sample the texture, in your shader do something like:

if ( sample.a > 0.0 ) { sample.rgb /= sample.a; } // unassociate the alpha, to avoid rewriting all transparency shaders in the system

which gives you an unassociated RGBA, if that’s indeed what you return from your shader (and it’s likely you do).

Here’s the correctly composited result, from another angle, without using texture bleeding. Happy ending!

fixed

One more resource: this newer blog post on the problem has some lovely visualizations and explanations, if you need more.

And another good resource: this blog post, especially see figures 5 and 6 for the artist workaround of coloring the fully transparent texels, possibly all the way to the edges so that mipmapping will look OK, maybe. One more article here, giving another good illustration of how this happens.

4 thoughts on “GPUs prefer premultiplication”

  1. dahart

    Eric, very nice read, and I think you’re right, premultiplied is usually the right way to go, and it keeps getting rediscovered.

    I recently “discovered” it for the first time, and was stunned how I’d been through a couple decades of film and game shader writing without ever really understanding premultiplied colors! Almost all newbie blending examples on the web use glBlendFunc(SRC_ALPHA, ONE_MINUS_SRC_ALPHA) aka lerp, aka non-premultiplied, aka straight-alpha blending. It’s probably popular because it’s easier and more intuitive, but rarely the best default choice.

    There are a couple extra points to add to your post I think are worth mentioning, and I’ve been thinking of writing this up myself for a while now.

    WebGL’s default blending mode is premultiplied. You have to ask for non-premultiplied. At first I thought that was a weird choice, before I understood all the implications, now I realize it’s the correct default, and why.

    WebGL canvases are always composited over the HTML page. There might even be multiple WebGL canvases on top of each other. (It’s really easy to do and you can do some nice tricks this way). This makes the buffer output of WebGL by default similar to using multi-pass OpenGL to render to a texture and then using that texture to render a scene – not everybody has experience with that kind of setup because it’s a lot more complicated. But now with WebGL it’s the default situation, and if you render transparency and have HTML page content or other canvases underneath, they’ll show through.

    The surprising thing about rendering transparency is that using an OpenGL 2.x or WebGL 1.x pipeline, it is not possible to use non-premultiplied blending with transparent objects to produce a correct alpha channel output. It took some time for that to sink in for me, but let me repeat and demonstrate that using glBlendFunc(SRC_ALPHA,ONE_MINUS_SRC_ALPHA), i.e., non-premultiplied blending, you cannot render the output alpha channel correctly in the presence of transparent objects, whereas premultiplied alpha blending produces the correct result. Even with glBlendFuncSeparate(), you can’t render a correct alpha channel using non-premultiplied blending!!

    Suppose we wanted to look through three panes of green tinted glass that are 50% transparent rgba(0,1,0,0.5), and our blue sky background image is in the HTML DOM behind the canvas because that’s a lot easier to set up than loading the texture. To render just the glass, using unpremultiplied alpha and glBlendFunc(SRC_ALPHA,ONE_MINUS_SRC_ALPHA), we’ll start with a transparent buffer color (0,0,0,0), then add in the panes of glass from back to front:

    0.5 * (0, 0, 0, 0) + 0.5 * (0, 1, 0, 0.5) = (0, 0.5, 0, 0.25)
    0.5 * (0, 0.5, 0, 0.25) + 0.5 * (0, 1, 0, 0.5) = (0, 0.75, 0, 0.375)
    0.5 * (0, 0.75, 0, 0.375) + 0.5 * (0, 1, 0, 0.5) = (0, 0.875, 0, 0.4375)

    Notice that as we keep adding more panes, the green channel approaches 1, and the alpha channel approaches 0.5. If we keep going, we’ll asymptotically arrive at green=1 and alpha=0.5. That’s the right answer for the color channels, but not for alpha! The math makes sense – if I keep interpolating something against 0.5, eventually I’ll get 0.5. But I’m stacking up tinted windows, my alpha channel should be getting more and more opaque and approaching 1!

    Now, let’s do it again using premultiplied colors and glBlendFunc(ONE, ONE_MINUS_SRC_ALPHA). We’ll pre-multiply our green tinted glass color now: (0, 0.5, 0, 0.5)

    1 * (0, 0.5, 0, 0.5) + 0.5 * (0, 0, 0, 0) = (0, 0.5, 0, 0.5)
    1 * (0, 0.5, 0, 0.5) + 0.5 * (0, 0.5, 0, 0.5) = (0, 0.75, 0, 0.75)
    1 * (0, 0.5, 0, 0.5) + 0.5 * (0, 0.75, 0, 0.75) = (0, 0.875, 0, 0.875)

    Now both the green channel and alpha channel approach 1, and when we composite our rendering over something else, the overlapping panes of glass will look appropriately opaque.
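    The two recurrences are easy to check in a few lines of Python (a sketch, with the factors applied the way glBlendFunc applies them: the source factor to the source, ONE_MINUS_SRC_ALPHA to the destination):

```python
# Straight-alpha blending: glBlendFunc(SRC_ALPHA, ONE_MINUS_SRC_ALPHA),
# applied to all four channels.
def straight_blend(dst, src):
    a = src[3]
    return [d * (1 - a) + s * a for d, s in zip(dst, src)]

# Premultiplied blending: glBlendFunc(ONE, ONE_MINUS_SRC_ALPHA).
def premul_blend(dst, src):
    a = src[3]
    return [d * (1 - a) + s for d, s in zip(dst, src)]

glass = [0.0, 1.0, 0.0, 0.5]     # unassociated half-transparent green pane
glass_pm = [0.0, 0.5, 0.0, 0.5]  # the same pane, premultiplied

buf_straight = [0.0, 0.0, 0.0, 0.0]
buf_pm = [0.0, 0.0, 0.0, 0.0]
for _ in range(3):  # stack three panes, back to front
    buf_straight = straight_blend(buf_straight, glass)
    buf_pm = premul_blend(buf_pm, glass_pm)

print(buf_straight)  # alpha stalls toward 0.5: [0.0, 0.875, 0.0, 0.4375]
print(buf_pm)        # alpha heads toward 1.0:  [0.0, 0.875, 0.0, 0.875]
```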

    There it is. Rendering with premultiplied blending is the only way to produce transparent output correctly! And because WebGL is always compositing your output over other things, it makes the most sense to default to premultiplied blending. And of course, if you use premultiplied blending, you need to feed it premultiplied textures. WebGL is making premultiplied images more relevant and more important to understand.

    Luckily, as you pointed out, with WebGL you can load textures of either flavor and convert them in either direction on the fly at load time.

    There are some other interesting points to make, but this comment is way too long, and now that I’ve written half an article, you’ve motivated me to put together some example images and post it all somewhere besides a comment.

  2. dahart

    So I think I goofed a bit. Premultiplied blending isn’t the only way to get a correct output alpha, and unassociated blending isn’t necessarily wrong. You just have to work harder to get the blending right if you use unassociated colors & blending, and it looks like glBlendFuncSeparate() can be used to make it right.

    In any case, here’s a mockup of what I’m talking about. There are some toggles in the code & fragment shader to change how the blending works and how the canvas is composited:

    https://jsfiddle.net/dahart/m4z61tz1/

    The stack of tinted windows on top are drawn using unassociated colors & blending, and the stack on the bottom uses premultiplied colors & blending.

    Since the canvas is over the page, and the big red rectangle is opaque, then if alpha is correct, you shouldn’t be able to see any clouds or words through the red rectangle.

  3. Mauricio

    You alluded to sRGB texture sampling support on the GPU. This feature fortunately does not affect the alpha channel, i.e. it is assumed to be linear by the hardware. However, it seems that linearization really should be done *before* premultiplication. How can this be done? You could do it all on the CPU: load the image, linearize and premultiply each pixel, and then create the GPU texture. But that means no longer using the native sRGB texture sampling support, and also requires higher precision than 8-bits per component.
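    One way to sketch that CPU path in Python (the helper names are illustrative; you’d then upload the floats to a 16-bit or float texture rather than relying on the sRGB sampler):

```python
# sRGB 8-bit value to linear float (the IEC 61966-2-1 transfer function).
def srgb_to_linear(c8):
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# Linearize the color channels (alpha is already linear), then premultiply,
# keeping full float precision for upload to a 16-bit or float texture.
def linearize_and_premultiply(rgba8):
    r, g, b, a8 = rgba8
    a = a8 / 255.0
    return [srgb_to_linear(c) * a for c in (r, g, b)] + [a]

print(linearize_and_premultiply([255, 0, 0, 127]))
```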

Comments are closed.