Morphological Antialiasing

An Intel research group has put their papers and code up for download. I had asked Alexander Reshetov about his morphological antialiasing scheme (MLAA), as it sounded interesting – it was! He generously sent a preprint, answered my many questions, and even provided source code for a demo of the method. What I find most interesting about the algorithm is that it is entirely a post-process. Given an image full of jagged edges, it searches for such edges and blends across them accordingly. There are limits to such reconstruction, of course, but the idea is fascinating, and most of the time the resulting image looks much better. Anyway, read the paper.
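To give a flavor of the post-process idea, here is a highly simplified sketch (my own, not the paper's code): scan a grayscale image for horizontal separation lines between rows of differing intensity, measure each run of edge pixels, and blend across the run with weights that grow toward its ends, roughly approximating the pixel coverage a straight reconstructed edge would give. Real MLAA classifies L-, U-, and Z-shaped patterns, handles vertical edges, and computes exact trapezoid areas; none of that is shown here.

```python
import numpy as np

def mlaa_horizontal(img):
    """Blend along horizontal edge runs in a grayscale image.

    A toy sketch of the MLAA idea: find where row y and row y+1
    differ, measure each contiguous run of such edge pixels, and
    blend the two rows with a weight that is zero at the run's
    midpoint and largest at its ends -- crudely mimicking the
    coverage of a straight edge revectorized through the run.
    """
    out = img.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h - 1):
        edge = img[y] != img[y + 1]   # separation line between rows
        x = 0
        while x < w:
            if not edge[x]:
                x += 1
                continue
            # Find the contiguous run of edge pixels [x, end).
            end = x
            while end < w and edge[end]:
                end += 1
            n = end - x
            for i in range(n):
                t = (i + 0.5) / n        # position along the run, in (0, 1)
                wgt = abs(t - 0.5)       # 0 at midpoint, ~0.5 at the ends
                a = img[y, x + i]
                b = img[y + 1, x + i]
                # Blend each side toward the other by the coverage weight.
                out[y, x + i] = (1 - wgt) * a + wgt * b
                out[y + 1, x + i] = (1 - wgt) * b + wgt * a
            x = end
    return out
```

On a hard black-to-white step this leaves the middle of the edge alone and softens it progressively toward the corners, which is the visual signature of the technique.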

As an example, I took a public domain image from the web, converted it to a bitonal image so it would be jaggy, then applied MLAA to see how the reconstruction looked. The method works on full color images (though it then faces more challenges when detecting edges); I'm showing a black and white version so that the effect is obvious. So, here's a zoomed-in view of the jaggy version:

zoomed, no antialiasing (B&W)
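The bitonal conversion mentioned above is just a global threshold; a minimal sketch (the threshold value of 128 is my assumption, not from the post):

```python
import numpy as np

def to_bitonal(gray, threshold=128):
    """Convert a grayscale image (values 0-255) to two levels.

    Every pixel snaps to pure black or pure white, which is exactly
    what produces the hard staircase edges MLAA is asked to repair.
    """
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)
```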

And here are the two smoothed versions:

zoomed, original

zoomed, MLAA

Which is which? It's actually pretty easy to figure out: the original, on the left, has some JPEG artifacts around the edges; the MLAA version, to the right, doesn't, since it was derived from the "clean" bitonal image. All in all, they both look good.

Here’s the original image, unzoomed:

original

The MLAA version:

MLAA

For comparison, here’s a 3×3 Gaussian blur of the jaggy image; blurring helps smooth edges (at a loss of overall crispness), but does not get rid of jaggies. Note the horizontal vines in particular show poor quality:

3x3 Gaussian blur
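For reference, a 3×3 Gaussian blur is a separable convolution with the kernel [1 2 1]/4 in each direction. A sketch of how the comparison image could be produced (edge-replicated borders are my choice; the post doesn't say how borders were handled):

```python
import numpy as np

def gaussian_blur_3x3(img):
    """Apply a 3x3 Gaussian blur via two separable [1 2 1]/4 passes.

    Blurring averages each pixel with its neighbors, softening the
    staircase steps but losing crispness everywhere. Unlike MLAA it
    never reconstructs the underlying straight edge, so the jaggies
    survive as periodic bumps along nearly-horizontal lines.
    """
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    padded = np.pad(img.astype(np.float64), 1, mode='edge')
    # Horizontal pass, then vertical pass (the kernel is separable).
    tmp = (k[0] * padded[:, :-2] + k[1] * padded[:, 1:-1]
           + k[2] * padded[:, 2:])
    out = (k[0] * tmp[:-2, :] + k[1] * tmp[1:-1, :]
           + k[2] * tmp[2:, :])
    return out
```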

Here’s the jaggy version derived from the original, before applying MLAA or the blur:

jaggy B&W version


  1. PolyVox

    Thanks Eric, that's a cool technique. Do you know if they have a GPU implementation? I only saw reference to the CPU one, but I guess that's not surprising as it's Intel.

  2. Eric

    They don’t have a GPU-based implementation yet, AFAIK, though I know there’s interest in making one.

  3. PolyVox

    Let's hope so – it would be very interesting to see how it compares to the other techniques out there. Especially given the constraints on traditional antialiasing when using deferred shading.

  4. vence

    A GPU implementation has been released. See the SIGGRAPH talk 'Games and Real Time' and this webpage: http://igm.univ-mlv.fr/~biri/mlaa-gpu/

  5. IrYoKu

    A faster GPU implementation can be found here:
    http://www.iryokufx.com/mlaa/

    Typical execution times are 3.79 ms on Xbox 360 and 0.44 ms on an NVIDIA GeForce 9800 GTX+, for a resolution of 720p. Memory footprint is 2x the size of the backbuffer on Xbox 360 and 1.5x on the 9800 GTX+. Meanwhile, 8x MSAA takes an average of 5 ms per image on the same GPU at the same resolution, 1180% longer (i.e. processing times differ by an order of magnitude).
