
FXAA Rules, OK?

So there are those people out there who punch other people’s punchlines. Someone’s three quarters of the way through telling a joke, and a listener says, “oh, right, this one ends ‘to get to the other side’”. You don’t want to be that guy, but that’s a little how I feel writing about FXAA, given that there’s a whole course at SIGGRAPH next month about these sorts of antialiasing techniques. I blame Morgan McGuire’s Twitter feed, as he (and 17 others) retweeted Timothy Lottes’ posting that he had released shader code for FXAA. I’d seen FXAA mentioned before; NVIDIA put it in their DirectX 11 SDK, which, frankly, is sadly misleading – the implication is that it needs GTX 200-level hardware and above, when in fact it works on DirectX 9 shader model 3.0 hardware, GLSL 1.20, the Xbox 360, and the PlayStation 3, to name a few, and is optimized in various ways for newer GPUs. Anyway, seeing this shader code available, I was interested to try it out. Morgan mentioning that he liked it a lot got me much more interested. A few hours later…

So what the heck am I blathering about? To start, there are a number of these ??AA methods, all based on post-processing a color buffer (and sometimes normal and depth buffers, too). MLAA, morphological antialiasing, was the first applied to 3D images, back in 2009. The basic idea is “find edges and smooth them”. The devil’s in the details, which is what the SIGGRAPH course will delve into (and I’ll certainly attend): How wide an area do you search to find a straight edge? How do you deal with curves and corners? How do you avoid oversmoothing thin edges, blurring them twice? How does it look frame to frame? And, most important if you want to use these methods interactively, how do you do all this efficiently?
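To make the starting point concrete, here is a minimal sketch in GLSL of that shared first step – just the luma-difference edge test, with everything that actually distinguishes the methods left out. The texture names and the threshold value are mine, not from any particular paper:

    #version 120
    // Edge-detection sketch: mark a pixel as edge-adjacent when its luma
    // differs enough from the neighbor to its left or directly above it.
    uniform sampler2D colorTex;   // the rendered, aliased image
    uniform vec2 texelSize;       // 1.0 / screen resolution
    const float THRESHOLD = 0.1;  // tune to taste

    float luma(vec2 uv) {
        // Standard Rec. 601 weights; any reasonable luma approximation works.
        return dot(texture2D(colorTex, uv).rgb, vec3(0.299, 0.587, 0.114));
    }

    void main() {
        vec2 uv = gl_TexCoord[0].xy;
        float c = luma(uv);
        float dLeft = abs(c - luma(uv - vec2(texelSize.x, 0.0)));
        float dTop  = abs(c - luma(uv - vec2(0.0, texelSize.y)));
        // Edge mask: red = vertical edge on the left side of the pixel,
        // green = horizontal edge along its top.
        gl_FragColor = vec4(step(THRESHOLD, dLeft), step(THRESHOLD, dTop), 0.0, 0.0);
    }

The interesting (and expensive) parts – classifying edge shapes and computing coverage – all build on a mask like this one.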

I’ve wanted an MLAA-like solution for two years, since before HPG 2009, when I noticed the MLAA paper on Ke-Sen’s pages and talked with Alexander Reshetov about it (he was very helpful and forthcoming). I even had a junior programmer attempt to implement it in a shader, but the implementation was quite slow (due to a very wide search area) and ultimately flawed, and we didn’t have time to get back to it. Last year at SIGGRAPH there was a talk by a group in France, led by Venceslas Biri and Adrien Herubel, about implementing MLAA on the GPU, and they released source code. I spent a bit of time with their code, but it was developed on Linux and I had problems getting it to work properly on Windows. My “I’ll just take a few hours and see where I get” time was gone, and still no easy solution. There were other interesting bits out there, like the article in GPU Pro 2, “Practical Morphological Anti-Aliasing,” which even has a GitHub project, but there were separate versions for DX9 and DX10 (and none for OpenGL) and lots of files involved, so I didn’t dig in. Humus also has a code sample, but I was still shy about committing more time. (Also, his technique needs geometric information, and I wanted to antialias NPR edges formed by dilation, i.e., image processing, which have no underlying geometry.)

Then the FXAA shader code was released: well-commented, with clear integration instructions, needing only a color buffer, and all in one shader file. FXAA is not the solution to all of life’s problems (or is it?), but for me, it’s wonderful. It took me all of an hour to fold it into our system as a shader (and then another three hours debugging why the heck it wasn’t registering properly – our shader system turns out to be very particular about path names). The code runs on just about everything. There are control knobs for the fiddlers out there, but I haven’t messed with them – it looks great out of the box to me.

So, after all that breathless buildup, here’s the punchline:

On the left is your typical jaggy image; on the right is FXAA. Sure, it’s not perfect – nearly vertical lines can look considerably better with a wider edge search area (as seen in MLAA), dropouts could be picked up by supersampling or MSAA, and thin lines can have problems – but this shader gives a huge improvement with no extra samples and just one pretty quick pass (plus – full disclosure – a preprocess of computing the luminance/luma (grayscale) and shoving it into the alpha channel). Less than 1 millisecond of cost per frame on a GTX 480? Works on sRGB and linear? Code’s in the public domain? Sign me up!
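For the curious, that luma preprocess is a trivial full-screen pass. Here’s a sketch, assuming the scene was just rendered into a texture; the exact luma formula Lottes suggests may differ, so the standard Rec. 601 weights below are just a reasonable stand-in:

    #version 120
    // Copy the color buffer and stash luma in alpha, where FXAA looks for it.
    uniform sampler2D colorTex;

    void main() {
        vec3 rgb = texture2D(colorTex, gl_TexCoord[0].xy).rgb;
        gl_FragColor = vec4(rgb, dot(rgb, vec3(0.299, 0.587, 0.114)));
    }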

See lots more examples on Timothy Lottes’ page. Read his whitepaper for algorithm details and his newer posts for tweaks and improvements. An easy-to-use demo of an earlier version of his shader can be downloaded here – just hit the space bar to toggle FXAA on and off. Enjoy!

Visual Stuff

After all the heavy lifting Naty’s been doing in covering conferences, I thought I’d make a light posting of fun visual stuff.

The first one’s not particularly visual; I include it just because the cover and book description were put on the web only a few days ago:

GPU Pro cover

In short, the ShaderX series has moved publishers, from Charles River Media to A.K. Peters. Unfortunately for everyone else in the world, CRM retains the rights to the ShaderX name, hence the confusing rename. This book is ShaderX8, under a new title.

This resource may prove handy: a map of game studios and educational institutions, searchable by state, city, etc. That said, it’s a bit funky: search by “Massachusetts” and you get a few reasonable hits, plus the Bermuda Triangle. Search on “MA” for the state and you get lots of additional hits, mostly mall stores. Meanwhile, major developers like Harmonix (in Cambridge) don’t show up at all. So take it with a grain of salt, but it might turn up a place or two you wouldn’t have found otherwise.

A few weeks back I passed on a link from Morgan McGuire’s worthwhile Twitter blog (the only good use I’ve seen for Twitter so far) for a business-card sized ray tracer created by Andrew Kensler. In case you were too busy to actually compile and run this tiny piece of code, here’s the answer, computed in about a minute, sent on to me by Mauricio Vives. Note the depth of field and soft shadows:

Andrew Kensler's ray tracer

Speaking of ray tracing, I noticed some GPU-side ray tracers are available for the iPhone 3GS from Angisoft:

Julia Set ray traced

With the recent posting on Morphological Antialiasing, Matt Pharr pointed me at this cool Wikipedia page on scaling up pixel art. To whet your appetite, here’s an example from that page, the left side being the original image used to generate the right:

Wikipedia pixel art scaling example

In a similar vein, I was highly impressed by the examples created by Potrace, a free, GPL’ed package for deriving Bézier curves from raster images. Here’s an example:

Original, raster head
Smoothed head with Potrace

See more examples on Peter Selinger’s Potrace examples page. Doubly impressive is that Peter also carefully describes the algorithms used in the process.

I enjoy collecting images of reality that look like they have rendering artifacts. Here’s one from photos by Morgan McGuire of the Seattle Public Library. The ground shadows look undersampled and banded, as if someone were trying to get soft shadows by just adding a bunch of point light sources. What’s great is that reality is allowed to get away with artifacts – if this effect were seen in a synthetic image, it would come across as unconvincing.

Seattle Public Library by Morgan McGuire

The best thing about reality is that it’s real, not photoshopped. I also enjoy photos where reality looks like computer graphics. Here’s a fine example by Benedict Radcliffe from this entertaining collection:

Wireframe Toyota by Benedict Radcliffe

My one non-visual link for this posting is to Jos Stam’s essay on how photography and photorealism are not necessarily the best way to portray reality.

There are tons of visual toys on the web, a few in true 3D. Some (sent on by John McCormack) I played with for up to a whole minute or more: ECO ZOO – click on everything, and know that it’s all 3D; don’t forget to rotate around. The author’s bio and info are at ROXIK – needs more polygons, but click and drag on the face. In the end, give your eyes a rest with this instant screen saver (actually, it’s also a bit interactive). This last was done using Papervision3D, an open source library that controls 3D in Flash. More demos here. Maybe there’s actually something to this idea of 3D on the web after all… nah, crazy dream.

OK, I’m done with things that are even vaguely educational. Here’s a video, 8 Bit Trip, that’s been making the rounds; a little more info here. Not fantastically entertaining, but I admire the amazing dedication to stop-motion animation. 1500 hours?!

Art: Xia Xiaowan makes sculptures by a method reminiscent of volume rendering techniques:

Xia Xiaowan sculpture

More at Google Images.

The Mighty Optical Illusions blog is a great place to get a feed of new illusions. Here are two posts I particularly liked: spinning man (sorry, you’ll actually have to click that link to see it) and more from Kitaoka, e.g.

Kitaoka's rotating snake planets

I love that new illusions are still being developed. I found this next one here; unfortunately, to quote Tom Parmenter, “digital technology is the universal solvent of intellectual property rights” (Copyright 1995). No credit is given at that site, so I don’t know who actually made this one, but it’s lovely:

4 perfectly round circles

One last illusion, from here (again, author unknown), included since it’s such a retina-burner:

Flying City

If you hanker for something real and physical after all these, you might consider making a pseudoscope (instructions here). To be honest, I tried, and I’ll tell you that mirrors from the local craft store are truly bad for this project, so I can’t say I’ve seen the desired effect yet. The next step for me is finding a good, cheap source for front-surface mirrors (the link in the article is broken) – if anyone has suggestions, please let me know.

Morphological Antialiasing

An Intel research group has put their papers and code up for download. I had asked Alexander Reshetov about his morphological antialiasing scheme (MLAA), as it sounded interesting – it was! He generously sent a preprint, answered my many questions, and even provided source code for a demo of the method. What I find most interesting about the algorithm is that it is entirely a post-process. Given an image full of jagged edges, it searches for those edges and blends the pixels along them accordingly. There are limits to such reconstruction, of course, but the idea is fascinating, and most of the time the resulting image looks much better. Anyway, read the paper.
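To give a flavor of that post-process structure (this is not Reshetov’s actual code – see the paper for the real weight computation), the final blending step can be sketched as a GLSL pass like the following, assuming an earlier pass has written per-pixel coverage weights into a texture; the names and weight layout are mine:

    #version 120
    // Blending sketch: mix each pixel with the neighbor across each detected
    // edge, by however much the smoothed edge covers this pixel.
    uniform sampler2D colorTex;   // aliased source image
    uniform sampler2D weightTex;  // rgba = blend weights toward left/top/right/bottom
    uniform vec2 texelSize;       // 1.0 / resolution

    void main() {
        vec2 uv = gl_TexCoord[0].xy;
        vec4 w = texture2D(weightTex, uv);
        vec4 c = texture2D(colorTex, uv);
        c = mix(c, texture2D(colorTex, uv - vec2(texelSize.x, 0.0)), w.r);
        c = mix(c, texture2D(colorTex, uv - vec2(0.0, texelSize.y)), w.g);
        c = mix(c, texture2D(colorTex, uv + vec2(texelSize.x, 0.0)), w.b);
        c = mix(c, texture2D(colorTex, uv + vec2(0.0, texelSize.y)), w.a);
        gl_FragColor = c;
    }

All of the cleverness lives in how those weights are computed from the detected edge shapes.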

As an example, I took a public domain image from the web, converted it to a bitonal image so it would be jaggy, then applied MLAA to see how the reconstruction looked. The method works on full-color images, too (though it then faces more challenges in detecting edges); I’m showing a black-and-white version so that the effect is obvious. So, here’s a zoom-in of the jaggy version:

zoomed, no antialiasing (B&W)

And here are the two smoothed versions:

zoomed, original
zoomed, MLAA

Which is which? It’s actually pretty easy to figure out: the original, on the left, has some JPEG artifacts around the edges; the MLAA version, on the right, doesn’t, since it was derived from the “clean” bitonal image. All in all, they both look good.

Here’s the original image, unzoomed:

original

The MLAA version:

MLAA

For comparison, here’s a 3×3 Gaussian blur of the jaggy image; blurring helps smooth edges (at a loss of overall crispness), but does not get rid of jaggies. Note that the horizontal vines in particular show poor quality:

3x3 Gaussian blur
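For reference, the blur used above is just the 1-2-1 binomial kernel applied in both axes, with the nine weights summing to 16. A single-pass GLSL sketch:

    #version 120
    // 3x3 Gaussian blur: outer product of the 1-2-1 kernel, normalized by 16.
    uniform sampler2D colorTex;
    uniform vec2 texelSize;   // 1.0 / resolution

    void main() {
        vec2 uv = gl_TexCoord[0].xy;
        float k[3] = float[3](1.0, 2.0, 1.0);
        vec4 sum = vec4(0.0);
        for (int y = -1; y <= 1; ++y)
            for (int x = -1; x <= 1; ++x)
                sum += k[x + 1] * k[y + 1] *
                       texture2D(colorTex, uv + vec2(float(x), float(y)) * texelSize);
        gl_FragColor = sum / 16.0;
    }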

Here’s the jaggy version derived from the original, before applying MLAA or the blur:

jaggy B&W version

Interesting bits

I’ve been collecting links for the blog via del.icio.us. Let’s go:

Antialiasing thick lines by using textures is an old technique. Areakkusu’s site is nice in that it has good examples and code.

The Level of Detail blog has a great pointer to the amazing demo Slisesix. “Demo” as in “demoscene,” where his program is a mere 4K bytes in size. It’s not animated and not real-time, but it shows how distance fields can be used to approximate ambient occlusion. Definitely check out all the links: Alex Evans (of LittleBigPlanet) has a worthwhile talk, and Iñigo’s presentation is even better: good technical content and real-time programs running inside the slides.

I’d rather avoid logrolling in this blog, but did want to mention enjoying Christer Ericson’s post on graphical shader systems. I have to agree that such systems are bad for creating efficient shaders, but these tools do at least allow a wider range of people to experiment and explore. There are a lot of worthwhile followup comments on this thread.

Oogst has a clever trick he calls interior mapping, for rendering walls, floors, and ceilings of buildings seen from the outside. Define a texture to be used for each interior element, and have the pixel shader compute from the eye direction what would be seen inside. There’s no actual geometry; it’s all just computing the ray intersection using (wait for it) a floor function. Humus has demo code available for this technique, using DirectX 10. Admittedly, the various tiles repeat and there are other limits, but actual interiors are vastly superior to the usual dirty or reflective windows currently used in games, and no extra geometry is added.
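Here is roughly how the ceiling-and-floor part of the idea works, as a GLSL sketch (the names are mine, and a real implementation – see Humus’ demo – also intersects the side walls and keeps the nearest hit):

    #version 120
    // Interior mapping sketch, ceilings/floors only: continue the eye ray
    // past the facade and intersect it with an imaginary stack of horizontal
    // planes spaced roomHeight apart, selected with (wait for it) floor().
    // Grazing rays (dir.y near 0) are ignored in this sketch.
    uniform sampler2D ceilingTex; // tiling ceiling/floor texture
    uniform float roomHeight;     // vertical spacing between planes
    uniform vec3 eyePos;          // camera position, same space as worldPos
    varying vec3 worldPos;        // fragment position on the building facade

    void main() {
        vec3 dir = normalize(worldPos - eyePos);
        // Rays heading up hit the ceiling plane above the entry point;
        // rays heading down hit the floor plane below it.
        float cell = floor(worldPos.y / roomHeight);
        float planeY = (dir.y > 0.0 ? cell + 1.0 : cell) * roomHeight;
        float t = (planeY - worldPos.y) / dir.y;      // ray-plane distance
        vec3 hit = worldPos + t * dir;
        gl_FragColor = texture2D(ceilingTex, hit.xz); // repeat wrap does the tiling
    }

The side walls work the same way, using the x and z components of the ray instead.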

Bavoil and Sainz have a new approach for Screen-Space Ambient Occlusion, using a more elaborate form of horizon mapping: http://developer.nvidia.com/object/siggraph-2008-HBAO.html. Code’s available in NVIDIA’s DX 10 SDK.

If you missed Jon Olick’s talk at SIGGRAPH about voxel octree representation, Timothy Farrar has a summary. Personally, I think Jon’s work is very much that – research, not something immediately practical – but I love seeing how changing capabilities and increased flexibility can lead to different approaches.

On Amazon: 4 graphics books for the price of 2, minus the papery bits. Pharr and Humphreys’ “Physically Based Rendering” (PBR) and Luebke’s “Level of Detail for 3D Graphics” are certainly worthwhile; the other two I don’t know about (though they look worthwhile and are well-rated). I don’t know a thing about the electronic media used; I’m guessing the books are DRM’ed, not naked PDFs. Searchability is certainly nice. It’s too bad you can’t just buy the ones you want (I smell a marketing department having some “what can we get them to pay for in which bundle?” meetings, given the negligible physical cost), but I did notice an interesting thing on Amazon I hadn’t seen before, for each book except PBR: “Upgrade this book for $18.39 more, and you can read, search, and annotate every page online.” You can also upgrade books you’ve previously purchased on Amazon.

On Gamasutra, an article summarizing DirectX 11. I liked it: to the point, and with some useful figures.

Every once in a while someone will say he has a new graphics rendering method that’s awesome, but won’t explain it for some reason (usually involving money or fame). Here’s one, from Sunfish Studio: no micropolygons, no point sampling. OK, so that leaves – what? – voxels? If anyone knows what this is about, please comment; I’m curious.

GameDeveloperTools.com is a new site that tracks news and has users rate books. To be honest, a lot more voting needs to happen to make the ratings useful – I’d stick with Amazon for now. The main use is that you can look at specific categories, which are a bit better organized than Amazon’s somewhat random sorting of graphics books (e.g., our book is in three categories on Amazon, competing against artists’ books on using mental ray and RenderMan).

Finally, this, well, this is not interactive graphics, but it’s just so cool: parking signs understandable only from certain locations.