Interesting bits

I’ve been collecting links to things for the blog via del.icio.us. Let’s go:

Antialiasing thick lines by using textures is an old technique. Areakkusu’s site is nice in that it has good examples and code.
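For anyone who hasn’t run across it, here is a rough CPU-side sketch of one common variant of the technique: the thick line is expanded into a quad, and a small 1D texture holding a filtered cross-section of the line supplies per-pixel coverage. All names and constants here are illustrative, not taken from Areakkusu’s code.

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Build a 1D "coverage" texture for a line of the given half-width (in pixels):
// fully opaque near the line's center, falling off to zero over about one pixel.
std::vector<float> buildLineCoverageTexture(int texels, float halfWidthPx)
{
    // The texture spans a little beyond the line so coverage reaches zero
    // right at the quad's edge.
    float halfExtent = halfWidthPx + 0.5f;
    std::vector<float> tex(texels);
    for (int i = 0; i < texels; ++i) {
        // Distance from the line's center axis, in pixels, at this texel.
        float d = std::fabs((i + 0.5f) / texels * 2.0f - 1.0f) * halfExtent;
        // 1 inside the line, 0 outside, with a smooth one-pixel transition.
        tex[i] = std::fmin(std::fmax(halfWidthPx - d + 0.5f, 0.0f), 1.0f);
    }
    return tex;
}

// Expand a 2D segment into a quad; the v texture coordinate (0..1) runs across
// the line's width and indexes the coverage texture above.
void buildLineQuad(Vec2 a, Vec2 b, float halfWidthPx, Vec2 pos[4], float v[4])
{
    float halfExtent = halfWidthPx + 0.5f;
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    // Unit perpendicular, scaled to the quad's half-extent.
    Vec2 n = { -dy / len * halfExtent, dx / len * halfExtent };
    pos[0] = { a.x + n.x, a.y + n.y };  v[0] = 0.0f;
    pos[1] = { a.x - n.x, a.y - n.y };  v[1] = 1.0f;
    pos[2] = { b.x - n.x, b.y - n.y };  v[2] = 1.0f;
    pos[3] = { b.x + n.x, b.y + n.y };  v[3] = 0.0f;
}
```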

The Level of Detail blog has a great pointer to Iñigo Quilez’s amazing “Slisesix” demo. “Demo” as in “demoscene”: his program is a mere 4 KB in size. It’s not animated and not real-time, but it shows how distance fields can be used to approximate ambient occlusion. Definitely check out all the links: Alex Evans (of LittleBIGPlanet) has a worthwhile talk, and Iñigo’s presentation is even better, with good technical content and real-time programs running inside the slides.
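To give a feel for the distance-field ambient-occlusion idea, here is a minimal sketch in the spirit of what those presentations describe: sample the signed distance field at a few points along the surface normal, and treat any shortfall between the measured distance and the sample distance as occlusion. The scene SDF, sample count, and falloff constants here are my own illustrative choices, not code from the demo.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)   { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 scale(Vec3 a, float s){ return { a.x * s, a.y * s, a.z * s }; }

// Example scene: a unit sphere at the origin sitting on the plane y = -1.
float sceneSDF(Vec3 p)
{
    float sphere = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
    float ground = p.y + 1.0f;
    return std::fmin(sphere, ground);
}

// Ambient-occlusion estimate at point p with normal n: where nearby geometry
// crowds in, the distance field returns less than the sample distance, and
// that difference darkens the result. Nearer samples are weighted more.
float ambientOcclusion(Vec3 p, Vec3 n)
{
    const int   samples = 5;
    const float delta   = 0.1f;   // step between samples along the normal
    float occlusion = 0.0f;
    float falloff   = 1.0f;
    for (int i = 1; i <= samples; ++i) {
        float sampleDist = i * delta;
        float sdf = sceneSDF(add(p, scale(n, sampleDist)));
        occlusion += (sampleDist - sdf) * falloff;
        falloff *= 0.5f;
    }
    return std::fmax(0.0f, std::fmin(1.0f, 1.0f - occlusion));
}
```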

I’d rather avoid logrolling in this blog, but did want to mention enjoying Christer Ericson’s post on graphical shader systems. I have to agree that such systems are bad for creating efficient shaders, but these tools do at least allow a wider range of people to experiment and explore. There are a lot of worthwhile followup comments on this thread.

Oogst has a clever trick he calls interior mapping, for rendering walls, floors, and ceilings for buildings seen from the outside. Define a texture to be used for each interior element, and have the pixel shader compute from the eye direction what would be seen inside. There’s no actual geometry; it’s all just computing the ray intersection using (wait for it) a floor function. Humus has demo code available for this technique, using DirectX 10. Admittedly, the various tiles repeat and there are other limits, but actual interiors are vastly superior to the usual dirty or reflective windows currently used in games, with no extra geometry added.
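To make the idea concrete, here is a small CPU-side sketch of the ray setup; a fragment shader would do the same math per pixel. The room spacing and function names are my own illustrative assumptions, not Oogst’s or Humus’s code.

```cpp
#include <cfloat>
#include <cmath>

struct Vec3 { float x, y, z; };

// Distance along the ray to the next plane perpendicular to one axis.
// Imaginary room planes sit at integer multiples of 'spacing'; which one is
// "next" depends on the sign of the ray direction, hence the floor/ceil.
static float distanceToNextPlane(float pos, float dir, float spacing)
{
    if (std::fabs(dir) < 1e-6f) return FLT_MAX;   // ray parallel to these planes
    float cell  = pos / spacing;
    float plane = (dir > 0.0f) ? std::ceil(cell) : std::floor(cell);
    // If we start exactly on a plane (the facade itself), step one plane inward.
    if (plane == cell) plane += (dir > 0.0f) ? 1.0f : -1.0f;
    return (plane * spacing - pos) / dir;
}

// Returns the interior point seen through the facade at 'entry' along view
// direction 'dir'. Whichever plane is hit first (side wall, floor/ceiling,
// or back wall) would select the interior texture and its coordinates.
Vec3 interiorHit(Vec3 entry, Vec3 dir, float roomSize)
{
    float tx = distanceToNextPlane(entry.x, dir.x, roomSize);  // side walls
    float ty = distanceToNextPlane(entry.y, dir.y, roomSize);  // floor/ceiling
    float tz = distanceToNextPlane(entry.z, dir.z, roomSize);  // back wall
    float t  = std::fmin(tx, std::fmin(ty, tz));
    return { entry.x + dir.x * t, entry.y + dir.y * t, entry.z + dir.z * t };
}
```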

Bavoil and Sainz have a new approach for Screen-Space Ambient Occlusion, using a more elaborate form of horizon mapping: http://developer.nvidia.com/object/siggraph-2008-HBAO.html. Code’s available in NVIDIA’s DX 10 SDK.

If you missed Jon Olick’s talk at SIGGRAPH about voxel octree representation, Timothy Farrar has a summary. Personally, I think Jon’s work is very much that: research, not something that is immediately practical. But I love seeing how changing capabilities and increased flexibility can lead to different approaches.

On Amazon: 4 graphics books for the price of 2, minus the papery bits. Pharr and Humphreys’s “Physically Based Rendering” (PBR) and Luebke’s “Level of Detail for 3D Graphics” are certainly worthwhile; the other two I don’t know about (though they look worthwhile and are well rated). I don’t know a thing about the electronic media used; I’m guessing the books are DRM’ed, not naked PDFs. Searchable is certainly nice. It’s too bad you can’t just buy the ones you want (I smell a marketing department holding some “what bundle can we get them to pay for?” meetings, given the negligible physical cost). Still, I did notice something on Amazon I hadn’t seen before, offered for each book except PBR: “Upgrade this book for $18.39 more, and you can read, search, and annotate every page online.” You can also upgrade books you’ve previously purchased on Amazon.

On Gamasutra, an article summarizing DirectX 11. I liked it: to the point, and with some useful figures.

Every once in a while someone will say he has a new graphics rendering method that’s awesome, but won’t explain it for some reason (usually involving money or fame). Here’s one, from Sunfish Studio: no micropolygons, no point sampling. OK, so what does that leave? Voxels? If anyone knows what this is about, please comment; I’m curious.

GameDeveloperTools.com is a new site that tracks news and has users rate books. To be honest, a lot more voting needs to happen to make the ratings useful; I’d stick with Amazon for now. The main use is that you can look at specific categories, which are a bit better than Amazon’s somewhat random sorting of graphics books (e.g., our book is in three categories on Amazon, competing against artists’ books on using mental ray and RenderMan).

Finally, this, well, this is not interactive graphics, but is just so cool: parking signs understandable from only certain locations.

3 thoughts on “Interesting bits”

  1. Mauricio

    Ah, but are the parking garage signs equally readable sitting in a Hummer (high) or a VW Beetle (low)? Maybe they don’t have that problem in Melbourne… 🙂

  2. Marco Salvi

    Eric,

    No micropolygons, no point sampling… that could be anything that is not evaluated through point sampling: triangles, implicit surfaces, voxels, and so on.

    I literally spent 30 seconds on ep.espacenet.com and found a few patents that are probably related to Sunfish Studio’s rendering technology. To “sample” primitives they use something called interval analysis (never heard of it before).

    Here’s a list of patents:
    http://v3.espacenet.com/results?sf=a&DB=EPODOC&PA=sunfish&PGS=15&CY=ep&LG=en&ST=advanced&IN=HAYES+NATHAN

    Marco

  3. Timothy Farrar

    I think the Sunfish Studio rendering technique works by adaptive hierarchical subdivision of the scene, working in a 2D image-space kd-tree. Guessing they compute parameters such as depth, color, and position for each of the corners of the regions of the tree subdivision, then, knowing the deltas of those parameters, adaptively choose a new division based on the most important ones (such as depth, to find object boundaries). At some point the subdivision is smaller than a pixel, and they only have to subdivide subpixel regions far enough to get a semi-accurate picture of the image within that pixel, which they then use to compute the pixel’s output color.
