ray tracing


Seven more:

  • Michael Abrash has an in-depth article on rasterization on Larrabee. Perhaps a little too in-depth at times; just skim past the assembly instructions. I also found myself asking, “why do that?” – the key is to just keep reading. He tries to make his examples simple and comprehensible, but at the cost of sometimes feeling like he’s oversolving the problem. He isn’t; it’s just that the solutions are in fact used in different circumstances in order to be efficient.
  • SIGGRAPH has an interactive rendering event summary page. This page is more for the art production side of things, though; Naty’s summaries of the courses, talks, and production sessions are more comprehensive and more useful for programmer attendees.
  • NVIDIA has a number of events they’re involved in at SIGGRAPH 2009. Here’s the list.
  • I love this sort of madness: a business-card ray tracer that does depth of field.
  • Accumulated SSAO: the idea of reprojection – reusing previous results by finding where they lie in this frame’s view – is one that seems a tad expensive for interactive rendering. It’s hard to know anything about performance and quality from this page, but I thought it was interesting to see; a sketch of the basic reprojection step follows this list.
  • I mentioned Processing in the last post. Another language-related resource for graphics and game programming is pygame, a set of Python modules for writing games. A friend said he found this system to be pretty great, that he could whip up a fairly involved game idea in a few hours.
  • Scribblenauts sounds like the coolest game that will ever come out, period. Even if it’s only 1/10th as good as the previews read, it looks to be pretty darn entertaining.
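
As promised, here’s a minimal sketch of the reprojection step itself – mine, not from the Accumulated SSAO page, and CPU-side for clarity (real versions live in shaders and add a depth-based validity test). The idea: take a pixel’s world-space position this frame, run it through last frame’s view-projection matrix, and see where it landed on the old screen; if it’s on screen, the cached value there can be reused.

```cpp
// Minimal temporal reprojection sketch; names and framing are hypothetical.
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major

Vec4 mul(const Mat4& M, const Vec4& v) {
    return { M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
             M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
             M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
             M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w };
}

// Where did this world-space point fall on *last* frame's screen?
// On success, (px, py) indexes the cached result (e.g. an SSAO buffer).
bool reprojectToPrevFrame(const Vec4& worldPos, const Mat4& prevViewProj,
                          int width, int height, int& px, int& py) {
    Vec4 clip = mul(prevViewProj, worldPos);
    if (clip.w <= 0.0f) return false;              // behind the old camera
    float ndcX = clip.x / clip.w, ndcY = clip.y / clip.w;
    if (ndcX < -1.0f || ndcX > 1.0f || ndcY < -1.0f || ndcY > 1.0f)
        return false;                              // off screen last frame
    px = (int)((ndcX * 0.5f + 0.5f) * width);      // NDC -> pixel
    py = (int)((0.5f - ndcY * 0.5f) * height);     // flip Y for raster coords
    return true;
}
```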


Pete Shirley’s organizing an interactive ray tracing Birds of a Feather meeting at SIGGRAPH 2009. The details, as copied from here:

Interactive Ray Tracing
A variety of academic and industry leaders provide presentations and demos, with questions and discussions encouraged.

Tuesday, 5 – 6 pm
Sheraton New Orleans
Waterbury Ballroom
Peter Shirley
pshirley (at) nvidia.com

I’ll be there to help out. Pete’s already lined up demos from NVIDIA, Intel, Mental, an Imageworks affiliate, Breda University (Arauna), and Caustic. Right now we’re searching out academic groups or anyone else who wants to show what they’re doing in the area. If you’ve got something to show, or know someone who does, please contact Pete and me.


I love the movie sequel title “2 Fast 2 Furious”. How clever, and a great way to guarantee there will never be a third movie. Well, there was, but they had to go the colon route, “… : Tokyo Drift”.

Which is indicative of nothing, as I don’t think I’ve ever actually seen any of these movies. I was reminded of the title as my goal today is to whip through the backlog of 72 potential blog resource links I’ve been gathering on del.icio.us. [Well, as it turns out, I got through 39 of them (the fresher ones), 33 to go…]

ShaderX^7 has been published. We hope to give an overview of it sometime soon (mine’s on backorder from Amazon.com).

From various sources I heard that OnLive got a bit of notice at GDC. Think: pure server-side computation of all graphics for a game, i.e., a cloud computing model. Now your grandma’s computer or even a rigged-out TV can play Crysis, assuming the net bandwidth is there. Which of course makes me think: what about latency? Lag in how other players see your actions is always there, and causes mismatches (“how did I instantly die?”). But increasing the lag between your own actions and seeing their consequences seems like a non-starter for shooters, at least.

Mark DeLoura has a great two-part article on what game engines are being licensed for titles. The first part is a general survey; the second is about the technology involved. I found it interesting to see what people cared about, e.g. multicore is on people’s minds. Nothing too shocking here, but it’s fantastic to see what is getting used, and why, in this marketplace.

Related to this, I happened across a list of game engines on wikipedia. Not massively useful (e.g. no sense of what’s popular), but a starting place.

John Ratcliff has a graphics math library available for download with an unrestrictive reuse license. He recently added best-fit methods for AABBs and OBBs.

I was interested to look at the open source, cross-platform (!) model viewer GLC. I’ve wanted something like this for doing some experiments with mesh manipulation. Not a bad viewer, but that’s all it is at this point, unfortunately: you can’t even export to a different 3D format. The search continues… If you know a reasonable open source 3D file viewer/converter out there, please tell me. I should probably bite the bullet and just use Blender, but this application is way overkill.

CUDA voxel rendering – pretty impressive!

I liked this post on optimization mainly because of the line “I went in and found out that some title bar was getting rendered 140 times every time you refreshed the screen”. I can entirely relate (though 140 must be some kind of record): too many times I’ve put in output debugging statements showing updates, only to see 2, 3, or 6 updates happening. I once started on a project and in the first few weeks increased performance by 100%, simply by noting the main draw path was being executed twice each frame.

Speaking of performance, there’s an article on volume rendering optimizations when using a ray-casting approach on the GPU.

Wolfenstein source code for the free iPhone version, along with Carmack’s documentation on the project, is available.

Software patents are only slightly dumber than business method patents, which are patently absurd. I hadn’t noticed until now, but there was recently a ruling on a business method patent, In re Bilski, which has been used to strike down software patents.

A detailed data and execution flow diagram for the new DirectX 11 pipeline front-end is available from Jolly Jeffers.

People are still making ray-tracing specific hardware; witness Caustic Graphics. They have a rather amazing claim: “The CausticOne, however, thrives in incoherent raytracing situations: encouraging the use of multiple secondary rays per pixel. Its level of performance is not affected by the degree of incoherence.” Good trick. That said, I can’t say I see any large customer base for such a product. This seems like a company designed for acquisition, similar to Ageia. Fine by me, best of luck to them.

I’m happy to learn that the Humus site now has a news blog. This is a great site for demos of advanced techniques, and for honest comments about strengths and limitations of various approaches.

Another blog: The Geeks of 3D. Tracks demos, APIs, SDKs, and graphics card releases. Handy – some of the links here I found there.

There was a nice little article on data alignment on Gamasutra. Proper alignment is a key element in getting high performance.
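
A tiny illustration of the point (mine, not from the article): member ordering alone changes how much padding the compiler inserts, and data headed for SIMD loads wants its alignment stated explicitly.

```cpp
// Member order determines padding: each member is aligned to its natural
// boundary, so interleaving small and large members wastes space.
struct Padded { char tag; double value; char flag; };  // typically 24 bytes
struct Packed { double value; char tag; char flag; };  // typically 16 bytes

// Data destined for 16-byte SIMD loads should say so explicitly.
struct alignas(16) Vec4 { float v[4]; };
static_assert(alignof(Vec4) == 16, "Vec4 is 16-byte aligned for SIMD");
```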

I was trying to find the name of the projection of equidistant latitude and longitude lines for a surrounding spherical environment. From this interesting page (click on the “Wall Maps of the World” text) I found it: Plate Carrée.
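
One reason this projection is so popular for environment maps is that the lookup is trivial: longitude and latitude map linearly to texture coordinates. A quick sketch (my own code; axis conventions vary between systems):

```cpp
#include <cmath>

// Plate carree (equirectangular) lookup: direction vector -> (u,v) in [0,1].
// Longitude and latitude map linearly to the texture axes.
void directionToPlateCarree(float x, float y, float z, float& u, float& v) {
    const float PI = 3.14159265358979f;
    float lon = std::atan2(x, -z);                          // [-pi, pi]
    float lat = std::asin(y / std::sqrt(x*x + y*y + z*z));  // [-pi/2, pi/2]
    u = lon / (2.0f * PI) + 0.5f;   // wraps horizontally around the sphere
    v = 0.5f - lat / PI;            // v = 0 at the north pole
}
```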

Predicting the future is so much more interesting than predicting the past. I love this: MIPS per $1000. It’s entertaining to equate raw computing power with structured processing. By the same equivalence, I should be able to hook up 1700 mice in parallel to get a human brain.

A great line from a GPU review: “Nvidia’s new line of unbelievably expensive cards will block out the sun, and ray-trace its own shadow in real time.”

Faber College’s motto is “Knowledge is Good”. Learning about the idea of metamers would have saved this article from confusion. Coming back to this article now, I see all the comments have been removed, and an apologia trying to convert confusion into enlightenment added, but I think this still misses the point. Sure, there is a color associated with a single wavelength of light. But, my guess is that 99.99% of the colors we perceive arrive at any location on the eye as light with a spectral mix of wavelengths, not a single wavelength (Naty will correct me if I’m wrong). Unless you’re Dr. Evil and deal with sharks with frickin’ laser beams on their heads on a daily basis. Hmmm, I’m probably forgetting some other single-wavelength phenomena, like fluorescence. Anyway, the article did lead me to look up more information on metamers on Wikipedia, where I learnt about metameric failure, a term I hadn’t heard before. One more reason a simple RGB representation of color isn’t sufficient.
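
For the formula-minded, the metamer idea in one line: the eye collapses an entire spectral power distribution S(λ) into just three responses, e.g. the CIE tristimulus values:

```latex
X = \int S(\lambda)\,\bar{x}(\lambda)\,d\lambda, \quad
Y = \int S(\lambda)\,\bar{y}(\lambda)\,d\lambda, \quad
Z = \int S(\lambda)\,\bar{z}(\lambda)\,d\lambda
```

Two physically different spectra that happen to integrate to the same (X, Y, Z) are metamers: they look identical. Change the illuminant or the observer and the integrals no longer agree, which is the metameric failure mentioned above.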

Cute thing: Snapily lets you turn some set of images or video into lenticular prints.

I don’t have a lot to say about what I do at Autodesk. Here’s a tidbit.

Art for the day, crayons as pixels.


NVIRT Slides

Austin Robison kindly shared his NVIRT talk slides with some of us (see my previous blog post). It sounds like it will take a few weeks for these to show up on the NVIDIA website somewhere. So, until then, I’ve put them up for viewing on our website. No favoritism; it’s just interesting information.

I liked his dichotomy for rasterization vs. ray tracing: rasterization is fast, but needs cleverness to support complex visual effects; ray tracing robustly supports complex visuals but needs cleverness to be fast. Sure, there are any number of counter-arguments to this split, but it has a nugget of truth at its core.


I’m at I3D 2009; tonight at the dinner Austin Robison of NVIDIA announced NVIRT, which is NVIDIA’s ray-casting engine. I say “casting” as the idea is that you feed it objects, hand it a ray generator, and it gives you back the ray intersections desired. Certainly it can be used for ray-traced rendering, and the constructs presented make it clear they have thought through this aspect: rays can terminate on the first intersection found (useful for shadow rays), or can return the closest intersection point (eye/reflection/refraction rays). Rays can continue on when a fully transparent object is hit. Objects can be put in any efficiency structure you wish, and structures could be contained by other structures (Jim Arvo’s metahierarchies idea). For example, you could put static geometry in a k-d tree, which is highly efficient but expensive to update, while placing dynamic objects in a bounding volume hierarchy, which usually can be updated more easily (though losing efficiency over time) by growing bounds. You have control over what efficiency methods are used.
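
To make the “casting, not tracing” framing concrete, here’s a rough sketch of what such a query interface might look like. To be clear, this is entirely my own invention for illustration, not NVIRT’s actual API (the SDK isn’t out yet):

```cpp
// Hypothetical sketch of a ray-casting engine interface; these names are
// invented for illustration and are not NVIRT's actual API.
struct Ray { float origin[3], dir[3]; float tMin, tMax; };
struct Hit { float t; int objectId; bool valid; };

enum class QueryType {
    AnyHit,     // terminate on the first intersection found (shadow rays)
    ClosestHit  // return the nearest intersection (eye/reflection/refraction)
};

struct Scene {
    // Stub: a real engine would traverse whatever efficiency structures the
    // objects were placed in, e.g. a k-d tree for static geometry nested
    // alongside a refit BVH for dynamic objects.
    Hit intersect(const Ray&, QueryType) const { return { 0.0f, -1, false }; }
};

// A shadow query only needs to know *whether* something blocks the ray...
bool occluded(const Scene& scene, const Ray& shadowRay) {
    return scene.intersect(shadowRay, QueryType::AnyHit).valid;
}

// ...while an eye ray wants the closest surface for shading.
Hit traceClosest(const Scene& scene, const Ray& eyeRay) {
    return scene.intersect(eyeRay, QueryType::ClosestHit);
}
```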

They’re thinking of this SDK in more general ray-casting terms: collision detection, AI queries, and baking illumination or other characteristics onto surfaces. I can certainly imagine uses for engineering simulation. It runs on CUDA, but hides CUDA programming from the user. By the way, the switching time between CUDA and the graphics API will someday soon be a lot less than it is now.

This SDK will be released sometime this spring (it will also be incorporated with NVIDIA’s NVSG scene graph SDK, as a separate release). The SDK will come with lots of samples, including source for a basic ray-tracing renderer. All in all, an interesting development. The catch is, of course, that CUDA does not run on anything but NVIDIA hardware. Nonetheless, this is a fascinating first step. Austin says this effort is a serious attempt by NVIDIA to put this sort of engine in the hands of developers, not some “let’s see if this research sticks” half-baked release. Hearing him talk about the bits of inside information their group learnt about the operation of the GPU, and the corresponding boosts in performance, makes me wonder if other GPU-based ray tracers out there will be able to get near their performance.

I have a bunch of links saved up, which I’ll dump here someday soon, as well as more about I3D 2009 (see Jeremy Schopf’s blog in the meantime). For now I’ll just mention one quick link: Morgan McGuire’s twitter blog. No, it’s not an “I’m drinking a latte and using my iPhone” twitter blog. I like the idea a lot: it’s where he simply puts any great links he’s run across, with a quick description for each. Low maintenance, minimal effort, and useful & interesting, at least to me. It’s about game design and related topics (and unrelated ones) as much as graphics. This is one of those “everyone who finds cool stuff on the internet should do this” concepts, as far as I’m concerned. Sure, there’s del.icio.us and similar social bookmarking sites, but a blog lets me know when there’s something new from someone I respect.

Morgan is one of those uncommon people who has considerable industry experience (e.g. “Titan Quest”) while also being in the academic world. He’s a coauthor of the new book Creating Games, which I had been jumping around inside and sampling snippets, and am now sitting down and reading for real. It is aimed at being a book for teaching a college course on making games, both board- and video-, giving a number of schedules for 3 to 4 week projects and worksheets for these. However, these are appendices; the focus of the book is well-informed surveys of a wide range of game design and creation practices. The first chapter has a great startup project for small groups in a class: “here are some dice and pieces of different colors, some paper – go, make a game in 7 minutes.” Anyway, not graphics related per se, but there’s certainly a lot about the computer games industry inside, much of it technical and practical. My favorite illustration so far is the dependence graph amongst the art assets for Spiderman 3, Figure 3.8 – daunting. You can look inside at Amazon. Me, I’m an avid boardgamer (I was up too late last night playing Dominion with Morgan and Naty Hoffman – consider me entirely biased), so I’m enjoying reading it and thinking maybe I should try to design a game…


Daniel Pohl has a new article at his site about his efforts to ray trace the game Quake Wars at interactive rates. This article is not heavy-duty, and has some interesting tidbits and visualizations. For example, it turns out that cut-out textures (used for making trees, for example) are pretty expensive for his ray tracer. The problem is that a new ray must be spawned after each intersection is detected. The headache for ray tracing (at least in this system) is that the texture itself is not accessed at the time of intersection – deferred shading, essentially. So the ray tracer does not know that the ray has, in fact, not hit anything (i.e., hit a fully-transparent texel) and could continue unaffected. He also talks about other optimizations that have helped and might help in the future.
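
A sketch of why this hurts, under my own assumptions about the system (all names below are invented): with deferred texturing the intersector can’t see alpha, so every fully transparent texel looks like a legitimate hit and forces a continuation ray.

```cpp
// Why cut-out (alpha-tested) textures hurt this kind of ray tracer: the
// intersector doesn't access the texture, so each fully transparent texel
// looks like a hit and a new ray must be spawned to continue past it.
const float EPSILON = 1e-4f;

struct Ray { float origin[3], dir[3]; float tMin, tMax; };
struct Hit { float t, u, v; int triangle; bool valid; };

// Stubs standing in for the real traversal and texture fetch.
Hit   intersectClosest(const Ray&)   { return { 0, 0, 0, -1, false }; }
float sampleAlpha(int, float, float) { return 1.0f; }

Hit traceWithCutouts(Ray ray) {
    for (;;) {
        Hit hit = intersectClosest(ray);
        if (!hit.valid) return hit;     // missed everything
        if (sampleAlpha(hit.triangle, hit.u, hit.v) > 0.0f)
            return hit;                 // opaque texel: a real hit
        ray.tMin = hit.t + EPSILON;     // transparent texel: spawn a new
    }                                   // ray just past the false hit
}
```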

What I find exciting about Daniel’s work is that he’s working with data that was optimized for rasterization, not for ray tracing. If ray tracing were suddenly 10x faster than GPU rasterization on existing hardware in, say, DirectX 11 (keep dreaming), it wouldn’t matter that much in the short term. For most companies there’s a lot of investment in training, workflows, and tools for producing games. For example, look how long it took normal mapping to become a mainstream feature, well after all new GPUs could support it (around 2002 with SM 2.0). So I see ray tracing existing models as useful for determining whether ray tracing is feasible for current games, while also finding pain points (such as cutout textures) that will be present in artist-generated content for some time to come.


The ever-amazing Ke-Sen Huang already has paper pages up for I3D 2009 and Eurographics 2009. Both conferences are currently in that twilight zone where the authors have been notified (and are putting notifications and preprints on their web pages) but the official paper list has not yet been published.

There are already several interesting papers there: Approximating Dynamic Global Illumination in Image Space (available here) extends the popular SSAO (screen-space ambient occlusion) technique to support directional occlusion and single-bounce diffuse reflection. Automatic Linearization of Nonlinear Skinning (available here) introduces a method to automatically place virtual bones, resulting in quality similar to dual quaternion skinning but using traditional linear skinning. Multiresolution Splatting for Indirect Illumination (available here) speeds up reflective shadow maps by using a multiresolution data structure. Bounding volume hierarchies are important for many algorithms (including ray tracing), so a method to rapidly construct them on the fly is useful. Such a method is detailed in Fast BVH Construction on GPUs (paper web page here). The final paper has a somewhat self-explanatory title: Temporal Glare: Real-Time Dynamic Simulation of the Scattering in the Human Eye (available here).

Two papers, although lacking preprints as of yet, have particularly interesting titles, and I look forward to reading them: Soft Irregular Shadow Mapping: Fast, High-Quality, and Robust Soft Shadows and Real-Time Fluid Simulation using Discrete Sine/Cosine Transforms.


Tim Sweeney is a cofounder of Epic Games and lead developer behind the graphics engines for the Unreal series of games. Jon Stokes has a meaty interview with him, up on Ars Technica; go read it!

Tim talks about how the GPU has become general enough that we will soon be able to get away from rasterization as the only rendering algorithm. Back ten years ago, dealing with an API to do all interactive graphics was limiting. Widening it out with programmable shaders gives more flexibility, but at the cost of the complexity of managing the programming environment. Nowadays you’re programming two separate computers that talk to each other. The shift to parallel programming is already a major change in how we need to think about computers, one that hasn’t become a core concept for most of us yet (myself included; I’m doing my best to wrap my head around Intel’s Threading Building Blocks, for example). Doing such programming in a few different languages is a “feature” we’d all love to see go away.

With Larrabee, CUDA, and compute shaders, the trends of more flexibility continue, though in different flavors. It seems unlikely to me that the pipeline model itself for rendering will fade in popularity any time soon, though rasterization (traditional GPUs) vs. tiling (Larrabee, handhelds) will continue to be a debate. Tim mentions voxel rendering techniques (really, heightfield, in the old games) as something that died once the GPU took over. True. Such techniques are making a return on the GPU even today, via relief mapping and adaptive tessellation. We’re also seeing volume rendering by marching along rays; if an algorithm can be refit to work on a GPU, it will find some use.
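
Volume rendering by marching along rays is a nice example of how simple these non-triangle inner loops can be once the hardware lets you write them. Here’s a bare-bones front-to-back version, my own sketch with a stand-in density field:

```cpp
#include <algorithm>

// Stand-in density field: a soft sphere of "smoke" at the origin.
float density(float x, float y, float z) {
    float r2 = x*x + y*y + z*z;
    return r2 < 1.0f ? 1.0f - r2 : 0.0f;
}

// Bare-bones front-to-back ray marching: step a sample point along the ray,
// accumulate opacity, and stop early once the ray is nearly saturated.
float marchOpacity(const float origin[3], const float dir[3],
                   float tNear, float tFar, float stepSize) {
    float alpha = 0.0f;
    for (float t = tNear; t < tFar && alpha < 0.99f; t += stepSize) {
        float d = density(origin[0] + t * dir[0],
                          origin[1] + t * dir[1],
                          origin[2] + t * dir[2]);
        float a = std::min(1.0f, d * stepSize);   // opacity of this segment
        alpha += (1.0f - alpha) * a;              // front-to-back compositing
    }
    return alpha;
}
```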

So I agree, the increase in flexibility will be all to the good in letting programmers again do much more than render textured opaque triangles via a Z-buffer really fast and most everything else not-so-fast. Frankly, I believe much of the buzz about interactive ray tracing is more an expression of yearning by us graphics programmers that we could actually program again, vs. calling an API. The April Fool’s Day spoof about ray tracing in DirectX 11 fooled a number of people I know, I believe because they wished it were true. Having hacked my fair share of rendering algorithms, I certainly see the appeal.

I think Tim’s a bit overoptimistic on the time frame in which such changes will occur. First, everyone needs to get this future hardware. Sure, NVIDIA points out there are 70 million CUDA-capable graphics cards out there today, but no one is floating CUDA-based programs as alternative interactive renderers at this point (though NVIDIA’s experiments with CUDA ray tracing are wonderful to see). DirectX 9 graphics cards will be around for years to come. Just as significant, making such techniques part of the normal development toolchain also takes a while. I think of how long normal (dot-product) bump mapping, introduced around 2001, took to become a feature that was used in games: first most GPUs had to support it, then tools had to generate and manage the maps, then artists had to be trained to use the tools, etc.

When the second edition of our book came out, it was a few hundred pages longer than the first. I held out the hope to Tomas that our third edition would be shorter. My logic was that, with programmable shaders coming to the fore, we wouldn’t have to cover all the little variants that were possible, but rather could just present pure algorithms and not worry about the implementation details.

This came true to some extent. For example, we could cut out chunks of text about extremely specific ways to efficiently compute the Fresnel term, or give examples showing how assembly instructions are packed together in a pixel shader. There was now plenty of space on the GPU for shader instructions, so such detail was nonsensical. It would be like a programming languages book listing all the programs that could be written in the language. We still do have to spend time dealing with the vagaries of the APIs, such as the relatively space-inefficient ways in which triangles are fed through the pipeline (e.g. a “compact” representation of a cube must use 24 separate vertices, when all that is really needed are 8 points and 6 normals).
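
To spell out the cube arithmetic: a pipeline vertex bundles position and normal together, so each corner has to be repeated for every adjoining face.

```cpp
#include <cstdio>

// A pipeline vertex couples position with normal, so each cube corner must
// be duplicated once per adjoining face: 6 faces x 4 corners = 24 vertices,
// even though only 8 unique points and 6 unique normals exist.
struct Vertex { float position[3]; float normal[3]; };

int main() {
    int uniqueFloats    = (8 + 6) * 3;                                   // 42
    int submittedFloats = (int)(6 * 4 * sizeof(Vertex) / sizeof(float)); // 144
    std::printf("real data: %d floats, submitted: %d floats\n",
                uniqueFloats, submittedFloats);
}
```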

Counterbalancing such cuts in text, we found we had many more algorithms to write about. With the increase in abilities in each successive generation of GPUs and APIs, coupled with research into ways to efficiently map algorithms onto new architectures, the book became considerably longer (and certainly heavier, since each illustration’s atoms now needed 3 bytes instead of 1). So, I’m not holding out much hope for a shorter edition next time around; there’s just so much cool stuff that we can now do, and more yet to come.

Incidentally, we had asked Tim for a pithy quote for our new Hardware chapter. He said he didn’t have anything, but passed on one from Billy Zelsnack. The quote was tempting for that chapter, but we ended up using it in our last chapter instead: “Pretty soon, computers will be fast,” which I just love for some reason. It may sometimes take 20 seconds to open a file folder on Windows today, but I remain hopeful that someday, someday…


I edit (maybe once a year) the Ray Tracing News. I usually compile a list for subscribers of what’s happening at each SIGGRAPH that’s ray-tracing related. I thought I’d pass on this year’s list; much of it is RTRT (real-time ray tracing) related.

