ray tracing


The Ray Tracing Gems early proposals deadline is June 21, a week away (the final deadline is October 15th). Submit a one-page proposal by June 21 and there’s an extra incentive from NVIDIA: Titan V graphics cards for the top five proposals (which I finally looked up – if you don’t want it, trade it in for a nice used car). Anyway, the call for proposals for the book is here.

While some initial impetus for making such a book is the new DXR/VKRT APIs, we want the book to be broader than just this area, e.g., ray tracing methods using various hardware platforms and software, summaries of the state of the art, best practices, etc. In the spirit of Graphics Gems, GPU Gems, and the Journal of Computer Graphics Techniques, I see our book as a way to inform readers about implementation details and other elements that normally don’t make it into papers. For example, if you have a technique that was not long enough, or too technically involved, to publish in a journal article, now is your chance. Mathematics journals publish short results all the time – computer graphics journals, not so much.

I would also like to see summaries for various facets of the field of ray tracing. For example, I think of Larry Gritz’s article “The Importance of Being Linear” from GPU Gems 3 as a great example of this type of article. It is about gamma correction – not a new topic by any stretch – but its wonderful and thoughtful exposition reached many readers and did a great service for our field. I still point it out to this day, especially since it is open access (a goal for Ray Tracing Gems, too).

You can submit more than one proposal – the more the better, and short proposals are fine (encouraged, in fact). That said, no “Efficient Radiosity for Daylight Simulation in Closed Environments” papers, please; that’s been done (if that paper doesn’t ring a bell, you owe it to yourself to read the classic WARNING: Beware of VIDEA! page). In return, we promise fair reviewing and not to roll the die.

Update: a proposal is just a one-page or less summary of some idea for a paper, and can be written in any format you like: Word, PDF, plain text, etc. Proposals are not required, either by June 21 or after. They’re useful to us, though, as a way to see what’s coming, let each prospective contributor know if it’s a good topic, and possibly connect like-minded writers together. Also, a proposal that “wins” on June 21 does not mean the paper itself will automatically be accepted – each article submitted will be judged on its merits. The main thing is the paper itself, due October 15th. Send proposals to [email protected] – we look forward to what you all contribute!

Monument to the Anonymous Peer Reviewer


Given the recent DXR announcements, Tomas Akenine-Möller and I are coediting a book called Ray Tracing Gems, to come out at GDC 2019. See the Call for Participation, which pretty much says it all. The book is in the spirit of the Graphics Gems series and journals such as JCGT. Articles certainly do not have to be about DXR itself, as the focus is techniques that can be applied to interactive ray tracing. The key date is October 15th, 2018, when submissions are due.

To self-criticize a tiny bit, the first sentence of the CFP:

Real-time ray tracing – the holy grail of graphics, considered unattainable for decades – is now possible for video games.

would probably be more factual as “Real-time ray tracing for video games – … – is now possible.” But the book is not meant to focus on just video game techniques (though video games are certainly likely to be the major user). I can see ray tracing becoming more of a standard part of all sorts of graphics programs, e.g., much faster previewing for Blender, Maya, and the rest.

As far as “considered unattainable for decades” goes, interactive ray tracing was attained long ago, just not for (non-trivial) video games and other similarly demanding applications. My first encounter with an interactive ray tracer was AT&T’s Pixel Machine back in 1987. I had put out the Standard Procedural Databases on Usenet the week before SIGGRAPH, and was amazed to see that they had grabbed them and were rendering some in just a few seconds. But the real excitement was a little postage-stamp-sized (well, maybe six stamps) rendering, where you could interactively use a mouse to control a shiny sphere’s position atop a Mandrill plane texture.

The demoscene has had real-time ray tracers since 1995, including my favorite, a 252-byte program (well, 256, but the last four bytes are a signature, “BA2E”) from 2001 called Tube by 3SC/Baze. Enemy Territory: Quake Wars was rendered using ray tracing on a 20-machine system by Daniel Pohl at Intel a decade ago. OptiX for NVIDIA GPUs has been around a long time. Shadertoy programs usually perform ray marching. Imagination Technologies developed ray tracing support for mobile some years back. There are tons more examples, but this time it feels different – DXR looks to be here to stay, with lots of momentum.

Ray tracing is, in my opinion, more easily adopted by computer-aided design and modeling programs, as users are willing to put up with slower frame rates and are able to wait a few seconds every now and then for a better result. Systems such as KeyShot have for some years used only ray tracing, performing progressive rendering to update the screen on mouse up. Modelers such as Fusion 360 allow easy switching to progressive ray tracing locally, or, for finished results, can render at higher speeds on the cloud. I think DXR will make these few seconds into a handful of milliseconds, and near-interactive into real-time.

In a sense, this history misses the point: for interactive rendering we use whatever gives us the best quality in an allotted amount of time. We usually don’t, and probably shouldn’t, trace rays everywhere, just for the purity of it. Rasterization works rapidly because of coherence exploited by the GPU. Ray tracing via DXR is a new piece of functionality, one that looks general enough and with support enough that it has the potential to improve quality, simplify engine design, and reduce the time spent by artists in creating and revising content (often the largest expense in a video game).

Long and short, DXR is the start of an exciting new chapter in interactive rendering, and we look forward to your submissions!


One reason I love interactive graphics is that every now and then something happens in the field – programmable shaders, powerful mobile devices, DX12/Vulkan/Metal, VR, AR, and now this – that changes what’s possible and how we think about interactive rendering. New algorithms arise to exploit new and different functionality. It’s a fun world!

Microsoft added ray tracing support to its DirectX API. And this time it’s not an April Fool’s Day spoof, like a decade ago. Called DirectX Raytracing, DXR for short, it adds the ability to cast rays as shader invocations. There are already a bunch of articles and blog posts.

Here are the resources I’ve noticed so far (updated as I see new ones – let me know):

It will be interesting to see if there’s any spike of interest in ray tracing on Google’s analytics. While I doubt having DXR functionality will change everything – it still has to be performant compared to other specialized techniques – it’s great seeing another tool in the toolbox, especially one so general. Even if no ray tracing is done in an interactive renderer that is in development, it will now be much easier to get a ground-truth image for comparison when testing other techniques, since shader evaluations and all the rest now fit within a ray tracing framework. Ray and path tracing, done long enough (or smart enough), give the correct answer, versus screen-based techniques.
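
To make the ground-truth point concrete, here’s a minimal CPU sketch (nothing to do with DXR itself) of the brute-force idea: average many independent stochastic samples per pixel, and the running mean converges toward the reference image. The tracePath stub below is a placeholder of my own, standing in for a real path tracer.

```cpp
#include <random>
#include <vector>

struct Vec3 { float x = 0.f, y = 0.f, z = 0.f; };

// Placeholder: a real renderer would trace a full stochastic path through the
// scene here and return its radiance; this stub just returns noisy dummy values.
Vec3 tracePath(int px, int py, std::mt19937& rng)
{
    std::uniform_real_distribution<float> u(0.f, 1.f);
    return { u(rng), u(rng), u(rng) };
}

// Accumulate samples per pixel; the running mean converges toward the
// "ground truth" image that approximate techniques can be compared against.
std::vector<Vec3> renderGroundTruth(int width, int height, int samplesPerPixel)
{
    std::vector<Vec3> image(width * height);
    std::mt19937 rng(42);
    for (int s = 0; s < samplesPerPixel; ++s)          // progressive passes
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                Vec3 c = tracePath(x, y, rng);
                Vec3& a = image[y * width + x];
                a.x += (c.x - a.x) / (s + 1);          // incremental mean
                a.y += (c.y - a.y) / (s + 1);
                a.z += (c.z - a.z) / (s + 1);
            }
    return image;
}
```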

Doing these fast enough is the challenge, and denoisers and other filtering techniques (just as done today with rasterized-buffer-based algorithms) will see a lot of use in the coming months and years. I’m going to go out on a limb here, but I’m guessing GPUs will also get faster. Now if we can just get people to stop upping the resolution of screens and stop adding more content to scenes, it’ll all work out.

Even within the Remedy talk, we see ray tracing blending with other techniques more appropriate for diffuse global illumination effects. Ambient occlusion is of course a hack, but a lovely one, and ray tracing can stand in for screen-space methods and so avoid some of their artifacts. I think getting away from screen-space techniques is potentially a big win, as game artists and engineers won’t have to hack models or lighting to work around major artifacts seen in some situations, thus saving time and money.
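
As a rough illustration of what ray-traced ambient occlusion looks like (a generic sketch, not anything from the Remedy talk): shoot a handful of random rays over the hemisphere above the shading point and see what fraction escape within some radius. The anyHitWithin stub here is a placeholder for a real scene query.

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Placeholder scene query: a real tracer would cast a ray (offset slightly along
// the normal to avoid self-intersection) and report any hit closer than maxDist.
bool anyHitWithin(Vec3 origin, Vec3 dir, float maxDist) { return false; }  // stub

// Ray-traced ambient occlusion: sample random directions over the hemisphere
// around the surface normal n at point p, and return the fraction that escape.
float ambientOcclusion(Vec3 p, Vec3 n, int numRays, float radius, std::mt19937& rng)
{
    std::uniform_real_distribution<float> u(-1.f, 1.f);
    int open = 0;
    for (int i = 0; i < numRays; ++i) {
        // Rejection-sample a point in the unit ball, normalize to get a direction.
        Vec3 d;
        float len2;
        do {
            d = { u(rng), u(rng), u(rng) };
            len2 = dot(d, d);
        } while (len2 > 1.f || len2 < 1e-6f);
        d = normalize(d);
        if (dot(d, n) < 0.f) d = { -d.x, -d.y, -d.z };  // flip into the upper hemisphere
        if (!anyHitWithin(p, d, radius))
            ++open;
    }
    return float(open) / float(numRays);  // 1 = fully open, 0 = fully occluded
}
```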

I’m also interested to see if this functionality gets used in other applications, as there are plenty of areas – all sorts of audio design applications, various other types of engineering analyses – that could benefit from faster turnaround on computations.

Enjoy exploring! I look forward to what we all find.

Some of the eye-candy videos:


Next in the continuing series. In this episode Jaimie finds that the world is an illusion and she’s a butterfly’s dream, while Wilson works out his plumbing problems.


512 and counting

I noticed I reached a milestone number of postings today, 512 answers posted to the online Intro to 3D Graphics course. Admittedly, some are replies to questions such as “how is your voice so dull?” However, most of the questions are ones that I can chew into. For example, I enjoyed answering this one today, about how diffuse surfaces work. I then start to ramble on about area light sources and how they work, which I think is a really worthwhile way to think about radiance and what’s happening at a pixel. I also like this recent one, about z-fighting, as I talk about the giant headache (and a common solution) that occurs in ray tracing when two transparent materials touch each other.
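
Since the area-light topic comes up so often, here is the gist of that discussion in code form (a generic sketch, not the forum answer itself): estimate the light arriving at a diffuse surface point by averaging shadow-tested samples taken across the light’s surface. The visible stub is a placeholder for a real shadow-ray query.

```cpp
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Placeholder: a real ray tracer would cast a shadow ray from p toward q here.
bool visible(Vec3 p, Vec3 q) { return true; }  // stub: assume unblocked

// Estimate the light a diffuse surface point p (with normal n) receives from a
// rectangular area light, by averaging over random points on the light. The
// rectangle is corner + su*edgeU + sv*edgeV with su, sv in [0,1]. Multiply the
// result by albedo/pi to get the reflected radiance of a Lambertian surface.
float lightFromAreaSource(Vec3 p, Vec3 n,
                          Vec3 corner, Vec3 edgeU, Vec3 edgeV, Vec3 lightNormal,
                          float emitted, float lightArea,
                          int numSamples, std::mt19937& rng)
{
    std::uniform_real_distribution<float> u(0.f, 1.f);
    float sum = 0.f;
    for (int i = 0; i < numSamples; ++i) {
        float su = u(rng), sv = u(rng);
        Vec3 q = { corner.x + su * edgeU.x + sv * edgeV.x,    // random point on the light
                   corner.y + su * edgeU.y + sv * edgeV.y,
                   corner.z + su * edgeU.z + sv * edgeV.z };
        Vec3 toLight = sub(q, p);
        float dist2  = dot(toLight, toLight);
        float dist   = std::sqrt(dist2);
        Vec3 wi = { toLight.x / dist, toLight.y / dist, toLight.z / dist };
        float cosSurf  = std::max(0.f, dot(n, wi));             // surface faces the sample
        float cosLight = std::max(0.f, -dot(lightNormal, wi));  // sample faces the surface
        if (cosSurf > 0.f && cosLight > 0.f && visible(p, q))
            sum += emitted * cosSurf * cosLight / dist2;        // geometry term
    }
    return lightArea * sum / float(numSamples);  // uniform area sampling, pdf = 1/area
}
```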

So the takeaway is that if you ever want to ask me a question and I’m not replying to email, act like you’re a student, find a relevant lesson, and post a question there. Honestly, I’m thoroughly enjoying answering questions on these forums; I get to help people, and for the most part the questions are ones I can actually answer, which is always a nice feeling. Sometimes others will give even better answers and I get to learn something. So go ahead, find some dumb answer of mine and give a better one.

By the way, I finally annotated the syllabus for the class. Now it’s possible to cherry-pick lessons; in particular, I mark all the lessons that are specifically about three.js syntax and methodology, for those who already know graphics and want to skip ahead.



  • Fairly new book: Practical Rendering and Computation with Direct3D 11, by Jason Zink, Matt Pettineo, and Jack Hoxley, A.K.Peters/CRC Press, July 2011 (more info). It’s meant for people who already know DirectX 10 and want to learn just the new stuff. I found the first half pretty abstract; the second half was more useful, as it gives in-depth explanation of practical examples that show how the new functionality can be used.
  • Two nice little Moore’s Law-related articles appeared recently in The Economist. This one is about how the law looks to have legs for a good number of years yet, and presents a graph showing how various breakthroughs have kept the law going over the past decades. Moore himself thought the law might hold for ten years. This one talks about how computational energy efficiency is doubling every 18 months, which is great news for mobile devices.
  • I used to use MWSnap for screen captures, but it doesn’t work well with two monitors and it hangs at times. I finally found a replacement that does all the things I want, with a mostly-good UI: FastStone Capture. The downside is that it actually costs money ($19.95), but I’m happy to have purchased it.
  • Ray tracing vs. rasterization, part XIV: Gavan Woolery thinks RT is the future, DEADC0DE argues both will always have a place, and gives a deeper analysis of the strengths and weaknesses of each (though the PITA that transparency causes rasterization is not called out) – I mostly agree with his stance. Both posts have lots of followup comments.
  • This shows exactly how far behind we are in blogging about SIGGRAPH: find the Beyond Programmable Shading course notes here – that’s just a mere two months overdue.
  • Tantalizing SIGGRAPH Talk demo: KinectFusion from Microsoft Research and many others. Watch from around 3:11 on for the great reconstruction, and the last minute for fun stuff. Newer demo here.
  • OnLive – you should check it out, it’ll take ten minutes. Sign up for a free account and visit the Arena, if nothing else: it’s like being in a sci-fi movie, with a bunch of games being played by others before your eyes that you can scroll through and click on to watch the player. I admit to being skeptical of the whole cloud-gaming idea originally, but in trying it out, it’s surprisingly fast and the video quality is not bad. Not good enough to satisfy hardcore FPS players – I’ve seen my teenage boys pick out targets that cover like two pixels, which would be invisible with OnLive – but otherwise quite usable. The “no download, no GPU upgrade, just play immediately” aspect is brilliant and lends itself extremely well to game trials.

OnLive Arena


Seven things:

  • There’s a post on speculative contacts by Paul Firth, a way of simplifying and stabilizing collision detection that has been used in Little Big Planet. Particularly nice is that demos are built into the page, so you can try the various methods out and see the problems and performance for yourself. This author has followed up with “Collision Detection for Dummies“, a great overview, and “Physics Engines for Dummies“, again with interactive demos.
  • The Gamedev Coder Diary has a worthwhile summary of the current state of deferred shading vs. deferred lighting (aka “light pre-pass”) techniques, discussing problems and strengths of each.
  • The CODE517E blog has had a number of good posts lately, including an article on deferred rendering myths, another on stable cascaded shadow maps, an accumulation-buffer-like way of making super-high resolution images for printing (with some worthwhile analysis of problems it engenders with mipmap sampling and with view shifting – fun to think about), an extensive rundown of programming languages for videogames, and a summary of tools he uses (quite the long list – I’m still working through those I hadn’t seen before).
  • On the topic of languages, Havok put together a page collecting the Lua tutorial talks at GDC 2011.
  • The Boeing 777 model (almost 400 million polygons) ray traced at interactive rates on a consumer-level PC, using CUDA. CentiLeo is an out-of-core GPU ray tracer, see this page for some of the slides from the (rather long) video. That said, don’t be fooled by the start of the video: those sequences are generated at 15 seconds a frame and played back at 60 FPS (so 500-1000x from being real-time). Still, the preview mode is indeed interactive, and the Boeing is a huge model. On the other end of things, here’s a fun demoscene ray trace. By the way, Ray Tracey’s blog is good for keeping up on new ray tracing videos and demos and other related topics.
  • A poster accepted to SIGGRAPH 2011 by Ohlarik and Cozzi gives a clever little method of properly drawing lines on surfaces for GIS applications. It converts lines to “walls”, then marks those pixels where there is a visibility change of the wall (i.e., one pixel of the wall is visible and a neighboring pixel is not), with a correction for terrain silhouette edges. One more trick for the bag (a rough sketch of the idea follows this list).
  • More about the look and feel of games than the technical nerdy stuff I cover here, Topi Kauppinen’s blog pointed me to Susy Oliveira’s sculptures, which are pretty amusing (finally, perfect models for 3D web browsers). There have been similar works by other artists (e.g. Eric Testroete’s head), but the more the merrier.
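
Here is a rough CPU sketch of how I read the lines-as-walls trick from that poster; it is my own guess at the idea, not the authors’ code, and it skips the terrain-silhouette correction. It assumes the wall geometry has already been rasterized into its own per-pixel depth buffer (wallDepth) alongside the terrain’s depth buffer; a pixel belongs to the line wherever the wall’s visibility flips between neighbors.

```cpp
#include <vector>

// Mark line pixels by detecting where the wall's visibility against the terrain
// changes between neighboring pixels.
std::vector<bool> markLinePixels(const std::vector<float>& terrainDepth,
                                 const std::vector<float>& wallDepth,
                                 int width, int height)
{
    const float NO_WALL = 1e30f;   // wallDepth entries with no wall coverage
    auto wallVisible = [&](int x, int y) {
        int i = y * width + x;
        return wallDepth[i] < NO_WALL && wallDepth[i] <= terrainDepth[i];
    };
    std::vector<bool> line(width * height, false);
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x) {
            bool v = wallVisible(x, y);
            // A visibility change against any 4-neighbor marks a line pixel.
            if (v != wallVisible(x - 1, y) || v != wallVisible(x + 1, y) ||
                v != wallVisible(x, y - 1) || v != wallVisible(x, y + 1))
                line[y * width + x] = true;
        }
    return line;
}
```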


Just in time for SIGGRAPH (so I wouldn’t get those “when’s the next issue coming out?” questions), here it is.


There will be a Birds of a Feather gathering at SIGGRAPH 2010 about GPU Ray Tracing: Wednesday, 4:30-6 pm, Room 301 A.

A brief description from Austin Robison: We won’t have a projector or desktop machines set up, but please feel free to bring your laptops to show off what you’ve been working on! Additionally, I’ve created a Google Group mailing list that I hope we can use, as a community, to share insights and ask questions about ray tracing on GPUs not tied to any specific API or vendor. Please sign up and share your news, experiences and ideas: http://groups.google.com/group/gpu-ray-tracing.


I was waiting around a bit for my younger son’s doctor’s appointment this morning, so I decided to edit a book. I finished it just now; it’s called Another Introduction to Ray Tracing. It’s 471 pages in book form. You can download it for free, or order a paperback copy from PediaPress for $22.84 plus shipping. I won’t earn a dime from it, but since it took me less than two hours to make, no problem.

So what’s happening here? After investigating Alphascript and Betascript publishing a month ago, reporting it on Slashdot, and following up on a lot of great comments, I learned a number of interesting tidbits. Here’s a rundown.

First, VDM Publishing itself is sort of a vanity press, but with no cost to the author. It seeks out authors of PhD theses and similar works, asking for permission to publish. This is not all that unreasonable: because the works are only published on demand, the authors do not have to pay anything; they even get a few hardcopies for free. Here’s an example from our field that I reported on in February. That said, it’s mostly a win for VDM Publishing, which charges steep prices for the resulting works. Such not-quite-books mix in with other books on Amazon. It takes a bit of searching to realize that the work is a thesis and likely could be downloaded for free. A bit misleading, perhaps, but not all that horrifying. Caveat emptor.

VDM Publishing also has an imprint called LAP, Lambert Academic Publishing, which does the same thing, publishing theses such as this one by Nasim Sedaghat. With a little Googling you can find Nasim, and then find the related paper for free.

VDM’s imprints Alphascript and Betascript Publishing I’ve already described; they’re little more than random repackagers of Wikipedia articles. Here’s an example book. I posted one-star reviews for a few of these books on Amazon; what’s funny is that the owner of the firm actually responded to my criticism (with a one-size-fits-all response in slightly broken English).

Four weeks ago Alphascript had 38,909 and Betascript 18,289 books listed on Amazon. To my surprise they now have 39,817 and 18,295 books, a total increase of only (only!) 914 new books – looks like they’re slowing down. They’ll have to work hard to catch up with Philip M. Parker’s 107,182 books or his publishing firm ICON Group International, with 473,668 books. The New York Times has an interesting article about this guy.

Betascript Publishing has two books found on Amazon related to ray tracing: Ray Tracing (Graphics) and Rasterization (which includes a section on ray tracing). The ray tracing book is 88 pages long and $46, more than 50 cents a page. My book, at $22.84 for 471 pages, is less than a nickel a page. So my new book’s better per pound. I actually worked a little compiling my book, making logical groupings, picking relevant articles, creating chapter headings, the whole nine yards (never did figure out how to make a cover from an existing Wikipedia image, though). The exercise showed me the limits of Wikipedia as a book-making resource: the individual articles are fine for what they are, some are wonderful, and editing them in a somewhat logical flow has some merit. However, there’s no coherence to the final product and there are large gaps between one article and the next. How to generate rays for a given camera? Sorry, not in my book.
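
For the record, generating primary rays for a simple pinhole camera takes only a few lines; here is an illustrative sketch (my own, not something pulled from the Wikipedia articles or any particular book).

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

struct Ray { Vec3 origin, dir; };

// Generate the primary ray through pixel (px, py) for a pinhole camera.
// eye is the camera position; forward/right/up form an orthonormal camera basis;
// fovY is the vertical field of view in radians.
Ray generateCameraRay(int px, int py, int width, int height,
                      Vec3 eye, Vec3 forward, Vec3 right, Vec3 up, float fovY)
{
    float aspect = float(width) / float(height);
    float halfH  = std::tan(fovY * 0.5f);
    float halfW  = halfH * aspect;
    // Map the pixel center to [-1, 1] screen coordinates (y flipped so +y is up).
    float sx = (2.f * (px + 0.5f) / width  - 1.f) * halfW;
    float sy = (1.f - 2.f * (py + 0.5f) / height) * halfH;
    Vec3 dir = add(forward, add(scale(right, sx), scale(up, sy)));
    return { eye, normalize(dir) };
}
```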

Still, it was great to learn of PediaPress and the ability to make my own Wikipedia book for free. Poking around their site, I even found a book on 3D computer graphics, called 3D Computer Graphics (catchy, neh?). Seeing others making books, I decided to share my own, so now it’s official. Mind you, I haven’t actually read through my book, nor even really checked the flow of articles – no time for that. I mostly grouped by subject and title after identifying likely pages. That said, I do like having a PDF file of all these articles that I can search through.

Obviously authors are not about to be replaced by Betascript books any time soon. If you want to read a real introduction to the topic, a book like Ray Tracing from the Ground Up might serve you better, even if it is a whole dime a page. This cost/benefit ratio for a good book is something I’ll never get over: books are sold at prices equivalent to just an hour or two of a computer programmer’s time, yet they yield so much in the right hands.

