SIGGRAPH 2010


I’m finally back from a nice post-SIGGRAPH vacation in the Vancouver area. Both our computers broke early on in the trip, so it was a true vacation.

I hope to post on a bunch of stuff soon, but wanted to first mention something now available: the slides and videos presented in the popular SIGGRAPH course “Advances in Real-Time Rendering in 3D Graphics”. Find them here, and the page for previous years (well, currently just 2010) here. Hats off to Natalya Tatarchuk and all the speakers for quickly making this year’s presentations available.


Naty and I (mostly Naty!) collected the links for most courses and a few talks given at SIGGRAPH 2010; see our page here. Enjoy! If you have links to any other courses and talks, please do send them on to me or post them as a comment.

Personally, I particularly liked the “Practical Morphological Anti-Aliasing on the GPU” talk. It’s good to see the technique take around 3.5 ms on an NVIDIA GeForce GTX 295, and the author’s site has a lot of information (including code).


So one problem with SIGGRAPH is that you hear about the cool thing that you missed and didn’t even know about until it was too late. Here’s one that’s getting repeated: the Computer Animation Festival’s Live Real-Time Demos session. Hall B, 4:30-5:15 pm Tuesday and Wednesday; I just caught the tail-end of Monday’s show and it was worth seeing, so I’ll go back for the rest tomorrow.

What else didn’t you miss yet? Hmmm, in Emerging Technologies Sony’s 360-degree autostereoscopic display is cute, I’ve heard the 3D multitouch table is very worthwhile, and you must try out the Meta Cookie (have someone take your picture while you’re in the headgear, it’s something your grandchildren will want to see). I was also interested to see QuintPixel from Sharp, as it justified their earlier Quattron “four primary colors” display.

More later – Mental Images reception time.


The Fast Forward event at SIGGRAPH is a set of very short presentations Sunday evening that runs through all the papers at SIGGRAPH. Lately SIGGRAPH has become a “big tent”, including a wide range of fields. This year there are, by my count, 133 SIGGRAPH papers, giving say 50 seconds to each presentation in the two-hour period. This is a pleasant-enough way to cull through all the papers and find which ones to see, and there is the occasional witty presentation, but to be honest, I’m a bit worn out on the method – too slow! In the past few years I find myself looking at my watch halfway through and thinking “egads, still another hour?” and my monocle pops from my eye with comic effect.

So I liked seeing that CGW is hosting a 3 minute 44 second video summary of some of the SIGGRAPH papers. Only 23 papers summarized, but I love that each gets just a sentence – you’re in, you’re out, and you have some sense if it’s a paper you need to see. I wish I had this for all the papers. Second in awesomeness would be a single web page that lists all the abstracts together, for a quick skim. I should write a Perl script that makes one from ACM’s SIGGRAPH 2010 TOC. Also at CGW’s site is a 2 minute 41 second (plus long credits) video summary of the Emerging Technologies area, purely visual – nice, it gives me a little taste, prepping my senses for what I will see there and want to learn more about.
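That Perl-script idea is simple enough to sketch. Here is a rough Python equivalent that pulls paper titles and abstracts out of a saved copy of a TOC page and emits one line per paper for quick skimming. The tag and class names here are hypothetical placeholders, not ACM’s actual markup; a real script would need to be adapted to whatever HTML the TOC page actually uses:

```python
from html.parser import HTMLParser

class TocParser(HTMLParser):
    """Collect (title, abstract) pairs from a saved TOC page.

    Assumes titles sit in <span class="title"> and abstracts in
    <div class="abstract"> -- made-up class names for illustration.
    """
    def __init__(self):
        super().__init__()
        self.papers = []     # list of (title, abstract) tuples
        self._mode = None    # 'title', 'abstract', or None
        self._title = None
        self._chunks = []

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if tag == "span" and cls == "title":
            self._mode, self._chunks = "title", []
        elif tag == "div" and cls == "abstract":
            self._mode, self._chunks = "abstract", []

    def handle_data(self, data):
        if self._mode:
            self._chunks.append(data)

    def handle_endtag(self, tag):
        if self._mode == "title" and tag == "span":
            self._title = "".join(self._chunks).strip()
            self._mode = None
        elif self._mode == "abstract" and tag == "div":
            self.papers.append((self._title, "".join(self._chunks).strip()))
            self._mode = None

def summarize(html):
    """Return one 'Title: abstract' line per paper."""
    parser = TocParser()
    parser.feed(html)
    return ["%s: %s" % (title, abstract) for title, abstract in parser.papers]
```

Pipe the output into one page and you get exactly the skim-friendly list described above.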


There will be a Birds of a Feather gathering at SIGGRAPH 2010 about GPU Ray Tracing: Wednesday, 4:30-6 pm, Room 301 A.

A brief description from Austin Robison: We won’t have a projector or desktop machines set up, but please feel free to bring your laptops to show off what you’ve been working on! Additionally, I’ve created a Google Group mailing list that I hope we can use, as a community, to share insights and ask questions about ray tracing on GPUs not tied to any specific API or vendor. Please sign up and share your news, experiences and ideas: http://groups.google.com/group/gpu-ray-tracing.


With less than two weeks until the conference, here’s my final pre-SIGGRAPH roundup of all the game development and real-time rendering content. This is either to help convince people who are still on the fence about attending (unlikely at this late date) or to help people who are trying to decide which sessions to go to (more likely). If you won’t be able to attend SIGGRAPH this year, this might at least help you figure out which slides, videos, and papers to hunt for after the conference.

First of all, the SIGGRAPH online scheduler is invaluable for helping to sort out all the overlapping sessions (even if you just “download” the results into Eric’s lower-tech version). The iPhone app may show up before the conference, but given the vagaries of iTunes app store approval, I wouldn’t hold my breath.

The second resource is the Games Focus page, which summarizes the relevant content for game developers in one handy place. It makes a good starting point for building your schedule; the rest of this post goes into additional detail.

My previous posts about the panels and the talks, and several posts about the courses go into more detail on the content available in these programs.

Exhibitor Tech Talks are sponsored talks by various vendors, and are often quite good. Although the Games Focus page links to the Exhibitor Tech Talk page, for some reason that page has no information about the AMD and NVIDIA tech talks (the Intel talk on Inspecting Complex Graphics Scenes in a DirectX Pipeline, about their Graphics Performance Analyzer tool, could be interesting). NVIDIA does have all the details on their tech talks at their SIGGRAPH 2010 page; the ones on OpenGL 4.0 for 2010, Parallel Nsight: GPU Computing and Graphics Development in Visual Studio, and Rapid GPU Ray Tracing Development with NVIDIA OptiX look particularly relevant. AMD has no such information available anywhere: FAIL.

One program not mentioned in the Games Focus page is new this year: SIGGRAPH Dailies!, where artists show a specific piece of artwork (animation, cutscene sequence, model, lighting setup, etc.) and discuss it for two minutes. This is a great program, giving artists a unique place to showcase the many bits of excellence that go into any good film or game. Although no game pieces got in this year, the show order includes great work from films such as Toy Story 3, Tangled, Percy Jackson, A Christmas Carol, The Princess and The Frog, Ratatouille, and Up. The show is repeated on Tuesday and Wednesday, overlapping the Electronic Theater (which also should not be missed; note that it is shown on Monday evening as well).

One of my favorite things about SIGGRAPH is the opportunity for film and game people to talk to each other. As the Game-Film Synergy Chair, my primary responsibility was to promote content of interest to both. This year there are four such courses (two of which I am organizing and speaking in myself): Global Illumination Across Industries, Color Enhancement and Rendering in Film and Game Production, Physically Based Shading Models in Film and Game Production, and Beyond Programmable Shading I & II.

Besides the content specifically designed to appeal to both industries, a lot of the “pure film” content is also interesting to game developers. The Games Focus page describes one example (the precomputed SH occlusion used in Avatar), and hints at a lot more. But which?

My picks for “film production content most likely to be relevant to game developers”: the course Importance Sampling for Production Rendering, the talk sessions Avatar in Depth, Rendering Intangibles, All About Avatar, and Pipelines and Asset Management, the CAF production sessions Alice in Wonderland: Down the Rabbit Hole, Animation Blockbuster Breakdown, Iron Man 2: Bringing in the “Big Gun”, Making “Avatar”, The Making of TRON: LEGACY, and The Visual Style of How To Train Your Dragon, and the technical papers PantaRay: Fast Ray-Traced Occlusion Caching, An Artist-Friendly Hair Shading System, and Smoothed Local Histogram Filters. (Unlike much of the other film production content, paper presentations are always recorded on video, so if a paper presentation conflicts with something else you can safely skip it.)

Interesting, but more forward-looking film production stuff (volumetric effects and simulations that aren’t feasible for games now but might be in future): the course Volumetric Methods in Visual Effects, the talk sessions Elemental Training 101, Volumes and Precipitation, Simulation in Production, and Blowing $h!t Up, and the CAF production session The Last Airbender: Harnessing the Elements: Earth, Air, Water, and Fire.

Speaking of forward-looking content, SIGGRAPH papers written by academics (as opposed to film professionals) tend to fall in this category (in the best case; many of them are dead ends). I haven’t had time to look at the huge list of research papers in detail; I highly recommend attending the Technical Papers Fast-Forward to see which papers are worth paying closer attention to (it’s also pretty entertaining).

Some other random SIGGRAPH bits:

  • Posters are of very mixed quality (they have the lowest acceptance bar of any SIGGRAPH content) but quickly skimming them doesn’t take much time, and there is sometimes good stuff there. During lunchtime on Tuesday and Wednesday, the poster authors are available to discuss their work, so if you see anything interesting you might want to come back then and ask some questions.
  • The Studio includes several workshops and presentations of interest, particularly for artists.
  • The Research Challenge has an interesting interactive haunted house concept (Virtual Flashlight for Real-Time Scene Illumination and Discovery) presented by the Square Enix Research and Development Division.
  • The Geek Bar is a good place to relax and watch streaming video of the various SIGGRAPH programs.
  • The SIGGRAPH Reception, the Chapters Party, and various other social events throughout the week are great opportunities to meet, network, and talk graphics with lots of interesting and talented people from outside your regular circle of colleagues.

I will conclude with the list of game studios presenting at SIGGRAPH this year: Activision Studio Central, Avalanche Software, Bizarre Creations, Black Rock Studio, Bungie, Crytek, DICE, Disney Interactive Research, EDEN GAMES, Fantasy Lab, Gearbox, LucasArts, Naughty Dog, Quel Solaar, tri-Ace, SCE Santa Monica Studio, Square Enix R&D, Uber Entertainment, Ubisoft Montreal, United Front Games, Valve, and Volition. I hope for an even longer list in 2011!


After my last SIGGRAPH post, I spent a little more time digging around in the SIGGRAPH online scheduler, and found some more interesting details:

Global Illumination Across Industries

This is another film-game crossover course. It starts with a 15-minute introduction to global illumination by Jaroslav Křivánek, a leading researcher in efficient GI algorithms. It continues with six 25-30 minute talks:

  • Ray Tracing Solution for Film Production Rendering, by Marcos Fajardo, Solid Angle. Marcos created the Arnold raytracer which was adopted by Sony Pictures Imageworks for all of their production rendering (including CG animation features like Cloudy with a Chance of Meatballs and VFX for films like 2012 and Alice in Wonderland). This is unusual in film production; most VFX and animation houses use rasterization renderers like RenderMan.
  • Point-Based Global Illumination for Film Production, by Per Christensen, Pixar. Per won a Sci-Tech Oscar for this technique, which is widely used in film production.
  • Ray Tracing vs. Point-Based GI for Animated Films, by Eric Tabellion, PDI/Dreamworks. Eric worked on the global illumination (GI) solution which Dreamworks used in Shrek 2; it will be interesting to hear what he has to say on the differences between the two leading film production GI techniques.
  • Adding Real-Time Point-based GI to a Video Game, Michael Bunnell, Fantasy Lab. Mike was also awarded the Oscar for the point-based technique (Christophe Hery was the third winner). He actually originated it as a real-time technique while working at NVIDIA; while Per and Christophe developed it for film rendering, Mike founded Fantasy Lab to further develop the technique for use in games.
  • Pre-computing Lighting in Games, David Larsson, Illuminate Labs. Illuminate Labs make very good prelighting tools for games; I used their Turtle plugin for Maya when working on God of War III and was impressed with its speed, quality and robustness.
  • Dynamic Global Illumination for Games: From Idea to Production, Anton Kaplanyan, Crytek. Anton developed the cascaded light propagation volume technique used in CryEngine 3 for dynamic GI; the I3D 2010 paper describing the technique can be found on Crytek’s publication page.

The course concludes with a 5-minute Q&A session with all speakers.

An Introduction to 3D Spatial Interaction With Videogame Motion Controllers

This course is presented by Joseph LaViola (director of the University of Central Florida Interactive Systems and User Experience Lab) and Richard Marks from Sony Computer Entertainment (principal inventor of the EyeToy, PlayStation Eye, and PlayStation Move). Richard Marks gives two 45-minute talks, one on 3D Interfaces With 2D and 3D Cameras and one on 3D Spatial Interaction with the PlayStation Move. Prof. LaViola discusses Common Tasks in 3D User Interfaces, Working With the Nintendo Wiimote, and 3D Gesture Recognition Techniques.

Recent Advances in Real-Time Collision and Proximity Computations for Games and Simulations

After an introduction to the topic of collision detection and proximity queries, this course goes over recent research in collision detection for games including articulated, deformable and fracturing models. It concludes with optimization-oriented talks such as GPU-Based Proximity Computations (presented by Dinesh Manocha, University of North Carolina at Chapel Hill, one of the most prominent researchers in the area of collision detection), Optimizing Proximity Queries for CPU, SPU and GPU (presented by Erwin Coumans, Sony Computer Entertainment US R&D, primary author of the Bullet physics library, which is widely used for both games and feature films), and PhysX and Proximity Queries (presented by Richard Tonge, NVIDIA, one of the architects of the AGEIA physics processing unit – the company was bought by NVIDIA and their software library formed the basis of the GPU-accelerated PhysX library).

Advanced Techniques in Real-Time Hair Rendering and Simulation

This course is presented by Cem Yuksel (Texas A&M University) and Sarah Tariq (NVIDIA). Between them, they have done a lot of the recent research on efficient rendering and simulation of hair. The course covers all aspects of real-time hair rendering: data management, the rendering pipeline, transparency, antialiasing, shading, shadows, and multiple scattering. It concludes with a discussion of real-time dynamic simulation of hair.

The detailed schedule for the Global Illumination Across Industries course:

  • Ray Tracing Solution for Film Production Rendering – Fajardo
  • 2:40 pm: Point-Based Global Illumination for Film Production – Christensen
  • 3:05 pm: Ray Tracing vs. Point-Based GI for Animated Films – Tabellion
  • 3:30 pm: Break
  • 3:45 pm: Adding Real-Time Point-based GI to a Video Game – Bunnell
  • 4:15 pm: Pre-computing Lighting in Games – Larsson
  • 4:45 pm: Dynamic Global Illumination for Games: From Idea to Production – Kaplanyan
  • 5:10 pm: Conclusions, Q & A – All


For anyone still working on their SIGGRAPH 2010 schedule, SIGGRAPH now has an online scheduler available. They are also promising an iPhone app, but this has not yet materialized. Most courses (sadly, only one of mine) now have detailed schedules. These reveal some more detail about two of the most interesting courses for game and real-time rendering developers:

Advances in Real-Time Rendering in 3D Graphics and Games

The first half, Advances in Real-Time Rendering in 3D Graphics and Games I (Wednesday, 28 July, 9:00 AM – 12:15 PM, Room 515 AB) starts with a short introduction by Natalya Tatarchuk (Bungie), and continues with four 45 to 50-minute talks:

  • Rendering techniques in Toy Story 3, by John Ownby, Christopher Hall and Robert Hall (Disney).
  • A Real-Time Radiosity Architecture for Video Games, by Per Einarsson (DICE) and Sam Martin (Geomerics)
  • Real-Time Order Independent Transparency and Indirect Illumination using Direct3D 11, by Jason Yang and Jay McKee (AMD)
  • CryENGINE 3: Reaching the Speed of Light, by Anton Kaplanyan (Crytek)

The second half, Advances in Real-Time Rendering in 3D Graphics and Games II (Wednesday, 28 July, 2:00 PM – 5:15 PM, Room 515 AB) continues with five more talks (these are more variable in length, ranging from 25 to 50 minutes):

  • Sample Distribution Shadow Maps, by Andrew Lauritzen (Intel)
  • Adaptive Volumetric Shadow Maps, by Marco Salvi (Intel)
  • Uncharted 2: Character Lighting and Shading, by John Hable (Naughty Dog)
  • Destruction Masking in Frostbite 2 using Volume Distance Fields, by Robert Kihl (DICE)
  • Water Flow in Portal 2, by Alex Vlachos (Valve)

The course concludes with a short panel (Open Challenges for Rendering in Games and Future Directions) and a Q&A session with all the course speakers.

Beyond Programmable Shading

The first half, Beyond Programmable Shading I (Thursday, 29 July, 9:00 AM – 12:15 PM, Room 515 AB) includes seven 20-30 minute talks:

  • Looking Back, Looking Forward, Why and How is Interactive Rendering Changing, by Mike Houston (AMD)
  • Five Major Challenges in Interactive Rendering, by Johan Andersson (DICE)
  • Running Code at a Teraflop: How a GPU Shader Core Works, by Kayvon Fatahalian (Stanford)
  • Parallel Programming for Real-Time Graphics, by Aaron Lefohn (Intel)
  • DirectCompute Use in Real-Time Rendering Products, by Chas. Boyd (Microsoft)
  • Surveying Real-Time Beyond Programmable Shading Rendering Algorithms, by David Luebke (NVIDIA)
  • Bending the Graphics Pipeline, by Johan Andersson (DICE)

The second half, Beyond Programmable Shading II (Thursday, 29 July, 2:00 PM – 5:15 PM, Room 515 AB) starts with a short “re-introduction” by Aaron Lefohn (Intel) and continues with five 20-35 minute talks:

  • Keeping Many Cores Busy: Scheduling the Graphics Pipeline, by Jonathan Ragan-Kelley (MIT)
  • Evolving the Direct3D Pipeline for Real-Time Micropolygon Rendering, by Kayvon Fatahalian (Stanford)
  • Decoupled Sampling for Real-Time Graphics Pipelines, by Jonathan Ragan-Kelley (MIT)
  • Deferred Rendering for Current and Future Rendering Pipelines, by Andrew Lauritzen (Intel)
  • PantaRay: A Case Study in GPU Ray-Tracing for Movies, by Luca Fascione (Weta) and Jacopo Pantaleoni (NVIDIA)

The course closes with a 15-minute wrapup (What’s Next for Interactive Rendering Research?) by Mike Houston (AMD), followed by a 45-minute panel (What Role Will Fixed-Function Hardware Play in Future Graphics Architectures?) with course speakers Mike Houston, Kayvon Fatahalian, and Johan Andersson, joined by Steve Molnar (NVIDIA) and David Blythe (Intel); thanks to Aaron Lefohn for the update.

Both of these courses look extremely strong, and I recommend them to any SIGGRAPH attendee interested in real-time rendering (I definitely plan to attend them!).

Four presentations by DICE is an unusually large number for a single game developer, but that isn’t the whole story; they are actually doing two additional presentations in the Stylized Rendering in Games course, for a total of six!


I don’t often go to SIGGRAPH panels, but this year’s list includes three that look very tempting. Here they are, sorted by date:

Future Directions in Graphics Research

Sunday, 25 July, 3:45 PM – 5:15 PM

The SIGGRAPH website description says, “This panel presents the results of an NSF-funded workshop on defining broader, fundamental long-term research areas for potential funding opportunities in medical imaging and device design, manufacturing, computational photography, scientific visualization, and many other emerging areas in graphics research.” It’s important to know where the funding is going into computer graphics research, and what the researchers think the most promising future directions are. The panelists include some of the most prominent and influential computer graphics professors: Jessica Hodgins from Carnegie Mellon, James Foley (first author of “Computer Graphics: Principles and Practice”) from Georgia Tech, Pat Hanrahan (who probably has his name on more SIGGRAPH papers than anyone in the world) from Stanford University, and Donald P. Greenberg (whose list of former students would make a great first draft for a “who’s who” of computer graphics) from Cornell.

CS 292: The Lost Lectures; Computer Graphics People and Pixels in the Past 30 Years

Monday, 26 July, 3:45 PM – 5:15 PM

This is a unique idea for a panel – in the 1980s, Ed Catmull and Jim Blinn taught a hugely influential course on computer graphics. Among many others, it inspired Richard Chuang, who went on to found PDI. While teaching the course, Ed Catmull was building Lucasfilm’s computer graphics group, which later became Pixar. The panelists are Ed Catmull and Richard Chuang, who according to the website description “use video from the course to reflect on the evolution of computer graphics – from the genesis of Pixar and PDI to where we are today.” Catmull in particular is an amazing speaker – this looks well worth attending.

Large Steps Toward Open Source

Thursday, 29 July, 9:00 AM – 10:30 AM

Several influential film industry groups have open-sourced major bits of internal technology recently. This panel discusses why they did it, what the benefits were, and where the challenges lay. This is definitely relevant to the game industry – would it make sense for us to do the same? (Insomniac is already leading the way – I wish they had a representative on this panel). Panelists include Rob Bredow (CTO of Sony Pictures Imageworks, which has recently launched several important open source initiatives), Andy Hendrickson (CTO of Walt Disney Animation Studios, which has recently done the same, most notably including the Ptex texture mapping system), Florian Kainz (Principal R&D Engineer at Industrial Light & Magic and the key individual behind OpenEXR, which ILM open-sourced in 2003), and Bill Polson (Lead of Production Engineering at Pixar Animation Studios). Pixar doesn’t currently have any open-source initiatives that I know of – does Bill’s participation mean that they are about to announce one?


After the courses, the next best source of good SIGGRAPH material for games and real-time graphics professionals is the Talks (formerly called Sketches), and this year is no exception. The final list of Talks can be found on the SIGGRAPH Talks webpage, as well as in the Advance Program PDF. I will summarize the most relevant sessions here, sorted by date:

Avatar for Nerds

Sunday, 25 July, 2-3:30 pm

  • A Physically Based Approach to Virtual Character Deformations (Simon Clutterbuck and James Jacobs from Weta Digital Ltd.) – I saw an early version of this presentation at Digital Domain a few weeks ago – although they use an expensive physical muscle simulation, they bake the results into a pose-space deformation-like representation; this kind of approach could work for games as well (pose-space deformation approaches in general offer a useful way to “bake” expensive deformations; their use in games should be further explored).
  • Rendering “Avatar”: Spherical Harmonics in Production (Nick McKenzie, Martin Hill and Jon Allitt from Weta Digital Ltd.) – The website says “Application of spherical harmonics in a production rendering environment for accelerated final-frame rendering of complex scenes and materials.” This sounds very similar to uses of spherical harmonics in games, making this talk likely to yield applicable ideas.
  • PantaRay: Directional Occlusion for Fast Cinematic Lighting of Massive Scenes (Jacopo Pantaleoni, Timo Aila, and David Luebke from NVIDIA Research; Luca Fascione, Martin Hill and Sebastian Sylwan from Weta Digital Ltd.) – the website mentions “…a novel system for precomputation of ray-traced sparse, directional occlusion caches used as a primary lighting technology during the making of Avatar.” Like the previous talk, this sounds very game-like; these are interesting examples of the convergence between graphics techniques in film and games going in the less common direction, from games to film rather than vice-versa. Note that several of the authors of this talk are speaking at the “Beyond Programmable Shading” course, and there is also a paper about PantaRay (called “A System for Directional Occlusion for Fast Cinematic Lighting of Massive Scenes”).
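Spherical harmonic lighting, which both of these talks build on, works the same way in games: bake incoming light (or occlusion) into a handful of coefficients per probe or vertex, then evaluate cheaply at runtime. As a minimal sketch, here is the standard 9-coefficient irradiance evaluation from Ramamoorthi and Hanrahan’s SIGGRAPH 2001 environment-map paper, for one color channel (the coefficient ordering is my own convention, not anything from these talks):

```python
# Constants from Ramamoorthi & Hanrahan, "An Efficient Representation
# for Irradiance Environment Maps" (SIGGRAPH 2001).
C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def sh_irradiance(L, n):
    """Irradiance at unit surface normal n from 9 SH lighting coefficients.

    L is indexed [L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22],
    one scalar per basis function (a single color channel).
    """
    x, y, z = n
    return (C1 * L[8] * (x * x - y * y)        # quadratic terms
            + C3 * L[6] * z * z
            + C4 * L[0]                        # constant (ambient) term
            - C5 * L[6]
            + 2.0 * C1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z)
            + 2.0 * C2 * (L[3] * x + L[1] * y + L[2] * z))  # linear terms
```

A shader version is just this polynomial with the nine coefficients passed in as uniforms, which is why the technique is so cheap at runtime.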

Split Second Screen Space

Monday, 26 July, 2-3:30 pm

  • Screen Space Classification for Efficient Deferred Shading (Neil Hutchinson, Jeremy Moore, Balor Knight, Matthew Ritchie and George Parrish from Black Rock Studio) – website sez, “This talk introduces a general, extendible method for screen classification and demonstrates how its use accelerated shadowing, lighting, and post processing in Disney’s Split/Second video game.” This sounds like a useful extension to SPU-based screen tile classification methods; I wonder if it is cross-platform.
  • How to Get From 30 to 60 Frames Per Second in Video Games for “Free” (Dmitry Andreev from LucasArts) – well, this title is promising a lot! The website description doesn’t say much more than the title, but if LucasArts actually uses it in production this might be useful.
  • Split-Second Motion Blur (Kenny Mitchell, Matt Ritchie and Greg Modern from Black Rock Studio) – the description mentions “image and texture-space sampling techniques”, so this is probably a combination of blurring road textures in the direction of motion with screen-space techniques. Split-Second looks good; an overall description of their motion blur system should be interesting to hear.
  • A Deferred-Shading Pipeline for Real-Time Indirect Illumination (Cyril Soler and Olivier Hoel from INRIA Rhone-Alpes; Frank Rochet from EDEN GAMES) – there have been screen-space indirect illumination (approximation) techniques published before, but none used in games that I know of; there could be some useful ideas here.
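The general idea behind screen-space classification (my reading of the abstract, not Black Rock’s actual implementation) is to bin each screen tile by which shading features any of its pixels need, then shade each bin with the cheapest shader variant that covers it. A toy sketch:

```python
# Toy sketch of screen-space tile classification for deferred shading.
# Feature flags are hypothetical examples; a real system would derive
# them from G-buffer contents (sky mask, shadow results, light counts).
SKY, LIT, SHADOWED = 1, 2, 4

def classify_tiles(flags, width, height, tile=8):
    """flags: per-pixel feature bitmasks in row-major order.

    Returns {(tx, ty): combined_mask}; each tile would then be shaded
    with a specialized shader handling exactly that feature set.
    """
    tiles = {}
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            mask = 0
            for y in range(ty, min(ty + tile, height)):
                for x in range(tx, min(tx + tile, width)):
                    mask |= flags[y * width + x]
            tiles[(tx // tile, ty // tile)] = mask
    return tiles
```

The win comes from tiles that need only a subset of features (e.g. pure sky, or unshadowed) running a much cheaper shader than the fully general one.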

APIs for Rendering

Wednesday, 28 July, 2-3:30 pm

  • Open Shading Language (Larry Gritz, Clifford Stein, Chris Kulla and Alejandro Conty from Sony Pictures Imageworks) – this Open-Source project from Sony Pictures Imageworks is interesting in that it is a shading language designed from the ground up for ray-tracing renderers. Probably not of immediate relevance to games, but some day…
  • REYES using DirectX 11 (Andrei Tatarinov from NVIDIA Corporation) – the website summary claims that this REYES implementation uses “not only the compute power of GPU, but also the fixed-function stages of the graphics pipeline.” This is something I have wanted to see someone try for a long time; the typical pure-Compute approaches to GPU-accelerated REYES seem wasteful, given the similarities between the existing fixed function units and some of the operations in the REYES algorithm. It will be interesting to see how efficient this implementation ends up being.
  • WebGLot: High-Performance Visualization in the Browser (Dan Lecocq, Markus Hadwiger, and Alyn Rockwood from King Abdullah University of Science and Technology) – although anything that makes it easier for browser-based games to use the GPU is interesting, I’m not familiar enough with the existing approaches to judge how new this stuff is.

Games & Real Time

Thursday, 29 July, 10:45 am-12:15 pm

  • User-Generated Terrain in ModNation Racers (James Grieve, Clint Hanson, John Zhang, Lucas Granito and Cody Snyder from United Front Games) – from all accounts, the system for user-generated tracks and terrain in ModNation Racers is impressive; a description of this system by its developers is well worth attending.
  • Irradiance Rigs (Hong Yuan from University of Massachusetts Amherst; Derek Nowrouzezahrai from University of Toronto; Peter-Pike Sloan from Disney Interactive Studios) – this looks like an extension of light-probe lighting techniques; it promises better results for large objects and / or near lighting. These techniques are very common in games, and this talk looks likely to be useful.
  • Practical Morphological Anti-Aliasing on the GPU (Venceslas Biri and Adrien Herubel from Université Paris-Est; Stephane Deverly from Duran Duboi Studio) – since God of War III produced great visuals from an SPU implementation of Morphological Antialiasing, there has been much interest in the games industry for a more GPU-friendly version of the algorithm, for use on Xbox 360 or high-end PCs. It’s hard to tell from the short description on the website whether the version in this talk is any good, but it might well be worth attending the talk to find out.
  • Curvature-Dependent Reflectance Function for Rendering Translucent Materials (Hiroyuki Kubo from Waseda University; Yoshinori Dobashi from Hokkaido University; Shigeo Morishima from Waseda University) – this sounds similar to the paper Curvature-Based Shading of Translucent Materials, such as Human Skin by Konstantin Kolchin (we discuss it in the section on “Wrap Lighting” in RTR3, since it is essentially an attempt to put wrap lighting on a physically sound footing). Since in most cases curvature can be precomputed, this could be a cheap way to get more accurate subsurface scattering effects.
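For reference, the basic wrap-lighting formula mentioned in the last bullet is tiny: diffuse = max(0, (N·L + w) / (1 + w)). A sketch (the curvature-dependent idea, as I read these abstracts, is to drive the wrap amount from precomputed surface curvature rather than a fixed constant; the exact mapping is whatever the paper derives, not reproduced here):

```python
def wrap_diffuse(n_dot_l, wrap):
    """Wrap lighting: lets diffuse shading 'wrap' past the terminator.

    wrap = 0 gives the standard clamped Lambertian; wrap = 1 lights the
    entire sphere. Curvature-dependent variants make `wrap` a function
    of precomputed curvature, so thin, highly curved regions get more
    wrap, cheaply approximating subsurface scattering.
    """
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))
```

Because curvature is a static surface property in most cases, it can be baked into a texture or vertex attribute, making the runtime cost essentially one extra lookup.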

A lot of the film production talk sessions also look interesting, even without an explicit game or real-time connection; I have often found useful information at such talks in previous years. These sessions include “Elemental Training 101”, “All About Avatar”, “Rendering Intangibles”, “Volumes and Precipitation”, “Simulation in Production”, “Blowing $h!t Up”, “Pipelines and Asset Management” and “Fur, Feathers and Trees”.

