You are currently browsing the archive for the Miscellaneous category.

From the Solid Angle (Arnold renderer) guys, an ad in this month’s Cinefex:


Go get your hotel reservation for SIGGRAPH 2012 now. Reservations can be cancelled with no cost up to July 19th, so if you think there’s the slightest chance you’ll go, grab a room now.

The HQ hotel and the Figueroa are already gone. The Ritz Milner is certainly cheap and pretty close, but some TripAdvisor reviews eventually scared me off. I then switched to the Luxe City Center (it used to be a Holiday Inn), as I prefer being close to the convention center. But I recall complaints about things like the Wi-Fi stinking at the old Holiday Inn, and the place is even pricier now. So I rebooked yet again, into the Sheraton, a pretty good compromise among cost, quality, and location – being more downtown can be good, as you’re closer to restaurants and other nighttime activities.


In the past few weeks I’ve learned of a number of ways to access our book’s content. Some are just plain new; others I simply didn’t know about. Here’s a summary of sources I know, listed from lowest to highest price.

I’ve heard we’ll eventually have a “rent for six months” option for the Kindle version, which makes sense for students. Frankly, the Kindle and Google prices strike me as high: you don’t actually own anything, in the sense that you can’t sell it later. Our publisher says Amazon controls their Kindle price – beats me how that works. On the other hand, electronic versions have the advantages of weight (none extra) and searchability. Me, I love having my own internal PDF version of the book that I can search and copy & paste from. It’s unfortunate that PDFs are too easy to pass on to others.

Personally, I like the Google eBook and Books24x7 concept the best, where you can access the book from any web browser by simply logging in (no installation needed, no need to authorize the device, etc.). This method of access seems to be at a good balance point between reader usability and author/publisher protection.

HPG is a great little conference squarely aimed at interactive rendering techniques, including areas such as hardware and ray tracing. It will be June 25-27 in Paris (France, not Texas), colocated with another excellent gathering of researchers, the Eurographics Symposium on Rendering. See the HPG call for participation and EGSR CFP for more information.

Entirely gratuitous image follows, a voxelized and 3D-printed you-know-what (from here):


Full Disclosure Update: in the original post, I forgot to mention my affiliation with the SIGGRAPH 2012 committee (I’m the Games Chair).

I’ve given several presentations at SIGGRAPH, and have spoken to many other game developers who have done the same. We have all found it to be an amazing experience: fun, career-enhancing, educational, and somehow simultaneously ego-boosting and humbling.

While there are many other conferences (GDC being uppermost in many game developers’ minds), SIGGRAPH holds a special place for anyone whose work involves computer-generated visuals. For almost 40 years, SIGGRAPH has united the many disparate communities working in computer graphics, including academic research, CAD, fine arts, architecture, medical and scientific visualization, games, CG animation, and VFX. Each year the conference attracts the top technical and creative minds of the field for a week-long pressure cooker of learning, discussing, presenting, arguing, networking, and brainstorming about everything to do with computer graphics.

SIGGRAPH 2012 will take place in Los Angeles this August. There is a great opportunity for game developers to present at this year’s conference, but time is short since one of the most important deadlines is less than two weeks away.

Presenting at SIGGRAPH is a lot easier than most people think. While it is true that the quality bar is high, there are several programs that are seeking exactly the kind of practical, real-world advances and innovations that happen all the time in game development. Of these, the SIGGRAPH talk program is the friendliest to game developers; proposals for these 20-minute talks are easy to prepare, and the topics covered vary from rendering and shading techniques through tool and workflow improvements to specific look development and production case studies. As a general rule of thumb, if it’s high-quality work and the kind of thing a graphics programmer or technical artist would do, chances are it would make a good SIGGRAPH talk proposal.

The general submission deadline for talks is in just under two weeks, on February 21. That isn’t a lot of time, but fortunately talk submissions only require preparing a one-page PDF abstract and filling out some web forms (additional materials can help if you have them – more details can be found on the talk submission page). Still, getting approval from management typically takes time, so you shouldn’t delay if you are interested. To get an idea of the level of detail expected in the abstract, and of the variety of possible talks, here are some film and game talk abstracts from recent years: Making Faces – Eve Online’s New Portrait Rendering, MotorStorm Apocalypse: Creating Explosive and Dynamic Urban Off Road Racing, It’s Good to Be Alpha, Kami Geometry Instancer: putting the “smurfy” in Smurf Village, Practical Occlusion Culling in Killzone 3, and High Quality Previewing of Shading and Lighting for Killzone 3.

If you are reading this, please consider submitting the coolest thing you did last year as a talk; the small time investment will repay itself many times over.

Good luck with your submissions!


Like the title says, IEEE Computer Graphics & Applications has a call for papers on the topic of scattering: acquisition, modeling, and rendering. Deadline is August 25th, for inclusion in their May/June 2013 issue. See the complete CFP here.

A few days ago, I urged (among other actions) submitting responses to the RFIs from the White House Office of Science and Technology Policy regarding access to research. I myself responded to the RFI regarding peer-reviewed scholarly publications (I didn’t feel qualified to respond to the other one regarding access to research data sets, since I don’t use those as much in my work). The reply I sent is after the break – please note that this is my (Naty’s) personal opinion, and may not reflect Eric and Tomas’ positions.



Ke-Sen Huang, who in a perfect world would be given a stipend just to maintain his wonderful pages, has been on the job collecting I3D 2012 papers. See them here.


Shapeways has an amusing concept: take two headshot photos – front and side – and in a few minutes turn them into a 3D model that can then be sent to one of their 3D printers. The cost in the video was less than $25, plus shipping etc.


Inspired by Bing (a person, not a search engine) and by the acrobatics I saw tonight in Shanghai, time for a blog post.

So what’s up with graphics APIs? I’ve been working on a project for a fast 3D graphics system for Autodesk for about 4 years now; the base level (which hides the various flavors of DirectX and OpenGL) is used by Maya, Max, AutoCAD, Inventor, and other products. There are various higher-level optimizations we’ve added (and why Microsoft’s fxc effect compiler suddenly got a lot slower is a mystery), with some particularly nice work by one person here in the area of multithreading. Beyond these techniques, minimizing the raw number of calls to the API is the primary way to increase performance. Our rule of thumb is that you get about 1000-1500 calls a frame (CAD isn’t held to a 60 FPS rule, but we still need to be interactive). The usual tricks are to sort by state, and to shove as much geometry and processing as possible into a single draw call and so avoid the small batch problem. So, how silly is that? The best way to make your GPU run fast is to call it as little as possible? That’s an API with a problem.
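The state-sorting and batching tricks above can be sketched in a few lines. Here’s a toy CPU-side model (the struct and function names are made up for illustration, not anyone’s actual API): sort pending draws by a packed state key so identical states land adjacent, then count how many state changes – roughly, how many expensive API setup calls – survive.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical draw request: a state key (shader/texture/blend packed into
// one integer) plus a mesh id.
struct DrawRequest {
    uint64_t stateKey;
    int      meshId;
};

// Sort by state so identical states are adjacent, then count how many
// state changes (and thus roughly how many API "setup" calls) remain.
inline int batchedStateChanges(std::vector<DrawRequest>& draws) {
    std::sort(draws.begin(), draws.end(),
              [](const DrawRequest& a, const DrawRequest& b) {
                  return a.stateKey < b.stateKey;
              });
    int changes = 0;
    uint64_t last = ~0ull;  // sentinel: no state bound yet
    for (const auto& d : draws) {
        if (d.stateKey != last) { ++changes; last = d.stateKey; }
    }
    return changes;
}
```

With five draws across three states, the unsorted order could cost five state changes; sorted, it costs three – the same idea, writ large, is what keeps a frame under that 1000-1500 call budget.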

This is old news, Tim Sweeney railed against API limitations 3 years ago (sadly, the article’s gone poof). I wrote about his ideas here and added my own two cents. So where are we since then? DirectX 11 has been out awhile, adding three more stages to the pipeline for efficient tessellation of higher-order surfaces. The pipeline’s feeling a bit unwieldy at this point, with a lot of (admittedly optional) stages. There are still some serious headaches for developers, like having to somehow manage to put lighting and material shading in the same pixel shader (one good argument for deferred lighting and similar techniques). Forget about optimization; the arcane API knowledge needed to get even a simple rendering on the screen is considerable.
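The deferred approach mentioned above decouples the two jobs: a geometry pass writes surface attributes (normal, albedo, etc.) into a G-buffer, and a separate pass reads them back and does the lighting. Here’s a toy CPU sketch of that split – all names are illustrative, not any engine’s real API, and the lighting is just one directional light with N·L shading:

```cpp
#include <array>
#include <vector>

// One G-buffer sample per pixel: what the geometry pass would write.
struct GSample {
    std::array<float, 3> normal;
    std::array<float, 3> albedo;
};

// Lighting pass: runs after (and independently of) material evaluation,
// applying a single directional light as clamped N.L times albedo.
inline std::vector<std::array<float, 3>>
lightingPass(const std::vector<GSample>& gbuffer,
             const std::array<float, 3>& lightDir) {
    std::vector<std::array<float, 3>> out;
    out.reserve(gbuffer.size());
    for (const auto& g : gbuffer) {
        float ndotl = g.normal[0] * lightDir[0] + g.normal[1] * lightDir[1]
                    + g.normal[2] * lightDir[2];
        if (ndotl < 0.0f) ndotl = 0.0f;  // back-facing: no light
        out.push_back({g.albedo[0] * ndotl,
                       g.albedo[1] * ndotl,
                       g.albedo[2] * ndotl});
    }
    return out;
}
```

The point of the structure is that adding a second light means another pass over the G-buffer, not another permutation of every material’s pixel shader.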

I haven’t heard anything of a DirectX 12 in the works (except maybe this breathless posting, which I feel obligated to link to since I’m in China this month), nor can I imagine what they’d add of any significance. I expect there will be some minor XBox 720 (or whatever it will be called) -related tweaks specific to that architecture, if and when it exists. With the various CPU+GPU-on-a-chip products coming out – AMD’s Fusion family, NVIDIA’s Tegra 2, and similar from other companies (I think I counted 5, all totaled) – some access costs between the two processors become much cheaper and so change the rules. However, the API still looks to be the bottleneck.

Marketwise, and this is based entirely upon my work in scapulimancy, I see things shifting to mobile. If that isn’t at least the 247th time you’ve heard that, you haven’t been wasting enough time on the internet. But, it has some implications: first, DirectX 12 becomes mostly irrelevant. The GPU pipeline is creaky and overburdened enough right now, PC games are an important niche but not the focus, and mobile (specifically, iPad and other tablets) is fine with the functionality defined thus far by existing APIs. OpenGL ES will continue to evolve, but I doubt we’ll see for a good long while any algorithmically (vs. data-slinging) new elements added to the API that the current OpenGL 4.x and DX11 APIs don’t offer.

Basically, API development feels stalled to me, and that’s how it should be: mobile’s more important, PCs are a (large but slowly evolving) niche, and the current API system feels warped from a programming standpoint, with peculiar constructs like feeding text strings to the API to specify GPU shader effects, and strange contortions performed to avoid calling the API in order to coax the GPU to run fast.

Is there a way out? I felt a glimmer while attending HPG 2011 this year. The paper “High-Performance Software Rasterization on GPUs” by Samuli Laine and Tero Karras was one of my (and many attendees’) favorites, talking about how to efficiently implement a basic rasterizer using CUDA (the code’s open-sourced). It’s not as fast as dedicated hardware (no surprise there), but it’s at least in the same ballpark, with hardware being anywhere from 1.5x to 8.1x faster for their test cases, median being 3.6x. What I find exciting is the idea that you could actually program the pipeline, vs. it being locked away. They discuss ideas for optimization such as loosening the “first in, first out” rule for triangles currently enforced by all APIs. With its “yet another language” dependency, I can’t say I hope GPGPU is the future (and certainly CUDA isn’t, since it cuts out non-NVIDIA hardware vendors, but from all reports it’s currently the best way to experiment with GPGPU). Still, it’s nice to see that the fixed-function bits of the GPU, while important, are not an insurmountable limit in considering more flexible and general interactive rasterization programming models. Or, ray tracing – always have to stick that in there.
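For the curious, the heart of such a software rasterizer is just edge functions evaluated per pixel. Here’s a minimal single-threaded CPU sketch of that core idea – a toy stand-in for Laine and Karras’s parallel CUDA pipeline, not their actual code (their version also handles clipping, subpixel precision, hierarchical traversal, and so on):

```cpp
#include <utility>
#include <vector>

struct Vec2 { float x, y; };

// Signed area of triangle (a,b,c); its sign says which side of edge ab
// the point c is on. This is the classic rasterizer edge function.
inline float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Brute-force rasterizer: return the pixels whose centers lie inside a
// counter-clockwise triangle, testing every pixel in a width x height grid.
inline std::vector<std::pair<int, int>>
rasterize(Vec2 v0, Vec2 v1, Vec2 v2, int width, int height) {
    std::vector<std::pair<int, int>> covered;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};  // sample at the pixel center
            // Inside if the center is on the positive side of all three edges.
            if (edge(v0, v1, p) >= 0.0f && edge(v1, v2, p) >= 0.0f &&
                edge(v2, v0, p) >= 0.0f)
                covered.push_back({x, y});
        }
    }
    return covered;
}
```

Everything interesting in the paper is about doing this in parallel without serializing on the framebuffer – but the arithmetic at the bottom really is this simple, which is why a programmable pipeline feels within reach.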

So it’s “forward to the past”, looking at traditional algorithms like rasterization and ray tracing and how to gain efficiency (both in raw speed and in development time) on various modern architectures. That’s ultimately what it’s about for me, at least: spending lots of time fighting the API, gluing together strings to make shaders, and all the other craziness is a distraction and a time-waster. That said, there’s a cost/benefit calculation implicit in all of this. For example, using C# or Java is way more productive than C++, I’d say about 2x, mostly because you’re not tracking down memory problems like leaks and accesses to uninitialized or non-existent values. But, there’s so much legacy C++ code around that it’s still the language of graphics, as previously discussed here. Which means I expect none of the API weirdness to change for a solid decade, at the minimum. Please do go ahead and prove me wrong – I’d be thrilled!

Oh, and acrobatics? Hover your cursor over the image. BTW, the ERA show in Shanghai is wonderful, unlike current APIs.

