Tag Archives: patents

Seven Things for August 18, 2015

Seven things:

  • Stephen Hill’s great collection of SIGGRAPH 2015 links.
  • As he and others have noted, the entire SIGGRAPH 2015 proceedings is available to all for free download until the end of this week. Grab it now if you’re not a SIGGRAPH member. SIGGRAPH members always have Digital Library access to SIGGRAPH-sponsored conferences, even if not Digital Library subscribers, e.g. here’s the SIGGRAPH 2014 proceedings.
  • New term: froxel – frustum voxel. Alex Evans mentioned it in his fascinating talk in the Advances in RTR course; on page 83 he notes, “The term originated at the Sony WWS ATG group, I believe.” Diagram. He’s semi-right that Shadertoy programs, at their simplest, do ray marching through froxels; a speedup for Shadertoy is to use the distance to the nearest object, given by the distance field, as the step size (e.g., line 126 of this demo, most of which they live-coded during the wonderful Shadertoy studio workshop) – see the sketch just after this list.
  • Evidence that patents appear to not spur research and innovation, even for big pharma. I like The Economist, as it tries to weigh the evidence for & against some idea, vs. knee-jerking it one way or the other.
  • Folklore 1: Jim Blinn confirmed that the teapot model was scaled down vertically because it looked nicer that way, not because the pixels were non-square (a claim incorrectly propagated here and here). Jim & 3D printed teapot.
  • Which reminds me: here’s my random set of pics from SIGGRAPH 2015, with captions. I like, “Hundreds of beautiful designs, and only one or two that suck.” Update: more photos from Mauricio Vives, along with WebGL specific shots. Need more? Everyone’s.
  • Folklore 2 (updated and corrected): the rendering equation. Kajiya’s version used S as the subscript on the integral; in PoDIS, Glassner used an omega symbol because it looks like a hemisphere, which is what is being integrated over. Wikipedia uses it as well.
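For reference, here is the rendering equation in roughly the form Wikipedia gives (dropping the wavelength and time arguments), with the hemisphere Ω as the subscript on the integral:

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i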
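And to make the distance-field stepping trick in the froxel item concrete, here is a minimal sphere-tracing sketch. It’s plain C++ rather than actual Shadertoy GLSL, and the scene function and constants are invented for illustration, not taken from the demo linked above:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
    static float length(Vec3 a)         { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

    // Hypothetical scene: signed distance to a unit sphere at the origin.
    static float sceneSDF(Vec3 p) { return length(p) - 1.0f; }

    // March a ray from 'origin' along unit 'dir'. Instead of many tiny fixed-size
    // steps, step by the distance to the nearest surface each time (sphere tracing);
    // that distance is guaranteed not to overshoot anything.
    // Returns the hit distance, or -1 if nothing is hit within maxDist.
    float march(Vec3 origin, Vec3 dir, float maxDist = 100.0f)
    {
        float t = 0.0f;
        for (int i = 0; i < 128; ++i) {          // iteration cap keeps worst cases bounded
            float d = sceneSDF(add(origin, scale(dir, t)));
            if (d < 0.001f) return t;            // close enough: call it a hit
            t += d;                              // the key speedup: step by the SDF value
            if (t > maxDist) break;
        }
        return -1.0f;
    }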


Seven more tomorrow.

Why not?

I like to ask researchers whether they think the release of code should be encouraged, if not required, for technical papers. My argument (stolen from somewhere) is, “would you allow someone to publish an analysis of Hamlet but not allow anyone to see Hamlet itself?” The main argument for publishing the code (beyond helping the world as a whole) is that people can check your work, which I hear is a part of this science stuff in “computer science.”
       
Often they’re against it. The two reasons I hear are “my code sucks” and “we’ve patented the technique.” I can also imagine, “I don’t want those commercial fatcats stealing my code,” to which I say, “put some ridiculous license on it, then.” If the reason is, “I want to publish to enhance my resume and reputation, but I also want to keep it all secret because I’m going to make money off it,” then choose A or B; you can’t have both (or shouldn’t, in my Utopian fantasy world).


Seven Things for July 26th, 2011

The hard drive on my main computer died, which has the odd effect of giving me more time for blogging (and less for screwing around on random stuff). So, seven things:
  • First, if you’re going to HPG 2011, I’ll save you five minutes of searching for where it is: it’s at the Goldcorp Centre for the Arts, Google map here. Note also that things don’t start until 1:30 on Friday.
  • SIGGRAPH parties? I know nothing, except that the official SIGGRAPH reception is 9 to 11 PM Monday at the convention center, and the ACM SIGGRAPH Chapters Party is 8:30 PM to 2 AM on, oh, Monday again. Odd scheduling.
  • Timothy Lottes cannot be stopped: FXAA 3.11 is out (with improvements for thin lines), and 3.12 will soon appear. Note that the shader has a signature change, so your calling shader code will have to change, too.
  • At the Motorola developer site there’s a quick summary of various image compression types used for mobile phones and PCs.
  • Sebastien Hillaire implemented the God Rays effect from GPU Gems 3, showing results and problems. Code and executable available for download.
  • I’ve been enjoying some worthwhile articles on patents and copyrights lately, both new and old. Worth a mention: Myrhvold madness; a comic (a bit old but useful) on copyright – a good overview; The Public Domain, a free book by a law professor who helped establish Creative Commons; and the July 2011 CACM (behind the paywall, though), which had a nice article on why the U.S. dropped “opt-in” copyright back in 1989 (blame Europe). Best idea gleaned, from The Public Domain: the length of copyright is meant to motivate people to create works for payment, so a retroactive increase in the length of copyright (e.g., to protect Mickey Mouse) makes no sense – it creates no motivation for works already created.
  • Polygon Pictures’ office corridor would be a bad place to be if you worked way too many hours. Otherwise, nice!

More With the Links

I love the movie sequel title “2 Fast 2 Furious”. How clever, and a great way to guarantee there will never be a third movie. Well, there was, but they had to go the colon route, “… : Tokyo Drift”.

Which is indicative of nothing, as I don’t think I’ve ever actually seen any of these movies. I was reminded of the title as my goal today is to whip through the backlog of 72 potential blog resource links I’ve been gathering on del.icio.us. [Well, as it turns out, I got through 39 of them (the fresher ones), 33 to go…]

ShaderX^7 has been published. We hope to give an overview of it sometime soon (mine’s on backorder from Amazon.com).

From various sources I heard that OnLive got a bit of notice at GDC. Think: pure server-side computation of all graphics for a game, i.e., a cloud computing model. Now even your grandma’s computer or a rigged-out TV can play Crysis, assuming the net bandwidth is there. Which of course makes me think: what about latency? Lag in how other players see your actions is always there, and causes mismatches (“how did I instantly die?”). But increasing the lag between your own actions and seeing their consequences seems like a non-starter for shooters, at least.

Mark DeLoura has a great two-part article on which game engines are being licensed for titles. The first part is a general survey, the second is about the technology involved. I found it interesting to see what people cared about, e.g. multicore is on people’s minds. Nothing too shocking here, but it’s fantastic to see what is getting used, and why, in this marketplace.

Related to this, I happened across a list of game engines on Wikipedia. Not massively useful (e.g. no sense of what’s popular), but a starting place.

John Ratcliff has a graphics math library available for download with an unrestrictive reuse license. He recently added best-fit methods for AABBs and OBBs.
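For the AABB half of that, the fit is just a component-wise min/max sweep over the points. This is not Ratcliff’s code, merely a bare-bones sketch (it assumes a non-empty point list):

    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct AABB { Vec3 min, max; };

    // Best-fit axis-aligned bounding box: component-wise min/max over all points.
    // (Best-fit OBBs are much more involved -- typically covariance analysis or a
    // search over orientations.)
    AABB fitAABB(const std::vector<Vec3>& pts)
    {
        AABB box = { pts.front(), pts.front() };   // assumes pts is non-empty
        for (const Vec3& p : pts) {
            box.min.x = std::min(box.min.x, p.x);
            box.min.y = std::min(box.min.y, p.y);
            box.min.z = std::min(box.min.z, p.z);
            box.max.x = std::max(box.max.x, p.x);
            box.max.y = std::max(box.max.y, p.y);
            box.max.z = std::max(box.max.z, p.z);
        }
        return box;
    }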

I was interested to look at the open source, cross-platform (!) model viewer GLC. I’ve wanted something like this for doing some experiments with mesh manipulation. Not a bad viewer, but that’s all it is at this point, unfortunately: you can’t even export to a different 3D format. The search continues… If you know of a reasonable open source 3D file viewer/converter out there, please tell me. I should probably bite the bullet and just use Blender, but that application is way overkill for what I need.

CUDA voxel rendering – pretty impressive!

I liked this post on optimization mainly because of the line “I went in and found out that some title bar was getting rendered 140 times every time you refreshed the screen”. I can entirely relate (though 140 must be some kind of record): too many times I’ve put in output debugging statements showing updates, only to see 2, 3, or 6 updates happening. I once started on a project and in the first few weeks increased performance by 100%, simply by noting that the main draw path was being executed twice each frame.

Speaking of performance, there’s an article on volume rendering optimizations when using a ray-casting approach on the GPU.

Wolfenstein source code for the free iPhone version, along with Carmack’s documentation on the project, is available.

Software patents are only slightly dumber than business method patents, which are patently absurd. I hadn’t noticed until now, but there was recently a ruling on a business method patent, In re Bilski, which has been used to strike down software patents.

A detailed data and execution flow diagram for the new DirectX 11 pipeline front-end is available from Jolly Jeffers.

People are still making ray-tracing specific hardware; witness Caustic Graphics. They have a rather amazing claim: “The CausticOne, however, thrives in incoherent raytracing situations: encouraging the use of multiple secondary rays per pixel. Its level of performance is not affected by the degree of incoherence.” Good trick. That said, I can’t say I see any large customer base for such a product. This seems like a company designed for acquisition, similar to Ageia. Fine by me, best of luck to them.

I’m happy to learn that the Humus site now has a news blog. This is a great site for demos of advanced techniques, and for honest comments about strengths and limitations of various approaches.

Another blog: The Geeks of 3D. Tracks demos, APIs, SDKs, and graphics card releases. Handy – some of the links here I found there.

There was a nice little article on data alignment on Gamasutra. Proper alignment is a key element in getting high performance.
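As a tiny illustration of why field ordering matters (my own toy example, not from the article; the exact sizes assume typical 4-byte/8-byte alignment rules, as on x86-64):

    #include <cstdio>

    // Fields ordered carelessly: padding is inserted so each member sits on its
    // natural alignment boundary.
    struct Sloppy {
        char   a;   // 1 byte, then 7 bytes of padding before the double
        double b;   // 8 bytes, 8-byte aligned
        char   c;   // 1 byte, then 7 bytes of tail padding
    };              // typically 24 bytes

    // Same fields, largest first: the two chars share one padded region.
    struct Packed {
        double b;   // 8 bytes
        char   a;   // 1 byte
        char   c;   // 1 byte, then 6 bytes of tail padding
    };              // typically 16 bytes

    int main()
    {
        std::printf("Sloppy: %zu bytes\n", sizeof(Sloppy));
        std::printf("Packed: %zu bytes\n", sizeof(Packed));
    }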

I was trying to find the name of the projection of equidistant latitude and longitude lines for a surrounding spherical environment. From this interesting page (click on the “Wall Maps of the World” text) I found it: Plate Carrée.
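In case it’s handy, here is the usual lookup that goes with such a map: converting a direction into latitude/longitude texture coordinates. A quick sketch of mine; which axis is “up” and where longitude zero sits are just assumed conventions here:

    #include <cmath>

    // Map a unit direction vector to (u, v) in [0,1]^2 on an equirectangular
    // (Plate Carree) environment map: u is longitude, v is latitude, both spaced
    // evenly -- which is exactly why the grid lines are equidistant.
    void dirToPlateCarree(float x, float y, float z, float& u, float& v)
    {
        const float PI = 3.14159265358979f;
        float longitude = std::atan2(x, -z);   // assumed convention: -z is "forward"
        float latitude  = std::asin(y);        // assumed convention: +y is "up"
        u = (longitude + PI) / (2.0f * PI);    // [-pi, pi]     -> [0, 1]
        v = (latitude + PI * 0.5f) / PI;       // [-pi/2, pi/2] -> [0, 1]
    }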

Predicting the future is so much more interesting than predicting the past. I love this: MIPS per $1000. It’s entertaining to equate raw computing power with structured processing. By the same equivalence, I should be able to hook up 1700 mice in parallel to get a human brain.

A great line from a GPU review: “Nvidia’s new line of unbelievably expensive cards will block out the sun, and ray-trace its own shadow in real time.”

Faber College’s motto is “Knowledge is Good”. Learning about the idea of metamers would have saved this article from confusion. Coming back to this article now, I see all the comments have been removed, and an apologia trying to convert confusion into enlightenment added, but I think this still misses the point. Sure, there is a color associated with a single wavelength of light. But, my guess is that 99.99% of the colors we perceive arrive at any location on the eye as light with a spectral mix of wavelengths, not a single wavelength (Naty will correct me if I’m wrong). Unless you’re Dr. Evil and deal with sharks with frickin’ laser beams on their heads on a daily basis. Hmmm, I’m probably forgetting some other single-wavelength phenomena, like fluorescence. Anyway, the article did lead me to look up more information on metamers on Wikipedia, where I learnt about metameric failure, a term I hadn’t heard before. One more reason a simple RGB representation of color isn’t sufficient.
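To make the metamer idea concrete, here is a toy demonstration: two different spectral distributions that integrate to exactly the same RGB triple. The “sensitivity curves” and spectra are invented for illustration and are nothing like the real CIE functions:

    #include <cstdio>

    // Toy 4-sample "sensor response curves" -- invented for illustration only.
    const float R[4] = { 0.0f, 0.0f, 1.0f, 1.0f };
    const float G[4] = { 0.0f, 1.0f, 1.0f, 0.0f };
    const float B[4] = { 1.0f, 1.0f, 0.0f, 0.0f };

    // Two clearly different spectral power distributions over the same 4 samples.
    const float spectrumA[4] = { 0.5f, 0.5f, 0.5f, 0.5f };
    const float spectrumB[4] = { 0.7f, 0.3f, 0.7f, 0.3f };

    float integrate(const float spectrum[4], const float response[4])
    {
        float sum = 0.0f;
        for (int i = 0; i < 4; ++i) sum += spectrum[i] * response[i];
        return sum;
    }

    int main()
    {
        // Both spectra produce (1, 1, 1): metamers under these response curves,
        // even though the spectra themselves differ.
        std::printf("A -> (%.2f, %.2f, %.2f)\n",
                    integrate(spectrumA, R), integrate(spectrumA, G), integrate(spectrumA, B));
        std::printf("B -> (%.2f, %.2f, %.2f)\n",
                    integrate(spectrumB, R), integrate(spectrumB, G), integrate(spectrumB, B));
    }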

Cute thing: Snapily lets you turn some set of images or video into lenticular prints.

I don’t have a lot to say about what I do at Autodesk. Here’s a tidbit.

Art for the day, crayons as pixels.