<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Gamefest 2010 Presentations</title>
	<atom:link href="http://www.realtimerendering.com/blog/gamefest-2010-presentations/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.realtimerendering.com/blog/gamefest-2010-presentations/</link>
	<description>Tracking the latest developments in interactive rendering techniques</description>
	<lastBuildDate>Mon, 17 Jun 2013 03:17:13 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.4.1</generator>
	<item>
		<title>By: Real-Time Rendering &#183; Update on Splinter Cell: Conviction Rendering</title>
		<link>http://www.realtimerendering.com/blog/gamefest-2010-presentations/comment-page-1/#comment-1677</link>
		<dc:creator>Real-Time Rendering &#183; Update on Splinter Cell: Conviction Rendering</dc:creator>
		<pubDate>Sun, 11 Jul 2010 16:55:05 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=1495#comment-1677</guid>
		<description>[...] my recent post about Gamefest 2010, I discussed Stephen Hill&#8217;s great presentation on the rendering [...]</description>
		<content:encoded><![CDATA[<p>[...] my recent post about Gamefest 2010, I discussed Stephen Hill&#8217;s great presentation on the rendering [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: self_shadow</title>
		<link>http://www.realtimerendering.com/blog/gamefest-2010-presentations/comment-page-1/#comment-1662</link>
		<dc:creator>self_shadow</dc:creator>
		<pubDate>Tue, 29 Jun 2010 15:25:19 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=1495#comment-1662</guid>
		<description>Some minor corrections on the Conviction talk:

1. The HZB visibility system actually runs entirely on the GPU, if you discount iterating over the results.

2. The capsules were just an artistic representation; the occlusion isn’t, in fact, achieved via a cylinder and two spheres.

Instead, we transform the receiver point into ‘ellipsoid’-local space, scale the axes and perform a lookup into a 1D texture (indexed by distance to centre) to get the zonal harmonics for a unit sphere, which are then used to scale the direction vector. This works very well in practice due to the softness of the occlusion. It’s also pretty similar to “Hardware Accelerated Ambient Occlusion Techniques on GPUs” (http://sites.google.com/site/perumaal/), although they work purely with spheres, which may simplify some things. I checked the P4 history, and our implementation predates their publication, so I’m not sure whether there was any direct inspiration. I’m pretty sure our initial version also predated “Real-time Soft Shadows in Dynamic Scenes using Spherical Harmonic Exponentiation”, since I remember attending SIGGRAPH that year and teasing a friend about the fact that we had something really simple.

3. We don’t currently do cross-bilateral upsampling.

Instead, we just take the most representative sample by comparing the full-res depth and object ID against the surrounding down-sampled values. This amounts to pixel replication most of the time, which obviously isn’t as good, but it is often hard to notice in areas of smooth gradation. Near the end I did try performing a bilinearly interpolated lookup for pixels with a matching ID and a nearby depth, but there were failure cases, so I dropped it for lack of time. Next time around I will certainly look at more sophisticated upsampling, or at simply increasing the resolution (as some optimisations near the end paid off).</description>
		<content:encoded><![CDATA[<p>Some minor corrections on the Conviction talk:</p>
<p>1. The HZB visibility system actually runs entirely on the GPU, if you discount iterating over the results.</p>
<p>2. The capsules were just an artistic representation; the occlusion isn’t, in fact, achieved via a cylinder and two spheres.</p>
<p>Instead, we transform the receiver point into ‘ellipsoid’-local space, scale the axes and perform a lookup into a 1D texture (indexed by distance to centre) to get the zonal harmonics for a unit sphere, which are then used to scale the direction vector. This works very well in practice due to the softness of the occlusion. It’s also pretty similar to “Hardware Accelerated Ambient Occlusion Techniques on GPUs” (<a href="http://sites.google.com/site/perumaal/" rel="nofollow">http://sites.google.com/site/perumaal/</a>), although they work purely with spheres, which may simplify some things. I checked the P4 history, and our implementation predates their publication, so I’m not sure whether there was any direct inspiration. I’m pretty sure our initial version also predated “Real-time Soft Shadows in Dynamic Scenes using Spherical Harmonic Exponentiation”, since I remember attending SIGGRAPH that year and teasing a friend about the fact that we had something really simple.</p>
<p>3. We don’t currently do cross-bilateral upsampling.</p>
<p>Instead, we just take the most representative sample by comparing the full-res depth and object ID against the surrounding down-sampled values. This amounts to pixel replication most of the time, which obviously isn’t as good, but it is often hard to notice in areas of smooth gradation. Near the end I did try performing a bilinearly interpolated lookup for pixels with a matching ID and a nearby depth, but there were failure cases, so I dropped it for lack of time. Next time around I will certainly look at more sophisticated upsampling, or at simply increasing the resolution (as some optimisations near the end paid off).</p>
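<p>A minimal sketch of that selection logic, with made-up struct and function names rather than the shipped code:</p>

```c
#include <math.h>
#include <float.h>

typedef struct {
    float depth;   /* down-sampled depth */
    int   id;      /* down-sampled object ID */
    float ao;      /* down-sampled occlusion value */
} LowResSample;

/* Pick the most representative of the four surrounding low-res samples:
   prefer samples whose object ID matches the full-res pixel, and among
   those, the one whose depth is closest to the full-res depth. If no ID
   matches, fall back to the nearest depth overall. */
float representative_upsample(float full_depth, int full_id,
                              const LowResSample s[4]) {
    int best = -1;
    float best_err = FLT_MAX;
    /* first pass: matching IDs only */
    for (int i = 0; i < 4; ++i) {
        if (s[i].id != full_id) continue;
        float err = fabsf(s[i].depth - full_depth);
        if (err < best_err) { best_err = err; best = i; }
    }
    if (best >= 0) return s[best].ao;
    /* fallback: no ID match, take the nearest depth */
    for (int i = 0; i < 4; ++i) {
        float err = fabsf(s[i].depth - full_depth);
        if (err < best_err) { best_err = err; best = i; }
    }
    return s[best].ao;
}
```

<p>Because the chosen sample’s value is used directly, this degenerates to pixel replication whenever one neighbour dominates, which is exactly the behaviour described above.</p>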
]]></content:encoded>
	</item>
</channel>
</rss>