<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: The evils of fps</title>
	<atom:link href="http://www.realtimerendering.com/blog/the-evils-of-fps/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.realtimerendering.com/blog/the-evils-of-fps/</link>
	<description>Tracking the latest developments in interactive rendering techniques</description>
	<lastBuildDate>Mon, 17 Jun 2013 03:17:13 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.4.1</generator>
	<item>
		<title>By: Naty</title>
		<link>http://www.realtimerendering.com/blog/the-evils-of-fps/comment-page-1/#comment-94</link>
		<dc:creator>Naty</dc:creator>
		<pubDate>Sat, 25 Jul 2009 04:01:46 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=125#comment-94</guid>
		<description>The discussion assumes the researchers do the measurement properly. It is quite possible to correctly measure true GPU costs of individual operations at sub-millisecond precision; tools such as PIX do it all the time.</description>
		<content:encoded><![CDATA[<p>The discussion assumes the researchers do the measurement properly. It is quite possible to correctly measure true GPU costs of individual operations at sub-millisecond precision; tools such as PIX do it all the time.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: baxissimo</title>
		<link>http://www.realtimerendering.com/blog/the-evils-of-fps/comment-page-1/#comment-93</link>
		<dc:creator>baxissimo</dc:creator>
		<pubDate>Sat, 25 Jul 2009 03:00:56 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=125#comment-93</guid>
		<description>I don&#039;t know.  When I see a msec value for a GPU technique it always makes me a bit nervous.  How can I be sure they actually timed the GPU part of the operation to the end and didn&#039;t just time how long it took to submit the commands to the command stream?   Whereas for FPS, assuming they&#039;re measuring the whole period from frame start to frame start, then I know there&#039;s less wiggle room for measurement error.</description>
		<content:encoded><![CDATA[<p>I don&#8217;t know.  When I see a msec value for a GPU technique it always makes me a bit nervous.  How can I be sure they actually timed the GPU part of the operation to the end and didn&#8217;t just time how long it took to submit the commands to the command stream?   Whereas for FPS, assuming they&#8217;re measuring the whole period from frame start to frame start, then I know there&#8217;s less wiggle room for measurement error.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mauricio</title>
		<link>http://www.realtimerendering.com/blog/the-evils-of-fps/comment-page-1/#comment-91</link>
		<dc:creator>Mauricio</dc:creator>
		<pubDate>Thu, 23 Jul 2009 22:48:05 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=125#comment-91</guid>
		<description>I should add that the above comment has implications for profiling.  For example, a profile may indicate that Present() is taking a long time, when the GPU is in fact spending that time processing previous work, not the back buffer presentation.</description>
		<content:encoded><![CDATA[<p>I should add that the above comment has implications for profiling.  For example, a profile may indicate that Present() is taking a long time, when the GPU is in fact spending that time processing previous work, not the back buffer presentation.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mauricio</title>
		<link>http://www.realtimerendering.com/blog/the-evils-of-fps/comment-page-1/#comment-90</link>
		<dc:creator>Mauricio</dc:creator>
		<pubDate>Thu, 23 Jul 2009 22:45:26 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=125#comment-90</guid>
		<description>&quot;Therefore, SSDO is actually 3% more costly than SSAO, rather than 2.4%.&quot;

Hmmm... I think another example would have been more compelling.

It&#039;s also worth mentioning *how* you determine the time spent.  A naive implementation will measure a &quot;frame&quot; as the time between the first rendering call and presenting the back buffer (e.g. Present()), or the time it takes to make a single graphics API call (e.g. DrawIndexedPrimitive()).  Since the CPU and GPU are working asynchronously, that won&#039;t work: the call may return immediately while the GPU processes it.  Generally you want to measure over several frames, since the CPU is only allowed to get a few frames &quot;ahead&quot; of the GPU.</description>
		<content:encoded><![CDATA[<p>&#8220;Therefore, SSDO is actually 3% more costly than SSAO, rather than 2.4%.&#8221;</p>
<p>Hmmm&#8230; I think another example would have been more compelling.</p>
<p>It&#8217;s also worth mentioning *how* you determine the time spent.  A naive implementation will measure a &#8220;frame&#8221; as the time between the first rendering call and presenting the back buffer (e.g. Present()), or the time it takes to make a single graphics API call (e.g. DrawIndexedPrimitive()).  Since the CPU and GPU are working asynchronously, that won&#8217;t work: the call may return immediately while the GPU processes it.  Generally you want to measure over several frames, since the CPU is only allowed to get a few frames &#8220;ahead&#8221; of the GPU.</p>
]]></content:encoded>
	</item>
</channel>
</rss>