<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Why submit when you can blog?</title>
	<atom:link href="http://www.realtimerendering.com/blog/why-submit-when-you-can-blog/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.realtimerendering.com/blog/why-submit-when-you-can-blog/</link>
	<description>Tracking the latest developments in interactive rendering techniques</description>
	<lastBuildDate>Mon, 17 Jun 2013 03:17:13 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.4.1</generator>
	<item>
		<title>By: In review &#8211; My 1st year (and a bit) of blogging &#124; dickyjim</title>
		<link>http://www.realtimerendering.com/blog/why-submit-when-you-can-blog/comment-page-1/#comment-2293</link>
		<dc:creator>In review &#8211; My 1st year (and a bit) of blogging &#124; dickyjim</dc:creator>
		<pubDate>Tue, 15 Jan 2013 13:34:55 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=3108#comment-2293</guid>
		<description>[...] like to think that my first year (and a bit) has been successful. Based on a post from Eric@realtimerendering: One survey (from Caslon Analytics) gives 126 days for the average lifetime of a typical [...]</description>
		<content:encoded><![CDATA[<p>[...] like to think that my first year (and a bit) has been successful. Based on a post from Eric@realtimerendering: One survey (from Caslon Analytics) gives 126 days for the average lifetime of a typical [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eric</title>
		<link>http://www.realtimerendering.com/blog/why-submit-when-you-can-blog/comment-page-1/#comment-2162</link>
		<dc:creator>Eric</dc:creator>
		<pubDate>Fri, 15 Jun 2012 18:21:03 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=3108#comment-2162</guid>
		<description>Eric E.: there&#039;s one blog I follow that mostly fits your requirements, https://twitter.com/#!/morgan3d, Morgan McGuire&#039;s (by coincidence, the editor-in-chief of JCGT). His feed is purely cool links that have to do with graphics or games, and why they&#039;re cool. I haven&#039;t seen any others with this high a signal to noise ratio.

Yes, ompf.org is a tragedy, so very sad - even the Wayback Machine&#039;s last save was only January 2010. For mirroring, I had the same idea for JCGT, that there should just be a single link to a &quot;download everything&quot; RAR archive of the whole site (ZIP archives can be a maximum of &quot;only&quot; 4 GB in size, so RAR&#039;s the way to go). The main danger is old mirrors getting mistaken for &quot;the real thing&quot;. I had this happen with the Ray Tracing News (which has a single &quot;here&#039;s the website&quot; zip), with people saying &quot;where&#039;s the new issue?&quot; when the issue was on the main site but not the particular mirror.</description>
		<content:encoded><![CDATA[<p>Eric E.: there&#8217;s one blog I follow that mostly fits your requirements, <a href="https://twitter.com/#!/morgan3d" rel="nofollow">https://twitter.com/#!/morgan3d</a>, Morgan McGuire&#8217;s (by coincidence, the editor-in-chief of JCGT). His feed is purely cool links that have to do with graphics or games, and why they&#8217;re cool. I haven&#8217;t seen any others with this high a signal to noise ratio.</p>
<p>Yes, ompf.org is a tragedy, so very sad &#8211; even the Wayback Machine&#8217;s last save was only January 2010. For mirroring, I had the same idea for JCGT, that there should just be a single link to a &#8220;download everything&#8221; RAR archive of the whole site (ZIP archives can be a maximum of &#8220;only&#8221; 4 GB in size, so RAR&#8217;s the way to go). The main danger is old mirrors getting mistaken for &#8220;the real thing&#8221;. I had this happen with the Ray Tracing News (which has a single &#8220;here&#8217;s the website&#8221; zip), with people saying &#8220;where&#8217;s the new issue?&#8221; when the issue was on the main site but not the particular mirror.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Steve Worley</title>
		<link>http://www.realtimerendering.com/blog/why-submit-when-you-can-blog/comment-page-1/#comment-2161</link>
		<dc:creator>Steve Worley</dc:creator>
		<pubDate>Thu, 14 Jun 2012 03:30:48 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=3108#comment-2161</guid>
		<description>It&#039;s important to preserve the information on (useful) blogs, but similarly, preserving useful forums is also important. I give the painful example of ompf.org, which hosted the definitive realtime raytracing community for years. It suddenly disappeared in November 2011.  There are some partial archives, mostly from 2009, but much if not most of the great technical info and discussion is gone. 
What&#039;s the solution to that problem? Part of it may be to simply not trust small hosts, but that&#039;s not always possible.

Perhaps one useful method which could help (but not solve) the disappearance problem for blogs, forums and journals would be for each to host a current archive of itself. For example, JCGT could include all of its papers and code into a big (perhaps very big) archive file, downloadable at any time (by BitTorrent if size becomes an issue), and updated say quarterly.  The loss of ompf.org would have been minor if any other site could restore a read-only mirror of all the ompf threads from such a periodic archive. Even a giant raw text dump of the ex-forum would still blunt most of the pain.</description>
		<content:encoded><![CDATA[<p>It&#8217;s important to preserve the information on (useful) blogs, but similarly, preserving useful forums is also important. I give the painful example of ompf.org, which hosted the definitive realtime raytracing community for years. It suddenly disappeared in November 2011.  There are some partial archives, mostly from 2009, but much if not most of the great technical info and discussion is gone.<br />
What&#8217;s the solution to that problem? Part of it may be to simply not trust small hosts, but that&#8217;s not always possible.</p>
<p>Perhaps one useful method which could help (but not solve) the disappearance problem for blogs, forums and journals would be for each to host a current archive of itself. For example, JCGT could include all of its papers and code into a big (perhaps very big) archive file, downloadable at any time (by BitTorrent if size becomes an issue), and updated say quarterly.  The loss of ompf.org would have been minor if any other site could restore a read-only mirror of all the ompf threads from such a periodic archive. Even a giant raw text dump of the ex-forum would still blunt most of the pain.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eric Enderton</title>
		<link>http://www.realtimerendering.com/blog/why-submit-when-you-can-blog/comment-page-1/#comment-2160</link>
		<dc:creator>Eric Enderton</dc:creator>
		<pubDate>Wed, 13 Jun 2012 23:11:56 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=3108#comment-2160</guid>
		<description>According to the game industry panel at I3D 2012:  Game developers disseminate their technical ideas by blogs, Twitter, and beer.  A blog post is quick, and it&#039;s easy to include source code.  Using Web3D it can even be live running source code.  Review and selection is by who tweets your blog post.  I also heard that what game developers care about most when they see a technical article is, &quot;How quickly can I get the code running, so I can kick the tires?&quot;

This is foreign to me.  I would rather read JCGT and benefit from the work of editors.  But maybe I&#039;m just behind the times.  Actually if each tweet was two sentences and a link, instead of just a link, I&#039;d be sold:  one sentence to summarize the idea, one sentence to give a review.  (Like Halliwell&#039;s Film Guide.)</description>
		<content:encoded><![CDATA[<p>According to the game industry panel at I3D 2012:  Game developers disseminate their technical ideas by blogs, Twitter, and beer.  A blog post is quick, and it&#8217;s easy to include source code.  Using Web3D it can even be live running source code.  Review and selection is by who tweets your blog post.  I also heard that what game developers care about most when they see a technical article is, &#8220;How quickly can I get the code running, so I can kick the tires?&#8221;</p>
<p>This is foreign to me.  I would rather read JCGT and benefit from the work of editors.  But maybe I&#8217;m just behind the times.  Actually if each tweet was two sentences and a link, instead of just a link, I&#8217;d be sold:  one sentence to summarize the idea, one sentence to give a review.  (Like Halliwell&#8217;s Film Guide.)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eric</title>
		<link>http://www.realtimerendering.com/blog/why-submit-when-you-can-blog/comment-page-1/#comment-2159</link>
		<dc:creator>Eric</dc:creator>
		<pubDate>Wed, 13 Jun 2012 19:44:20 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=3108#comment-2159</guid>
		<description>As far as &quot;will the journal&#039;s contents disappear?&quot;, Morgan McGuire, the editor-in-chief, replies:

&quot;The archival storage that I&#039;m setting up is backed indefinitely by the endowment and a Mellon grant. According to a big sign outside of the library the college is a Library of Congress official archive (we have the original drafts of the declaration of independence, the US constitution, all congressional archives, etc.) and this new initiative is the digital extension of the physical storage. So I bet the Williams archiving plan is at least as good as the ACM&#039;s... and the college has been successfully keeping records since 1753, so their track record is pretty good.&quot;</description>
		<content:encoded><![CDATA[<p>As far as &#8220;will the journal&#8217;s contents disappear?&#8221;, Morgan McGuire, the editor-in-chief, replies:</p>
<p>&#8220;The archival storage that I&#8217;m setting up is backed indefinitely by the endowment and a Mellon grant. According to a big sign outside of the library the college is a Library of Congress official archive (we have the original drafts of the declaration of independence, the US constitution, all congressional archives, etc.) and this new initiative is the digital extension of the physical storage. So I bet the Williams archiving plan is at least as good as the ACM&#8217;s&#8230; and the college has been successfully keeping records since 1753, so their track record is pretty good.&#8221;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eric</title>
		<link>http://www.realtimerendering.com/blog/why-submit-when-you-can-blog/comment-page-1/#comment-2158</link>
		<dc:creator>Eric</dc:creator>
		<pubDate>Wed, 13 Jun 2012 16:33:17 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=3108#comment-2158</guid>
		<description>Oh, and I forgot to mention one mechanism that I personally think is worthwhile for archiving JCGT: the &quot;collected works&quot; book. As an example, JGT made an &quot;Editors&#039; Choice&quot; book, http://www.amazon.com/Graphics-Tools-The-Editors-Choice/dp/1568812469, articles we thought were particularly worthwhile. There are certainly options for print on demand, or making a book of one or two years&#039; worth of material, etc. It&#039;s an interesting question whether we&#039;d need to use a traditional publisher (just so we&#039;d get an ISBN number, etc.) or can roll our own print-on-demand version. A traditional publisher does have the advantage of marketing to libraries.</description>
		<content:encoded><![CDATA[<p>Oh, and I forgot to mention one mechanism that I personally think is worthwhile for archiving JCGT: the &#8220;collected works&#8221; book. As an example, JGT made an &#8220;Editors&#8217; Choice&#8221; book, <a href="http://www.amazon.com/Graphics-Tools-The-Editors-Choice/dp/1568812469" rel="nofollow">http://www.amazon.com/Graphics-Tools-The-Editors-Choice/dp/1568812469</a>, articles we thought were particularly worthwhile. There are certainly options for print on demand, or making a book of one or two years&#8217; worth of material, etc. It&#8217;s an interesting question whether we&#8217;d need to use a traditional publisher (just so we&#8217;d get an ISBN number, etc.) or can roll our own print-on-demand version. A traditional publisher does have the advantage of marketing to libraries.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eric</title>
		<link>http://www.realtimerendering.com/blog/why-submit-when-you-can-blog/comment-page-1/#comment-2157</link>
		<dc:creator>Eric</dc:creator>
		<pubDate>Wed, 13 Jun 2012 16:24:35 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=3108#comment-2157</guid>
		<description>The Journal of Graphics Tools updated the &quot;homepage&quot; for each article as needed, and I expect as a minimum we&#039;ll continue this practice. For example, here&#039;s one with both addenda and a comment from a reader: http://jgt.akpeters.com/papers/vanOverVeldWyvill96/</description>
		<content:encoded><![CDATA[<p>The Journal of Graphics Tools updated the &#8220;homepage&#8221; for each article as needed, and I expect as a minimum we&#8217;ll continue this practice. For example, here&#8217;s one with both addenda and a comment from a reader: <a href="http://jgt.akpeters.com/papers/vanOverVeldWyvill96/" rel="nofollow">http://jgt.akpeters.com/papers/vanOverVeldWyvill96/</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: sebastien lagarde</title>
		<link>http://www.realtimerendering.com/blog/why-submit-when-you-can-blog/comment-page-1/#comment-2156</link>
		<dc:creator>sebastien lagarde</dc:creator>
		<pubDate>Wed, 13 Jun 2012 15:21:29 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=3108#comment-2156</guid>
		<description>On the point about statistics: I find them motivating to continue writing, because not everyone gives feedback, even just to say whether or not they enjoyed reading an article. And knowing which of your topics are popular helps you learn what readers are interested in.

On the article update point, I have nothing to add, just some real examples that match what you say:

- Take the &quot;lean mapping&quot; paper from Marc Olano; it is as you describe:
http://www.csee.umbc.edu/~olano/papers/lean/
Errors are reported on a homepage along with other links. That is fine as long as you check the homepage.

- Take the &quot;Frequency Domain Normal Map Filtering&quot; paper from Han et al. The web page reports no errata:
http://graphics.berkeley.edu/papers/Han-FDN-2007-07/index.html
However, Morten Mikkelsen found errors in this paper, contacted Han to confirm them, and suggested corrections in his thesis:
http://image.diku.dk/projects/media/morten.mikkelsen.08.pdf
Sadly, neither the original paper nor the website mentions the errors, or links to Morten Mikkelsen&#039;s work.

- Last case: updating the paper itself, like &quot;Stupid Spherical Harmonics (SH) Tricks&quot; from Peter-Pike Sloan:
http://www.ppsloan.org/publications/StupidSH36.pdf
I like this approach because the paper has a version number, the revision history can be found at the end of the paper, and only the latest corrected version is available, so there is no way to miss an update.

I agree that there is no all-in-one or clearly best solution. Re-editing a paper is difficult, and reprinting one is impossible; this is why I like blogs. Maybe JCGT should have a homepage per article with a list of update links and corrections?

Just my two cents</description>
		<content:encoded><![CDATA[<p>On the point about statistics: I find them motivating to continue writing, because not everyone gives feedback, even just to say whether or not they enjoyed reading an article. And knowing which of your topics are popular helps you learn what readers are interested in.</p>
<p>On the article update point, I have nothing to add, just some real examples that match what you say:</p>
<p>- Take the &#8220;lean mapping&#8221; paper from Marc Olano; it is as you describe:<br />
<a href="http://www.csee.umbc.edu/~olano/papers/lean/" rel="nofollow">http://www.csee.umbc.edu/~olano/papers/lean/</a><br />
Errors are reported on a homepage along with other links. That is fine as long as you check the homepage.</p>
<p>- Take the &#8220;Frequency Domain Normal Map Filtering&#8221; paper from Han et al. The web page reports no errata:<br />
<a href="http://graphics.berkeley.edu/papers/Han-FDN-2007-07/index.html" rel="nofollow">http://graphics.berkeley.edu/papers/Han-FDN-2007-07/index.html</a><br />
However, Morten Mikkelsen found errors in this paper, contacted Han to confirm them, and suggested corrections in his thesis:<br />
<a href="http://image.diku.dk/projects/media/morten.mikkelsen.08.pdf" rel="nofollow">http://image.diku.dk/projects/media/morten.mikkelsen.08.pdf</a><br />
Sadly, neither the original paper nor the website mentions the errors, or links to Morten Mikkelsen&#8217;s work.</p>
<p>- Last case: updating the paper itself, like &#8220;Stupid Spherical Harmonics (SH) Tricks&#8221; from Peter-Pike Sloan:<br />
<a href="http://www.ppsloan.org/publications/StupidSH36.pdf" rel="nofollow">http://www.ppsloan.org/publications/StupidSH36.pdf</a><br />
I like this approach because the paper has a version number, the revision history can be found at the end of the paper, and only the latest corrected version is available, so there is no way to miss an update.</p>
<p>I agree that there is no all-in-one or clearly best solution. Re-editing a paper is difficult, and reprinting one is impossible; this is why I like blogs. Maybe JCGT should have a homepage per article with a list of update links and corrections?</p>
<p>Just my two cents</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eric</title>
		<link>http://www.realtimerendering.com/blog/why-submit-when-you-can-blog/comment-page-1/#comment-2155</link>
		<dc:creator>Eric</dc:creator>
		<pubDate>Wed, 13 Jun 2012 13:49:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=3108#comment-2155</guid>
		<description>Sebastien: statistics are an interesting point (and something an online journal could provide), though I&#039;m not sure of their use beyond, &quot;it&#039;s nice to see a bunch of people have visited my page.&quot; Since readers can&#039;t easily compare such statistics for different blog entries from different authors, there&#039;s no ability for them to find the important or popular offerings. Even if available, for a blog popular doesn&#039;t mean useful: an advantage of a journal is that you know none of the articles will be vacation photos or pictures of the new baby.

Corrections are definitely something I&#039;ve thought about for journal articles. My feeling is that corrections should be put on the &quot;homepage&quot; for the article, the page which contains the abstract and links to related resources for the paper (source code, video, etc.). I&#039;m of two minds when it comes to actually correcting the paper itself. I guess this would be fine overall, if the paper is clearly shown as &quot;version 1.1&quot; on the first page, and all corrections are explicitly listed in the paper at its end (so that it&#039;s self-contained and a reader could see what was fixed). If a paper became popular, but during the course of its life had 5 different versions, it could be a bit confusing. It would certainly be important to note which revision you&#039;re referencing. My main concern is avoiding overburdening the managing editor of the journal, who would have to reissue the PDF when errata was reported.

Larry: yes, this is a definite concern, whether anything electronic just disappears if people owning the site lose interest/go to jail/get divorced/etc. If you think about it, this is true for *all* internet resources, Wikipedia included. I&#039;ve been thinking that about my own little projects, and I suspect you&#039;d done the same: &quot;what if a piano falls on my head today?&quot; The Graphics Gems repository and the Ray Tracing News would probably live on, as they&#039;re hosted at acm.org, but they would certainly never get updated. Naty and Tomas could cover resources hosted on realtimerendering.com, but that&#039;s minimal support. I&#039;m actually surprised there aren&#039;t companies (that I&#039;ve heard of) that deal with this sort of thing. It&#039;s certainly a problem in general: http://www.economist.com/node/21553410. The laws in this area are a muddle, too, e.g. http://www.economist.com/node/21553011

I guess my default answer is &quot;The Wayback Machine&quot;. It&#039;s surprisingly thorough for some sites: Graphics Gems, the Ray Tracing News, and this site are all archived there, for example, including the downloadable code distribution (i.e., not just the pages). I personally don&#039;t like the &quot;only one backup&quot; feel of this, though - what if the Wayback Machine goes away? (And how it hasn&#039;t been sued out of existence yet by some idiot, I don&#039;t know.) Perhaps the answer is to allow mirroring of the entire JCGT site by anyone who wants to do so - this is worth considering. What do you think? It&#039;s also entirely possible to offer the journal as a printed entity, for example http://store.kagi.com/cgi-bin/store.cgi?storeID=6CZKD_LIVE&amp;lang=en offers the open access journal JMLR as print copies. But, given tight library budgets, I don&#039;t foresee libraries making such archival copies. Hmmm, maybe it&#039;s good that Elsevier forces libraries to buy bundled subscriptions, so that the more obscure journals are available... The field of computer graphics has had to deal with this problem for decades, if you think about it. Some useful and important papers have been &quot;published&quot; in course notes, which are then very hard to find years later. I think of Jim Arvo&#039;s &quot;Backwards Ray Tracing&quot; paper, which is luckily still at his website, http://www.ics.uci.edu/~arvo/papers.html (in postscript - that&#039;s a different problem), even though he&#039;s passed on.

By the way, a problem with the old JGT is that, while the articles themselves are tucked away in libraries, the code and related resources (much of the reason for the journal) are not. Taylor &amp; Francis, the current owner of JGT, restored the old publisher&#039;s &quot;homepages&quot; for the articles, but have not given readers a way to actually reach these. This I consider a danger of having a publisher: if the imprint changes hands, the new owners may not maintain non-print resources (or even offer back issues). For JGT, there are two ways to get to the old pages with code. One entry point is here: http://jgt.akpeters.com/issues/, but this link may not last. The Wayback Machine also has them: http://web.archive.org/web/20110707101838/http://jgt.akpeters.com/</description>
		<content:encoded><![CDATA[<p>Sebastien: statistics are an interesting point (and something an online journal could provide), though I&#8217;m not sure of their use beyond, &#8220;it&#8217;s nice to see a bunch of people have visited my page.&#8221; Since readers can&#8217;t easily compare such statistics for different blog entries from different authors, there&#8217;s no ability for them to find the important or popular offerings. Even if available, for a blog popular doesn&#8217;t mean useful: an advantage of a journal is that you know none of the articles will be vacation photos or pictures of the new baby.</p>
<p>Corrections are definitely something I&#8217;ve thought about for journal articles. My feeling is that corrections should be put on the &#8220;homepage&#8221; for the article, the page which contains the abstract and links to related resources for the paper (source code, video, etc.). I&#8217;m of two minds when it comes to actually correcting the paper itself. I guess this would be fine overall, if the paper is clearly shown as &#8220;version 1.1&#8243; on the first page, and all corrections are explicitly listed in the paper at its end (so that it&#8217;s self-contained and a reader could see what was fixed). If a paper became popular, but during the course of its life had 5 different versions, it could be a bit confusing. It would certainly be important to note which revision you&#8217;re referencing. My main concern is avoiding overburdening the managing editor of the journal, who would have to reissue the PDF when errata was reported.</p>
<p>Larry: yes, this is a definite concern, whether anything electronic just disappears if people owning the site lose interest/go to jail/get divorced/etc. If you think about it, this is true for *all* internet resources, Wikipedia included. I&#8217;ve been thinking that about my own little projects, and I suspect you&#8217;d done the same: &#8220;what if a piano falls on my head today?&#8221; The Graphics Gems repository and the Ray Tracing News would probably live on, as they&#8217;re hosted at acm.org, but they would certainly never get updated. Naty and Tomas could cover resources hosted on realtimerendering.com, but that&#8217;s minimal support. I&#8217;m actually surprised there aren&#8217;t companies (that I&#8217;ve heard of) that deal with this sort of thing. It&#8217;s certainly a problem in general: <a href="http://www.economist.com/node/21553410" rel="nofollow">http://www.economist.com/node/21553410</a>. The laws in this area are a muddle, too, e.g. <a href="http://www.economist.com/node/21553011" rel="nofollow">http://www.economist.com/node/21553011</a></p>
<p>I guess my default answer is &#8220;The Wayback Machine&#8221;. It&#8217;s surprisingly thorough for some sites: Graphics Gems, the Ray Tracing News, and this site are all archived there, for example, including the downloadable code distribution (i.e., not just the pages). I personally don&#8217;t like the &#8220;only one backup&#8221; feel of this, though &#8211; what if the Wayback Machine goes away? (And how it hasn&#8217;t been sued out of existence yet by some idiot, I don&#8217;t know.) Perhaps the answer is to allow mirroring of the entire JCGT site by anyone who wants to do so &#8211; this is worth considering. What do you think? It&#8217;s also entirely possible to offer the journal as a printed entity, for example <a href="http://store.kagi.com/cgi-bin/store.cgi?storeID=6CZKD_LIVE&#038;lang=en" rel="nofollow">http://store.kagi.com/cgi-bin/store.cgi?storeID=6CZKD_LIVE&#038;lang=en</a> offers the open access journal JMLR as print copies. But, given tight library budgets, I don&#8217;t foresee libraries making such archival copies. Hmmm, maybe it&#8217;s good that Elsevier forces libraries to buy bundled subscriptions, so that the more obscure journals are available&#8230; The field of computer graphics has had to deal with this problem for decades, if you think about it. Some useful and important papers have been &#8220;published&#8221; in course notes, which are then very hard to find years later. I think of Jim Arvo&#8217;s &#8220;Backwards Ray Tracing&#8221; paper, which is luckily still at his website, <a href="http://www.ics.uci.edu/~arvo/papers.html" rel="nofollow">http://www.ics.uci.edu/~arvo/papers.html</a> (in postscript &#8211; that&#8217;s a different problem), even though he&#8217;s passed on.</p>
<p>By the way, a problem with the old JGT is that, while the articles themselves are tucked away in libraries, the code and related resources (much of the reason for the journal) are not. Taylor &#038; Francis, the current owner of JGT, restored the old publisher&#8217;s &#8220;homepages&#8221; for the articles, but have not given readers a way to actually reach these. This I consider a danger of having a publisher: if the imprint changes hands, the new owners may not maintain non-print resources (or even offer back issues). For JGT, there are two ways to get to the old pages with code. One entry point is here: <a href="http://jgt.akpeters.com/issues/" rel="nofollow">http://jgt.akpeters.com/issues/</a>, but this link may not last. The Wayback Machine also has them: <a href="http://web.archive.org/web/20110707101838/http://jgt.akpeters.com/" rel="nofollow">http://web.archive.org/web/20110707101838/http://jgt.akpeters.com/</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Larry Gritz</title>
		<link>http://www.realtimerendering.com/blog/why-submit-when-you-can-blog/comment-page-1/#comment-2154</link>
		<dc:creator>Larry Gritz</dc:creator>
		<pubDate>Wed, 13 Jun 2012 07:48:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.realtimerendering.com/blog/?p=3108#comment-2154</guid>
		<description>What mechanisms are being put into place to ensure that JCGT (or any online journal; what does PLOS do?) doesn&#039;t have all the shortcomings of  blogs?  What if it runs out of money, loses its volunteers, is sued by somebody who claims prior ownership of &quot;JCGT.com&quot;, or runs afoul of the deities in any number of other ways?  At the end of the day, SOMEBODY is footing the bill for a server to host web pages at a particular address.  If the server and/or the dollars go away, so do the papers.

This was not a problem for print journals; the publishing company could fold, the editors could die, people could stop submitting, but all the prior issues would still be physically preserved in thousands of university libraries.  An important paper would more or less always be available (at worst findable with some work, travel, or interlibrary loan), and in a medium that preserved its historically accurate form.</description>
		<content:encoded><![CDATA[<p>What mechanisms are being put into place to ensure that JCGT (or any online journal; what does PLOS do?) doesn&#8217;t have all the shortcomings of  blogs?  What if it runs out of money, loses its volunteers, is sued by somebody who claims prior ownership of &#8220;JCGT.com&#8221;, or runs afoul of the deities in any number of other ways?  At the end of the day, SOMEBODY is footing the bill for a server to host web pages at a particular address.  If the server and/or the dollars go away, so do the papers.</p>
<p>This was not a problem for print journals; the publishing company could fold, the editors could die, people could stop submitting, but all the prior issues would still be physically preserved in thousands of university libraries.  An important paper would more or less always be available (at worst findable with some work, travel, or interlibrary loan), and in a medium that preserved its historically accurate form.</p>
]]></content:encoded>
	</item>
</channel>
</rss>