_ __ ______ _ __
' ) ) / ' ) )
/--' __. __ , --/ __ __. _. o ____ _, / / _ , , , _
/ \_(_/|_/ (_/_ (_/ / (_(_/|_(__<_/ / <_(_)_ / (_
To answer your question, yes, the average projected area of a convex body is
1/4 its surface area. You can check out my SIGGRAPH course notes from 1990 for
a (nearly) rigorous proof, or almost any book on geometric probability. I gave
at least one reference to the theorem in my chapter of the ray tracing book.
The 1990 course notes are not available on my web page yet, but they will be as
soon as I find the file!!! I'm such a pack rat I know I have it somewhere!
[And indeed he found it, check out http://www.cs.caltech.edu/~arvo/papers.html
and get the "Ray Tracing with Meta-Hierarchies" paper. There are other great
ray tracing (and other) papers here which have appeared only in course notes or
hard to find proceedings, e.g. "The Ray Tracing Kernel" and "Backward Ray
Tracing". -EAH]
--------
It turned out that the first entry I received won:
From: Han-Wen Nienhuys
http://www.stack.nl/~hanwen
I'm not sure this is what you want, but the expected area of a projection of
any convex object compared to the area of its surface is a fixed
ratio. The ratio can easily be calculated; take a sphere:
projection: pi R^2
object area: 4 pi R^2
ratio: 1/4
The area of the bounding box is 6*(2*R)^2 = 24*R^2, and the expected area of its
projection is 24*R^2/4 = 6*R^2, so the ratio of the sphere's projection vs. the
cube's projection is pi/6.
I am just doing this from the top of my head. Last year's Dutch University
Mathematics Competition had a similar question (expected area of an isotropic
projection of a cube).
The answer suggested partitioning the convex object in triangles and then
proving the theorem for arbitrary triangles.
The idea is this (let ExpArea be the expected area of a projection along an
isotropically distributed direction, and "f" is the ratio we are trying to
find):
1. For any triangle T,
ExpArea(T) = f * Area(T)
f does not depend on T.
2. For any convex polytope (made of triangles T_i)
P = Union_i T_i
So ExpArea(P) = ExpArea(Union T_i) = Sum_i ExpArea(T_i)
= Sum_i f*Area(T_i) = f * Area(Union_i T_i)
= f * Area(P)
(maybe I am losing a factor 2 somewhere, because a convex set has back
and/or front side. It doesn't matter, any factors will be absorbed in f)
3. If C is a convex object, it can be approximated by a triangulation
   P, so
ExpArea(C) ~ ExpArea(P) = f*Area(P) ~ f*Area(C)
The triangles can be made smaller, and the approximations (~) will be
better. In the limit,
ExpArea(C) = f * Area(C)
for any convex C. f is fixed (does not depend on C)
4. f can be easily calculated: a sphere of radius 1 is convex,
ExpArea(Sphere) = pi
Area (Sphere) = 4 * pi
So f is 1/4
5. Area(BoundingCube) = 4 * 6 = 24, So ExpArea(BoundingCube) = 1/4 * 24 = 6
ExpArea(Sphere) / ExpArea(BoundingCube) = pi / 6
I know that this is only a so-so proof (if it is worth the name), step 1 and
especially step 3 would involve some advanced math, but this is how you would
prove it elegantly.
The result also generalizes to lower (2) and higher dimensions.
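The theorem is also easy to check numerically. Here is a small Monte Carlo
sketch (my own, not from Han-Wen's note): for a convex polyhedron the projected
area along a direction d is (1/2) * Sum_i |n_i . d| * A_i, which for the unit
cube (six unit-area faces) reduces to |dx| + |dy| + |dz|. Averaged over random
directions this should come out to 6/4 = 1.5.

```python
import math
import random

def random_direction():
    # Uniform direction on the unit sphere, via normalized Gaussian samples.
    while True:
        x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
        r = math.sqrt(x * x + y * y + z * z)
        if r > 1e-9:
            return x / r, y / r, z / r

def mean_projected_area_unit_cube(n=200_000):
    # Projected area of a convex polyhedron along d is (1/2)*sum |n_i . d|*A_i;
    # for the unit cube (six unit-area faces) this is just |dx| + |dy| + |dz|.
    total = 0.0
    for _ in range(n):
        dx, dy, dz = random_direction()
        total += abs(dx) + abs(dy) + abs(dz)
    return total / n

random.seed(1)
# Surface area of the unit cube is 6; the theorem predicts an average of 6/4 = 1.5.
print(mean_projected_area_unit_cube())
```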
--------
In case you didn't follow that, here's a talk-through:
To state this a little more loosely, step one says that an arbitrary triangle
and its average projected area have a constant ratio. This could use a little
more proof, as he says, but here's an intuitive way to know that this is true.
First, this is definitely true for the class of all equilateral triangles -
there is some answer to what the average projected area is for some one
equilateral triangle, and scaling up this equilateral triangle to a different
size won't change the ratio "f" between the new area and new average projected
area. So, now make an arbitrary triangle out of a million tiny equilateral
triangles; sure, the edges won't be perfect (add another billion tiny
triangles, then, i.e. take it to the limit) but the point is that the ratio
"f" for equilateral triangles then applies to arbitrary triangles (or any 2D
figure, in fact). So, now we informally know step 1 is true, that "f" is a
constant.
Step 2 says that a convex object made of polygons will have a constant "f"
ratio, too, same as in step 1. This is because from any angle, no part of the
object covers another forward facing part of the object (it's convex). So, you
can sum up all the triangles' areas since they do not interfere with each
other.
Step 3 is sort of like my step 1 informal proof: it says that any convex
object (e.g. curved) can be tessellated into triangles, something I think we
all believe.
Step 4 is the clever part: a sphere is a convex object, and the nice thing
about a sphere is that its projected area is the same from any direction. So
now we simply work backward through the steps. For a sphere the projected area
vs. the surface area is 1/4th, so "f" must be 1/4, and since we proved this
"f" is constant we now know that it holds for all convex objects, so we can
compute the answer to the problem in Step 5. If this proof seems bass-ackwards
to you, check Arvo's paper for a formal integration of the projected area of an
arbitrary triangle. I personally like the (looser) proof as is, since we never
had to even integrate to prove it.
Note that the value of 1/4th is for a convex 3D object. The ratio is 1/2 for
a flat 2D figure's average projected area vs. its actual area (since the
figure's actual area is counted once, not as front area plus backface area),
if both sides of the 2D figure are visible (i.e. no culling).
This result is handy for all sorts of things. Goldsmith and Salmon use it to
compute a cost function for the relative chances of hitting a bounding volume,
thus helping them optimize their hierarchy. Or, say you want to test a
graphics rasterizer by tossing arbitrarily oriented polygons on the screen
which will cover an average of 50 pixels each. By this theorem you know that
(using a parallel projection) the triangles need to cover 100 pixels when
viewed straight on.
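The 1/2 ratio for a flat figure is quick to verify, too (a sketch of my own):
the z component of a uniform random direction is itself uniform on [-1,1], so
the average |cos| between a polygon's normal and the view direction is 1/2,
which is exactly why a 100-pixel polygon averages 50 pixels of coverage.

```python
import random

def mean_abs_cos(n=200_000):
    # For a uniform random direction, the z component is uniform on [-1, 1],
    # so E[|cos theta|] between a fixed normal and the direction is 1/2.
    return sum(abs(random.uniform(-1.0, 1.0)) for _ in range(n)) / n

random.seed(2)
# A polygon covering 100 pixels straight on covers ~50 pixels on average.
print(100.0 * mean_abs_cos())
```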
--------
And the "all mimsy were the borogoves" comment of the day on this problem...
From: Ken Musgrave
Just got back from a visit to Pixar, where Ronen remarked that they were having
a conversation recently, the conclusion of which was "there's very little
volume of a 48-dimensional sphere in a unit bounding box." Think about it: as
the dimension D goes to infinity, the sphere's volume goes to zero. "All those
corners to cut off," Ronen credited someone (I forget whom) with noting.
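You can watch the corners get cut off numerically: the volume of an n-ball of
radius 1/2 (inscribed in the unit cube) is pi^(n/2) * (1/2)^n / Gamma(n/2 + 1).
A short sketch (note that n=3 gives pi/6, the same number as the projection
ratio above, since a radius-1/2 ball has volume (4/3)*pi*(1/2)^3):

```python
import math

def ball_volume_in_unit_box(n):
    # Volume of the n-ball of radius 1/2 inscribed in the unit cube:
    #   V = pi^(n/2) * r^n / Gamma(n/2 + 1),  with r = 1/2.
    r = 0.5
    return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

# The fraction of the unit box filled by the inscribed ball shrinks fast:
for n in (2, 3, 10, 48):
    print(n, ball_volume_in_unit_box(n))
```

By n = 48 the ball occupies a vanishingly small fraction of its bounding box,
just as Ronen said.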
-------------------------------------------------------------------------------
Ray Tracing Roundup
Much research related information which I usually would have listed here has
moved to ACM TOG's Resources area,
http://www.acm.org/tog/overview.html#resources.
--------
Internet Raytracing Competition
The Internet Raytracing Competition is an open competition held every two
months. Participants may use any 3D rendering program.
http://www.povray.org/irtc
[This is kinda fun, and overwhelming in the number of entries submitted, e.g.
I see about 110 images in the latest contest. You can vote on the entries;
something to do while waiting for compiles to finish. Many images have "how I
did it" information attached. -EAH]
--------
POV-Ray Mirrors
This is a list of known mirrors of ftp.povray.org and ways you can
access the mirrored files.
ftp or http://wuarchive.wustl.edu/graphics/graphics/mirrors/ftp.povray.org
ftp://ftp.shu.ac.uk/pub/computing/packages/raytrace/ftp.povray.org
ftp or http://www.hensa.ac.uk/ftp/mirrors/povray
ftp or http://ftp.nectec.or.th/pub/mirrors/ray-tracing
ftp://ftp.etsimo.uniovi.es/pub/raytrace
NB: not all sites carry all files. For example, etsimo does not carry the
competition, mirrors, images or Hall-Of-Fame. The shu UK mirror, as far as I
know, only carries a subset also.
--------
Errata for "Principles of Digital Image Synthesis"
You can find the errata listing on the web at:
http://www.research.microsoft.com/research/graphics/glassner/work/projects/pdis/errata.htm
Thanks to all the bug-spotters on the To: line!
-Andrew Glassner (glassner@microsoft.com)
--------
It is my pleasure to announce that my PhD dissertation is now available on the
net. The dissertation is entitled:
Mathematical Models and Monte Carlo Algorithms
for Physically Based Rendering
It's all about global illumination models, Monte Carlo methods, variance
reduction techniques, path tracing, light tracing and bidirectional path
tracing. You can find the abstract, an overview of the chapters, the table of
contents, the colour images and the dissertation itself at:
http://www.cs.kuleuven.ac.be/~ericl/thesis/
Even if you're not interested in the text you can just have a look at the
pretty pictures. Enjoy! Kind regards,
Eric Lafortune
--------
RayTraced Evolution
For info about RTEvol (RayTraced Evolution), a free package for
procedural modelling using Lindenmayer grammars and interpreted C-like
macros/functions, look at this URL:
http://www.rz.tu-ilmenau.de/~juhu/GX/RTEvol/
Primarily I use this package to test some raytracing-acceleration techniques,
but now it has expanded into a fully programmable system. It is an ideal tool
for quick generation of test scenes.
Have fun,
--JuHu (cn1@irz301.inf.tu-dresden.de)
--------
Pete Shirley's moved to Utah, and his useful page has moved with him:
http://www.cs.utah.edu/~shirley
This page includes a focussed link list for Monte Carlo rendering techniques
and a list of computer graphics related periodicals and conferences.
--------
Grammar of Ornament now on CD ROM
I love the Dover book "The Grammar of Ornament" by Owen Jones. So I was
happy to find that this book is now available on CD ROM; see
http://www.dimagin.com
This CD ROM is a labor of love; the creator, Stephen Hubbard, says it took 5
man years, $65K, and 3 girl friends to make.
--------
Flatland now on the Web
While I'm on the topic of classics, the classic volume "Flatland" has been put
on the web by Tom Banchoff, check out:
http://www.geom.umn.edu/~banchoff/Flatland/
You might also want to look at the projects Banchoff has been involved with
(he's well-known for his visualizations related to topology, 4D spaces, etc):
http://www.geom.umn.edu/~banchoff/projects.html
--------
ZRend
[ZRend is also mentioned in RTNv7n5; it is part of the Graphics Gems V
distribution, and it has been improved since then by the author. -EAH]
The key features of this program are:
(a) Reads a variant of the standard Neutral File Format (NFF) as input
(b) Outputs a 24 bit color TARGA file
(c) Uses accurate fixed-point arithmetic in all its calculations
(d) Works in an X Windows environment. We have tested it on Sun Sparc machines.
(e) High performance: the renderer runs at nearly real-time rates.
This software is available through the Web and via anonymous ftp. The software
includes sample files, including the teapot. We would appreciate any feedback
you can give us on this software. We are quite impressed by the quality of the
images produced. Enjoy the use of this utility!
http://www.cerc.wvu.edu/public_domain_sw.html
- Raghu Karinthi (raghu@cs.wvu.edu)
--------
Application Challenges to Computational Geometry
This tech. report is now available at:
http://graphics.lcs.mit.edu/~seth/pubs/taskforce/techrep.html
- Seth Teller
[it's a report on the challenge of applying computational geometry to practical
problems, among other things. Also check out
http://www.cs.duke.edu/~jeffe/compgeom/ for comments on this report and for
many other computational geometry resource links. -EAH]
-------------------------------------------------------------------------------
Papers of Interest, by Eric Haines
OK, this is just a brief list of some papers I enjoyed in 1996 that others
might have overlooked. I won't bother listing SIGGRAPH '96 papers, since I
assume all researchers get this. Anyway, here goes:
%A Andrew Woo
%A Andrew Pearce
%A Marc Ouellette
%T It's Really Not a Rendering Bug, You See...
%J IEEE Computer Graphics and Applications
%V 16
%N 5
%D Sept. 1996
%P 21-25
I found this also on the web at
http://ada.computer.org:80/pubs/cg&a/articles/g50021.pdf - very nice! See
http://ada.computer.org:80/pubs/cg&a/backissu.htm for other IEEE CG&A back
issues and articles/abstracts available online.
If you render, read this one. They point out many common bugs you'll run
across. I had already run into many of these bugs, but a few were new ones I
hadn't hit yet - it's great to get a warning of what to watch out for, sort of
an "FBI's Most Wanted" list of rogues. Their "poor texture interpolation" bug
(where a textured quadrilateral with one short side is badly rendered if split
into two triangles) actually has an interesting limiting case: a tessellation
of a cone. A good textured cone tessellation needs to have quadrilaterals for
the faces, not triangles: at the cone tip you need two vertices, with the XYZ
locations identical but the normals and texture coordinates different. Systems
which show textures by splitting up quads into triangles and rendering these
will improperly display such quadrilaterals.
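A sketch of such a cone tessellation (my own illustration, not from the
article): each side quad gets its own pair of apex vertices, identical in
position but distinct in texture coordinate (and, in a fuller version, in
normal). A system that splits these quads into triangles loses exactly this
distinction at the tip.

```python
import math

def cone_quads(radius, height, segments):
    # Side of a textured cone as quads; the tip vertex is duplicated per quad
    # so each copy can carry its own texture coordinate.
    quads = []
    for i in range(segments):
        u0, u1 = i / segments, (i + 1) / segments
        a0, a1 = 2.0 * math.pi * u0, 2.0 * math.pi * u1
        base0 = (radius * math.cos(a0), radius * math.sin(a0), 0.0)
        base1 = (radius * math.cos(a1), radius * math.sin(a1), 0.0)
        apex = (0.0, 0.0, height)
        # Each vertex is (position, u); the two apex entries share a position
        # but differ in u, which is exactly what triangle-splitting loses.
        quads.append([(base0, u0), (base1, u1), (apex, u1), (apex, u0)])
    return quads

print(len(cone_quads(1.0, 2.0, 16)))   # 16 quads
```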
----
%A Krzysztof S. Klimaszewski
%A Thomas W. Sederberg
%T Faster Ray Tracing Using Adaptive Grids
%J IEEE Computer Graphics and Applications
%V 17
%N 1
%D Jan/Feb 1997
%P 42-51
The short summary is: automatic bounding volume hierarchy with grids at each
level. To make this scheme more profitable the authors discuss methods of
merging bounding volumes and of nesting grids directly within voxels. It's
nice to see some serious ray tracing efficiency research still being done, and
the techniques and results are interesting and well-presented. Even if you
just use grids, their section on setting the resolution of grids is a nice
take-away. (The gist is an idea I've heard of before, but I don't recall
seeing in print: instead of giving a box of dimensions of, say, 1,4,9 a
constant grid resolution along each axis, scale the resolutions proportionally
with the box's dimensions. Timing results are given showing this to be a
better method.)
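Here's a sketch of that resolution rule (my own reading of it, not the
authors' code): pick a total voxel budget and scale each axis's resolution by
the box's dimension along that axis, so the voxels come out roughly cubical.

```python
def grid_resolution(dims, target_voxels):
    # Choose per-axis grid resolutions proportional to the box's dimensions,
    # so cells come out roughly cubical, with about target_voxels cells total.
    dx, dy, dz = dims
    scale = (target_voxels / (dx * dy * dz)) ** (1.0 / 3.0)
    return tuple(max(1, round(d * scale)) for d in dims)

# The 1,4,9 box from the text, with a budget of about 1000 voxels:
print(grid_resolution((1.0, 4.0, 9.0), 1000))   # -> (3, 12, 27)
```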
----
%A Frederic Cazals
%A George Drettakis
%A Claude Puech
%T Filtering, Clustering and Hierarchy Construction: a New Solution for Ray
Tracing Complex Scenes
%J Computer Graphics Forum (Eurographics '95)
%V 14
%N 3
%C Maastricht, Netherlands
%D 1995
%P C371-C382
This one's available on-line at
http://flamingo.stanford.edu/~cazals/xfc_research.html. It's got some
interesting ideas on forming efficient access structures. Specifically, it
uses techniques such as histogramming and filtering of the object sizes to
split up and then cluster objects of differing scales in order to efficiently
ray trace scenes. I enjoyed it because it's a new way of thinking about the
problem.
-------------------------------------------------------------------------------
Synthetic Textures and Genetic Algorithms, by Steve Worley (steve@worley.com)
[Steve and I had a long email exchange on this topic. In my last note to him I
was discussing Sims-like methods of generating synthetic textures (use genetic
algorithms to combine equations, with the observer picking which equation
survives to the next generation) and bemoaning the fact that minor evolutionary
changes in equations usually caused huge textural changes. Such large changes
mean that you have little sense of being in control of the process, and so
there is little sense of steering generations towards more interesting and
appealing results. Contrast this to Latham's work, where you get a strong
sense of ancestry if you look through the choices you made in evolving a
strange 3D creature. BTW, there is now a fantastic screen saver program for
Windows 95 which Latham worked on, check it out at http://www.artworks.co.uk/.
--EAH]
You make a great point about the fickleness of genetic algorithms. One problem
is that if you, the human, need to judge each image, you have to see all these
garbage tests. Using some automatic quality selection algorithm instead will
prune for you, so while the generation effort is still wasted, it's automatic
and you just let things cook a bit longer. But as we said, the quality of
texture is more a human subjective thing. We need to stay in the loop! Here a
fast CPU is still good for fast updates so the human can make their choices
faster I guess...
But a more promising method of using genetic algorithms is to limit their
scope. By feeding in a set of very complex primitives, even simple
combinations can make new and impressive surfaces. Consider the
impossibility of making a fractal noise cloudy surface with a genetic algorithm
that can do only + - / *, if, and floor. It just wouldn't happen, though those
operations are sufficient. But when it has noise() as a primitive, magic can
happen!
So I see a way in which, say, a limited number of complex primitives (say two
fractal noises, a sawtooth, and a radial ramp) and a limited number of simple
operations (+, -, /, *, if, sin) get mixed together, with rules to prevent
super-concatenation. 3*noise(x)+x+noise(sawtooth(x)) is fine, but
sawtooth(noise(x+4*sin(noise(radial(x+1))))) is not. Maybe even limit it to
simple binomial combinations.
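One way to picture this restriction (a hypothetical sketch; the primitive and
operator names are just stand-ins): generate expressions recursively with a
hard depth cap, so shallow forms like 3*noise(x)+x can appear but deep
super-concatenations cannot.

```python
import random

# Stand-in names for the rich primitives and the simple operators.
PRIMITIVES = ["noise", "sawtooth", "radial"]
OPERATORS = ["+", "-", "*"]

def random_expr(depth, rng):
    # Depth-capped random expressions: at depth 0 only a leaf (the variable
    # or a small constant) is allowed, so nesting can never exceed the cap.
    if depth == 0:
        return rng.choice(["x", str(rng.randint(1, 5))])
    if rng.random() < 0.4:
        return "%s(%s)" % (rng.choice(PRIMITIVES), random_expr(depth - 1, rng))
    left = random_expr(depth - 1, rng)
    right = random_expr(depth - 1, rng)
    return "(%s %s %s)" % (left, rng.choice(OPERATORS), right)

rng = random.Random(7)
for _ in range(3):
    print(random_expr(2, rng))
```

Mutation and breeding then swap subtrees of such expressions, and the cap
guarantees the offspring stay cheap to evaluate.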
The idea of this is that by limiting the genetic forms, you can limit the
complexity of the textures (keeping them efficient) and you also limit the
number of changes that can occur during a mutation or breeding. You give up
some of the rich number of possible surfaces you might get by surprise, but the
extra control may be reasonable. Have you used Kai's Power Tools texture
explorer? It seems to have only a very few base algorithms, with only a few
controls, though they also use a large selection of spline color maps. It's
not overly complex, yet people play with it for hours.
Another aspect of genetic algorithms is perhaps more interesting. Forget
changing the ALGORITHM, take any current man-made texture. Nearly all are
driven by a list of parameters like "turbulence value" or "scale size", etc.
What set of parameters looks best? Answer: jiggle a test set of parameters
some way and let the user pick images until these parameters get set to some
local user maximum happiness value. I mention this a bit in the postscript of
my chapter in the texturing book. There's something to this. I don't know
just how to do it yet. But it'd be very useful, both for end user as well as
for developer.
[and here's another interesting bit from another note from Steve. -EAH]
My main idea is that the algorithm for choosing the new parameters is not
obvious, and depending on the application it may even vary. So I may write a
METAalgorithm which basically has a bunch of methods to try. It tries them all
and notices which ones tend to make better pictures (as scored by the user).
Then it starts using those selection methods more often. This can happen
adaptively even as the user plays around. Maybe at first the user is just
exploring, looking for many different types of appearances, until he finds a
cool one he wants to refine. Suddenly his goals change; he wants one that's
"like THAT but better", as opposed to "something cool not like these". Anyway,
just a long-deferred project. I haven't thought about it for a year or so.
-------------------------------------------------------------------------------
Ray Tracing, What is it Good For? by Arijan Siska (arijan@kette.fer.uni-lj.si)
I'm using 3DS a lot these days (in my own animation company (got my degree in
December)), and believe me I sure do not miss the slowness of raytracing. And
it also seems that most of the reflections can be faked with automatic
reflection maps (ones that get rendered from the actual scene prior to the
rendering of the real picture).
Basically I've compiled a list of reasons why one would use raytracing at all.
Here goes:
1. Exact primitives (spheres & ...) - these can be faked in a scan-conversion
   algorithm with adaptive subdivision based on pixel coverage.
2. Problem-free shadows, including wide lights (real soft shadows).
3. Heightfields are supposed to be easier to ray trace than to scan-convert
   as zillions of polygons.
4. CSG objects.
5. Extended camera models (cylindrical camera, spherical camera (fisheye) & ...)
6. Easy & real reflection & refraction
I doubt these (or other) reasons will ever push scanline based renderers like
3DS out of the picture.
Real world animation (visualization, games, ...) will always need faster
results even with god knows how fast machines, academic research will without a
doubt move to global methods like path tracing and particle tracing.
There is no easy (cheap) way to avoid aliasing in raytracing. This is going to
be a dominating factor when raytracing moves (and it will, I'm sure) into
mainstream production software. In production, quality is very important.
[Incidentally, check out Blue Sky Productions, http://www.blueskyprod.com/, for
a company which uses pure ray tracing for production work. -EAH]
-------------------------------------------------------------------------------
Fast Soft Shadows, by Steve Worley
[a fragment of a note from Steve. This is an idea I've thought about and
always wanted to try but never got around to do - nice to hear it actually
works pretty well! Sure, it might not catch the soft shadows all the time just
due to bad luck, but for the most part I can imagine it working. -EAH]
...a neat sampling scheme specifically for soft shadows. If a pixel is in
shadow but its neighbor is not, both are sampled even more. If one of the
pixels is in PARTIAL shadow, its neighbors are also sampled more. Pass #1
basically uses a single sample per pixel, but it identifies any shadow
boundaries. The recursive "fill" algorithm used in pass #2 helps a lot since
it works very hard on the soft areas but doesn't fire much more in solid light
or solid dark regions. It worked surprisingly well. I meant to do the same
for depth of field, but it was trickier and I never got around to it. However, I
really find the shadow "area growth" sampling to work exceedingly well; looking
at a grey scale image of the number of samples taken for each pixel, the soft
shadow regions are easily noticed.
This is good; you only do the horrible soft shadow work where the shadow
really is soft.
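Here's one way Steve's two-pass scheme might look in outline (a sketch under
my own assumptions; shadow_test is a hypothetical stand-in that returns a
shadow fraction in [0,1] for a pixel at a given sample count):

```python
from collections import deque

def refine_shadows(width, height, shadow_test, base=1, extra=16):
    # Pass 1: one cheap sample per pixel.
    frac = {(x, y): shadow_test(x, y, base)
            for x in range(width) for y in range(height)}
    samples = {p: base for p in frac}

    # Seed the work queue with pixels in partial shadow, or whose neighbor
    # disagrees with them (a shadow boundary).
    queue = deque()
    for (x, y), f in frac.items():
        neighbors = ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
        if 0.0 < f < 1.0 or any(n in frac and frac[n] != f for n in neighbors):
            queue.append((x, y))

    # Pass 2: flood-fill refinement; pixels found to be in partial shadow
    # push their neighbors, so soft regions get worked hard while solid
    # lit and solid dark regions get no extra rays.
    while queue:
        x, y = queue.popleft()
        if samples[(x, y)] >= extra:
            continue
        samples[(x, y)] = extra
        frac[(x, y)] = shadow_test(x, y, extra)
        if 0.0 < frac[(x, y)] < 1.0:
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in samples and samples[n] < extra:
                    queue.append(n)
    return frac, samples
```

On a toy scene with a vertical penumbra band, only the pixels in and
immediately around the band end up with the extra samples, which matches
Steve's grey-scale image of sample counts.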
-------------------------------------------------------------------------------
Progressive Ray Tracing and Fast Previews, by Eric Haines
Here's an interesting form of progressive ray tracing: you first render
everything at say 8x8 resolution, using the samples to generate 8x8 sized
gouraud shaded polygons to toss to the screen.
Then you look for large differences and resolve them. My system started with
the 8x8's spiralling outwards from the middle of the picture (since the
interesting stuff is usually at the center), then resolved on a variety of
different types of corner comparisons: did the objects at the corners differ?
Did the shadows change state? Did the colors differ? etc. Just doing color
differences was not good enough, as you would waste a lot of time refining
textures when what you really would rather have seen was object boundaries.
Whenever a sample was refined, its neighbors would also be put in the queue so
that, while you watched, an entire object or shadow edge would resolve itself.
It was nice, in that you'd get a 95% good sphereflake, for example, in 1/10th
the rendering time. As a bonus, you'd actually get somewhat soft reflections
for a while during the process (since reflective areas were not refined first,
they would be represented by gouraud blends of their samples).
As discussed in RTNv8n2, this "soft reflection" idea can actually be used in
object space (instead of image space, as above). Microcosm (reviewed in
RTNv7n5) (now called Megahedron, and available from Syndesis
http://www.syndesis.com) actually implements this technique.
-------------------------------------------------------------------------------
Free Radiosity Renderer (inc. C++ source code), by Ian Ashdown
(iashdown@ledalite.com)
The C++ source code for HELIOS, a fully-functional radiosity renderer for
MS-Windows 3.1, is now available for downloading on the 'net ...
"Radiosity" is a computer graphics technique that enables you to synthesize
photorealistic images. Whereas ray tracing techniques excel in the rendition
of point light sources, specular reflections, and refraction effects, radiosity
methods accurately model area light sources, diffuse reflections, color
bleeding between surfaces, and detailed shading within shadows. They are in a
sense complementary approaches to photorealistic rendering.
Folklore had it that you needed a graphics workstation with gigabytes of RAM or
even a supercomputer to do radiosity rendering. This is no longer true: You
can use your personal desktop computer -- a '386 IBM-PC with a math
coprocessor, 4 MB of RAM, and a 256-color SVGA display will do nicely -- to
experiment with radiosity methods. A 66-MHz '486DX machine will render a
simple scene (540 polygons) in less than three minutes. A more complex scene
with 2,700 polygons can be rendered in a little over six minutes.
Commercial radiosity renderers are slowly making their way into the
marketplace. Take a look, for example, at the incomparable Lightscape
Visualization system (http://www.lightscape.com) to see what is available now
for Windows NT. (Other interesting sites on the Web for commercial radiosity
renderers are http://www.bentley.com/products/masterpiece -- download the
Microstation MasterPiece Technical Profile -- and the Italian
http://www.atma.it/. [also Lightworks,
http://www.lightwork.com/products/rad_over.htm. -EAH])
In the meantime, you can download HELIOS to experiment with the possibilities
of radiosity rendering using MS-Windows 3.1 or Windows 95. The Web site is:
http://www.ledalite.com/software/software.htm
where you will find a demonstration version of HELIOS Version 1.03A (144 KB)
and the complete C++ development package (700 KB), including four different
executable versions of HELIOS, fully-commented C++ source code (over 12,700
lines), make files for Microsoft Visual C++ 1.5 and Borland C++ 4.5, online
help files, two demonstration environments, demo images, and more.
(While you are perusing our Web site, you might want to look at
http://www.ledalite.com/library-/ledapub.htm -- we have an eclectic variety of
academic papers and articles on computer graphics and related topics available
online for downloading.)
The HELIOS development package is *not* in the public domain. It is
copyrighted material that may be freely copied, redistributed, and/or modified
for personal, non-commercial use ONLY, provided the copyright notice is
included with all source code files.
HELIOS was first developed in:
Ashdown, I. 1994. Radiosity: A Programmer's Perspective.
New York, NY: John Wiley & Sons, Inc. Softcover, 498 pages, 12 color
plates. http://www.wiley.com.
ISBN 0-471-30444-1 (without diskette) $39.95 US
ISBN 0-471-30488-3 (with 3.5-inch MS-DOS diskette) $54.95 US
[brief review and table of contents in RTNv7n3. -EAH]
This book provides a detailed explanation of radiosity theory and its
associated algorithms (no knowledge of higher mathematics required!). More
important, it also includes complete, fully documented, and
compiler-independent C++ source code (over 7,500 lines) for HELIOS Version
1.00A, a fully-functional radiosity-based rendering program for MS-Windows
3.1, Windows 95, and Windows NT.
The radiosity-related code presented in the book is identical to that now
offered online. If you want to fully understand how HELIOS (and radiosity)
works, you more or less need to buy the book.
-------------------------------------------------------------------------------
Cells and Portals Resources, by Randy Stiles (stiles@aic.lockheed.com)
[This is a reply for information on cells and portals, a technique for quickly
culling out geometry for scenes with many rooms. Invented for speeding
architectural walkthroughs, the most popular application of the ideas is now
probably "Quake". --EAH]
Carlo Sequin's group at UC Berkeley has done some good work in the area of
culling scenes using cells and portals, the URL for his home page with related
links is:
http://HTTP.CS.Berkeley.EDU/~sequin/
A good reference for this technical area is:
S. J. Teller and C. H. Sequin,
``Visibility Preprocessing for Interactive Walkthroughs,''
Proc. ACM SIGGRAPH'91,
Las Vegas, Aug. 1991, and
Computer Graphics,
Vol 25, No 4, pp 61-69, (1991).
Seth Teller is now at MIT, and here is the URL for his PhD Thesis which is
focused on the cells and portals culling area:
http://graphics.lcs.mit.edu/~seth/pubs/pubs.html
David Luebke at UNC has implemented a cell-based culling system for Performer;
see his URL at:
http://www.cs.unc.edu/~luebke/publications.html
-------------------------------------------------------------------------------
New Books, by Eric Haines
One thing I've been doing for a little money is review book proposals and
technical edit book drafts. I could probably make a better hourly wage at
McDonald's, but the concept that someone is actually paying me to read
something that I want to read anyway is great (don't tell any editors that, or
my wages drop).
Two books I worked on last year are Keith Rule's "3D Graphic File Formats:
A Programmer's Reference," Addison-Wesley (see http://www.europa.com/~keithr/)
and Bridget Testa Mintz's "Graphical Treasures of the Internet," Academic
Press (http://www.apnet.com/). I've also done some editing for parts of Rob
Glidden's "Graphics Programming with Direct3D: Techniques and Concepts,"
Addison-Wesley.
"3D Graphic File Formats" stands nearly alone in its field; Murray and
van Ryper's "Encyclopedia of Graphics File Formats" is about the only book that
comes close at this point, and their volume is primarily about image file
formats. Rule's book discusses VRML 1.0, DXF, 3DS, Wavefront OBJ, TrueSpace,
WTK-NFF, POV-Ray 2.2, and RAW formats. For most readers Rule provides
something no other book does: a working program for reading and writing each
of these 3D formats and translating from one to another. It is written in C++,
with a Windows framework to read/view/write models, along with an OCX custom
control. The CD ROM also includes over 100 models not normally found on the
web and trial versions of TriSpectives and TrueSpace. The best thing about the
book: the DXF format is explained in English, not tech-ese. Credit for this
accomplishment is shared by Rule and Zap Andersson, who has worked with the
format for years.
Some limitations of the book should be mentioned. First, the software converts
basic geometry, but currently does not support translations of surface vertex
normals (i.e. any smoothing information is lost when moving between formats)
or texture coordinates. It also does not have a serious polygon tessellator.
So, you may have to get a professional translation program (Syndesis'
InterChange, Okino's PolyTrans, etc.) for higher quality translations. Also,
the book is not meant to be the final word on each specification; much of the
book is dedicated to explaining how the software works, not how the format
works. However, given the page limitations, this book is quite a feat. It is
more than adequate for those who want to get under the hood and understand
various 3D file formats, and for those who want to convert from their own file
formats to others. Adding new format readers and writers is a fairly
straightforward process; usually one of the formats Rule covers is similar to
what you want to add, so that code can simply be regrooved to read in the new
format.
Mintz's "Graphical Treasures of the Internet" is on the non-technical end of the
spectrum. While much of the book is dedicated to explaining to beginners
many of the basics of the Internet, commercial services, and non-commercial
alternatives (and I would bet that almost no one has tried all the resources
she covers), there is also a fair bit of meat about other topics. There are
interviews with a number of well-known computer graphics and Internet people
and many different perspectives are presented. Personally, I found the chapter
on copyright and software patents particularly interesting; it was something I
felt confident passing on to our company lawyer for his own edification. There
is also information here found nowhere else. For example, she presents a
history of the comp.graphics.* hierarchy back as far as could be done. Mintz
is an engineer turned professional writer, and it shows: this book is both
well-written and well-informed.
Glidden's "Graphics Programming with Direct3D", from Addison-Wesley
(http://www.aw.com/) is a pleasure to read, which is surprising given that the
topic sounds dry as can be. His book talks about the place of 3D in the
marketplace, motion capture, how to manage multi-user spaces, and other
interesting topics which do not directly bear on the topic of Direct3D. The
author is a long-time technical reporter on computer graphics and related
topics, so his prose is readable and he can't resist commenting on many aspects
of the field. He gives a great top-level view of various areas of the field,
along with solid advice on Direct3D, e.g. whether to go with retained mode vs.
immediate mode. There is also some information on 3D file formats such as
DirectX, DXF, and 3DS, on the basics of 3D vector math, and on the fundamentals
of PC graphics hardware. Given that the Direct3D API is still in flux (e.g.
today I read that a new command, DrawTriangle, was just added to it), this book
could well have a longer life-span than expected: Glidden's discussions of more
than just the nitty-gritty of using D3D will last well beyond the next beta
release.
Many other books came out during 1996, of course. API books were a noticeable
trend; witness "3D with HOOPS" by Wm Leler and Jim Merry, Addison-Wesley, which
includes a CD-ROM with a personal version of the HOOPS library for the Mac,
Unix, and Windows. Such software used to cost hundreds, if not thousands, of
dollars
for a single-person license, but in the new market for mind-share, prices have
dropped. Same story for "Learn 3D Graphics Programming on the PC" by Richard
Ferraro, Addison-Wesley, which is essentially a nicely done explanation of 3D
graphics using the RenderWare API and includes a personal copy on CD-ROM -
again, a few years ago a personal copy of this software would cost you about
$1000.
Certainly of note is "Jim Blinn's Corner", from Morgan Kaufmann Publishers,
http://www.mkp.com/. Finally, many of Blinn's IEEE CG&A column articles are
assembled in one place. I personally wish they had just gone for it and
published all 45 of his columns to date, but having these 20 in one place saves
me a lot of flipping through old issues of CG&A. In case you've been missing
out on this column, think "Graphics Gems" meets "Programming Pearls". Each
chapter is around 10 pages long and focusses on a particular topic, like "How
Many Ways Can You Draw A Circle?", "Subpixelic Particles", and "Backface
Culling Snags". There is a lot of useful "here's what I think is the
best/fastest/cleanest way to do this" type of advice. If you're a graphics
programmer, why you don't have this book I cannot imagine.
What else is new? Well, there's a second edition of Murray & van Ryper's
"Encyclopedia of Graphics File Formats", from O'Reilly, http://www.ora.com/,
which has bulked out a bit more with information on new formats like PNG and
more on 3D file formats like 3DS. I still consider this primarily a 2D file
format book (esp. now that Rule's book is out), but it's definitely the first
place I look for information on such formats. Other than that, hmmm, I haven't
gone wild buying new books this year I guess. There are a ton of new books on
specific topics (e.g. I'd guess around 50+ on just VRML itself), but nothing
else pops out at me as of general interest - if you know anything good I've
missed, let me know!
Another book of note is Watkins & Mallette's "Stereogram Programming
Techniques", from Charles River Media. I beat this topic to death in RTNv8n1,
reviewing two books in the process, so I have to admit that I've only skimmed
through this new volume (the fad's past, right? Stereogram pictures are
destined to be remembered as the black light posters of the 90's). This new
entry in the field looks pretty good: it stresses the theory and programming
behind the technique, and there's no hidden agenda to sell you any additional
software. This book is considerably longer than the other two books I reviewed
(it's about 450 pages), and so can take its time in presenting theory, code
listings, etc. The stress is put on stereo pairs, SIRDS, and SIS (i.e. there
are no text stereograms). All in all, it looks good, and also points the
reader at other articles and resources.
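For readers curious about the programming side of SIRDS, the core idea is a
single constraint: two pixels whose horizontal separation matches the apparent
eye separation at that depth must be identical, and nearer surfaces get smaller
separations. A minimal sketch of the naive algorithm (mine, not from the book;
the separation range is an arbitrary choice):

```python
import random

def make_sirds(depth, width, height, max_sep=60, min_sep=30):
    # depth[y][x] in [0, 1], where 1.0 is nearest the viewer.
    # Returns a 2D image of 0/1 "dots".
    img = []
    for y in range(height):
        row = [0] * width
        for x in range(width):
            # Nearer points (larger depth) get a smaller separation.
            sep = int(round(max_sep - depth[y][x] * (max_sep - min_sep)))
            if x < sep:
                row[x] = random.randint(0, 1)  # seed with random dots
            else:
                row[x] = row[x - sep]          # enforce the matching constraint
        img.append(row)
    return img
```

Scanning left to right and copying from `x - sep` satisfies the constraint but
produces the well-known echo artifacts; the more careful algorithms the book
can afford room for resolve hidden-surface conflicts between competing
constraints instead of just overwriting.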
For more book recommendations, try Brian Hook's page:
http://www.wksoftware.com/publications/3dbooks.html
Tastes vary, and I disagree with a few of his reviews, but the great thing for
me was to hear about some books I've never even run across. Such a list gives
you a chance to get some titles to page through at the bookstore and see if
they look worthwhile. (Also, there are some interesting HTML pages on Direct3D
and OpenGL under the "publications" category at this site, if you're into
such things.)
-------------------------------------------------------------------------------
END OF RTNEWS