_ __ ______ _ __
' ) ) / ' ) )
/--' __. __ , --/ __ __. _. o ____ _, / / _ , , , _
/ \_(_/|_/ (_/_ (_/ / (_(_/|_(__<_/ / <_(_)_ / (_
[..., but send contributions and subscription requests to Eric Haines]
All contents are US copyright (c) 1989 by the individual authors
Archive location: anonymous FTP at cs.uoregon.edu (128.223.4.1), /pub/RTNews
Contents:
Introduction
New People and Address Changes
Q&A on Radiosity Using Ray Tracing, Mark VandeWettering & John Wallace
Dark Bulbs, by Eric Haines
MTV Ray Tracer Update and Bugfix, by Mark VandeWettering
DBW Ray Tracer Description
======== USENET cullings follow ========
Wanted: Easy ray/torus intersection, by Jochen Schwarze
Polygon to Patch NFF Filter, by Didier Badouel
Texture Mapping Resources, by Eric Haines, Prem Subrahmanyam,
Ranjit Bhatnagar, and Jack Ritter
-------------------------------------------------------------------------------
Introduction
There are a lot of new people, with some interesting fields of study.
There's been a lot of talk about texture mapping and the DBW ray tracer on the
net. This discussion will probably continue into next issue, but I felt Jack
Ritter's posting was a good way to end it for now. I've also been toying with
texturing again, making my version of "Mount Mandrillbrot" (fractal mountain
with everyone's favorite beasty textured onto it), which some clever person
invented at the University of Waterloo (I think) some years ago (does anyone
know who?). There are also other useful snippets throughout.
However, one major reason that I'm flushing the queue right now is
that the node "hpfcrs" is disappearing off the face of the earth. So, please
note my only valid address is the "wrath" path at the top of the issue.
Thanks!
-------------------------------------------------------------------------------
New People and Address Changes
Panu Rekola, pre@cs.hut.fi
To update my personal information in your files:
Surface mail: Panu Rekola
Mannerheimintie 69 A 7
SF-00250 Helsinki, Finland
Phone: +358-0-4513243 (work), +358-0-413082 (home)
Email: pre@cs.hut.fi
Interests: illumination models, texture mapping, parametric surfaces.
You may remove one of the names you have in the contact list: Dr. Markku
Tamminen died in the U.S. while returning home from SIGGRAPH. How his
project will go on is still somewhat unclear.
--------
Andrew Pearce, pearce@alias
I wrote my MS thesis on Multiprocessor Ray Tracing, then moved to Alias where
I sped up Mike Sweeney's ray caster. I've just completed writing the Alias
Ray Tracer using a recursive uniform subdivision method (see Dave Jevans paper
in Graphics Interface '89, "Adaptive Voxel Subdivision for Ray Tracing") with
additional bounding box and triangle intersection speed ups.
Right now, I'm fooling around with using the guts of the ray tracer to do
particle/object collision detection with complex environments, and
particle/particle interaction with the search space reduced by the spatial
subdivision. (No, I don't use the ray tracer to render the particles.)
In response to Susan Spach's question about mip mapping: we use mip maps for
our textures, and we get the sample size from a "cone" size parameter which is
based on the field of view, aspect ratio, distance to the surface and angle of
incidence. For secondary rays this size parameter is modified based on the
tangents to the surface and the type of secondary ray it is (reflection or
refraction). This may be difficult to do if you are not ray tracing surfaces
for which the tangent information is readily available (smooth shaded
polygonal meshes?).
- Andrew Pearce
- Alias Research Inc., Toronto, Ontario, Canada.
- pearce%alias@csri.utoronto.ca | pearce@alias.UUCP
- ...{allegra,ihnp4,watmath!utai}!utcsri!alias!pearce
--------
Brian Corrie, bcorrie@uvicctr.uvic.ca
I am a graduate student at the University of Victoria, nearing the
completion of my Masters degree. The topic of my thesis is producing
realistic computer generated images in a distributed network environment.
This consists of two major research areas: providing a distributed (in the
parallel computing sense) system for ray tracing, as well as a workbench for
scene description and image manipulation. The problems that need to be
addressed by a system for parallel processing in a distributed loosely coupled
system are quite different than those addressed by a tightly coupled parallel
processor system. Because of the (likely) very high cost of communication in
a distributed processing environment, most parallel algorithms currently used
are not feasible (due to the high overhead). The gains of parallel ray
tracing in a distributed environment are: the obvious speedup by bringing
more processing power to bear on the problem, the flexibility of distributed
systems, and the availability of the resources that will become accessible as
distributed systems become more prominent in the computer community.
Whew, what a mouthful. In a nutshell, I am interested in: ray
tracing in general, parallel algorithms, distributed systems for image
synthesis (anyone know of any good references?), and this newfangled
radiosity stuff.
--------
Joe Cychosz
Purdue University CADLAB
Potter Engineering Center
W. Lafayette, IN 47906
Phone: 317-494-5944
Email: cychosz@ecn.purdue.edu
My interests are in supercomputing and computer graphics. My research work is
in vectorized ray tracing. Other interests are: ray tracing on MIMD tightly
coupled shared memory machines, Algorithm vectorization, Mechanical design
processes, Music synthesis, and Rendering in general.
--------
Jerry Quinn
Department of Math and Computer Science
Bradley Hall
Dartmouth College
Hanover, NH 03755
sunapee.dartmouth.edu!quinn
My interests are currently ray tracing efficiency, parallelism,
animation, radiosity, and whatever else happens to catch my eye at the
given moment.
--------
Marty Barrett - octrees, parametric surfaces, parallelism.
mlb6@psuvm.bitnet
Here is some info about my interests in ray tracing:
I'm interested in efficient storage structures for ray tracing, including
octree representations and hybrid regular subdivision/octree grids. I've
looked at ray tracing of parametric surfaces, in particular Bezier patches and
box spline surfaces, via triangular tessellations. Parallel implementations of
ray tracing are also of interest to me.
--------
Charles A. Clinton
Sierra Geophysics, Inc.
11255 Kirkland Way
Kirkland, WA 98033 USA
Email: ...!uw-beaver!sumax!ole!steven!cac
Voice: (206) 822-5200
Telex: 5106016171
FAX: (206) 827-3893
I am doing scientific visualization of 3D seismic data. To see the kind of
work that I am doing, check out:
'A Rendering Algorithm for Visualizing 3D Scalar Fields'
Paolo Sabella
Schlumberger-Doll Research
Computer Graphics, Vol. 22, Number 4 (SIGGRAPH '88 Conference Proc.)
pp 51-58
In addition, I try to keep up with ray-tracing and computer graphics in
general. I occasionally try my hand at doing some artistic ray-tracing.
(I would like to extend my thanks to Mark VandeWettering for distributing
MTV. It has provided a wonderful platform for experimentation.)
--------
Jochen Schwarze
I've been developing several smaller graphics packages, e.g. a 3D
visualization of turning parts. Now I'm implementing the second version of a
ray tracing program that supports modular programming using a description
language, C++ vector analysis and body class hierarchy, CSG trees, texture
functions and mapping, a set of body primitives, including typeface rendering
for logos, and a network IPC component to allow several CPUs to calculate a
single image.
My interests lie - of course :-) - in speedup techniques and the simulation
of natural phenomena (clouds, water, etc.). I'm just starting with this.
Jochen Schwarze Domain: schwarze@isaak.isa.de
ISA GmbH, Stuttgart, West Germany UUCP: schwarze@isaak.uucp
Bang: ...!uunet!unido!isaak!schwarze
S-Mail: ISA GmbH
c/o Jochen Schwarze
Azenberstr. 35
7000 Stuttgart 1
West Germany
-------------------------------------------------------------------------------
Q&A on Radiosity Using Ray Tracing, Mark VandeWettering & John Wallace
From Mark VandeWettering:
I am currently working on rewriting my ray tracer to employ radiosity-like
effects. Your paper (with Wallace and Elmquist) is very nice, and suggests
a really straightforward implementation. I just have a couple of questions
that you might be able to answer.
When you shoot energy from a source patch, it is collected at a specific
patch vertex. How does this energy get transferred to a given patch for
secondary shooting? In particular, is the vertex shared between multiple
patches, or is each vertex only in a single patch? I can imagine the
solution if each vertex is distinct, but have trouble with the case where
vertices are shared. Any quick insights?
The only other question I have is: HOW DO YOU GET SUCH NICE MODELS TO RENDER?
[We use ME30, HP's Romulus based solids modeler - EAH]
Is there a public domain modeling package that is available for suns or sgi's
that I can use to make more sophisticated models? Something cheap even?
[The BRL modeler and ray tracer runs on a large number of machines, and they
like having universities as users - see Vol.2 No.2 (archive 6). According
to Mike Muuss' write-up, some department in Princeton already has a copy.
The Simple Surface Modeler (SSM) works on SGI equipment. It was developed at
the Johnson Space Center and, since they are not supposed to make any money
off it, is being sold cheap (?) by a commercial distributor. COSMIC, at
404-542-3265, can send you some information on it. It also runs on a Pixel
Machine (which is what I saw it running on at SIGGRAPH 88), though I don't
believe support for this machine will be shipped. It's evidently not
shipping yet (red tape - the product is done), but should be "realsoonnow".
More information when I get the abstract. Does anyone else know any
resources?]
--------
Reply from John Wallace:
Computing the patch energy in progressive radiosity using ray tracing:
Following a step of progressive radiosity, every mesh vertex in the scene will
have a radiosity. Energy is not actually collected at the mesh vertices.
What is computed at each vertex is the energy per unit area (radiosity)
leaving the surface at that location. The patch radiosity is the average
energy per unit area over the patch. Finally, the patch energy is the patch
radiosity times the patch area (energy per unit area times area).
The vertex radiosities can be considered a sampling of the energy per unit
area at selected points across the patch. To obtain the average energy per
unit area over the patch, take the average of the vertex radiosities. This
assumes that the vertices represent uniform sub-areas of the patch. This is
not necessarily true, and when it is not, a more accurate answer is obtained
by taking an area-weighted average of the vertex radiosities. The weight given
to a vertex is equal to the area of the patch that it represents. In our work we
used a uniform mesh and weighted all vertices equally.
It doesn't matter whether vertices are shared by neighboring patches, since
we're talking about energy per unit area. Picture four patches that happen to
all share a particular vertex. The energy per unit area leaving any of the
patches at the vertex is not affected by the fact that other patches share
that vertex. If we were somehow collecting energy at the vertex, then it
would have to be portioned out between the patches.
Once the patch radiosity is known, the patch energy is obtained by multiplying
patch radiosity times patch area.
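[The bookkeeping above is easy to sketch in C. The routines and flat arrays
below are illustrative only, not from any particular radiosity code. - EAH]

```c
#include <assert.h>
#include <math.h>

/* Vertex radiosities are samples of energy per unit area; the patch
   radiosity is their area-weighted average; the patch energy is the
   patch radiosity times the patch area. */

/* energy per unit area, averaged over the patch */
double patch_radiosity(const double *vert_rad, const double *vert_area, int n)
{
    double e = 0.0, a = 0.0;
    for (int i = 0; i < n; i++) {
        e += vert_rad[i] * vert_area[i];  /* energy of this sub-area */
        a += vert_area[i];                /* accumulate patch area */
    }
    return e / a;
}

/* total energy leaving the patch: radiosity times area */
double patch_energy(const double *vert_rad, const double *vert_area, int n)
{
    double a = 0.0;
    for (int i = 0; i < n; i++)
        a += vert_area[i];
    return patch_radiosity(vert_rad, vert_area, n) * a;
}
```

[With a uniform mesh (all vertex sub-areas equal), this reduces to the plain
average of the vertex radiosities, as used in their work. - EAH]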
-------------------------------------------------------------------------------
Dark Bulbs, by Eric Haines
An interesting idea mentioned to me by Andrew Glassner was the concept
of "darkbulbs" in computer graphics. This idea is a classic joke technology,
in which the darkbulb sucks light out of an area. For example, if you want
to sleep during the daytime, you simply turn on your negative 100 watt dark
bulb and your bedroom is flooded in darkness. Andrew noted that this
technology is entirely viable in computer graphics, and would even be useful
in obtaining interesting results.
I happened to mention the idea to Roy Hall, and he told me that this
was an undocumented feature of the Wavefront package! Last year Wavefront came
out with an image of two pieces of pottery behind a candle, with wonderful
texturing on the objects. It turns out that the artist had wanted to
tone down the brightness in some parts of the image, and so tried negative
intensity light sources. This turned out to work just fine, and the artist
mentioned this to Roy, who, as an implementer of this part of the package,
had never considered that anyone would try this and so never restricted the
intensity values to be non-negative.
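For the curious, darkbulbs need no special support at all: any shading loop
that clamps only after summing the light contributions gives them for free.
A minimal diffuse-only sketch (the structures here are made up for
illustration, not Wavefront's actual code):

```c
#include <assert.h>
#include <math.h>

typedef struct { double dir[3]; double intensity; } Light;

static double dot3(const double a[3], const double b[3])
{ return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

/* Diffuse radiance at a surface point. Nothing restricts a light's
   intensity to be non-negative, so a "darkbulb" (negative intensity)
   simply subtracts energy; the result is clamped at zero only after
   the sum over all lights. */
double shade_diffuse(const double normal[3], double kd,
                     const Light *lights, int nlights)
{
    double sum = 0.0;
    for (int i = 0; i < nlights; i++) {
        double ndotl = dot3(normal, lights[i].dir);
        if (ndotl > 0.0)                 /* light geometrically visible */
            sum += kd * lights[i].intensity * ndotl;
    }
    return sum > 0.0 ? sum : 0.0;        /* clamp after summing */
}
```

A 100-watt bulb and a negative 50-watt darkbulb shining on the same surface
leave it lit at half strength.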
-------------------------------------------------------------------------------
MTV Ray Tracer Update and Bugfix, by Mark VandeWettering
[this was extracted by me from personal mail, with parts appearing on
comp.graphics - EAH]
As was recently pointed out to me by Mike Schoenborn, the cylinder code in the
current version of the MTV raytracer is somewhat severely broken. Or at least
it appears to be: what actually happens is that I forgot to normalize two
vectors, which leads to interesting distortions and weird-looking cylinders.
Anyway, the bug is in cone.c, in the function MakeCone(). After the vectors
cd -> cone_u and cd -> cone_v are created, they should be normalized. A
context diff follows at the end of this. This makes the SPD "tree" look MUCH
better. (And all this time I thought it was Eric's fault :-)
This bugfix will be worked into the next release, and I should also update the
version on cs.uoregon.edu SOMETIME REAL SOON NOW (read, don't hold your breath
TOO anxiously). Hope that this program continues to be of use... :-)
Somebody has some texture mapping code that they are sending me, I will
probably try to integrate it in before I make my next release. I am also
trying to get in spline surfaces, but am having difficulty to the point of
frustration. Any recommendations on implementing them?
*** ../tmp/cone.c Fri Aug 25 20:25:52 1989
--- cone.c Fri Aug 25 21:31:04 1989
***************
*** 240,247 ****
--- 240,251 ----
/* find two axes which are at right angles to cone_w
*/
+
VecCross(cd -> cone_w, tmp, cd -> cone_u) ;
VecCross(cd -> cone_u, cd -> cone_w, cd -> cone_v) ;
+
+ VecNormalize(cd -> cone_u) ;
+ VecNormalize(cd -> cone_v) ;
cd -> cone_min_d = VecDot(cd -> cone_w, cd -> cone_base) ;
cd -> cone_max_d = VecDot(cd -> cone_w, cd -> cone_apex) ;
-------------------------------------------------------------------------------
DBW Ray Tracer Description
[A ray tracer that has been mentioned in these pages (screens?) before is DBW.
Not having an Amiga and not being able to deal with "zoo" files, I never got a
copy. Prem Subrahmanyam has now made it available via anonymous FTP from
geomag.gly.fsu.edu in /pub/pics/DBW.src. Output is four bits for each
channel.
The original program was written by David B. Wecker, translating from a Vax
11/750 to the Amiga, with a conversion to Sun workstations by Ofer Licht
(ofer@gandalf.berkeley.edu). - EAH]
Below is an excerpt from the documentation RAY.DOC:
The RAY program knows how to create images composed of four primitive
geometric objects: spheres, parallelograms, triangles, and flat circular
rings (disks with holes in them). Some of the features of the program are:
Diffuse and specular reflections (with arbitrary levels of gloss or polish).
Rudimentary modeling of object-to-object diffusely reflected light is also
implemented, which among other things accurately simulates color bleed effects
from adjacent contrasting colored objects.
Mirror reflections, including varying levels of mirror smoothness or
perfection.
Refraction and translucency (which is akin to variable microscopic smoothness,
like the surface of etched glass).
Two types of light sources: purely directional (parallel rays from infinity)
of constant intensity, and spherical sources (like light bulbs, which cast
penumbral shadows as a function of radius and distance) where intensity is
determined by the inverse square law.
Photographic depth-of-field. That is, the virtual camera may be focused on a
particular object in the scene, and the virtual camera's aperture can be
manipulated to affect the sharpness of foreground and background objects.
Solid texturing. Normally, a particular object (say a sphere) is considered
to have constant properties (like color) over the entire surface of the
object, often resulting in fake looking objects. Solid texturing is a way to
algorithmically change the surface properties of an object (thus the entire
surface area is no longer of constant nature) to try and model some real world
material. Currently the program has built in rules for modelling wood,
marble, bricks, snow covered scenes, water (with arbitrary wave sources), plus
more abstract things like color blend functions.
Fractals. The program implements what's known as recursive triangle
subdivision, which creates all manner of natural-looking surface shapes (like
broken rock, mountains, etc.). The character of the fractal surface (degree
of detail, roughness, etc.) is controlled by parameters fed to the program.
AI heuristics to complete computation of a scene within a user specified
length of time. [???]
======== USENET cullings follow ===============================================
Wanted: Easy ray/torus intersection, by Jochen Schwarze
What I want to do is to turn a path consisting of line and arc segments around
an axis and then ray-trace the generated turning part. The rotated line
segments produce cylinders or cones that are easy to intersect with a ray,
whereas the arcs produce tori. To evaluate the intersection of the ray with a
torus I'd have to numerically solve a polynomial equation of fourth degree.
Does anybody know a way that avoids solving a general fourth-degree equation?
Perhaps something that exploits the torus geometry and allows the equation to
be split into two quadratic ones? Any other fast way to do it?
Thanks very much.
Jochen Schwarze Domain: schwarze@isaak.isa.de
ISA GmbH, Stuttgart, West Germany UUCP: schwarze@isaak.uucp
Bang: ...!uunet!unido!isaak!schwarze
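[For reference, the quartic Jochen mentions can be set up directly. A sketch
in C, assuming a torus centered at the origin with its axis along z, major
radius R (center of tube), minor radius r (tube), and a normalized ray
direction; function names are mine, and the quartic root finder itself
(Ferrari's method, or Newton with Sturm sequences) is omitted. - EAH]

```c
#include <assert.h>
#include <math.h>

/* The torus is (x^2+y^2+z^2 + R^2 - r^2)^2 = 4R^2(x^2+y^2).
   Substituting the ray o + t*d (d unit length) gives a quartic in t;
   fill c[0..4] with c[4]*t^4 + c[3]*t^3 + c[2]*t^2 + c[1]*t + c[0] = 0. */
void torus_quartic(const double o[3], const double d[3],
                   double R, double r, double c[5])
{
    double beta  = 2.0 * (o[0]*d[0] + o[1]*d[1] + o[2]*d[2]);
    double gamma = o[0]*o[0] + o[1]*o[1] + o[2]*o[2] + R*R - r*r;
    double R4 = 4.0 * R * R;

    c[4] = 1.0;
    c[3] = 2.0 * beta;
    c[2] = beta*beta + 2.0*gamma - R4*(d[0]*d[0] + d[1]*d[1]);
    c[1] = 2.0*beta*gamma - 2.0*R4*(o[0]*d[0] + o[1]*d[1]);
    c[0] = gamma*gamma - R4*(o[0]*o[0] + o[1]*o[1]);
}

/* Horner evaluation of the quartic at t */
double quartic_eval(const double c[5], double t)
{
    return (((c[4]*t + c[3])*t + c[2])*t + c[1])*t + c[0];
}
```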
-------------------------------------------------------------------------------
Polygon to Patch NFF Filter, by Didier Badouel
This is a new filter program for NFF databases: it converts polygons (p)
into patches (pp), computing normal vectors for the vertices.
________________________________________________________________
Didier BADOUEL badouel@irisa.fr
INRIA / IRISA Phone : +33 99 36 20 00
Campus Universitaire de Beaulieu Fax : 99 38 38 32
35042 RENNES CEDEX - FRANCE Telex : UNIRISA 950 473F
________________________________________________________________
[Code removed. Find it at cs.uoregon.edu or write him - EAH]
-------------------------------------------------------------------------------
Texture Mapping Resources, by Eric Haines, Prem Subrahmanyam,
Ranjit Bhatnagar, and Jack Ritter
From: Eric Haines
Robert Minsk had a question about how to do inverse mapping on a quadrilateral.
This was my response:
For the inverse bilinear mapping of XYZ to UV, see pp. 59-64 of "An Introduction
to Ray Tracing", edited by Andrew Glassner, Academic Press (hot off the press).
Tell me if you find any bugs, since I need to send typos to AP. This same
info is in the "Intro to RT" SIGGRAPH course notes from 1987 & 1988,
with one important typo fixed (see old issues of the Ray Tracing News to
find out the typo).
For an excellent discussion of the most popular mappings (affine, bilinear,
and projective), and of why to avoid simple Gouraud interpolation, get a copy
of Paul Heckbert's Master's Thesis (again, hot off the press),
"Fundamentals of Texture Mapping and Image Warping". It's got what you need
and is also a good start on sampling/filtering problems. Order it as
Report No. UCB/CSD 89/516 (June 1989) from
Computer Science Division
Dept of Electrical Engineering and Computer Sciences
University of California
Berkeley, California 94720
It was $5.50 when I ordered mine. Oh, I should also note: it has source
code in C for most of the algorithms described in the text.
--------
From: prem@geomag.fsu.edu (Prem Subrahmanyam)
Newsgroups: comp.graphics
Subject: Re: Texture mapping
Organization: Florida State University Computing Center
I would strongly recommend obtaining copies of both DBW_Render and QRT, as
both have very good texture mapping routines. DBW uses absolute spatial
coordinates to determine texture, while QRT uses a relative position per each
object type mapping. DBW has some really interesting features, like
sinusoidal reflection to simulate waves and a turbulence-based marble/wood
texture driven by the wave sources defined for the scene. It also has a
brick texture, a checkerboard, and mottling (turbulent variance of the color
intensity). Writing a texture routine in DBW is quite simple, since you're
provided with a host of tools (like a turbulence function, noise function,
color blending, etc.). I have recently created a random-color texture that
uses the turbulence to redefine the base color based on the spatial point
given, which it then blends into the object's base color using the color blend
routines. Next will be a turbulent-color marble texture that will modify the
marble vein coloring according to the turbulent color. Also in the works are
random color checkerboarding (this will require a little more thought),
variant brick height and mortar color (presently they are hard-wired); the
list is almost endless. I would think the ideal ray-tracer would be one that
used QRT's user-definable texture patches which are then mapped onto the
object, as well as DBW's turbulence/wave based routines. The latter would
have to be absolute coordinate based, while the former can use QRT's relative
position functions. In any case, getting copies of both of these would be the
most convenient, as there's no reason to reinvent the wheel.
--------
From: ranjit@grad1.cis.upenn.edu (Ranjit Bhatnagar)
4211 Pine St., Phila PA 19104
Newsgroups: comp.graphics
Subject: Re: Texture mapping by spatial position
Organization: University of Pennsylvania
The combination of 3-d spatial texture-mapping (where the map for a particular
point is determined by its position in space rather than its position on the
patch or polygon) with a nice 3-d turbulence function can give really neat
results for marble, wood, and such. Because the texture is 3-d, objects look
like they are carved out of the texture function rather than veneered with it.
It works well with non-turbulent texture functions too, like bricks, 3-d
checkerboards, waves, and so on. However, there's a disadvantage to this kind
of texture function that I haven't seen discussed before: as generally
proposed, it's highly unsuited to _animation._ The problem is that you
generally define one texture function throughout all of space. If an object
happens to move, its texture changes accordingly. It's a neat effect - try it
- but it's not what one usually wants to see.
The obvious solution to this is to define a separate 3-d texture for each
object, and, further, _cause the texture to be rotated, translated, and scaled
with the object._ DBW does not allow this, so if you want to do animations of
any real complexity with DBW, you can't use the nice wood or marble textures.
This almost solves the problem. However, it doesn't handle the case of an
object whose shape changes. Consider a sphere that metamorphoses into a cube,
or a human figure which walks, bends, and so on. There's no way to keep the
3-d texture function consistent in such a case.
Actually, the real world has a similar defect, so to speak. If you carve a
statue out of wood and then bend its limbs around, the grain of the wood will
be distorted. If you want to simulate the real world in this way and get
animated objects whose textures stay consistent as they change shape, you have
to use ordinary surface-mapped (2-d) textures. But 3-d textures are so much
nicer for wood, stone, and such! There are a couple of ways to get the best
of both worlds: [I assume that an object's surface is defined as a constant
set of patches, whether polygonal or smooth, and though the control points may
be moved around, the topology of the patches that make up the object never
changes, and patches are neither added to nor deleted from the object during
the animation.]
1) define the base-shape of your object, and _sample its surface_ in
the 3-d texture. You can then use these sample tables as ordinary
2-d texture maps for the animation.
2) define the base-shape of your object, and for each metamorphosed
shape, keep pointers to the original shape. Then, whenever a ray
strikes a point on the surface of the metamorphosed shape, find the
corresponding point on the original shape and look up its
properties (i.e. color, etc.) in the 3-d texture map. [Note: I
use ray-tracing terminology but the same trick should be applicable
to other techniques.]
The first technique is perhaps simpler, and does not require you to modify
your favorite renderer which supports 2-d surface texture maps. You just
write a preprocessor which generates 2-d maps from the 3-d texture and the
base-shape of the object. However, it is susceptible to really nasty aliasing
and loss of information. The second technique has to be built into the
renderer, but is amenable to all the antialiasing techniques possible in an
ordinary renderer with 3-d textures, such as DBW. Since the notion of 'the
same point' on a particular patch when the control points have moved is
well-defined except in degenerate cases, the mapping shouldn't be a problem --
though it does add an extra level of antialiasing to worry about. [Why?
Imagine that a patch which is very large in the original base-shape has become
very small - sub-pixel size - in the current animated shape. Then a single
pixel-sized sample in the current shape maps to a large part of the original,
which must be filtered down - using, for instance, stochastic sampling or
analytic techniques.]
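[Technique 2 is easy to sketch for a single bilinear patch: the hit's
parametric (u,v) on the animated patch is simply re-evaluated on the stored
base-shape patch, and the solid texture sampled there. The structures and the
sample texture below are hypothetical. - EAH]

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;
typedef struct { Vec3 p00, p10, p01, p11; } Patch;  /* bilinear corners */

/* point on a bilinear patch at parameters (u,v) */
static Vec3 patch_point(const Patch *q, double u, double v)
{
    double w00 = (1-u)*(1-v), w10 = u*(1-v), w01 = (1-u)*v, w11 = u*v;
    Vec3 r;
    r.x = w00*q->p00.x + w10*q->p10.x + w01*q->p01.x + w11*q->p11.x;
    r.y = w00*q->p00.y + w10*q->p10.y + w01*q->p01.y + w11*q->p11.y;
    r.z = w00*q->p00.z + w10*q->p10.z + w01*q->p01.z + w11*q->p11.z;
    return r;
}

/* sample the solid texture on the base shape, not the animated one,
   so the texture stays consistent as the object changes shape */
double base_shape_texture(const Patch *base, double u, double v,
                          double (*solid_tex)(Vec3))
{
    return solid_tex(patch_point(base, u, v));
}

/* example solid texture, just for demonstration */
double tex_sum(Vec3 p) { return p.x + p.y + p.z; }
```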
If anyone actually implements these ideas, I'd like to hear from you (and get
credit, heh heh, if I thought of it first). I doubt that I will have the
opportunity to try it.
--------
From: ritter@versatc.UUCP (Jack Ritter)
Organization: Versatec, Santa Clara, Ca. 95051
[Commenting on Ranjit's posting]
It seems to me that you could solve this problem by transforming the
center/orientation of the texture function along with the object that is being
instantiated. No need to store values, no tables, etc. The texture function
must of course be simple enough to be so transformable.
Example, wood grain simulated by concentric cylindrical shells around an axis
(the core of the log):
Imagine the log's center line as a half-line vector (plus a position, if
necessary), making it transformable. Imagine each object type in its object
space, BOLTED to the log by an invisible bracket. As you translate and rotate
the object, you also sling the log around. But be careful, some of these logs
are heavy, and might break your teapots. I use only natural logs myself.
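[Jack's scheme in miniature: keep a transform with each object and push the
hit point back through its inverse before the solid-texture lookup, so the
texture rides along with the object. Only translation is shown; a real
version would undo rotation and scale as well, and the structures are
hypothetical. - EAH]

```c
#include <assert.h>
#include <math.h>

typedef struct { double tx, ty, tz; } Xform;  /* object's translation */

/* example solid texture, just for demonstration: grain varies in x */
double tex_x(double x, double y, double z) { (void)y; (void)z; return x; }

/* sample a solid texture in the object's own (texture) space, by
   undoing the object's motion before the lookup */
double textured(const Xform *obj, double x, double y, double z,
                double (*solid_tex)(double, double, double))
{
    return solid_tex(x - obj->tx, y - obj->ty, z - obj->tz);
}
```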
Jack Ritter, S/W Eng. Versatec, 2710 Walsh Av, Santa Clara, CA 95051
Mail Stop 1-7. (408)982-4332, or (408)988-2800 X 5743
UUCP: {ames,apple,sun,pyramid}!versatc!ritter
-------------------------------------------------------------------------------
END OF RTNEWS
_ __ ______ _ __
' ) ) / ' ) )
/--' __. __ , --/ __ __. _. o ____ _, / / _ , , , _
/ \_(_/|_/ (_/_ (_/ / (_(_/|_(__<_/ / <_(_)_ / (_
[..., but send contributions and subscription requests to Eric Haines]
All contents are US copyright (c) 1989 by the individual authors
Archive locations: anonymous FTP at cs.uoregon.edu (128.223.4.1) and at
freedom.graphics.cornell.edu (128.84.247.85), /pub/RTNews
Contents:
Introduction
New People and Address Changes
Solid Surface Modeler Information, by Eric Haines
Minimum Bounding Sphere Program, by Marshall Levine
Parallelism & Modeler Info Request, by Brian Corrie
======== USENET cullings follow ========
Ray Tracer Available, by Craig Kolb
Source from Roy Hall's Book, by Tim O'Connor
More on Texture Mapping by Spatial Position, by Paul Lalonde
Procedural Bump-mapping Query, by Prem Subrahmanyam
Ray Tracer Performance on Machines,
by Gavin A. Bell, Phil Dykstra, Steve Lamont
Projective Mapping Explanation, by Ken "Turk" Turkowski
Intersection Calculation Problem Request, Jari Toivanen
Mathematical Elements for Computer Graphics - Call for Errata,
by David Rogers
Raytracing on NCUBE Request, by Ping Kang Hsiung
Intersection Between a Line and a Polygon (UNDECIDABLE??),
by Dave Baraff, Tom Duff
-------------------------------------------------------------------------------
Introduction
It's October, the time when the air turns chilly, the leaves turn red, and
people's minds turn towards releasing a public domain version of their ray
tracer. Holy smokes there's a lot of them coming out lately! This month
Craig Kolb's ray tracer has become available, along with the first PD ray
tracer from Australia, by David Hook. Paul Lalonde mentions that his will be
coming out soon, and will include spline surfaces. Also, David Kirk and Jim
Arvo have created a ray tracer which they used in their workshop in Australia,
and which may be released to the general public soon. Other code that has
been made available is that printed in Roy Hall's _Illumination and Color in
Computer Generated Imagery_ book.
Next month I hope to collect various timing information from all sorts of
ray tracers on all sorts of machines. I hope to do a "trace-off" sometime
soon, comparing MTV's, Craig's, DBW, QRT, ART, mine, and any others I can get
up and running. If anyone else has any timings or observations on performance
of ray tracers and various machines, please send them to me.
-------------------------------------------------------------------------------
New People and Address Changes
David Hook
dgh@munnari.OZ.AU
Dept. Of Engineering Computer Resources
University Of Melbourne
Parkville, Vic, 3052
Australia
G'day.
Our major area of interest in ray tracing is CSG modeling, and we have a
locally developed ray tracer which is a step towards this. As a department we
are also involved with the Faculty of Architecture at this university, so we
are starting to look at special effects somewhat more seriously than before.
This has also led to a greater interest in acceleration techniques.
Personally, I am currently doing a masters degree in the area of CSG and ways
of introducing patches into the model. The rendering technique being used is
ray tracing.
[And a further note from David Hook:]
The mailing list has been set up on munnari, so if you send it to
rtnews@munnari.OZ.AU, it will (should) travel around Oz to the people who want
it. I am asking people who subscribe if they wish to be on the contact list,
etc...
As a bit of additional info, I have written a ray-tracer which does CSG and
renders algebraic surfaces (a la Pat Hanrahan), although in this case it's
built around Sturm sequences, and we occasionally use CSG to take
cross-sections of the surfaces. The interest in algebraic surfaces began
because a friend of mine was struggling with a 6th order surface known as the
Hunt Surface, getting a good feel for the cusps on it was turning out to be
awful using polygonal subdivision. In any case there is a public domain
version of all this sitting in pub on munnari.OZ.AU (128.250.1.21) which can
be got by anonymous ftp. The file is vort.tar.Z. Knowing a bit more about
the whole business now, it's a bit of an embarrassment! Still it may be of
interest to someone and constructive criticism is always welcome.
[From a README file in his ray tracing distribution:]
By the by, for people who are interested, there is an excellent series of
papers on ray tracing and computer graphics in general published in the NATO
ASI Series of books. The volume in question is Vol. 40, series F, and is
titled "Theoretical Foundations of Computer Graphics and CAD". It was
published in 1988 by Springer-Verlag. Roman Kuchkuda's paper in it, "An
Introduction To Ray Tracing", would be the best introductory paper we have
seen to date. Apart from that, it was the first paper we found that actually
said what a superquadric was!
--------
NAME: Hench, Stephen D. SNAIL MAIL: 2621-C Stewart Drive
E MAIL: hench@csclea.ncsu.edu Raleigh, NC 27603
BRIEF: Undergrad in Mathematics and Computer Science at NCSU.
Interested in ray tracing (would I want to subscribe
if I wasn't?), radiosity, and rendering in general.
--------
Marshall Levine
136 1937 Hall Wilson College
Princeton University
Princeton, NJ 08544
(609) 734-6061
Home:
Marshall Levine
5212 Louise Avenue
Encino, California 91316
(818) 995-6528
(818) 906-7068
E-mail:
(1) mplevine@phoenix.princeton.edu or:
(2) mplevine@gauguin.princeton.edu or:
(3) mplevine@bogey.princeton.edu
My main interests are helicopters and computer graphics. Within graphics, I
am interested in animation and motion control. While I think it is great to
see a ray-traced magnifying glass sitting on top of a circuit board, I would
rather see the magnifying glass fly smoothly over a spinning board while the
camera flies smoothly through the scene. I am currently designing a flexible
graphics language with a friend of mine, Chris Williams (Princeton U. '92).
If anyone is interested, I can say more about that later.
--------
Cornell Program of Computer Graphics
A ray tracing mailing list has been set up by Tim O'Connor:
ray-tracing-news@wisdom.graphics.cornell.edu
Program of Computer Graphics
120 Rand Hall
Cornell University
Ithaca, NY 14853
People on this list who've already been intro'ed here include: Roy Hall,
Mark Reichert, Ben Trumbore, and Tim O'Connor.
New people and brief bio sketches:
Wash Wawrzynek - paw@squid.graphics.cornell.edu
Current interests are user interfaces and visualization for computational
mechanics.
--------
Len Wanger - lrw@freedom.graphics.cornell.edu
My sketch is on a piece of paper, but my interests are: I am a graduate
student in the department of computer graphics at Cornell University. I am
interested in modeling and visual perception.
--------
Filippo Tampieri - fxt@freedom.graphics.cornell.edu
Areas of interest: parallel/distributed ray tracing, fast algorithms for ray
tracing.
--------
Ricardo Pomeranz - rxp@venus.graphics.cornell.edu
Interests: constructive solid geometry and rendering
--------
Paul Wanuga - phw@neptune.graphics.cornell.edu
Masters student at Cornell's Computer Graphics Lab. Interests - rendering
realistic complex environments in realistic times.
--------
Kathy Kershaw - kershaw@hope.graphics.cornell.edu
I'm Kathy Kershaw. I did the ray tracing thing once. Maybe it'll have
something to do w/ my master's thesis; maybe not.
--------
Colin Summers - colin@scarpa.graphics.cornell.edu
Just recently interested in computer graphics and heading into the abyss from
the architecture side, I have a background in personal computers and spent a
year out of the design studio to computer consult in Manhattan. Glad to be
back in the world of academia. As soon as someone comes across with a
Macintosh-like text processor for X Windows, let me know.
--------
Ted Himlan - thh@squid.Graphics.Cornell.EDU
Color science, radiometric measurement, array camera.
Interest: detailed measurements on an environment
for comparison to simulation.
--------
Julie O'Brien Dorsey - job@hope.graphics.cornell.edu
Computer aided design applications, radiosity, lighting design
--------
Francois Sillion - fxs@bruno.graphics.cornell.edu
I am currently on a Post-Doc position at Cornell, after having completed my
PhD at the 'Ecole Normale Superieure' in Paris, France, where my work included
the development of a two-pass method for lighting calculations, combining ray
tracing and radiosity.
My interests are illumination models (local and global), animation and
interactivity.
-------------------------------------------------------------------------------
Solid Surface Modeler Information, by Eric Haines
The Solid Surface Modeler from NASA finally came out. The disappointing news
is that even though it's "a non-profit unit of the University of Georgia," the
thing is priced at $1250 for a cartridge tape and documentation (which is $43
separately). The reason I mention it in this newsletter is that it was used
for some rather elaborate databases that were both modeled and ray traced on
the AT&T Pixel Machine. Unfortunately, it's unclear whether the Pixel Machine
driver program is included in the distribution. The modeler itself sounds
reasonable, source code comes on the tape, and there seem to be no
restrictions on the use of the software. It's a pity that it's pricey when
compared to, say, FSF stuff, but I guess someone has to pay for those glossy
advertisement folders.
From their literature: "SSM was written in standard C with Silicon Graphic's
Iris Graphics Library calls and AT&T PicLib calls.... The program is
available for the Silicon Graphics IRIS workstation running version 3.1 of
IRIX, and a Sun Workstation with AT&T PXM964 running 4.2 BSD."
For more information contact:
COSMIC
The University of Georgia
382 East Broad Street
Athens, GA 30602
(404)-542-3265
-------------------------------------------------------------------------------
Minimum Bounding Sphere Program, by Marshall Levine
I think you will be interested in the following program. It is a
minimum-bounding-sphere program. As explained in the header comments, the
main algorithm seems to solve the problem in linear time. Please let me know
what you think.
{ clusters.p
Written by Marshall Levine (Princeton University '92)
e-mail: mplevine@phoenix.princeton.edu
Algorithm designed by Marshall Levine and Chris Williams (Princeton U. '92)
This program searches through a 3-dimensional space of randomly distributed
points for a cluster of 10 stars within the smallest radius possible.
I first implemented a "pri" list. This is a linked list of real numbers
(representing distances). The list is kept in order. However, when a number
that would go farther into the list than the number of points per sphere
(NUMINSPHERE) tries to go into the list, the insert procedure stops it. This
is done because the distance is useless. For example, if NUMINSPHERE is 5 and
a number would be inserted into the 7th slot in the list, it is not inserted.
The minimum radius of a sphere with 5 points would be determined by the 5th
element of the list (not including the header), so any number inserted after
the 5th element is useless and is therefore not inserted. If there are not
NUMINSPHERE elements in the pri, then there are not enough points to fill the
sphere.
The brute-force algorithm loops through every point in space. For each point,
the algorithm finds the distance between that point and every other point and
puts that distance into the pri. When all points have been compared against
this point, the NUMINSPHERE'th element is taken to be the minimum radius of a
sphere centered at this point containing NUMINSPHERE points. However, points
are not compared against themselves, so the exact number of comparisons is
N^2-N, making this an N^2 algorithm.
The efficient algorithm designed by Chris Williams and me divides the space
into a 3-dimensional grid. If the grid is divided into NUMPOINTS/NUMINSPHERE
sectors, then at least one of the sectors must have at least NUMINSPHERE
points. Now, make spheres with the same volume as the sectors. At least one
sphere surrounding one point will have at least NUMINSPHERE points. It turns
out that the tradeoff between fewer computations and more overhead is
best balanced by choosing the grid to have enough sectors such that each
sector is r/2 on each side (where r is the radius of the aforementioned
sphere). Our
algorithm starts with a sphere radius equal to the distance from one corner of
the unit cube to another (3^.5). Given the first point in the list, we
compare that point against every other point in sectors touching the sphere
(In this case, every other point in space!) By storing the distances and then
taking the NUMINSPHERE'th number from the pri list, as in the brute algorithm,
we frequently reduce the size of the sphere. Then, we check the next point
with the new, smaller sphere size and continue in this way until we have
tested every point. As we go along, the minimum sphere size keeps shrinking
until for any given point, we only check a few neighboring sectors, if any.
In practice, this radius shrinks so quickly that the algorithm displays LINEAR
BEHAVIOR!
NOTE: This program was written for clarity, not for efficiency. If it is to
be used in any real applications, there are many ways to speed it up.
Bruteforce: Our algorithm:
(Average of 3 runs)
Points: #Comparisons: comps/points: #Comparisons: comps/points:
50 2450 49.000 958 19.160
75 5550 74.000 1241 16.547
100 9900 99.000 2111 21.110
150 22350 149.000 2785 18.567
200 3689 18.445
250 5120 20.480
300 6010 20.033
350 7149 20.426
400 7658 19.145
600 11404 19.007
800 16781 20.976
1000 20438 20.438
Testing 50 points.
Brute-force:
Best sphere: 0.3067962678
Number of comparisons: 2450
Efficient Algorithm:
Best sphere: 0.3067962678
Number of comparisons: 946
%time cumsecs #call ms/call name
31.6 0.10 1 100.01 _brutecluster <====
10.5 0.18 1 33.34 _elegantcluster <====
5.3 0.25 101 0.17 _clearprilist
5.3 0.27 581 0.03 _insertprilist
5.3 0.28 1 16.67 _makespace
Testing 300 points.
Brute-force:
Best sphere: 0.1569231423
Number of comparisons: 89700
Efficient Algorithm:
Best sphere: 0.1569231423
Number of comparisons: 5617
%time cumsecs #call ms/call name
44.2 3.27 1 3267.00 _brutecluster <====
2.9 6.82 1 216.69 _elegantcluster <====
1.1 7.00 2358 0.04 _insertprilist
0.2 7.33 601 0.03 _clearprilist
0.0 7.38 1 0.00 _makespace
}
{==========================================================================}
program clusters(input,output);
const
MAXNUMPOINTS = 501; { The maximum # of points we can handle }
NUMINSPHERE = 10; { # stars to find inside sphere }
INFINITY = 999999.9; { Larger than largest distance possible }
MAXUSESPACE = 20; { Maximum length per edge of space-grid }
PI = 3.1415926535;
type
datatype = real;
point = record { The type of a point }
x : real;
y : real;
z : real;
data : datatype;
end;
ptr = ^node;
node = record { Linked list for a distances list called "pri" }
data : real;
next : ptr;
end;
sptr = ^spacenode; { Linked list for each sector in the space-grid }
spacenode = record
index : integer; { Stores index of that point in points[] }
next : sptr;
end;
var
rndnm : integer; { Needed for the random number generator }
points : array [1..MAXNUMPOINTS] of point; { All points in space }
listhead : ptr; { List head for distances list called "pri" }
space : array[0..MAXUSESPACE, 0..MAXUSESPACE, 0..MAXUSESPACE] of sptr;
{ The space-grid (hereafter called 'grid') }
spacesize, usespace : integer; { Size per edge of grid }
NUMPOINTS : integer; { The number of points we have in space }
{ **************** Support routines for random generators ************** }
procedure seed; { Seed the random number generator }
begin
writeln('seed:');
readln(rndnm);
end;
function rndom(scale : integer) : real; { Make random real from 0 to scale }
begin
rndnm := abs(abs((rndnm*921+1)) mod 32749);
rndom := (rndnm*scale/32749)
end;
procedure randompoint(var pt : point); { Generate a random point within }
begin { a unit cube. }
pt.x := rndom(1);
pt.y := rndom(1);
pt.z := rndom(1)
end;
procedure generatepoints; { Generate NUMPOINTS points in space }
var x : integer;
begin
for x := 1 to NUMPOINTS do
randompoint(points[x])
end;
{ *************** Support routines for the "pri" list ******************** }
procedure initprilist; { Initialize the pri list }
begin
new(listhead);
listhead^.data := 0.0;
new(listhead^.next);
listhead^.next^.data := INFINITY;
listhead^.next^.next := nil
end;
procedure clearprilist; { Clear the pri list }
var p,oldp : ptr;
begin
p := listhead;
while p <> nil do
begin
oldp := p;
p := p^.next;
dispose(oldp)
end;
new(listhead);
listhead^.data := 0.0;
new(listhead^.next);
listhead^.next^.data := INFINITY;
listhead^.next^.next := nil
end;
procedure insertprilist(r : real); { Insert a distance into pri list }
var p,oldp,temp : ptr; { "pri" is just a linked list of distances }
x : integer; { kept in low -> high order. The catch is }
begin { that if a number should be inserted after }
x := 1; { the NUMINSPHERE'th node, we don't bother }
p := listhead^.next; { inserting it, because it isn't in the }
oldp := listhead; { smallest sphere with NUMINSPHERE points. }
while (r > p^.data) and (x <= NUMINSPHERE) do
begin
oldp := p;
p := p^.next;
x := x + 1
end;
if x <= NUMINSPHERE then
begin
new(temp);
temp^.data := r;
temp^.next := p;
oldp^.next := temp
end
end;
function getbiggestinsphere : real; { Returns value of the NUMINSPHERE'th }
var x : integer; { element in pri list, or INFINITY }
p : ptr; { if the list isn't that long. }
begin
x := 1;
p := listhead^.next;
while (x < NUMINSPHERE) and (p <> nil) do
begin
x := x + 1;
p := p^.next
end;
if (x < NUMINSPHERE) or (p = nil) then getbiggestinsphere := INFINITY
else getbiggestinsphere := p^.data
end;
procedure printprilist; { Print the pri list, for debugging }
var p : ptr;
begin
p := listhead; { DO print the head }
while p <> nil do
begin
writeln(p^.data:15:10);
p := p^.next
end;
writeln('nil')
end;
{ ******************* Miscellaneous support routines ******************** }
procedure printpoint(pt : point); { Print out a point }
begin
writeln('(',pt.x:8:5,',',pt.y:8:5,',',pt.z:8:5,')')
end;
function cube(x : real) : real; { Return cube root of a number }
begin
cube := exp((1/3)*ln(x))
end;
{ *********************** Brute Force algorithm ************************* }
procedure brutecluster; { Find minimum sphere containing NUMINSPHERE }
{ points by testing the distance between }
{ every point. }
var distx,disty,distz,dist : real; { Find distance between two points }
bestsphere,trysphere : real; { Find minimum sphere }
numcomps : integer; { # comparisons }
thispoint,againstpoint : integer; { Counters }
begin
clearprilist; { Kill the priority list }
bestsphere := INFINITY;
numcomps := 0;
for thispoint := 1 to NUMPOINTS do { Test every point... }
begin
clearprilist;
for againstpoint := 1 to NUMPOINTS do { ...against every other point }
if thispoint <> againstpoint then { Don't compare point against self}
begin
distx := points[thispoint].x - points[againstpoint].x;
disty := points[thispoint].y - points[againstpoint].y;
distz := points[thispoint].z - points[againstpoint].z;
dist := sqrt(distx*distx + disty*disty + distz*distz);
numcomps := numcomps + 1;
if dist < bestsphere then { If dist less than smallest sphere,}
insertprilist(dist) { insert distance into pri list }
end;
trysphere := getbiggestinsphere; { Get 'NUMINSPHERE'th item from list }
if trysphere < bestsphere then { If this radius is the smallest yet,}
bestsphere := trysphere; { then remember it. }
end;
writeln('Brute-force:');
writeln(' Best sphere: ',bestsphere:15:10);
writeln(' Number of comparisons: ',numcomps:8)
end;
{ **************************** My algorithm *********************** }
procedure makespace; { Build the space-grid. See documentation at }
var x,y,z : integer; { beginning of program for details. }
temp : sptr;
thispoint : integer;
begin
spacesize := trunc(cube(8*PI*NUMPOINTS/NUMINSPHERE));
usespace := spacesize-1;
if usespace > MAXUSESPACE then writeln('****** NOT ENOUGH MEMORY FOR GRID');
for x := 0 to usespace do
for y := 0 to usespace do
for z := 0 to usespace do
space[x,y,z] := nil; { Clear the grid }
for thispoint := 1 to NUMPOINTS do { Go through every point... }
begin
new(temp);
temp^.index := thispoint;
x := trunc(points[thispoint].x * spacesize);
y := trunc(points[thispoint].y * spacesize);
z := trunc(points[thispoint].z * spacesize);
temp^.next := space[x,y,z]; { Put this point into proper }
space[x,y,z] := temp; { sector in grid. }
end
end;
procedure elegantcluster; { Find smallest sphere containing NUMINSPHERE }
{ points by looping through every point, }
{ checking ROUGHLY only the points within }
{ a radius less than or equal to the }
{ minimum radius found so far. }
var bestsphere,trysphere : real;
xmin,xmax,ymin,ymax,zmin,zmax : integer; { Dimensions of box to check }
thispoint : integer; { The current point to test against }
x,y,z : integer; { The current grid we are testing }
distx,disty,distz,dist : real; { For computing distances }
numcomps : integer; { # comparisons }
cpindex : sptr; { Pointer into point list for a grid sector }
begin
makespace;
bestsphere := 1.732050808; { Start with radius of distance from one }
numcomps := 0; { corner of unit cube to other: 3^.5 }
for thispoint := 1 to NUMPOINTS do { Loop for every point }
begin
clearprilist;
xmin := trunc((points[thispoint].x - bestsphere) * spacesize);
xmax := trunc((points[thispoint].x + bestsphere) * spacesize);
ymin := trunc((points[thispoint].y - bestsphere) * spacesize);
ymax := trunc((points[thispoint].y + bestsphere) * spacesize);
zmin := trunc((points[thispoint].z - bestsphere) * spacesize);
zmax := trunc((points[thispoint].z + bestsphere) * spacesize);
if xmin < 0 then xmin := 0;
if ymin < 0 then ymin := 0; { Get dimensions of box }
if zmin < 0 then zmin := 0; { containing every sector in }
if xmax > usespace then xmax := usespace; { grid that we want to check }
if ymax > usespace then ymax := usespace; { against the current point }
if zmax > usespace then zmax := usespace;
for x := xmin to xmax do
for y := ymin to ymax do
for z := zmin to zmax do { Loop through every sector in this box }
begin
cpindex := space[x,y,z];
while cpindex <> nil do { Test against every point in this sector}
begin
if thispoint <> cpindex^.index then { Don't test point against }
begin { itself. }
distx := points[thispoint].x - points[cpindex^.index].x;
disty := points[thispoint].y - points[cpindex^.index].y;
distz := points[thispoint].z - points[cpindex^.index].z;
dist := sqrt(distx*distx + disty*disty + distz*distz);
numcomps := numcomps + 1;
if dist < bestsphere then { If dist less than smallest sphere}
insertprilist(dist) { insert distance into pri list }
end;
cpindex := cpindex^.next { Get next point in this sector }
end
end;
trysphere := getbiggestinsphere;
if trysphere < bestsphere then
bestsphere := trysphere
end;
writeln('Efficient Algorithm:');
writeln(' Best sphere: ',bestsphere:15:10);
writeln(' Number of comparisons: ',numcomps:8)
end;
begin
seed;
writeln('How many points?');
readln(NUMPOINTS);
if NUMPOINTS < NUMINSPHERE then
writeln('***** Must have at least ',NUMINSPHERE:1,' points.')
else
begin
writeln('Testing ',NUMPOINTS:1,' points.');
initprilist;
generatepoints;
brutecluster;
elegantcluster
end
end.
-------------------------------------------------------------------------------
Parallelism & Modeler Info Request, by Brian Corrie
...!uw-beaver!uvicctr!bcorrie bcorrie@uvicctr.UVic.ca
[soon to be a posting on USENET, but it hadn't reached my node yet.]
Howdy folks....
It's survey time again, and I would appreciate your participation in this
version of twenty questions.
I am interested in parallel algorithms for ray tracing, and I am curious about
a couple of things. Please note that I have most of the "standard references"
that get cited in the literature, but I am interested in some of the OTHER
stuff that is out there.
The papers that I have:
Cleary et al. "Multiprocessor Ray Tracing", Internal Report, 83/128/17,
Department of Computer Science, University of Calgary, Calgary, Alberta,
Canada.
Dippe et al "An Adaptive Subdivision Algorithm and Parallel Architecture for
Realistic Image Synthesis", Computer Graphics, Volume 18, Number 3.
Gaudet et al "Multiprocessor Experiments for High Speed Ray Tracing", ACM
TOG, Volume 7, Number 3.
etc.....
What I am interested in are references to some of the goodies that are out
there in the real world. Are there any papers on the hardware Pixar uses?
How about the AT&T Pixel Machine, the Connection Machine (there is a piece on
it in Scientific American, Volume 256, Number 6 that I already have), and
other bizarre architectures? Dave Jevans from the University of Calgary (Hi
Dave, remember me? I met you at SIGGRAPH this year) mentioned at one point he
implemented some stuff on a BBN Butterfly (I think). Any more info on that
Dave? Did you write it up? Anybody else doing anything similar?
Here is the info I want....
1) What architecture do you run on?
2) Parallel, vectorized etc?
For parallel systems:
3) How many processors do you use?
4) How tightly coupled are they?
5) Do you perform any load balancing, and if so how?
6) Architectural requirements (memory/node, communications etc)?
7) Anything else you can think of that might be useful.
Thanks in advance for any help you can give me. Replies by email are of
course the best route to take, but I read comp.graphics every morning, so a
reply here will be seen. I will post a summary to the net if I get enough
information.
========
Question number two....
This should be quick and easy. I would like to know what kind of modelling
software people use out there in the real world.
We have all seen the pretty pictures, but how do they get created? I would
appreciate a quick or not so quick review of what kind of software is used at
your site to model 3D scenes.
For those of you on the RT News mailing list who don't read the net like I do,
I will send a copy of both this and the summary to Eric.
Thanks,
Brian
======== USENET cullings follow ===============================================
Ray Tracer Available, by Craig Kolb
From: craig@weedeater.math.yale.edu
Newsgroups: comp.graphics
Organization: Math Department, Yale University
All of this talk of solid texturing and the like has convinced me to pull
together my raytracer for public consumption. Although I'm calling this a
beta release, relatives of this version of rayshade have been making pretty
pictures for about a year now. For examples, see slides 32 and 57 from the
SIGGRAPH '89 technical slide set and slides 67/68 from the stereo slide set.
If there's enough interest, I'll post rayshade to comp.sources.unix once the
bugfixes stop rolling in.
[I would like to add that Craig's ray tracer is fairly nice, and most of the
portability problems and minor bugs have been fixed since its release. Its
input language is much more full featured than NFF (which, I'll say again, was
made only for testing ray tracers, not photorealism) and looks more mainstream
than some of the other public domain ray tracers I've seen. If you're looking
for a reasonable input language, check his out. His latest and greatest
version (i.e. newer than 2.21) might be available via ftp by now. - EAH]
--
Rayshade, a raytracing program, is available for "Beta" testing. Rayshade
reads a multi-line ASCII file describing a scene to be rendered and produces a
Utah Raster RLE format file of the raytraced image.
Features:
Primitives:
boxes
cones
cylinders
height fields
planes
polygons
spheres
triangles (flat- or Phong-shaded)
[he forgot to mention there are also superquadrics! - EAH]
Composite objects
Point, directional, and extended (area) light sources
Solid texturing and bump mapping of primitives, objects, and
individual instances of objects
Antialiasing through adaptive supersampling or "jittered" sampling
Arbitrary linear transformations of primitives,
instances of objects, and texture/bump maps
Use of uniform spatial subdivision and/or hierarchy of
bounding volumes to speed rendering
Options to facilitate rendering of stereo pairs
Support for the Linda parallel programming language
An awk script is provided to translate NFF format scripts to rayshade format.
Rayshade is written in C with parsing support provided through lex and yacc.
The C, lex and yacc files comprise approximately eight thousand lines of
code. Sites without lex and yacc can make use of the C source files produced
by lex and yacc which are included in this distribution.
Rayshade has been tested on a number of UNIX-based machines, including
Vaxes, Sun Workstations, Iris 4D Workstations, Encore Multimax, AT&T 3B2/310,
Cray XMP, and IBM RTs. In addition, support is provided for the Amiga using
the Aztec C compiler.
Rayshade makes use of the Utah Raster toolkit, a package consisting of a
large number of useful image manipulation programs, test images, and a
library to read and write images written using the toolkit's RLE format. The
toolkit is available via anonymous FTP from cs.utah.edu or from
weedeater.math.yale.edu.
Those sites that cannot or do not want to use the Utah Raster toolkit can
make use of a compile-time option to produce images written using a generic
file format identical to that used in Mark VandeWettering's "MTV" raytracer.
This version of rayshade is a "beta" release. The first "real" release will
include an updated manual page and additional documentation as well as
any bugfixes or extensions born out of this release.
Rayshade is copyrighted in a "Gnu-like" manner.
Rayshade is available via anonymous ftp from weedeater.math.yale.edu
(192.26.88.42) in pub/Rayshade.2.21.tar.Z. The Utah Raster toolkit
is available in pub/UtahToolkit.tar.Z.
-------------------------------------------------------------------------------
Source from Roy Hall's Book, by Tim O'Connor
From: toc@batcomputer.tn.cornell.edu
Newsgroups: comp.graphics
Straight from the dragon's mouth (so to speak) comes the source from
"Illumination and Color in Computer Generated Imagery" by Roy Hall. It's now
available via anonymous ftp from:
freedom.graphics.cornell.edu (128.84.247.85)
It's under pub/Hall and comes in two files: 1) README (of course) which also
contains some code necessary to convert 2) code.tar.Z.a which contains the
actual code. So, as always, read README first.
Those of you who do not have ftp access may wish to drop me a short line (a
Subject: of "gimme roy's source" is adequate). If there's enough interest
I'll post to this group, if not I'll (shudder!) attempt to mail it right to
you.
Also of interest on freedom are the Ray Tracing News archives (under
pub/RTNews) and the Xcu Widget Set. (Sorry, this code available only in
stores, no mailing.)
fishing in McElligot's Pool,
to'c
-------------------------------------------------------------------------------
More on Texture Mapping by Spatial Position, by Paul Lalonde
From: lalonde@ug.cs.dal.ca
Newsgroups: comp.graphics
Organization: Math, Stats & CS, Dalhousie University, Halifax,NS, Canada
[This is a continuation of last issue's discussion. The idea here I would
consider the most popular solution to the problem. I published almost all of
the discussion because some of the answers were more interesting for the ideas
they generated than for practicality. - EAH]
In article [...] ranjit@grad1.cis.upenn.edu.UUCP (Ranjit Bhatnagar) writes:
> [Talk about 3-d texture maps deleted for brevity]
>I haven't seen discussed before: as generally proposed, it's highly
>unsuited to _animation._ The problem is that you generally define one
>texture function throughout all of space. If an object happens to move,
>its texture changes accordingly. It's a neat effect - try it - but it's
>not what one usually wants to see.
>The obvious solution to this is to define a separate 3-d texture for
>each object, and, further, _cause the texture to be rotated, translated,
>and scaled with the object._ DBW does not allow this, so if you want
>to do animations of any real complexity with DBW, you can't use the nice
>wood or marble textures.
I get around this by keeping not only the general texture stored with the
object, but also an (x,y,z) triple pointing to where the texture is to
be evaluated. I also keep some orientation information with the object.
The texturing routine then only has to translate the scene coordinate of the
point being textured into texture coordinates. It comes down to keeping
the textures in object coordinates. This allows you to carve more than
one object out of the same chunk of marble, which can be quite pleasing.
It also requires very little extra manipulation of the texture. For shape
changes you just keep track of your deformation function and apply it to
the point whose texture you are evaluating.
[Another congruent way of describing this is to use modeling matrices to
describe the location (and even deformation) of animated objects. Since the
object itself without the modeling matrix does not move, the texturing of its
surface does not change. - EAH]
-Paul
(Ps. My raytracer implementing this (and other goodies) should be available
as soon as I finish up my spline surfaces...Real Soon Now)
Paul A. Lalonde UUCP: ...{uunet|watmath}!dalcs!dalcsug!lalonde
Phone: (902)423-4748 BITNET: 05LALOND@AC.DAL.CA
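Paul's scheme amounts to transforming the world-space hit point back into the
object's own coordinate frame before evaluating the solid texture, so the
pattern travels with the object when it animates. A minimal sketch in C,
assuming hypothetical names throughout (the matrix layout, `texture_at`, and
the toy marble-band function are invented for illustration, not taken from any
particular ray tracer):

```c
typedef struct { double x, y, z; } Vec3;
typedef double Mat4[4][4];  /* row-major, applied as p' = M * p */

/* Transform a point by a 4x4 matrix (assumes the w component stays 1). */
static Vec3 xform_point(Mat4 m, Vec3 p)
{
    Vec3 r;
    r.x = m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3];
    r.y = m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3];
    r.z = m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3];
    return r;
}

/* Toy solid texture: alternating bands along x, standing in for marble. */
static double toy_texture(Vec3 p)
{
    return (((long)(p.x * 10.0)) % 2 == 0) ? 1.0 : 0.0;
}

/* Evaluate the texture in object coordinates: the world-space hit point
   is mapped back through the inverse of the object's modeling matrix,
   so moving the object moves its texture along with it. */
double texture_at(Mat4 world_to_object, Vec3 world_hit)
{
    Vec3 obj_p = xform_point(world_to_object, world_hit);
    return toy_texture(obj_p);
}
```

This is also how several objects can be "carved from the same block": give
them the same texture function but different modeling matrices.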
-------------------------------------------------------------------------------
Procedural Bump-mapping Query, by Prem Subrahmanyam
From: prem@geomag.fsu.edu
Newsgroups: comp.graphics
Organization: Florida State University Computing Center
I am going to attempt to employ a few procedural bump-map textures to
DBW_Render (I have no idea how they will turn out). I would like to start
with some basics, like maybe a "golf-balling" algorithm that will give the
surface small, spherical pits. Also, a "cross-hatching" one would be nice as
well, one that would produce furrows in a surface. These should work for both
planar surfaces as well as spherical ones. So, here's the basic
request....given a point and the normal to the surface at that point, how
can I perturb the normal, position of the point, etc. to create reasonable
bump-maps?
One that I'd particularly like to reproduce is found in an old issue of IEEE
Computer Graphics & Applications....the one with the Martian Magnolia in it.
Well, there's this cross-hatching on the spheres on the previous page. These
would be very nice to try to implement in DBW_Render. Can anyone point me to
real code (or at least pseudocode) that will tell me how to generate these
types of textures? Eventually, with all these textures, bump-mapping, etc.
in tow, DBW will be the most kick-butt ray-tracer available to the general
public.
-------------------------------------------------------------------------------
Ray Tracer Performance on Machines, by Gavin A. Bell, Steve Lamont
[Note: Didier Badouel's timings will appear next issue, as I will be focussing
on metrics then]
From: gavin@krypton.sgi.com (Gavin A. Bell)
Newsgroups: comp.graphics
In article [...], pkh@vap.vi.ri.cmu.edu (Ping Kang Hsiung) writes:
>
> Does anyone have a collection of performance (timing) data based
> on running a raytracer (preferably a publicly available raytracer,
> e.g. mtv or qrt) on various machines?
>
I believe that the BRL-CAD ray-tracer is sometimes used as a standard
benchmark (with specific input files). A number is generated which they call
the 'Ray-tracing figure of merit'; the higher the number, the better. The
whole BRL-CAD package is public domain [it's not - see below], but big,
crufty, and pretty ancient as ray-tracing packages go.
I know all this because I'm in the Demo/Benchmarks group here at Silicon
Graphics, and this ray-tracing benchmark is one of the few in which our 4D/280
outperformed a Cray (ray-tracing being an easily multi-processed, but not
vectorizable, application).
If people are interested, I could send them the results.
--------
From: phil@sem.BRL.MIL (Phil Dykstra)
The BRL-CAD package is *not* public domain. It is Copyright by the U.S.Army
in order to control commercialization of it. We do distribute the source code
at no charge however as long as the recipient agrees, in writing, to the
conditions.
> but big, crufty, and pretty ancient as ray-tracing packages go.
Being one of the authors, I had to put in at least two cents worth of defense.
Big - yes. Crufty - parts of it. Pretty ancient - some of it. But I
wouldn't call things like CSG NURBs, arbitrary bounding planes, non-uniform
space partitioning, parallel *and* network distributed capability very
ancient.
--------
From: spl@mcnc.org (Steve Lamont)
Subject: Re: Raytracer performance on machines?
Organization: Foo Bar Brewers Cooperative
In article <6224@pt.cs.cmu.edu> pkh@vap.vi.ri.cmu.edu (Ping Kang Hsiung) writes:
>For example, my experience indicates that mtv runs on a
>Cray Y-MP (-hintrinsic,o_level3) ~2.6x faster than on a PMAX. Is this
>(Cray time) the fastest turn-around one can expect?
>Will a Connection Machine do better than this? What about an Intel iPSC?
>The Meiko transputer array? Silicon Graphics's Power series?
Well, it depends on what other things you feel prepared to do. I have both an
IRIS 4D/120 and a Y-MP here (we just installed a Stardent Titan yesterday,
also) and see the same sort of numbers. I'll run a couple of experiments
today and get back to you with actual benchmarks, if you'd like.
I am currently working on a parallelized version of MTV and should be able to
give you some results from that in a couple of days, barring getting blown
away by Hurricane Hugo, that is :-).
In any case, in order to take advantage of any parallelism on an MPP like a
CM, you'll probably have to do a lot of recoding. The CM is a SIMD machine
which does not really lend itself *directly* to ray tracing. I understand
that TMC has done some interesting algorithmic development to actually *do*
ray tracing, but the code certainly would not look anything like MTV as it
stands right now. Barry, are you out there lurking? Any comments?
The iPSC and Meiko systems would probably be more straightforward.
If you're interested in results, contact me by private email and I'll be glad
to share my thoughts with you.
-------------------------------------------------------------------------------
Projective Mapping Explanation, by Ken "Turk" Turkowski
From: turk@Apple.COM
Newsgroups: comp.graphics
Organization: Advanced Technology Graphics, Apple Computer, Cupertino, CA
In article [...] ccoprrm@pyr.gatech.edu (ROBERT E. MINSK) writes:
> I am currently trying to add texture mapping to my ray tracer. The problem
> I am having is texture mapping and normal interpolation for convex
> quadrilaterals and triangles. The situation is as follows:
> Given 4 points forming a convex planar quadrilateral, the texture vertices
> associated with each point, and the intersection point on the quadrilateral,
> find the associated (u,v) mapping coordinates for the intersection point.
> For example:
> 4 points the associated texture vertices
> p1=(-5, 1, 2) t1=(.2,.4)
> p2=(-2,-3, 6) t2=(.4,.2)
> p3=( 2,-1, 4) t3=(.8,.3)
> p4=( 1, 4,-1) t4=(.7,.5)
> intersection point = (-2,-1, 4)
> find the (u,v) mapping coordinates
>
> I assume a triangle will be the same mapping with two vertices sharing the
> same points and mapping coordinates.
For suitably well-behaved texture mappings (i.e. no bowtie quadrilaterals),
there is a projective mapping that maps quadrilaterals to quadrilaterals.
This is represented by a 3x3 matrix, unique up to a scale factor:
[x y z] = [uw vw w] [M] (1)
where w is a homogeneous coordinate, and [M] is the 3x3 matrix. Since the 4
points must obey (1), we have the nonlinear system of equations:
|x0 y0 z0| |u0*w0 v0*w0 w0|
|x1 y1 z1| |u1*w1 v1*w1 w1|
|x2 y2 z2| = |u2*w2 v2*w2 w2| [M] (2)
|x3 y3 z3| |u3*w3 v3*w3 w3|
This represents 12 equations in 13 unknowns, but one of the w's may be
arbitrarily chosen as 1.
System (2) can be solved for the matrix [M] by any nonlinear system equation
solver, such as those available in the Collected Algorithms of the ACM.
When this matrix is inverted, it gives the mapping you desire:
[uw vw w] = [x y z] [M]^-1 (3)
You just plug the desired [x y z] into (3), and divide by w.
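[A small sketch of eqs. (1)-(3) (mine, not Turkowski's; all helper names are
invented). It avoids a general nonlinear solver by exploiting the fact that
the four corners are coplanar: write the fourth texture row and the fourth
point as combinations of the first three and match coefficients to recover
the w's, then a single 3x3 linear solve gives [M]. It assumes a nondegenerate
quad whose plane does not pass through the origin (otherwise [M] is singular):

```python
# Sketch of the projective quad mapping, eqs. (1)-(3).  Helper names are
# mine; assumes a nondegenerate quad whose plane misses the origin.

def inv3(m):
    # 3x3 inverse by cofactors
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def vmat(v, m):
    # row vector times 3x3 matrix
    return [sum(v[k]*m[k][j] for k in range(3)) for j in range(3)]

def quad_mapping(pts, uvs):
    """pts: four coplanar 3D corners; uvs: their (u,v) texture vertices.
    Returns a function mapping a point on the quad's plane to (u,v)."""
    T = [[u, v, 1.0] for (u, v) in uvs[:3]]
    P = [list(p) for p in pts[:3]]
    # express the fourth texture row / fourth point in the first three
    a, b, c = vmat([uvs[3][0], uvs[3][1], 1.0], inv3(T))
    al, be, ga = vmat(list(pts[3]), inv3(P))
    # match coefficients to recover the homogeneous w's (w0 = 1)
    w = [1.0, 0.0, 0.0, 0.0]
    w[3] = al / a
    w[1] = w[3] * b / be
    w[2] = w[3] * c / ga
    # rows (ui*wi, vi*wi, wi) [M] = pi for i = 0..2, so M = A^-1 P
    A = [[uvs[i][0]*w[i], uvs[i][1]*w[i], w[i]] for i in range(3)]
    M = [vmat(row, P) for row in inv3(A)]
    Minv = inv3(M)                            # computed once, offline
    def uv_at(q):
        hu, hv, hw = vmat(list(q), Minv)      # eq. (3): [uw vw w] = [x y z] M^-1
        return (hu/hw, hv/hw)
    return uv_at
```

For instance, for the trapezoid pts = [(0,0,1),(2,0,1),(1.5,1,1),(0.5,1,1)]
with uvs = [(0,0),(1,0),(1,1),(0,1)], the returned function sends each corner
back to its texture vertex. -EAH]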
For triangles, the mapping (1) is affine:
[x y z] = [u v 1] [M] (4)
and the resulting system is linear:
|x0 y0 z0| |u0 v0 1|
|x1 y1 z1| = |u1 v1 1| [M] (5)
|x2 y2 z2| |u2 v2 1|
This linear equation (9 equations in 9 unknowns) can easily be solved by LU
decomposition.
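[For the triangle case, the offline solve and per-hit lookup of eqs. (4)-(5)
can be sketched like so (my own illustration; helper names invented). One
caveat the posting glosses over: [M] in (5) is singular when the triangle's
plane passes through the origin, in which case barycentric coordinates are
the safer route:

```python
# Sketch of the affine triangle mapping, eqs. (4)-(5): solve once for [M],
# invert it, reuse the inverse per hit point.  Assumes the triangle's
# plane does not pass through the origin (else [M] is singular).

def inv3(m):
    # 3x3 inverse by cofactors
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def vmat(v, m):
    # row vector times 3x3 matrix
    return [sum(v[k]*m[k][j] for k in range(3)) for j in range(3)]

def tri_mapping(pts, uvs):
    """pts: three 3D vertices; uvs: their (u,v) texture vertices."""
    A = [[u, v, 1.0] for (u, v) in uvs]       # rows [ui vi 1]
    P = [list(p) for p in pts]                # rows [xi yi zi]
    M = [vmat(row, P) for row in inv3(A)]     # M = A^-1 P, eq. (5)
    Minv = inv3(M)                            # computed once, offline
    def uv_at(q):
        u, v, w = vmat(list(q), Minv)         # [u v 1] = [x y z] M^-1
        return (u/w, v/w)
    return uv_at
```
-EAH]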
Note that the matrix computed in (2) and (5) depends only on the
parametrization, so it only needs to be computed once, offline, at the time
the texture is assigned.
Take note that the texture parameters are interpolated projectively, not
linearly (or bilinearly). Also note that, unlike standard bilinear
interpolation (such as that used for Gouraud shading), this method of
interpolation is rotation invariant.
For more information on projective mappings, see the book "Projective
Geometry and Its Applications to Computer Graphics" by Penna and Patterson.
(I hope this reference is correct -- I'm doing it from memory).
For more detail on this approach to texture-mapping, including texture-mapping
by scan-conversion, request a copy of the following paper:
Turkowski, Ken
The Differential Geometry of Texture Mapping
Apple Technical Report No. 10
May 10, 1988
from
Apple Corporate Library
Apple Computer, Inc.
20525 Mariani Avenue
Mailstop 8-C
Cupertino, CA 95014
The e-mail address for the library is: corp.lib1@applelink.apple.com,
but the gateway is one-way, so don't expect an electronic reply.
-------------------------------------------------------------------------------
Intersection Calculation Problem Request, Jari Toivanen
From: toivanen@tukki.jyu.fi
Newsgroups: comp.graphics
Organization: University of Jyvaskyla, Finland
I would like to know if there is any simple and effective solution to the
following problem:
I have a curve c: [0,1] -> R*R. Curve c is cubic, with the following form:
c(t) = (ax*t^3 + bx*t^2 + cx*t + dx, ay*t^3 + by*t^2 + cy*t + dy)
Now I rotate curve c around vector v so that I get a surface (or whatever I
should call it). Let vector v be
v = (x_up, y_up, z_up)
Now I should calculate the intersection of this surface and a given line l.
Let line l be
l = (kx*u + x0, ky*u + y0, kz*u + z0), u >= 0
This should be done as fast as possible, because it would be part of a ray
tracer. I also need the surface's normal at the intersection point.
I've been calculating this, but the solution seems to be quite complicated.
Is there any easier way to do something like this?
Could anyone suggest good books which would help me deal with this kind of
problem, and with bicubic surfaces and the like?
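[No closed form offered here, but a slow numerical sketch (mine) may make the
problem concrete. It assumes the axis is the z axis through the origin, so
the curve's x component is the radius and its y component the height: a ray
point is on the surface exactly when some t in [0,1] gives cy(t) = z(u) and
cx(t)^2 = x(u)^2 + y(u)^2. The sketch marches along the ray looking for a
sign change in the radius residual, then refines by bisection; a production
tracer would instead eliminate t algebraically (e.g. via a resultant),
yielding a higher-degree polynomial in u to solve:

```python
# Slow numerical sketch only -- not the fast closed form asked for.
# Assumes (my simplification) the axis is the z axis through the origin.

def cubic(a, b, c, d):
    # returns t -> a*t^3 + b*t^2 + c*t + d (Horner form)
    return lambda t: ((a*t + b)*t + c)*t + d

def first_hit(cx, cy, origin, direction, u_max=10.0, du=0.01, n=200):
    ox, oy, oz = origin
    kx, ky, kz = direction

    def residual(u):
        x, y, z = ox + kx*u, oy + ky*u, oz + kz*u
        # nearest curve parameter whose height cy(t) matches the ray's z
        t = min((i/n for i in range(n + 1)), key=lambda t: abs(cy(t) - z))
        if abs(cy(t) - z) > 1e-2:      # ray is outside the curve's height range
            return None
        return (x*x + y*y) - cx(t)**2  # signed squared-radius mismatch

    prev_u = prev_r = None
    u = 0.0
    while u <= u_max:
        r = residual(u)
        if r is not None and prev_r is not None and r * prev_r <= 0:
            lo, hi = prev_u, u         # bracket found; refine by bisection
            for _ in range(60):
                mid = (lo + hi) / 2
                rm = residual(mid)
                if rm is None or rm * prev_r <= 0:
                    hi = mid
                else:
                    lo = mid
            return (lo + hi) / 2
        prev_u, prev_r = u, r
        u += du
    return None                        # no intersection found in [0, u_max]
```

As a sanity check, cx(t) = 1, cy(t) = t describes a unit cylinder of height
1; a ray from (3,0,0.5) toward (-1,0,0) first hits it at u = 2. The surface
normal is then available from the curve tangent and the rotation, which this
sketch does not attempt. -EAH]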
-------------------------------------------------------------------------------
Mathematical Elements for Computer Graphics - Call for Errata, by David Rogers
From: dfr@usna.MIL
Newsgroups: comp.graphics
Organization: U.S. Naval Academy, Annapolis, MD
MATHEMATICAL ELEMENTS FOR COMPUTER GRAPHICS 2nd edition
David F. Rogers & J. Alan Adams
McGraw-Hill Book Co. 1990
hardcover -- ISBN 0-07-053529-9
My own interest is in asking those of you who use the book for some
assistance. No matter how careful the authors and proofreaders are, all books
have typos. If you use the book and find a typo (or something that you think
is wrong), please [send a bug report and] tell me about it. I will have a
chance to correct these typos in about June 1990. In a few months, I'll
summarize those that I have received and post an errata list to the net.
Thanks in advance.
Dave Rogers
-------------------------------------------------------------------------------
Raytracing on NCUBE Request, by Ping Kang Hsiung
From: pkh@vap.vi.ri.cmu.edu
Newsgroups: comp.graphics
Organization: Carnegie-Mellon University, CS/RI
Has anyone ported/run a ray tracer on an NCUBE? I would like to learn about
your experiences, including raytracer name, NCUBE node count (NCUBE 2 or 1?),
port time, performance, features/tricks, etc.
-------------------------------------------------------------------------------
Intersection Between a Line and a Polygon (UNDECIDABLE??),
by Dave Baraff, Tom Duff
From: deb@charisma.graphics.cornell.edu
Newsgroups: comp.graphics
Keywords: P, NP, Jordan curve separation, Ursyhon Metrization Theorem
Organization: Program of Computer Graphics
In article [...] ncsmith@ndsuvax.UUCP (Timothy Lyle Smith) writes:
>
> I need to find a formula/algorithm to determine if a line intersects
> a polygon. I would prefer a method that would do this in as little
> time as possible. I need this for use in a forward raytracing
> program.
I think that this is a very difficult problem. To start with, lines and
polygons are semi-algebraic sets which both contain an uncountable number of
points. Here are a few off-the-cuff ideas.
First, we need to check if the line and the polygon are separated. Now, the
Jordan curve separation theorem says that the polygon divides the plane into
exactly two open (and thus non-compact) regions. Thus, the line lies
completely inside the polygon, the line lies completely outside the polygon,
or possibly (but this will rarely happen) the line intersects the polygon.
Now, the phrasing of this question says "if a line intersects a polygon", so
this is a decision problem. One possibility (the decision model approach) is
to reduce the question to some other (well known) problem Q, and then try to
solve Q. An answer to Q gives an answer to the original decision problem.
In recent years, many geometric problems have been successfully modeled in a
new language called PostScript. (See "PostScript Language", by Adobe Systems
Incorporated, ISBN # 0-201-10179-3, co. 1985).
So, given a line L and a polygon P, we can write a PostScript program that
draws the line L and the polygon P, and then "outputs" the answer. By
"output", we mean the program executes a command called "showpage", which
actually prints a page of paper containing the line and the polygon. A quick
examination of the paper provides an answer to the reduced problem Q, and thus
the original problem.
There are two small problems with this approach.
(1) There is an infinite number of ways to encode L and P into the
reduced problem Q. So, we will be forced to invoke the Axiom of
Choice (or equivalently, Zorn's Lemma). But the use of the Axiom of
Choice is not regarded in a very serious light these days.
(2) More importantly, the question arises as to whether or not the
PostScript program Q will actually output a piece of paper; or in
other words, will it halt?
Now, PostScript is expressive enough to encode everything that a
Turing Machine might do; thus the halting problem (for PostScript) is
undecidable. It is quite possible that the original problem will turn
out to be undecidable.
I won't even begin to go into other difficulties, such as aliasing, finite
precision and running out of ink, paper or both.
A couple of references might be:
1. Principia Mathematica. Newton, I. Cambridge University Press, Cambridge,
England. (Sorry, I don't have an ISBN# for this).
2. An Introduction to Automata Theory, Languages, and Computation. Hopcroft, J
and Ullman, J.
3. The C Programming Language. Kernighan, B and Ritchie, D.
4. A Tale of Two Cities. Dickens, C.
--------
From: td@alice.UUCP (Tom Duff)
Summary: Overkill.
Organization: AT&T Bell Laboratories, Murray Hill NJ
The situation is not nearly as bleak as Baraff suggests (he should know
better, he's hung around The Labs for long enough). By the well known
Dobbin-Dullman reduction (see J. Dullman & D. Dobbin, J. Comp. Obfusc.
37,ii: pp. 33-947, lemma 17(a)) line-polygon intersection can be reduced to
Hamiltonian Circuit, without(!) the use of Grobner bases, so LPI (to coin an
acronym) is probably only NP-complete. Besides, Turing-completeness will no
longer be a problem once our Cray-3 is delivered, since it will be able to
complete an infinite loop in 4 milliseconds (with scatter-gather.)
--------
From: deb@svax.cs.cornell.edu (David Baraff)
Well, sure, it's no worse than NP-complete, but that's ONLY if you restrict
yourself to the case where the line satisfies a Lipschitz condition on its
second derivative. (I think there's an '89 SIGGRAPH paper from Caltech that
deals with this).
-------------------------------------------------------------------------------
END OF RTNEWS