Ray Tracing News

"Light Makes Right"

October 3, 1988

Volume 1, Number 10

Compiled by Eric Haines, erich@acm.org. Opinions expressed are mine.

All contents are copyright (c) 1988, all rights reserved by the individual authors

Archive locations: anonymous FTP at ftp://ftp-graphics.stanford.edu/pub/Graphics/RTNews/,
wuarchive.wustl.edu:/graphics/graphics/RTNews, and many others.

You may also want to check out the Ray Tracing News issue guide and the ray tracing FAQ.


Contents:
    Intro
    New Addresses and People
    Bitmap Stuff, by Jeff Goldsmith
    More Comments on Kay/Kajiya
    Questions and Answers (for want of a better name)
    More on MTV's Public Domain Ray Tracer (features, bug fixes, etc)
    Neutral File Format (NFF), by Eric Haines

Intro

This issue is something of a queue clearer for me: a lot has been posted on USENET concerning Mark VandeWettering's public domain ray tracer. I include all of this and more at the end. Even if you're not interested in that, I hope you'll wade through it all to the end, as I would appreciate comments on the "neutral file format" I use in the SPD package.



New Addresses and People

Remember that you can ask me any time for the latest version of the RT News mailing list.

Andrew Glassner has settled down and bought some bookshelves, and is at:

# Andrew Glassner                       Andrew Glassner
# Xerox PARC                            690 Sharon Park Drive
# 3333 Coyote Hill Road                 Apt. #17
# Palo Alto, CA 94304                   Menlo Park, CA 94025
# (415) 494 - 4467                      (415) 854 - 4285
alias andrew_glassner glassner@xerox.com

For those of you who receive only the email version of the Ray Tracing News: you should contact Andrew, as he is the editor of the hardcopy version of the RT News. The hardcopy contains many articles which do not appear in the email version, so be sure to get both.

________

# K.R.Subramanian
# The University of Texas at Austin
# Dept. of Computer Sciences
# Taylor Hall 2.124
# Austin, Tx-78712.

alias  krs  subramn@cs.utexas.edu (ARPA)
 or
alias  krs  {uunet...}!cs.utexas.edu!subramn (UUCP).

Interests in Ray Tracing:

Use of hierarchical search structures for efficient ray tracing, investigating better space partitioning techniques, trying to apply ray tracing to practical applications.

Currently a PhD student in Computer Sciences at The University of Texas at Austin.

One suggestion on the RT round table: we should set aside a portion of time where we can talk to other RT people on a more personal basis. I, at least, find it easier to talk to people directly.

On the RT news: I would like to see practical applications of ray tracing described here. What applications really require mirror reflections, refraction, etc.? I haven't seen applications where ray tracing was the way to go.

________

From: mcvax!ecn-nlerf.com!jack@uunet.UU.NET (Jack van Wijk)

Via my old colleagues at Delft University of Technology I received a copy of your Ray Tracing News. I am delighted by this initiative, since it provides a fast, informal way to communicate with colleagues working in this sensational area.

At the moment I do not do research with respect to ray tracing, but I expect that in the coming year the blood will creep again where it can't go (old Dutch proverb). The institute where I work now is very interested in high quality graphics, scientific data visualization and parallelism, so I expect that ray tracing can be made a topic here.

I would be very happy if you could put me on the mailing list. Here is a short auto-biography:

# Jarke J. (Jack) van Wijk - Geometric modelling, intersection algorithms,
# parallel algorithms.
# Netherlands Energy Research Foundation, ECN
# P.O. Box 1, 1755 ZG Petten (NH), The Netherlands
alias jack_van_wijk ecn!jack@mcvax.cwi.nl

I have done research on ray-tracing at Delft University of Technology from 1982 to 1986 together with Wim Bronsvoort and Erik Jansen. My thesis is "On new types of solid models and their visualization with ray-tracing", Delft University Press, 1986, whose title summarizes my main interests. I have developed intersection algorithms for sweep-defined objects (translational, rotational, sphere), and blending. Research was also done on curved surfaces, modelling languages, and on improving efficiency. Currently I am interested in intersection algorithms, efficiency, parallel algorithms, and the use of ray tracing for Scientific Data Visualization.

________

Linda Roy's mail address:

# Linda Roy - all aspects of ray tracing especially efficiency
# Silicon Graphics Inc.
# 2011 Shoreline Blvd.
# Mountain View, California 94039-7311
# 415-962-3684

________

Mark VW's mail address:

# Mark VandeWettering
# c/o Computer and Information Sciences Dept.
# University of Oregon
# Eugene, OR 97403



Bitmap Stuff, by Jeff Goldsmith

[The following is for VMS people. UNIX/C people should contact anyone at the University of Utah for information on their "Utah RLE Toolkit", which has all kinds of bitmap manipulation tools using pipes (in the style of Tom Duff). It's a nice toolkit (and includes the famous mandrill picture), and can be had by ftp from cs.utah.edu. - EAH]

I have some bitmap utilities that I can put somewhere if there's interest. They aren't intended to be anywhere nearly so portable as poskbitmaps, but they seem to have more tools. I'm pretty curious what a good total set of tools would be; maybe this can spark such a list. Mine work only under VMS (they map directly to files--FAST) and use a bizarre format that is really just 1024 bytes of header followed by pixels. Here's a list of the tools:

    Cutout:   cuts a rectangle out
    Dissolve: fades from one picture to another
    Gamma:    channel-independent contrast change
    Filter:   2x2 box filter
    Lumin:    color to black and white via luminosity
    Pastein:  pastes a rectangle into another picture
    Poke:     mess with header data, e.g. offsets
    Resam:    change from 1-1 to 5-4 aspect ratio fast
    Reverse:  inverse video
    Switch:   swap red, green, blue channels around
    Thresh:   sets pixel < threshold to 0, ramps the rest
    Xzoom:    horizontal stretch, floating point factor
    Zoom:     floating point rescale

None of these are super-robust, but they are pretty fast. The slowest is Zoom, and it runs in 1-2 minutes on a VAX 780. On a newer machine they'd be ok-fast.
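
[For illustration, a minimal C sketch of reading a file in the "1024 bytes of header followed by pixels" format described above. The header layout and pixel packing are not specified, so this simply skips the header and assumes three bytes (R, G, B) per pixel with the width and height known to the caller; treat those as assumptions.]

/* readimg.c -- minimal sketch: skip the 1024-byte header, read the pixels. */
#include <stdio.h>
#include <stdlib.h>

#define HEADER_BYTES 1024

unsigned char *read_image(const char *name, int width, int height)
{
    FILE *fp = fopen(name, "rb");
    unsigned char *pixels;
    size_t npix = (size_t)width * height;

    if (fp == NULL)
        return NULL;
    if (fseek(fp, HEADER_BYTES, SEEK_SET) != 0) {   /* skip the header */
        fclose(fp);
        return NULL;
    }
    pixels = malloc(npix * 3);                      /* assumed: 3 bytes/pixel */
    if (pixels != NULL && fread(pixels, 3, npix, fp) != npix) {
        free(pixels);                               /* short read: give up */
        pixels = NULL;
    }
    fclose(fp);
    return pixels;
}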

By the way, I've used each of them in animations, so the transformations are smooth. Also, they are clearly useful.



More Comments on Kay/Kajiya

From Jeff Goldsmith:

I do a quick check on the children to determine the key for the sort. I just use the largest component of the current ray as the direction along which to check and then just use the minimum (or maximum) extent of the bounding volume to generate a key. Tim Kay says that that is not what they meant in the paper, but it's close enough and seems to work. However, before the sorter ever gets to deal with a new bounding volume, I check to see if the leading edge of the bounding volume is beyond the current hit. John Salmon added the trick that all illumination rays get a pseudo-hit at the light source position, so that automatically rejects all objects that cannot cast shadows. (Of course, it deals with objects on the other side of the ray origin, too.) I also, of course, don't sort the illumination rays' bounding volumes.
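
[A minimal C sketch of the sort-key idea described above. The struct and field names are hypothetical; the key is just the bounding volume's near slab along the ray's dominant axis, which orders volumes roughly front to back without a full ray-box intersection.]

#include <math.h>

/* Hypothetical types: a ray direction and an axis-aligned bounding box. */
typedef struct { double x, y, z; } Vec3;
typedef struct { double min[3], max[3]; } BBox;

/* Smaller key ~= nearer along the ray's dominant axis. */
double sort_key(Vec3 dir, BBox *box)
{
    double d[3];
    int axis = 0;

    d[0] = dir.x; d[1] = dir.y; d[2] = dir.z;
    if (fabs(d[1]) > fabs(d[axis])) axis = 1;       /* pick largest component */
    if (fabs(d[2]) > fabs(d[axis])) axis = 2;

    /* Use the minimum extent when heading in +axis, the (negated)
       maximum extent when heading in -axis. */
    return (d[axis] >= 0.0) ? box->min[axis] : -box->max[axis];
}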

A further note: I did not find that the sorting cost was trivial; in fact, it made up for most of the time saved in avoiding bounding volume checking. It was more useful before we added all the other hacks to avoid things, though.

Good references for heap sort algorithms are:
        Standish, _Data Structure Techniques_     and
        Knuth, of course.

Heap sort is the right algorithm, I think, because a total order is not needed on all the objects. We need to pull off one object (bounding volume) at a time from the head of the list, and once we find a hit, we discard the rest of the list. There's no point in sorting stuff that we will never check.
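
[A sketch, in C, of the traversal loop this implies. It is a fragment, not a complete function: Heap, Node, Ray and the helpers (heap_push, heap_pop, heap_min_key, heap_empty, ray_hits_box, intersect_primitive) are all hypothetical names. The key pushed with each node is its near intersection distance along the ray; the loop pops one bounding volume at a time and stops as soon as the nearest remaining volume begins beyond the current hit.]

double   closest = HUGE_VAL;         /* distance to the nearest hit so far */
Object  *hit_obj = NULL;
double   t_near;

heap_push(heap, root, 0.0);

while (!heap_empty(heap) && heap_min_key(heap) < closest) {
    Node *node = heap_pop(heap);

    if (node->is_leaf) {
        double t = intersect_primitive(node->prim, &ray);
        if (t > 0.0 && t < closest) {
            closest = t;
            hit_obj = node->prim;
        }
    } else {
        int i;
        for (i = 0; i < node->nchildren; i++) {
            Node *child = node->children[i];
            if (ray_hits_box(&ray, &child->bbox, &t_near) && t_near < closest)
                heap_push(heap, child, t_near);   /* keyed insertion */
        }
    }
}
/* Anything left in the heap starts beyond the closest hit and never needs
   to be examined -- which is why only a partial sort is ever done. */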

____

I ended up tossing the heap sort version completely, in order to save memory space. (Odd, it's been a long time since I've had to worry about code size.) I think that I could gain all of their savings and then some by just postprocessing the tree so that the left child is closer to the eye than the right child. Most non-illumination rays go in the general direction of "away from the eye," so that would help them. I-rays don't need sorting anyway. Alternatively, as you suggested, putting the bigger boxes (whatever) on the left would work, too, maybe. If I ever have time to futz with it, I'd like to try some of that.

____

My reply to Jeff:

Sorting on distance to eye sounds good - in fact, I was going to try it, but I use the item buffer and so the eye rays are mostly taken care of. If anything, sorting with objects farther away might help me: the reflection rays, etc etc will probably be in a direction away from the eye rays! Oh, another good post-process might be to sort each list of sons on the difficulty of sorting (or did I mention this already?) - try the sphere before the spline.



Questions and Answers (for want of a better name)

Wood Texture Request Filled:

Jeff Goldsmith's request for wood texture bitmaps was generously filled by Rod Bogart, who made four bitmaps (wood.img[1-4]) available for ftp at cs.utah.edu. These are still there (I just grabbed them), though I don't know how long they'll remain available. These are scanned images from an artist's book of textures.

________

Efficiency Question

From Mark VandeWettering:

How can we efficiently manage the intersect lists that get passed between the various procedures? Heckbert statically allocates arrays within the stack frames of various procedures, which seems a little odd, because you never really know how much space to allocate. Also, merging them using Roth's CSG scheme requires a lot of copying: can this be avoided?

________

From Jack Ritter:

A simple method for fast ray tracing has occurred to me, and I haven't seen it in the literature, particularly Procedural Elements for Computer Graphics. It is a way to trivially reject rays that don't intersect with objects. It works for primary rays only (from the eye). It is:

Do once for each object:

   compute its minimum 3D bounding box. Project
   the box's 8 corners onto pixel space. Surround the
   cluster of 8 pixel points with a minimum 2D bounding box.
   (a tighter bounding volume could be used).

To test a ray against an object, check if the pixel through which the ray goes is in the object's 2D box. If not, reject it.
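
[A C sketch of the preprocessing step described above. The projection routine is a hypothetical stand-in for whatever eye transform the renderer uses; the result is a per-object screen-space rectangle that a primary ray's pixel can be tested against with two compares per axis. A careful version would floor the minima and ceil the maxima to stay conservative.]

typedef struct { double x, y, z; } Vec3;
typedef struct { double min[3], max[3]; } BBox;
typedef struct { int xmin, ymin, xmax, ymax; } ScreenBox;

/* project_to_pixel() is hypothetical: it applies the eye/perspective
   transform and returns the pixel coordinates of a world-space point. */
extern void project_to_pixel(Vec3 p, double *px, double *py);

ScreenBox screen_bound(BBox *b)
{
    ScreenBox s;
    double px, py;
    int i;

    s.xmin = s.ymin =  1000000;
    s.xmax = s.ymax = -1000000;
    for (i = 0; i < 8; i++) {              /* the 8 corners of the 3D box */
        Vec3 corner;
        corner.x = (i & 1) ? b->max[0] : b->min[0];
        corner.y = (i & 2) ? b->max[1] : b->min[1];
        corner.z = (i & 4) ? b->max[2] : b->min[2];
        project_to_pixel(corner, &px, &py);
        if ((int)px < s.xmin) s.xmin = (int)px;
        if ((int)px > s.xmax) s.xmax = (int)px;
        if ((int)py < s.ymin) s.ymin = (int)py;
        if ((int)py > s.ymax) s.ymax = (int)py;
    }
    return s;
}

/* Primary-ray test: reject if the ray's pixel lies outside the object's box. */
int may_hit(ScreenBox *s, int px, int py)
{
    return px >= s->xmin && px <= s->xmax && py >= s->ymin && py <= s->ymax;
}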

It sure beats line-sphere minimum distance calculation.

Surely this has been tried, hasn't it?

____

An Answer, by Eric Haines:

It's true, this really hasn't appeared in the literature, per se. However, it has been done.

The idea of the item buffer has been presented by Hank Weghorst, Gary Hooper, and Donald P. Greenberg in "Improved Computational Methods for Ray Tracing", ACM TOG, Vol. 3, No. 1, January 1984, pages 52-69. Here they cast polygons onto a z-buffer, storing the ID of the closest item for each pixel. During ray tracing the z-buffer is then sampled for which items are probably hit by the eye ray. These are checked, and if one is hit you're done. If none are hit then a standard ray trace is performed. Incidentally, this is the method Wavefront uses for eye rays when they perform ray tracing. It's fairly useful, as Cornell's research found that there are usually more eye rays than reflection and refraction rays combined. There's still all those shadow rays, which was why I created the light buffer (but that's another story...see IEEE CG&A September 1986 if you're interested).

In the paper the authors do not describe how to insert non-polygonal objects into the buffer. In Weghorst's (and I assume Hooper's, too) thesis he describes the process, which is essentially casting the bounding box onto the screen and getting its x and y extents, then shooting rays within this extent at the object as a pre-process. This is the idea you outlined. However, theirs avoids all testing of the extents by doing the work as a per object (instead of per ray) preprocess. A per object basis means they don't have to test extents: all they do is loop through the extent itself and shoot rays at the object for each pixel.
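
[A C sketch of the eye-ray path with an item buffer. The buffer is just a screen-sized array of object IDs filled by a z-buffer style pass before tracing begins; all names here are hypothetical.]

/* item[y][x] holds the ID of the closest object visible through that pixel
   (NO_ITEM if only background shows), filled by a preprocessing pass. */
#define XRES    512
#define YRES    512
#define NO_ITEM (-1)

typedef struct Ray Ray;                                  /* opaque for this sketch */

extern int    item[YRES][XRES];
extern double intersect_object(int id, Ray *ray);        /* <= 0.0 means miss */
extern double trace_all_objects(Ray *ray, int *hit_id);  /* full hierarchy walk */

double trace_eye_ray(Ray *ray, int x, int y, int *hit_id)
{
    int    id = item[y][x];
    double t;

    if (id != NO_ITEM) {
        t = intersect_object(id, ray);       /* test the recorded item first */
        if (t > 0.0) {
            *hit_id = id;
            return t;
        }
    }
    return trace_all_objects(ray, hit_id);   /* otherwise do a standard trace */
}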

________

Efficient Polygon Intersection Question, from Mark VandeWettering

Another problem I have been considering arose from a profile of my raytracer when run on the "gears" database. A large amount of time (~40%) was spent in the polygon intersection code, a greater fraction than in other scenes that use polygons. The reason: the polygon intersection routine which you described in the Siggraph Course Notes is linear in the number of sides of the polygon. For the case of the gear, the number of sides is 144, which is a very large number.

Perhaps a better way of trying to intersect polygons is to decompose the complex polygons into triangles, and then arrange them in your favorite hierarchy scheme. The simplest way would be to subdivide prior to the raytracing in a preprocessing step. Several very quick algorithms exist for intersection with triangles, and I think that this may be a better way to implement polygon intersection.

"Back of the envelope" calculations:

Haines' method of intersection:         O(n) to intersect polygon
Triangular decomposition:               O(1) to intersect triangle
                                        * number of triangles searched
                                          inside your hierarchy scheme.

Assuming a good hierarchy, you can expect O(log n) triangles to be searched. The problem is finding the constants involved in this. I do suspect that this method may in fact be superior, because in the base case (intersecting a single triangle) the two methods are equivalent (actually, since the code may be streamlined for triangles, the second is probably better), and I expect that as the number of sides grows, the second will get better relative to the first.

I am torn between trying to formally analyze the run-time, and just going ahead and implementing the thing, and gaining performance information from that. Perhaps I will have some figures for you about my experience soon.
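
[For illustration, a C sketch of the decomposition step: a simple fan from the first vertex, which is only valid for convex polygons. Non-convex polygons (the gear outlines, for instance) need a general triangulation such as ear clipping instead. Types and names are hypothetical; the resulting triangles would then be fed to whatever hierarchy and ray-triangle test the tracer already uses.]

typedef struct { double x, y, z; } Vec3;
typedef struct { Vec3 v[3]; } Triangle;

/* Fan-triangulate a CONVEX polygon as a preprocess.
   Emits nvert-2 triangles into tris[]; returns the count. */
int fan_triangulate(Vec3 *vert, int nvert, Triangle *tris)
{
    int i;

    for (i = 1; i < nvert - 1; i++) {
        tris[i - 1].v[0] = vert[0];
        tris[i - 1].v[1] = vert[i];
        tris[i - 1].v[2] = vert[i + 1];
    }
    return nvert - 2;
}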

I would like to hear from anyone on the RT-News who has information on ray tracing superquadrics. I am especially interested in the numerical methods used to solve intersections, but any information would be useful.

[as I recall, Preparata talks about preprocessing polygons into trapezoids in his book _Computational Geometry_, leading to many fewer edges which need testing (each trapezoid has but two sides which can intersect, as the test ray is parallel to the other two edges). Any other solutions, anyone? -- EAH]

________

Bug in Paul Heckbert's Ray Tracer?

From Mark VandeWettering:

As I might have mentioned before, I modelled my raytracer after the one described in Heckbert's article "Writing a RayTracer". I have noticed some ambiguities/anomalies/bugs(?) that might be interesting to examine.

In Heckbert's code, there is some "weirdness" going on in the Shade procedure. The part of the "Shade" procedure which handles transparency has a comment something like:

/* hit[0].medium and hit[1].medium are entering and exiting media */

The transmission direction is then calculated using the index of refraction of the two media.

But hit[0].medium should be the medium that the ray originates in, not the medium of the object actually hit. Therefore, the indices of refraction are incorrect, and the transmission direction is also incorrect.

Perhaps Paul could comment on this. What seems correct is to keep hit[0] reserved for the medium that the ray originates in, and have hit[1] be the first hit along this ray. Is this what was intended?
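
[For reference, a C sketch of the transmission-direction calculation using the usual vector form of Snell's law. n1 is the index of the medium the ray is travelling in and n2 that of the medium being entered, which is exactly the bookkeeping the question above is about. The formula is standard; the names are made up for this sketch.]

#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Compute the refracted direction T for a unit incident direction I and a
   unit surface normal N pointing back toward the incoming ray.  n1 is the
   index of refraction of the medium the ray is in, n2 that of the medium
   being entered.  Returns 0 on total internal reflection. */
int refract(Vec3 I, Vec3 N, double n1, double n2, Vec3 *T)
{
    double eta  = n1 / n2;
    double cosi = -dot(N, I);
    double k    = 1.0 - eta * eta * (1.0 - cosi * cosi);

    if (k < 0.0)
        return 0;                           /* total internal reflection */
    T->x = eta * I.x + (eta * cosi - sqrt(k)) * N.x;
    T->y = eta * I.y + (eta * cosi - sqrt(k)) * N.y;
    T->z = eta * I.z + (eta * cosi - sqrt(k)) * N.z;
    return 1;
}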

________

A Tidbit from USENET

From: Ali T. Ozer

In article (10207@s.ms.uky.edu) sean@ms.uky.edu (Sean Casey) writes:
>Oh yeah, I hear that some of the commercial Amiga ray tracing software is
>being ported to the Mac II. These products have been around for a while, so
>it's a good chance for Mac users to get their hands on some already-evolved
>ray-tracing software.

For a lot higher price, though... I read that the Mac versions of Byte by Byte's Sculpt 3D and Animate 3D packages will start from $500.

Ali Ozer, aozer@NeXT.com



More on MTV's Public Domain Ray Tracer (features, bug fixes, etc)

________

Raytrace to Impress/Postscript Converter, by David Koblas

Contained is a shar for converting MRGB pictures to either impress or postscript depending on your needs (black and white).

{I'm looking for versatec plotter routines, if you have some I'd be interested}

[Ed. note: there is also a patch for this program posted to USENET.]

[as usual, the code is deleted for space. Check USENET or contact David for the program. - EAH]

name : David Koblas
place: MIPS Computer Systems
phone: 408-991-0287
uucp : {ames,decwrl,pyramid,wyse}!mips!koblas

________

Raytrace to X Image converter, by Paul Andrews

Here's a somewhat primitive program to display one of Mark's raytraced pics on an X display. There's no makefile, but then there's only one source file.

paul@torch.UUCP (Paul Andrews)

[again, code deleted for space. Check USENET or write Paul]

________

Better Shading Model for Raytracer, by David Koblas

A better shading model for the MTV raytracer [I probably should have posted this a while back, while I was sure it all worked]

The two big changes are a better shading model, including doing something different with diffuse reflection. You can specify the color of a light, and surfaces have ambient and absorbance values [default: no ambient and no absorption]. The "shine" value is now in the range 0.0 -> 1.0 instead of 0 -> infinity. On balls I ran a sed script like this: '/^f/s/ 35 / 0.2 /' and got close to the same results. Also, all components of a surface can be specified with r,g,b values.

Give it a try, and if you have any bugs/problems/suggestions, let me know and I'll give them a try/fix.

name : David Koblas
place: MIPS Computer Systems
phone: 408-991-0287
uucp : {ames,decwrl,pyramid,wyse}!mips!koblas

[code deleted for space: check USENET or write David for the new model]

________

From Irv Moy:

I have Mark VandeWettering's raytracer running on a Sun 3/260 and Version 2.4 of Eric Haines' SPD (I took the SPD that Mark posted and applied the patch that Eric posted to get Ver. 2.4). I display the output of the raytracer on a Targa 32; I had to add an extra byte in the output file for the Targa's alpha channel. The output of 'balls.c' looks great; I now have my very own "sphereflake"!!! I tried 'gears' at a size factor of 4 and the resulting output is quite dark. The background is a nice UNC blue but the gear surfaces are very dark and so is the reflecting polygon underneath the gears. Has anyone else tried to raytrace 'gears' with Mark's program yet??? Enquiring minds want to know.....(BTW, if you look closely at 'sphereflake', you can see Elvis (recursively, of course)).

                                Irv Moy
                                UUCP: ..!chinet!musashi
                                Internet: musashi@chinet.uucp

________

From Ron Hitchens:

   This may have some bearing on the problem:

vixen% ray -i gears.nff -o gears.pic -t
ray: (9345 prims, 5 lights)
ray: inputfile = "gears.nff"
ray: resolution 512 512
ray: after adding bounding volumes, 10516 prims
                                    ^^^^^

   From defs.h:

#define MAXPRIMS        (10000)
                         ^^^^^

I ran gears.nff last night and got the same results. I bumped MAXPRIMS to 11000 and ran it again, seemed to work fine. I only ran a 128x128 version, the resolution was so low that most of the gears looked like fuzzy blobs, but it seemed to be properly lighted and plenty colorful. I have a 512x512 run going now, should be finished in about 12 hours (I love my Sun 3/60FC, but it sure would be handy to have a Cray now and then).

> (BTW, if you look closely at 'sphereflake',
> you can see Elvis (recursively, of course)).

Naw, that's the spirit of Tom Snyder, Elvis is way too busy channelling through an unemployed truck driver in Muncie, Indiana.

To Mark VandeWettering: Hey, thanks for the ray tracer. I don't suppose you could send me a disk drive to store all these picture files on could you?

Ron Hitchens ronbo@vixen.uucp hitchens@cs.utexas.edu

________

From: Steve Holzworth

There is a bug in the screen.c routine of Mark's raytracer. Specifically, everywhere he does a malloc, the code is of the form:

foo = (Pixel *) malloc (xres * sizeof (Pixel)) + 1;

The actual intent is to allocate xres+1 Pixels, thusly:

foo = (Pixel *) malloc ((xres + 1) * sizeof (Pixel));

There are three occurrences of the former in the code; they should all be changed similarly to the latter. (Note: I never ran into this bug until I tried to run a 1024x1024 image. It worked fine on 512x512 or smaller images.)

Other than that, it's a good raytracer. Congrats, Mark! I'm working on a better lighting model and a better camera model. I'll send them on when (if) I finish them.

                                                Steve Holzworth
                                                rti!tachyon!sch

________

Teapot Database for Ray Tracing, by Ron Hitchens

Subject: Ray traced teapot

Below is a modification of a program that Dean S. Jones posted a few weeks ago that draws the well known teapot in wire frame using SunCore. I changed it so that it would use the same data to produce an NFF file that Mark VandeWettering's ray tracer can use. The result looks surprisingly good. Using the default step value of 6 is satisfactory, 12 looks very nice.

I'd like to know what's causing the little specks on the spout and the handle. I don't know if it's a problem with how this guy generates the NFF file, or some glitch in Mark's ray tracer. I don't have the time to investigate.

The original program that Dean posted was Sun-specific, since it used SunCore. This one is not: all it does is some computation before spitting out some text data, so it should run most anywhere. You'll probably need to remove the -f68881 from the makefile spec if you compile it on a non-Sun system, though.

   Enjoy.

Ron Hitchens ronbo@vixen.uucp hitchens@cs.utexas.edu

[code deleted for space. Check USENET or write Ron Hitchens for the code]

________

From Mark VandeWettering (to me):

Your final comments regarding Kay/Kajiya BVs were basically in line with the thinking that I have done, and with the current state of my raytracer. I now provide cutoffs for shadow testing, and cull objects immediately if they are beyond the maximum distance that we need to look.

This also allows me to implement some of the "shadow caching" and other optimizations suggested by you in the March 28, 1988 RT-News. Most of these were trivial to implement, and will be incorporated in a better/stronger/faster version of my raytracer.
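
[A C sketch of the shadow-cache idea, for readers who missed that issue: each light remembers the last object that blocked one of its shadow rays, and that object is tested first on the next shadow ray before any hierarchy traversal. Names and structures are hypothetical.]

typedef struct Object Object;
typedef struct Ray Ray;

typedef struct {
    double  pos[3];
    Object *cache;              /* last object found to block a shadow ray */
} Light;

extern int     shadow_hit(Object *obj, Ray *sray, double maxdist);
extern Object *trace_shadow(Ray *sray, double maxdist);   /* full search */

/* Returns 1 if the point generating sray is in shadow with respect to light. */
int in_shadow(Light *light, Ray *sray, double dist_to_light)
{
    Object *blocker;

    if (light->cache != NULL &&
        shadow_hit(light->cache, sray, dist_to_light))
        return 1;                            /* cached occluder still blocks */

    blocker = trace_shadow(sray, dist_to_light);
    light->cache = blocker;                  /* remember (or clear) the occluder */
    return blocker != NULL;
}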

--

Gosh, I just can't keep quiet, can I? I just wanted you to know that a new and improved version of my raytracer is available for anonymous ftp. It employs some of the stuff regarding Kay/Kajiya bounding volumes, and shadow caches for an improvement in speed as well (roughly a 30% improvement). I can now do the sphereflake in less than 5 hours on a Sun 3 w/68881 coprocessor.

For the future, I am thinking of CSG, antialiasing, and Goldsmith and Salmon style hierarchy generation. Something that has been put off, but that I would like to include, is more complex primitives, but I just can't deal with numerical analysis at the moment :-)

Soon it will be back to the world of functional programming and my thesis so I better get this all done. *sigh*

--

New Ideas: an ObjectDesc -> NFF compiler

One possible project that I have thought of doing is an Object to NFF compiler. Its input could be a procedural language used to define hierarchical objects, with facilities for rotation, translation and scaling. The output would be an NFF file for the scene.

For instance, we might have primitive object types CUBE, SPHERE, POLYGON and CONE. Each of these might represent the canonical "unit" primitive. We could then build new objects out of these primitives.

A hypothetical example program to create a checkerboard might be:

#
# checkboard.obj
#
define object check {
        polygon (0.0 0.0 0.0)
                (1.0 0.0 0.0)
                (1.0 1.0 0.0)
                (0.0 1.0 0.0) ;
        }
#
# Check4 contains 4 squares...
#
define object check4 {
        check, color white ;
        check, translate(1.0, 0.0, 0.0), color black ;
        check, translate(0.0, 1.0, 0.0), color white ;
        check, translate(1.0, 1.0, 0.0), color black ;
        }
#
# Board 4 is 1/4 of a checkerboard...
#
define object board4 {
        check4 ;
        check4, translate(2.0, 0.0, 0.0) ;
        check4, translate(0.0, 2.0, 0.0) ;
        check4, translate(2.0, 2.0, 0.0) ;
        }

#
# Board is a full sized checkerboard...
#
define object board {
        board4 ;
        board4, translate(4.0, 0.0, 0.0) ;
        board4, translate(0.0, 4.0, 0.0) ;
        board4, translate(4.0, 4.0, 0.0) ;
        }

#
# the scene to be rendered...
#

define scene {
        board ;
        }

--

I would also like it to support CSG, and maybe even procedural features such as looping constructs. I don't know if I will get up enough steam to implement this, but it would make scenes easier to specify for the average user.

Ideally, such a language would be interesting to use for specifying motion as well, although I have no real ideas about the ideal way to specify (or implement) this.



Neutral File Format (NFF), by Eric Haines

[This is a description of the format used in the SPD package. Any comments on how to expand this format are appreciated. Some extensions seem obvious to me (e.g. adding directional lights, circles, and tori), but I want to take my time, gather opinions, and get it more-or-less right the first time. -EAH]

Draft document #1, 10/3/88

The NFF (Neutral File Format) is designed as a minimal scene description language. The language was designed in order to test various rendering algorithms and efficiency schemes. It is meant to describe the geometry and basic surface characteristics of objects, the placement of lights, and the viewing frustum for the eye. Some additional information is provided for esthetic reasons (such as the color of the objects, which is not strictly necessary for testing rendering algorithms).

Future enhancements include: circle and torus objects, spline surfaces with trimming curves, directional lights, characteristics for positional lights, CSG descriptions, and probably more by the time you read this. Comments, suggestions, and criticisms are all welcome.

At present the NFF file format is used in conjunction with the SPD (Standard Procedural Database) software, a package designed to create a variety of databases for testing rendering schemes. The SPD package is available from Netlib and via ftp from drizzle.cs.uoregon.edu. For more information about SPD see "A Proposal for Standard Graphics Environments," IEEE Computer Graphics and Applications, vol. 7, no. 11, November 1987, pp. 3-5.

By providing a minimal interface, NFF is meant to act as a simple format to allow the programmer to quickly write filters to move from NFF to the local file format. Presently the following entities are supported:

    A simple perspective frustum
    A positional (vs. directional) light source description
    A background color description
    A surface properties description
    Polygon, polygonal patch, cylinder/cone, and sphere descriptions

Files are output as lines of text. For each entity, the first line defines its type. The rest of the first line and possibly other lines contain further information about the entity. Entities include:

"v"  - viewing vectors and angles
"l"  - positional light location
"b"  - background color
"f"  - object material properties
"c"  - cone or cylinder primitive
"s"  - sphere primitive
"p"  - polygon primitive
"pp" - polygonal patch primitive

These are explained in depth below:

Viewpoint location.  Description:
    "v"
    "from" Fx Fy Fz
    "at" Ax Ay Az
    "up" Ux Uy Uz
    "angle" angle
    "hither" hither
    "resolution" xres yres

Format:

    v
    from %g %g %g
    at %g %g %g
    up %g %g %g
    angle %g
    hither %g
    resolution %d %d

The parameters are:

    From:  the eye location in XYZ.
    At:    a position to be at the center of the image, in XYZ world
           coordinates.  A.k.a. "lookat".
    Up:    a vector defining which direction is up, as an XYZ vector.
    Angle: in degrees, defined as from the center of top pixel row to
           bottom pixel row and left column to right column.
    Resolution: in pixels, in x and in y.

Note that no assumptions are made about normalizing the data (e.g. the from-at distance does not have to be 1). Also, vectors are not required to be perpendicular to each other.

For all databases some viewing parameters are always the same:

    Yon is "at infinity."
    Aspect ratio is 1.0.

A view entity must be defined before any objects are defined (this requirement is so that NFF files can be used by hidden surface machines).

________

Positional light. A light is defined by XYZ position. Description: "l" X Y Z

Format:
    l %g %g %g

All light entities must be defined before any objects are defined (this requirement is so that NFF files can be used by hidden surface machines). Lights have a non-zero intensity of no particular value [this definition may change soon, with the addition of an intensity and/or color].

________

Background color. A color is simply RGB with values between 0 and 1: "b" R G B

Format:
    b %g %g %g

If no background color is set, assume RGB = {0,0,0}.

________

Fill color and shading parameters.  Description:
     "f" red green blue Kd Ks Shine T index_of_refraction

Format:
    f %g %g %g %g %g %g %g %g

    RGB is in terms of 0.0 to 1.0.

Kd is the diffuse component, Ks the specular, Shine is the Phong cosine power for highlights, T is transmittance (fraction of light passed per unit). Usually, 0 <= Kd <= 1 and 0 <= Ks <= 1, though it is not required that Kd + Ks == 1. Note that transmitting objects ( T > 0 ) are considered to have two sides for algorithms that need these (normally objects have one side).

The fill color is used to color the objects following it until a new color is assigned.
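
[The format deliberately does not pin down a shading equation. As a point of reference, here is one common reading of these parameters in C, where N is the surface normal, L the direction to a light, and R the mirror reflection of the eye ray; treat the exact combination as an assumption on the editor's part, not as part of the format. Reflected and transmitted rays would then be weighted by Ks and T, with the transmitted ray bent by the index of refraction.]

#include <math.h>

/* One plausible per-light shading term using the "f" entity's parameters.
   Vectors are assumed unit length; color channels are handled one at a time. */
double shade_channel(double Cs,     /* surface color channel (red/green/blue) */
                     double Cl,     /* light color/intensity for that channel */
                     double Kd, double Ks, double shine,
                     double NdotL,  /* cosine between normal and light dir */
                     double RdotL)  /* cosine between reflection and light dir */
{
    double diffuse  = (NdotL > 0.0) ? Kd * NdotL * Cs : 0.0;
    double specular = (RdotL > 0.0) ? Ks * pow(RdotL, shine) : 0.0;

    return Cl * (diffuse + specular);
}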

________

Objects: all objects are considered one-sided, unless the second side is needed for transmittance calculations (e.g. you cannot throw out the second intersection of a transparent sphere in ray tracing).

Cylinder or cone. A cylinder is defined as having a radius and an axis defined by two points, which also define the top and bottom edge of the cylinder. A cone is defined similarly, the difference being that the apex and base radii are different. The apex radius is defined as being smaller than the base radius. Note that the surface exists without endcaps. The cone or cylinder description:

    "c"
    base.x base.y base.z base_radius
    apex.x apex.y apex.z apex_radius

Format:
    c
    %g %g %g %g
    %g %g %g %g

A negative value for both radii means that only the inside of the object is visible (objects are normally considered one sided, with the outside visible). Note that the base and apex cannot be coincident for a cylinder or cone.

________

Sphere. A sphere is defined by a radius and center position: "s" center.x center.y center.z radius

Format:
    s %g %g %g %g

If the radius is negative, then only the sphere's inside is visible (objects are normally considered one sided, with the outside visible).

________

Polygon. A polygon is defined by a set of vertices. With these databases, a polygon is defined to have all points coplanar. A polygon has only one side, with the order of the vertices being counterclockwise as you face the polygon (right-handed coordinate system). The first two edges must form a non-zero convex angle, so that the normal and side visibility can be determined. Description:

    "p" total_vertices
    vert1.x vert1.y vert1.z
    [etc. for total_vertices vertices]

Format:
    p %d
    [ %g %g %g ] <-- for total_vertices vertices
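
[For completeness, a small C sketch of how the requirement on the first two edges gets used: the plane normal is simply the cross product of those two edges, which degenerates if they are collinear. Names are the editor's, not part of the format.]

#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b)
{
    Vec3 r; r.x = a.x - b.x; r.y = a.y - b.y; r.z = a.z - b.z; return r;
}

/* Polygon normal from the first three vertices (counterclockwise order). */
Vec3 polygon_normal(Vec3 *vert)
{
    Vec3 e1 = sub(vert[1], vert[0]);
    Vec3 e2 = sub(vert[2], vert[1]);
    Vec3 n;
    double len;

    n.x = e1.y * e2.z - e1.z * e2.y;
    n.y = e1.z * e2.x - e1.x * e2.z;
    n.z = e1.x * e2.y - e1.y * e2.x;
    len = sqrt(n.x * n.x + n.y * n.y + n.z * n.z);   /* non-zero by the rule above */
    n.x /= len; n.y /= len; n.z /= len;
    return n;
}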

________

Polygonal patch. A patch is defined by a set of vertices and their normals. With these databases, a patch is defined to have all points coplanar. A patch has only one side, with the order of the vertices being counterclockwise as you face the patch (right-handed coordinate system). The first two edges must form a non-zero convex angle, so that the normal and side visibility can be determined. Description:

    "pp" total_vertices
    vert1.x vert1.y vert1.z norm1.x norm1.y norm1.z
    [etc. for total_vertices vertices]

Format:
    pp %d
    [ %g %g %g %g %g %g ] <-- for total_vertices vertices

________

Comment.  Description:
    "#" [ string ]

Format:
    # [ string ]

As soon as a "#" character is detected, the rest of the line is considered a comment.
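
[To tie the pieces together, here is a small hand-written example file using the entities above: a view, one light, a background color, and a red sphere resting on a grey square. It is purely illustrative and is not part of the SPD distribution.]

# a red sphere of radius 1 sitting on a grey square, one light overhead
v
from 0 3 -10
at 0 1 0
up 0 1 0
angle 45
hither 0.1
resolution 256 256
l 5 10 -5
b 0.1 0.2 0.4
f 0.8 0.2 0.2 0.7 0.3 30 0 1
s 0 1 0 1
f 0.7 0.7 0.7 0.8 0.2 5 0 1
p 4
-4 0 -4
-4 0 4
4 0 4
4 0 -4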



Eric Haines / erich@acm.org