John Wright wrote:
> find it difficult to believe that a graphics card could calculate all
> pixel locations for potentially tens of thousands of polygons for nearly
> a 1 million pixel display (1024x768) all in 1/100th of a second? I'm
> guessing that they somehow optimize this to reduce the raw power needed.
Yes, that is something done by the video card. PC-type cards have had
hardware Z-buffers since at least 1995, back when I had my old Matrox
Impression Plus. Depth testing is a very simple thing - you don't have to
do it for every single pixel.
The simplified algorithm for how this is done is (see the sketch after
the list):
- Is the entire polygon visible or not? The card does this by looking at
the average depth of the polygon. There are different approaches to this,
including full polygon depth sorting in view-space coordinates, but the
simplest is to convert the vertices to screen space and test the z value
of each vertex against the depth buffer. If all the vertices are deeper
than the current values, toss the polygon away.
- If the entire polygon is in front, start to render it, scanline
fashion. As you calculate the values, you can check the start and end
points of the current span of the face. If both are deeper than the
current depth values, toss the span away. If both are nearer, draw the
entire span and skip the depth calculations - just do lighting.
- If the polygon intersects the other polygon(s), scanline render it and
test the depth at every pixel. Some implementations will attempt to
segment the polygon into entirely visible and entirely invisible parts
and then throw away the invisible part.
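To make those shortcuts concrete, here is a rough sketch of the trivial
reject and the per-pixel fallback in plain Java. Every name in it
(DepthBufferSketch, Vertex, depth[], drawSpan) is mine for illustration -
none of it is Java 3D or driver API, and a real card does all of this in
hardware, in parallel, usually with fixed-point depth values rather than
floats:

public class DepthBufferSketch {

    /** A vertex already transformed into screen space: x, y in pixels, z as depth. */
    static class Vertex {
        int x, y;
        float z;
        Vertex(int x, int y, float z) { this.x = x; this.y = y; this.z = z; }
    }

    final int width, height;
    final float[] depth;            // one depth value per pixel, initialised to "far"

    DepthBufferSketch(int width, int height) {
        this.width = width;
        this.height = height;
        depth = new float[width * height];
        java.util.Arrays.fill(depth, Float.MAX_VALUE);
    }

    /**
     * Step 1: trivial reject. Assumes the polygon has already been clipped
     * to the screen. If every vertex sits behind what is already stored at
     * its pixel, the whole polygon can be tossed without rendering it.
     */
    boolean triviallyHidden(Vertex[] poly) {
        for (Vertex v : poly) {
            if (v.z < depth[v.y * width + v.x])
                return false;       // at least one vertex might still be visible
        }
        return true;
    }

    /** Step 3 fallback: test the depth at every pixel along one scanline span. */
    void drawSpan(int y, int xStart, int xEnd, float zStart, float zEnd) {
        float dz = (zEnd - zStart) / Math.max(1, xEnd - xStart);
        float z = zStart;
        for (int x = xStart; x <= xEnd; x++, z += dz) {
            int i = y * width + x;
            if (z < depth[i]) {     // nearer than what is already there
                depth[i] = z;
                // ...shade the pixel here (lighting, texturing, etc.)
            }
        }
    }
}

The middle case from the list is just drawSpan with the per-pixel test
skipped, since both endpoints are already known to be in front.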
Where you always run into problems is floating point accuracy. If you
get two polygons that are very close together in relative depth, the
rounding errors inherent in floating point calculations allow a fair
amount of wiggle room. Thus you get z-buffer tearing when you end up
with two co-planar polygons, or right at the intersection of two
intersecting polygons. The tearing comes down to this floating point
precision.

Effectively, what the depth buffer does is allocate an int value of
either 16 or 32 bits for each pixel on screen. Each value of the int
becomes one depth value, so each depth increment is
(back_clip_distance - front_clip_distance) / number_of_int_values,
where the number of values is 2^16 or 2^32 depending on the buffer
size. What happens is that your card determines the exact depth as a
float and then casts it back to one of the int values. Then, to
determine whether the new value is deeper than the current value, it is
a simple integer comparison rather than a vector calculation. Thus,
with two depth values close together, both get reduced to the same
depth buffer int value, and that is where the inaccuracy shows up.
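To put some numbers on that, here is a minimal sketch of the
quantisation, assuming a 16-bit buffer and a simple linear mapping from
the clip range onto integer values (real hardware generally spends more
of the precision near the front clip plane, but the bucketing effect is
the same). The class and method names are mine, not any real API:

public class DepthQuantisation {

    /** Cast an exact float depth into one of the 2^bits integer buckets. */
    static int toBufferValue(float z, float near, float far, int bits) {
        long steps = 1L << bits;                       // number of distinct int values
        float increment = (far - near) / steps;        // world-space size of one step
        long value = (long) ((z - near) / increment);  // exact float -> int bucket
        return (int) Math.min(steps - 1, Math.max(0, value));
    }

    public static void main(String[] args) {
        float near = 0.1f, far = 1000.0f;
        // Two surfaces only 0.005 units apart, a long way into the scene:
        int a = toBufferValue(500.000f, near, far, 16);
        int b = toBufferValue(500.005f, near, far, 16);
        // With 16 bits each step is about 0.015 units here, so both depths
        // collapse into the same bucket and the comparison can no longer
        // tell which surface is in front - hence the tearing.
        System.out.println(a + " vs " + b + " -> same bucket: " + (a == b));
    }
}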
--
Justin Couch http://www.vlc.com.au/~justin/
Freelance Java Consultant http://www.yumetech.com/
Author, Java 3D FAQ Maintainer http://www.j3d.org/
-------------------------------------------------------------------
"Humanism is dead. Animals think, feel; so do machines now.
Neither man nor woman is the measure of all things. Every organism
processes data according to its domain, its environment; you, with
all your brains, would be useless in a mouse's universe..."
- Greg Bear, Slant
-------------------------------------------------------------------