On Tuesday 01 February 2005 19:31, Timothy Miller wrote:
> Daniel Phillips wrote:
> > We don't actually have to follow the edicts of OpenGL exactly so long as 
> > glReadPixels can recover what OpenGL wants to see.  For ortho 
> > projection we essentially want to have transformed Z's in the depth 
> > buffer instead of 1/W.
> 
> Well, screen Z is [Z/(1+Z/d)].  While world Z can be anything, as world 
> Z increases, screen Z asymptotically approaches d.  For comparison 
> purposes, the depth ordering is the same, but there is a massive loss of 
> precision at large world Z values in terms of screen Z.
> 
> > By the way, is 1/W in the depth buffer going to be fixed point or 
> > floating point?  I suggest keeping things simple and always using 24 or 
> > 32 bit fixed point.  If 24 bit, the left over 8 bits can be used for 
> > stencil or alpha.  Again, what the hardware does doesn't have to 
> > reflect the OpenGL state exactly, so long as the driver can make it 
> > appear to.
> 
> I was going to use float25 (16 bit mantissa), because it's not so easily 
> predictable what the range of precision should be.  No one's ever made 
> any suggestion about what values 'd' might have, plus d is kinda just an 
> artifact of the fact that I'm working in device coordinates.  It's 
> unfortunate that we have to waste bits on the exponent, but I'm not sure 
> we have a choice.
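(For a feel of what a 16-bit mantissa buys: a rough sketch, using a hypothetical round_mantissa helper to emulate rounding a double's mantissa down to 16 bits:)

```python
# Approximate the relative precision of the proposed "float25" format
# by rounding a value's mantissa to 16 bits.  round_mantissa is a
# made-up helper for illustration, not anything from the hardware spec.
import math

def round_mantissa(x, bits=16):
    m, e = math.frexp(x)            # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 1 << bits
    return math.ldexp(round(m * scale) / scale, e)

# Relative error stays below about 2**-16 regardless of magnitude,
# which is the point of spending bits on an exponent.
print(round_mantissa(0.1))
```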
> 
> > 
> > In my experience, a 16 bit depth buffer is not useful for much more than 
> > contrived demos.  With a 1/W depth buffer, planes start to 
> > interpenetrate horribly as soon as the viewpoint recedes only a short 
> > distance, or else you have to place the front clipping plane 
> > unacceptably far away.  This problem could be alleviated by using 
> > floating point 1/W in the depth buffer, though I suspect that would 
> > introduce a different and nastier plethora of precision artifacts, not 
> > to mention bloating up the hardware.
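(To make the 16-bit 1/W complaint concrete — a rough sketch with a made-up quantize_inv_w helper, finding where two surfaces one unit apart start landing in the same depth bucket:)

```python
# Illustrative only: store 1/w in an unsigned 16-bit fixed-point buffer
# and find the first depth at which two surfaces 1 unit apart quantize
# to the same value, i.e. where interpenetration artifacts begin.

def quantize_inv_w(w, bits=16):
    """Quantize 1/w to an unsigned fixed-point value of the given width."""
    levels = (1 << bits) - 1
    return round((1.0 / w) * levels)

w = 1.0
while quantize_inv_w(w) != quantize_inv_w(w + 1.0):
    w += 1.0
print(w)  # first depth at which the two surfaces collide
```

The collision shows up only a few hundred units out, which matches the "recedes only a short distance" observation above.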
> 
> According to Hugh Fisher, our resident graphics professor on the list, a 
> paper was published a while back that showed that W-buffering had 
> demonstrably better results than Z-buffering.  But it wasn't clear to me 
> if they meant W or 1/W.  (I didn't see the paper.)
> 
> Anyhow, screw it.  I'm going to do Z buffering, because that's what 
> everyone wants.  If any adjustment to the numbers would improve 
> precision, it can probably be substituted for Z.
> Additionally, I'm going to store screen Z in the depth buffer, not world 
> Z, because I can't compute world Z in hardware without an even greater 
> precision loss.

Some FYIs:
1. I don't care which format (i.e. float, fixed, mantissas, whatever) the Z 
buffer uses.
2. Storing screen Z is correct.
2b. When computer graphics people talk about a W buffer, they are normally 
talking about a buffer that stores world Z. Quoting for example the 
Z-Buffering article on Wikipedia: "To implement a w-buffer, the old values 
of z in camera space, or w, are stored in the buffer, generally in floating 
point format." (Man, do we have communications problems on this list.)
3. Not multiplying the interpolated screen Z value with 1/W (the perspective 
correction thing) is indeed correct. This means that the Z values written 
out to the depth buffer are indeed linear. That's what people mean when 
they say Z is linear in screen space.
4. The Z value from OpenGL is always in the range [0,1]. The way OpenGL 
defines clipping results in normalized device coordinates (i.e. screen 
coordinates without taking the screen size into account) that are always in 
the range [-1,1]. The viewport transform translates these in a way that 
guarantees that Z is in the range [0,1].
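(Points 3 and 4 sketched numerically. This is a hedged illustration: the vertex values and helper names below are made up, and the depth-range mapping assumes the default glDepthRange(0, 1):)

```python
# Point 3: across a screen-space span, depth is interpolated linearly
# as-is, while a perspective-correct attribute (e.g. a texture
# coordinate) must be interpolated as attr * (1/w) and divided back by
# the interpolated 1/w.

def lerp(a, b, t):
    return a + (b - a) * t

# Two projected vertices: (screen z, 1/w, texcoord) -- made-up numbers.
z0, inv_w0, u0 = 0.2, 1.0, 0.0
z1, inv_w1, u1 = 0.8, 0.25, 1.0

t = 0.5
depth = lerp(z0, z1, t)  # linear in screen space: what gets written out
u = lerp(u0 * inv_w0, u1 * inv_w1, t) / lerp(inv_w0, inv_w1, t)

# Point 4: the viewport transform maps NDC z in [-1, 1] to window z in
# [0, 1], assuming the default glDepthRange(0, 1).
def ndc_to_window_z(z_ndc, near=0.0, far=1.0):
    return (far - near) * 0.5 * z_ndc + (far + near) * 0.5

print(depth, u)
print(ndc_to_window_z(-1.0), ndc_to_window_z(1.0))
```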

I'm going to leave it at that for today on this list. This thread has been 
far too busy already; I have the impression it needs to cool down a bit.

cu,
Nicolai


_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
