On Saturday 29 January 2005 18:35, Nicolai Haehnle wrote:
> > This is problematic.  For instance, the chip does W-buffering,
> > while Mesa probably does Z-buffering.  We'd have to rewrite parts
> > of Mesa in order to be able to fall back on it and still have it
> > WORK.
>
> Well, don't say you haven't been warned because this is not the first
> time this has come up.
>
> The fact is, *all* 3D APIs expose their depth buffer. OpenGL (and
> Direct3D) specify a Z-buffer. Applications can read this Z-buffer -
> for example using glReadPixels with GL_DEPTH_COMPONENT, and there are
> probably less straightforward ways to do it as well. So we *MUST* be
> able to produce
> compliant Z values. There's simply no way around it.

This really sucks.  As far as I can see, Z is completely useless and 1/W 
is the only coordinate worth calculating.  Are you sure a careful 
reading of the OpenGL spec doesn't permit omitting Z?  (I'm just 
grasping at straws here)

If someone knows any precision argument for using 1/Z instead of 1/W, 
I'd like to hear it.
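For what it's worth, under the standard glFrustum-style projection the two largely coincide: clip-space W equals -z_eye, so NDC depth is an affine function of 1/W, and the familiar "precision piles up near the near plane" behaviour follows either way. A small sketch of that arithmetic (the function and parameter names here are my own, not from this thread):

```python
# Sketch: with a standard glFrustum perspective projection,
# w_clip = -z_eye, so NDC depth is affine in 1/w.  Depth precision
# therefore clusters near the near plane whether you think of it
# as 1/Z or 1/W.  (n, f and ndc_depth are illustrative names.)

def ndc_depth(z_eye, n, f):
    """NDC z for eye-space z in [-f, -n], standard glFrustum terms."""
    z_clip = -(f + n) / (f - n) * z_eye - 2.0 * f * n / (f - n)
    w_clip = -z_eye
    return z_clip / w_clip

n, f = 1.0, 100.0

# The near and far planes map to -1 and +1 as the spec requires:
assert abs(ndc_depth(-n, n, f) - (-1.0)) < 1e-9
assert abs(ndc_depth(-f, n, f) - 1.0) < 1e-9

# NDC depth is affine in 1/w:  z_ndc = A + B * (1/w),  w = -z_eye.
A = (f + n) / (f - n)
B = -2.0 * f * n / (f - n)
for z in (-1.5, -10.0, -42.0, -99.0):
    w = -z
    assert abs(ndc_depth(z, n, f) - (A + B / w)) < 1e-9

# Half of the NDC depth range is spent very close to the near plane:
# solving A + B/w = 0 gives w = 2fn/(f+n), about 1.98 for n=1, f=100.
w_mid = 2.0 * f * n / (f + n)
print(w_mid)
```

So any precision argument between "1/Z" and "1/W" would have to come from a non-standard projection; for the usual perspective matrix they carry the same information.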

> And once we're able to produce compliant Z values, we can just use
> Mesa for software rendering.
>
> I'm curious what the rationale for the current design was, because I
> honestly don't remember. There is one interpolant less than with a Z
> buffer. Was there anything else?

If you omit all the Z's then homogeneous transformation only needs a 3x4 
matrix, saving 25% of transformation cost.  The savings in interpolation 
cost are even greater, since there is one fewer value to interpolate 
across every triangle.
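The 25% figure can be spelled out directly: a full homogeneous transform is a 4x4 matrix (16 multiplies per vertex), while dropping the Z output row leaves a 3x4 matrix producing only (x, y, w). A quick sketch (names are illustrative, not from the thread):

```python
# Sketch of the 25% claim: counting multiplies for a 4x4 transform
# versus a 3x4 transform that drops the z output row.
# (transform, m4, m3 are illustrative names.)

def transform(matrix, v):
    """Multiply an NxM matrix by an M-vector; return (result, multiply count)."""
    out = [sum(row[i] * v[i] for i in range(len(v))) for row in matrix]
    return out, len(matrix) * len(v)

v = [1.0, 2.0, 3.0, 1.0]                 # homogeneous vertex (x, y, z, 1)
m4 = [[1, 0, 0, 0],
      [0, 1, 0, 0],
      [0, 0, 1, 0],
      [0, 0, 0, 1]]
m3 = m4[:2] + m4[3:]                     # keep the x, y and w rows; drop z

_, full = transform(m4, v)
_, reduced = transform(m3, v)
assert full == 16 and reduced == 12
print(1 - reduced / full)                # prints 0.25: the 25% savings
```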

Regards,

Daniel
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)