On Tuesday 01 February 2005 10:23, Timothy Miller wrote:
Nicolai Haehnle wrote:
On Monday 31 January 2005 18:01, Daniel Phillips wrote:
Of course. However, this is just an optimized way of dividing several parameters. For perspective interpolation you have to interpolate the inverses of the parameters, and divide by the interpolated inverse of W. This does not appear to be handled correctly in the model.
The formula for perspective correct interpolation is:
f = F(t) / M(t), where F(t) = (1-t)f_0/w_0 + tf_1/w_1 and M(t) = (1-t)/w_0 + t/w_1
Both F and M can be linearly interpolated, and before the actual fragment pipeline is entered, divide F/M. This is exactly what the software model does already, except that M is called W.
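To make the formula concrete, here is a minimal sketch of that scheme (the function name `persp_interp` is just for illustration, not anything in the model): interpolate f/w and 1/w linearly, then do one divide per fragment.

```python
def persp_interp(f0, w0, f1, w1, t):
    """Perspective-correct interpolation of attribute f at parameter t."""
    F = (1 - t) * f0 / w0 + t * f1 / w1   # linear in f/w
    M = (1 - t) / w0 + t / w1             # linear in 1/w (what the model calls W)
    return F / M

# Endpoints: f=0 at w=1, f=1 at w=4. A naive linear interpolation at
# t=0.5 would give 0.5; the perspective-correct value is pulled toward
# the near (small-w) endpoint:
print(persp_interp(0.0, 1.0, 1.0, 4.0, 0.5))  # 0.2
```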
I think the suggested optimization was to do some linear interpolation AFTER perspective divide. This would save on multipliers and other logic.
Well, there was that in another thread, but there was also me misreading the sample render code. I now think it's correct.
There's really nothing wrong with perspective correction in the software model as far as I can see, the problem lies with Z as I explained in the other thread ("Depth buffer"). Nobody has taken up the challenge of figuring out how to do orthographic projection correctly with the current software model (I'm saying it can't be done, because where OpenGL specifies two independent variables, the software model only has one).
Are you talking about Z and W or are you talking about W and Q?
Z and W can be easily interconverted, and we can do correct perspective correction (is this what you're calling 'orthographic'?). It's projective textures that we can't do, because we don't have Q.
Orthographic projection can't be expressed in terms of perspective projection because it would require an infinitely long focal length to bring the side clipping planes exactly parallel.
Sorry, I haven't got time to sort out the minutiae of this just now, but in general terms, we need to jiggle the official glOrtho matrix a bit to end up with transformed Z's substituted for transformed W's (nearly the same as for non-ortho transformation) and substitute 1 for W in all perspective divisions.
For reference, glOrtho is defined nicely in mesa/progs/util/matrix.c.
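For reference, this is the standard glOrtho matrix from the OpenGL spec (the same one mesa/progs/util/matrix.c builds), sketched in Python; `gl_ortho` and `transform` are illustrative names only. The key property for the discussion above is the bottom row (0, 0, 0, 1): clip-space W always comes out as 1, so the perspective divide is a no-op.

```python
def gl_ortho(l, r, b, t, n, f):
    """Row-major 4x4 glOrtho matrix as defined by the OpenGL spec."""
    return [
        [2.0 / (r - l), 0.0, 0.0, -(r + l) / (r - l)],
        [0.0, 2.0 / (t - b), 0.0, -(t + b) / (t - b)],
        [0.0, 0.0, -2.0 / (f - n), -(f + n) / (f - n)],
        [0.0, 0.0, 0.0, 1.0],   # W_out is always 1: divide by W is a no-op
    ]

def transform(m, v):
    """Apply a 4x4 matrix to a 4-component column vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

x, y, z, w = transform(gl_ortho(-1, 1, -1, 1, 0.1, 100.0),
                       [0.0, 0.0, -1.0, 1.0])
# w is exactly 1.0 here, which is why an ortho path can simply
# substitute 1 for W in all perspective divisions.
```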
For us, the big application of orthographic projection is 2D rendering. I think we want to be able to do Z buffering in the 2D pipeline, even though there's nothing in X itself that would require it, so we might as well use glOrtho and have the 2D pipeline really be exactly the same as the 3D pipeline.
I've decided that I'm going to add Z into the rasterizer. The only thing that isn't clear to me is whether I need to store Z (world) or Z/W (screen coordinates) in the depth buffer.
Therein lies a problem. Since the reciprocal isn't precise (we use only 10 mantissa bits when computing it), computing world Z (which is screen Z divided by 1/W) would result in an inaccurate value (not that it'll be all that great to begin with).
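A rough sketch of that precision concern, assuming a reciprocal unit that keeps only 10 mantissa bits (the `quantize` helper is a hypothetical model of such a unit, not the actual hardware rounding): the error in the coarse 1/W propagates directly into the recovered world Z.

```python
import math

def quantize(x, mantissa_bits=10):
    """Round x to `mantissa_bits` bits of mantissa (round-to-nearest).
    A crude stand-in for a low-precision hardware reciprocal result."""
    e = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (mantissa_bits - e)
    return round(x * scale) / scale

w = 7.3
z_world = 5.0
recip = quantize(1.0 / w)        # the 10-bit 1/W the rasterizer carries
z_screen = z_world / w           # exact screen-space Z
z_recovered = z_screen / recip   # world Z recovered via the coarse reciprocal
rel_err = abs(z_recovered - z_world) / z_world
print(rel_err)                   # relative error on the order of 2**-11
```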
BTW, I think I'm going to go through the model and rename everything called 'W' to 'M'. The reason is to eliminate some confusion. W will always mean exactly the same thing, which is [1 + Zworld/d]. Since we rasterize 1/W, I'll call it M. Comments?
Another thing I remain confused about is rounding: if something is exactly 0.5, do we round up or down?
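For reference, the two common conventions for the exactly-0.5 case (a quick sketch; `round_half_up` is just an illustrative helper): round-half-up always rounds ties toward +infinity, while round-half-to-even (the IEEE 754 default, and what Python's built-in round() does) rounds ties to the nearest even integer, avoiding a systematic upward bias over many samples.

```python
import math

def round_half_up(x):
    """Round to nearest integer, ties toward +infinity."""
    return math.floor(x + 0.5)

print(round_half_up(2.5))  # 3
print(round_half_up(3.5))  # 4
print(round(2.5))          # 2  (ties-to-even)
print(round(3.5))          # 4
```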
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
