Hi Timothy,
On Monday 31 January 2005 10:59, Timothy Miller wrote:
What I WANT to do is define "round" so that it rounds up if the parameter is exactly 0.5. The hardware for that is MUCH simpler than the alternative.
Just as trunc(X) + 1 is easier than ceiling. However, does OpenGL permit this? I seem to recall something about an "upper left" rule for screen coordinate rounding.
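To make the two behaviors concrete, here's a small sketch (my own illustration, not anything from the hardware model): round-half-up is just floor(x + 0.5), which needs only an adder and a truncation in hardware, while Python's built-in round() does round-half-to-even, which needs tie detection.

```python
import math

def round_half_up(x):
    # Add 0.5 and truncate toward -infinity: exactly 0.5 always rounds up.
    # (This float-based sketch ignores edge cases near the ulp boundary.)
    return math.floor(x + 0.5)

print(round_half_up(2.5))   # 3
print(round_half_up(3.5))   # 4
print(round(2.5))           # 2 -- round-half-to-even breaks ties downward here
```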
Good point. First I have to figure out whether or not I'm doing it right already. Then I need to come up with a reasonable way to implement the alternative.
I seem to recall it was decided that geometry should be processed in fixed point, as opposed to floating point for color, texture and depth parameters. Considering that the multipliers are 18 bit, I suggest 14.18 fixed point format, or perhaps 12.20 on the theory that a couple of bits of extra interpolation accuracy are worth the extra shifts.
We're using floats. 8 exponent, 1 sign, 16 mantissa.
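For readers unfamiliar with the format, here's a rough encode/decode sketch of a 1-sign, 8-exponent, 16-mantissa float. The bias of 127 and the implicit leading 1 are my assumptions for illustration; the actual hardware encoding may differ, and this handles only nonzero normal values.

```python
import math

def encode(x, man_bits=16, bias=127):
    # Split x into sign, biased exponent, and a man_bits-wide mantissa
    # (implicit leading 1 not stored). Normal, nonzero values only.
    sign = 0 if x >= 0 else 1
    x = abs(x)
    e = math.floor(math.log2(x))          # exponent of the leading 1
    frac = x / (2.0 ** e) - 1.0           # fractional part in [0, 1)
    mant = round(frac * (1 << man_bits))  # quantize to man_bits bits
    return sign, e + bias, mant

def decode(sign, biased_e, mant, man_bits=16, bias=127):
    x = (1.0 + mant / (1 << man_bits)) * 2.0 ** (biased_e - bias)
    return -x if sign else x

s, e, m = encode(3.14159)
print(decode(s, e, m))   # recovers pi to roughly 16 bits of mantissa
```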
There's some discussion back there in the archives, but I may have failed to spot the final conclusion. I do recall it being mentioned that floating point does not make sense for geometry because the precision degrades as you move away from the origin.
Float makes a lot more sense for geometry and vertex processing than it does for fragment processing. Fragment processing is in screen coordinates, and we know, a priori, something about its precision requirements. You'll note that Y and X are converted to integers very early on.
One of the biggest reasons to use float in the fragment processor is that the precision requirements, before perspective divide, are variable.
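The precision argument is easy to see numerically: the spacing between adjacent floats (the ulp) grows with magnitude, so coordinates far from the origin take coarser steps, whereas fixed point is uniform everywhere. A quick illustration using IEEE doubles (the principle is the same for any float width):

```python
import math

# Spacing between representable floats near 1.0 versus near 1024.0.
# Because 1024 is 2**10, the ulp is exactly 1024 times coarser there.
print(math.ulp(1.0))
print(math.ulp(1024.0))
```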
There will actually be a fair amount of "hidden fixed point" in this design, mostly when I find that the trade-off doesn't hurt results while shrinking the logic.
What worries me more than that, though, is the complexity of floating point operations. Addition in particular requires a variable shift at the start and a normalization at the end. That amounts to a lot of logic that has to be replicated many times over. Surely it must be at least three or four times what is required for fixed point addition. Are you sure that the design can afford this luxury without displacing some more important feature, such as a second pixel pipeline?
I have a number of tricks up my sleeve that involve pre-shifting values before they're used. If you're going to add two numbers and one needs to be shifted to the right, it doesn't matter WHEN you do that shift.
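A minimal sketch of the pre-shift idea (my illustration, not the actual pipeline): aligning the smaller-exponent mantissa can happen in an earlier pipeline stage, and the sum comes out the same as if the shifter sat inside the adder.

```python
def align_add(mant_a, exp_a, mant_b, exp_b):
    # Shift the smaller-exponent mantissa right so both share one exponent,
    # then add. (Rounding of shifted-out bits is ignored in this sketch.)
    if exp_a >= exp_b:
        return mant_a + (mant_b >> (exp_a - exp_b)), exp_a
    return (mant_a >> (exp_b - exp_a)) + mant_b, exp_b

# Doing the shift "early" gives the same sum as doing it inside the adder:
pre_shifted = 0b101000 >> 2            # operand aligned ahead of time
print(pre_shifted + 0b110000)          # 58
print(align_add(0b110000, 2, 0b101000, 0))  # (58, 2)
```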
I strongly agree with floating point color handling if it's possible to pull it off. This will help create the impression that the card is not old technology and provide a much better base when it comes time to implement shaders and the like. I have my doubts about the wisdom of using floating point for the edge interpolation though.
And in fact, anything we can prove to not be hurt by switching to fixed will be changed. For instance, X stepping on triangle edges can be fixed point.
I designed the simulation model to deal with pixel centers and get the right results. The reason you can't pre-bias is that it breaks down when you do MIPmapping. Therefore, we just have to do it "right".
I do not see your point with respect to mipmapping. The geometry bias does not affect the projection of textures onto the geometry.
It's not the projection. It's the divide-by-two that happens for going down MIPmap levels. One of our other list members made an elegant proof of this.
But consider a simplified case. If you're at X=1.0, that should correspond to X=0.5 for the next MIP map level down.
If you do the 1/2 pixel adjustment AFTER the divide, it just works.
On the other hand, if you do it before, your 1.0 becomes 0.5, and then after division becomes 0.25, which is WRONG.
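The arithmetic above can be sketched in two lines (values straight from the example; the half-pixel bias of 0.5 is the one under discussion):

```python
def next_mip_correct(x):
    # Divide first; apply the half-pixel adjustment afterward, at sampling.
    return x / 2

def next_mip_wrong(x, bias=0.5):
    # Bias applied BEFORE the divide gets halved along with the coordinate.
    return (x - bias) / 2

print(next_mip_correct(1.0))  # 0.5  -- X=1.0 maps to X=0.5 one level down
print(next_mip_wrong(1.0))    # 0.25 -- the wrong result described above
```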
This calculation is more complex for perspective interpolation because the adjustment has to be performed on the inverse of the interpolant. Could somebody please take the time to work out an efficient algorithm for this? Even per-span divides have to be kept to a minimum; otherwise, small triangles become too expensive.
We do perspective correction by taking the reciprocal of W and multiplying.
Of course. However, this is just an optimized way of dividing several parameters. For perspective interpolation you have to interpolate the inverses of the parameters, and divide by the interpolated inverse of W. This does not appear to be handled correctly in the model.
I think you're placing an interpretation on the model that it doesn't imply. You really should have a look at the documentation.
If you're referring to 'W' as the perspective divide that happens when projecting world coordinates to screen coordinates, then it's understandable that you'd be confused. W isn't linearly interpolable in hardware, but 1/W is, so we interpolate that, take the reciprocal, and multiply everything by what you were calling 'W'. That is the right way to do it.
The problem is that what you call 1/W, we're calling W. The reason is that we're no longer in world coordinates. We're in SCREEN coordinates. When in world coordinates, you have X, Y, and Z values. To project them, you compute and divide by W. But this yields another set of values called X, Y, and Z, or if you prefer, X', Y', and Z'. But we're working with homogeneous coordinates here, which means everything has X, Y, Z, and W. Likewise, there is X', Y', Z', and W'. It just so happens that W' = 1/W. It also happens that we've stripped off the primes when naming the registers in the hardware.
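The interpolation scheme being described can be sketched numerically (endpoint values below are invented purely for illustration): interpolate each attribute pre-divided by world-space W together with 1/W, both of which ARE linear in screen space, then recover the attribute with one reciprocal-and-multiply.

```python
def lerp(a, b, t):
    return a + (b - a) * t

# Two vertices: attribute u and world-space w (made-up numbers).
u0, w0 = 0.0, 1.0
u1, w1 = 1.0, 4.0

t = 0.5  # halfway along the SCREEN-space edge

u_over_w = lerp(u0 / w0, u1 / w1, t)  # interpolate u/w linearly
inv_w    = lerp(1 / w0, 1 / w1, t)    # interpolate 1/w linearly
u = u_over_w / inv_w                  # reciprocal-and-multiply per fragment

print(u)                # 0.2 -- perspective-correct, pulled toward the near vertex
print(lerp(u0, u1, t))  # 0.5 -- naive screen-space lerp, visibly wrong
```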
Perhaps I need to go back to calling it M, and then call it W after reciprocal.
One question is whether what Mesa wants in the depth buffer is really Z or Z'. I'm not clear on that.
BTW, our depth buffer contains what you'd call 1/W values.

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
