On Friday 04 March 2005 03:31, Hugh Fisher wrote:

> > We have to detect denormals and negative exponent out of range
> > (the same test) and set the number to zero. Which reminds me,
> > zero is a special case that the reciprocal needs to handle.
> > What should happen? I think the GPU should stop. On the other
> > hand, that might prevent some buggy programs from running that
> > run on other graphics cards. Tough one.
>
> The NVIDIA and ARB vertex/fragment shader specifications say that
> the reciprocal of +/- 0.0 is +/- INF respectively. Yes, I know
> we're not doing programmable hardware yet, but it's a strong
> hint as to how OpenGL is expected to work.
As you say, it's not a shader, it's just a rasterizer, and everything
is supposed to be in range at this point. If anything is out of range,
the results aren't really important. In the days of fixed-point
rasterizers the numbers would just wrap and look broken, so we are
within our rights to do likewise to save a few gates.

The problem with handling all the floating point by the book is that
this logic is replicated a lot. There are 16 FP adders in the
interpolator and a bunch more in scan setup. Additional logic to handle
infinity and NaN encodings would be significant extra pain that does
not deliver any rendering quality improvement. It should be OK just to
let the exponent wrap if it overflows, which it never will if clipping
is working properly. The next incremental improvement would be to
detect exponent overflow and clamp to +/- maximum. There isn't any
point in implementing infinities in the rasterizer.

On the other hand, I can see why shaders need to handle infinities: it
is easy to overflow in lighting calculations (with exponentiation, for
example), and the results still need to look good.

> > We really ought to detect NaNs, infinities and exponent out of
> > range, and halt processing with an error flag. Again, this
> > might prevent some buggy programs from running.
>
> No no no. Read the OpenGL spec, or the more recent programmable
> hardware specs, and you won't find any references to setting
> GL errors due to floating point overflow/underflow.

That was just me being anal. Of course the card should attempt to
display a picture at all costs. One day, if we have gates we don't
know what to do with, we could raise a flag bit so that buggy programs
don't go entirely unnoticed. Not that anyone would ever look at it ;-)

> > An alternative is just to set the number to zero and keep going.
> > Another alternative is to not do anything and just let things
> > act strangely, so long as nothing locks.
> This is the behaviour expected by the average OpenGL programmer
> and specified, if not always explicitly, by the reference book.
> (Ditto DirectX.) Speed is everything for realtime 3D. If a number
> underflows, set it to zero and keep going. If it overflows, leave
> it as infinity and keep going: the vertex/texture access/frame
> buffer clipper will throw it away.

I would think infinities should be clamped to minimum/maximum values
for colors, not discarded; otherwise yes.

> The behaviour is not correct by IEEE math standards, but it is for
> 3D graphics, which ultimately ends up as pixels. Besides, you're
> going to throw the whole lot away in a few milliseconds anyway, so
> an odd glitch isn't really noticeable.

Ahem. There I don't agree. I believe that every visible glitch impacts
the reputation of the card, and that we should aim for exactly zero
visible glitches. The GL floating point rules don't get in the way of
that. I believe IEEE supports this saturating behaviour as a settable
mode. Anyway, while the rules for shaders are well worth thinking
about, they don't apply to the current design as far as I can see.

Regards,
Daniel

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
