On 15 August 2011 10:11, Dan McCabe <zen3d.li...@gmail.com> wrote:
>
> You might also want to consider implementing
>    quotient = int((float(x) + 0.5 * float(y)) * reciprocal(float(y)));
>
> This rounds the result to the nearest integer rather than flooring the
> result and is arguably faster (assuming that common subexpressions for
> float(y) are hoisted).

I see where you're coming from, and if we were designing a new
programming language from scratch, I might consider the idea.  But I
don't feel like we can break from tradition that strongly.  In every
programming language I'm aware of, dividing two integers to produce an
integer result either rounds toward zero (e.g. C99) or toward negative
infinity (e.g. Python 3.0, if you use the "floor division" operator).
Nobody rounds to nearest.  And I believe I've seen code that relies on
this rounding behavior, for example in computing loop bounds:

// access an array in groups of three, ignoring any leftover bit at the end
for (int i = 0; i < array_length/3; ++i) {
  ...access array elements 3*i, 3*i+1, and 3*i+2...
}

If we redefined integer division to do "round to nearest", code like
the above would break.

Incidentally, my experiments so far with the nVidia Linux driver
indicate that it implements "round toward zero" behavior.

It's really too bad that the GLSL spec doesn't narrow down corner
cases like these, or make reference to standards that do.  It seems
like their intention has been to make the language as C-like as
possible--how hard would it have been for them to say "Integer
operations are defined as in C99"?
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev