On 07/05/2010 07:59 AM, Stewart Gordon wrote:
> bearophile wrote:
>> Stewart Gordon:
>>> I can also imagine promoting your mindset leading to edit wars
>>> between developers declaring an int and then putting
>>>     assert (qwert >= 0);
>>> in the class invariant, and those who see this and think it's
>>> brain-damaged.
>>> As opposed to doing what?
>> This is quite interesting. You think that using an unsigned type in D
>> is today the same thing as using a signed value + an assert of it
>> not being negative?
> Not quite - an unsigned type has twice the range. It's true that this
> extra range isn't always used, but in some apps/APIs there may be bits
> that use the extra range and bits that don't, and it is often simpler to
> use unsigned everywhere it's logical than to expect the user to remember
> which is which.
Another important difference is the point of non-'continuity'.

With a signed integer, that point is *.max/min. Assuming typical usage
of integers centers around zero, this point doesn't get hit frequently.

With an unsigned integer, that point is 0. Assuming the same, this point
gets hit much more frequently, which has important implications for
subtraction and comparison.