On 2008-11-28 17:44:39 +0100, Don <[EMAIL PROTECTED]> said:
Andrei Alexandrescu wrote:
Don wrote:
Andrei Alexandrescu wrote:
(I lost track of quotes, so I yanked them all beyond Don's message.)
Don wrote:
The problem with that is that you're then forcing the 'unsigned is a
natural' interpretation when it may be erroneous.
uint.max - 10 is a uint.
It's an interesting case, because int = u1 - u2 is definitely incorrect
when the true difference u1 - u2 exceeds int.max.
uint = u1 - u2 may be incorrect when u1 < u2, _if you think of unsigned
as a positive number_.
But, if you think of it as a natural modulo 2^32, uint = u1-u2 is
always correct, since that's what's happening mathematically.
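
A minimal D sketch of the two cases (the variables and values are
illustrative, not from the thread):

import std.stdio;

void main()
{
    uint u1 = 10, u2 = 20;

    // As a natural number the difference would be -10, but the uint
    // result wraps modulo 2^32, which is "correct" in modular arithmetic.
    uint w = u1 - u2;
    writefln("uint result: %s", w);   // 4294967286, i.e. 2^32 - 10

    // Assigning a large unsigned difference to an int reinterprets the
    // bit pattern, so anything above int.max shows up as a negative value.
    uint big = uint.max - 10;
    int  s   = big;                   // mathematically 4294967285, but prints -11
    writefln("int result: %s", s);
}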
[...]
Any subtraction of two lengths has a possible range of
-int.max .. uint.max, which is quite problematic (and the root cause of
the problems, I guess).
And unfortunately I think code is riddled with subtraction of lengths.
Code may be riddled with subtraction of lengths, but seems to be
working with today's rule that the result of that subtraction is
unsigned. So definitely we're not introducing new problems.
Yes. I think much existing code would fail with sizes over 2GB, though.
But it's not any worse.
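
As an illustration of the kind of code in question, here is a small
made-up D sketch of how a length subtraction behaves under today's
unsigned rule:

void main()
{
    auto a = new int[5];
    auto b = new int[8];

    // .length is size_t (uint on 32 bit), so the subtraction is unsigned
    // and wraps instead of going negative.
    auto diff = a.length - b.length;   // huge positive value, not -3

    // A typical pitfall: this test is true even though a is shorter.
    assert(a.length - b.length > 0);

    // The usual workaround is to compare before subtracting.
    assert(a.length < b.length);
}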
I found a couple of instances where, to compare addresses, simply a - b
was done instead of something like ((a < b) ? -1 : ((a == b) ? 0 : 1)),
so yes, this is a pitfall that happens.
Note that normally the subtraction of lengths is ok (because normally
one is interested in the result and a > b); it is when it is used as a
quick way to introduce an ordering (i.e. as a comparison) that it
becomes problematic.
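
A small D sketch of that pitfall (hypothetical names, not the code I
found):

int badCmp(uint a, uint b)  { return a - b; }   // wraps, sign can be wrong
int goodCmp(uint a, uint b) { return (a < b) ? -1 : ((a == b) ? 0 : 1); }

void main()
{
    uint a = 1;
    uint b = uint.max;          // e.g. two addresses far apart
    assert(badCmp(a, b) > 0);   // claims a > b, which is wrong
    assert(goodCmp(a, b) < 0);  // correct ordering
}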
By the way, the solution for going beyond 2GB is clearly to use size_t,
as I think is done (at least in tango).
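
For completeness, a short sketch of what that looks like (just an
illustration):

void main()
{
    // size_t follows the pointer size, so on a 64-bit target it can
    // describe lengths and indices beyond the 2GB/4GB limits of int/uint.
    static assert(size_t.sizeof == (void*).sizeof);

    ubyte[] data = new ubyte[100];
    size_t len = data.length;    // .length is already a size_t
}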
Fawzi