On Friday, 21 November 2014 at 21:44:25 UTC, Marco Leise wrote:
On Wed, 19 Nov 2014 18:20:24 +0000, "Marc Schütz" <[email protected]> wrote:

I'd say length being unsigned is fine. The real mistake is that the difference between two unsigned values isn't signed, which would be the most "correct" behaviour.

Now consider my situation: I explicitly write code that relies
on `bigger - smaller` yielding correct results.

uint bigger = uint.max;
uint smaller = 2;
if (bigger > smaller)
{
    auto added = bigger - smaller;
    // Today 'added' is a uint with the value 4294967293.
    // Under the proposal, it would become an int with the value -3!
}
else
{
    auto removed = smaller - bigger;
    // The difference expressed as a non-negative amount removed.
}

In fact, checking which value is larger is the only way to
handle the full result range of subtracting two machine
integers, which is roughly twice as large as what the original
type can represent:

T.min - T.max .. T.max - T.min

This is one reason why I'd like to just keep working with
the original unsigned type, but split the range around the
positive/negative pivot with an if-else.

Implicit conversion of unsigned subtractions to signed values
would make the above code unnecessarily hard.

Yes, that's true. However, I doubt that this is a common case. I'd say that when two values are subtracted (signed or unsigned) and there's no knowledge of which one is larger, it's more useful to get a signed difference. This should be correct in most cases, because I believe the two values are usually close to each other; it only becomes a problem when they're on opposite sides of the value range.

Unfortunately, no matter how you turn it, there will always be corner cases that a) produce wrong results and b) the compiler silently allows. So the question becomes one of preferences between usefulness for common use cases, ease of error detection, and compatibility.
