On Saturday, 22 November 2014 at 11:12:06 UTC, Marc Schütz wrote:
> I'd say that when two values are to be subtracted (signed or
> unsigned), and there's no knowledge about which one is larger,
> it's more useful to get a signed difference. This should be
> correct in most cases, because I believe it is more likely that
> the two values are close to each other. It only becomes a
> problem when they're on opposite sides of the value range.
Not being able to decrement unsigned types would be a disaster.
Think of unsigned integers as an enumeration: you should be able
to take both the predecessor and the successor of a value.
This is also in line with how you formalize natural numbers in
math:
0 == zero
1 == successor(zero)
2 == successor(successor(zero))
This is basically a unary representation of the natural numbers,
and it allows both addition and subtraction. An unsigned int
should be considered a binary representation of the same thing,
capped at the maximum value.
Bearophile gave a sensible solution a long time ago: make type
coercion explicit and add a weaker coercion operator. That
operator should prevent senseless type coercions, but allow
system-level coercion over signedness. Problem fixed.