On Tuesday, 26 March 2013 at 18:24:39 UTC, Steven Schveighoffer wrote:
http://dlang.org/expression.html#AddExpression

"If both operands are of integral types and an overflow or underflow occurs in the computation, wrapping will happen. That is, uint.max + 1 == uint.min and uint.min - 1 == uint.max."

Thanks Steve!
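
Just to spell the quoted rule out in code (a tiny, self-checking illustration of that paragraph, nothing more):

    void main()
    {
        uint a = uint.max;
        assert(a + 1 == uint.min);  // wraps around to 0
        uint b = uint.min;
        assert(b - 1 == uint.max);  // wraps around to uint.max
    }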

Do you know whether there was ever a (public?) discussion about this before it was defined this way? I'd like to see what trade-offs were considered, etc.

(For instance, one disadvantage I see with this definition is that it exacerbates the potential problems with D's fixed integral type sizes. Imagine I'm programming a microcontroller with an unusual word or register size, say 10-bit bytes instead of the usual 8-bit bytes. In C there would be no performance penalty even for unsigned char, whose mandated wrapping would simply occur at 2^10 on such a platform. In D, the combination of a fixed size and well-defined wrapping means the compiler would have to insert extra checks, because the native arithmetic instructions alone presumably would not guarantee wrapping at 8-bit widths.)
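
To make that concrete, here is a rough sketch (purely hypothetical, assuming a target whose native arithmetic wraps at 10 bits) of the extra masking a conforming D implementation would effectively have to emit for ubyte addition:

    // Hypothetical target: native integer arithmetic wraps at 2^10,
    // but D's ubyte must wrap at 2^8, so an explicit mask is needed.
    ubyte add8(ubyte a, ubyte b)
    {
        uint r = a + b;                // done with native instructions
        return cast(ubyte)(r & 0xFF);  // extra step to force 8-bit wrapping
    }

On an ordinary 8-bit-byte target the mask is free, but on the hypothetical target above it would be a real extra instruction per operation.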
