On Thursday, 17 July 2014 at 08:50:12 UTC, John Colvin wrote:
> Every machine D will feasibly support will do T.max + 1 == T.min and T.min - 1 == T.max for native integral operations, signed or unsigned.
>
> In fact, the spec mandates this (see AddExpression): "If both operands are of integral types and an overflow or underflow occurs in the computation, wrapping will happen."
>
> It's probably not taken advantage of due to the amount of C code that assumes the expected semantics. Also, there may be ways of telling backends that it is defined and we may be using those ways, I don't know.

Oh dear, you'd be in for a very nasty surprise if you relied on this. ;)

Compiling the following code as C++ using Clang
---
bool foo(int a) {
  return a > (a + 1);
}
---
yields
---
; Function Attrs: nounwind readnone uwtable
define zeroext i1 @_Z3fooi(i32 %a) #0 {
  ret i1 false
}
---
That is, the optimizer completely gets rid of the check, as the overflow would be undefined behavior and thus cannot happen!
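For comparison, here is one way to express the same check in C++ without relying on signed overflow at all (a sketch of mine, not part of the original snippet; `foo_defined` is just an illustrative name):
---
#include <limits>

// Equivalent check written without signed overflow: a + 1 would only
// wrap when a == INT_MAX, so test for that condition directly.
bool foo_defined(int a) {
  return a == std::numeric_limits<int>::max();
}
---
Not coincidentally, that is the same comparison the LDC output below boils down to.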

On the other hand, compiling it using LDC yields
---
; Function Attrs: nounwind readnone uwtable
define i1 @_D4test3fooFiZb(i32 inreg %a_arg) #0 {
  %tmp3 = icmp eq i32 %a_arg, 2147483647
  ret i1 %tmp3
}
---
just as it should. In other words, your suspicion that LLVM might offer a way to toggle whether overflow is defined is true, and LDC uses the correct variant of the arithmetic instructions.
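To make that concrete: the knob is LLVM's `nsw` ("no signed wrap") flag on the `add` instruction. A rough sketch, with my own annotations of the IR each front end emits for the same source as far as I know (the function name `bump` is only for illustration):
---
int bump(int a) {
  // Clang (C++): add nsw i32 %a, 1  ; nsw = signed overflow is undefined,
  //                                   the optimizer may assume it never occurs
  // LDC  (D):    add i32 %a, 1      ; plain add, wraps as two's complement
  return a + 1;
}
---
Clang also accepts -fwrapv to force the wrapping variant for C/C++, which makes the original foo behave like the D version.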

GDC seems to be broken in that regard, though: http://bugzilla.gdcproject.org/show_bug.cgi?id=141

Cheers,
David
