On Fri, Mar 12, 2010 at 11:33:41PM +0100, sean finney wrote:
> hi,
> 
> On Fri, Mar 12, 2010 at 09:47:50PM +0100, Iustin Pop wrote:
> > > (without the negation). adding a negation operator to this is what was
> > > raising my eyebrows. it could be that as long as everything is a constant
> > > that stuff is okay, but once you negate a non-constant value holding
> > > INT_MIN you are definitely in trouble, and the level of meta with
> > > C++/templating added to this protobuf compiling stuff makes me think
> > > that not everything that appears constant is in fact constant.
> > 
> > Honestly this is way above my skills :), but I don't think the above is
> > true. Constant or not, negation should work the same.
> 
> nope:
> 
> agricola% cat foo.c
> #include <stdint.h>
> 
> int main(int argc, char *argv[]){
>         int32_t a = -0x80000000; /* okay, apparently */
>         int32_t b = 0x80000000;
>         if (argc > 1) b = -b; /* not okay */
>         return 0;
> }
> 
> agricola% gcc -g -ftrapv foo.c
> agricola% ./a.out
> agricola% ./a.out foo
> zsh: abort      ./a.out foo
> 
> i'm a bit surprised that -0x80000000 actually works, but not at all
> surprised that negating a signed int with INT_MIN doesn't.
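[As a quick standalone check of why the constant form slips past -ftrapv,
assuming gcc on a target where int is 32 bits: the hex constant 0x80000000
does not fit in int and so has unsigned type, which means -0x80000000 is
well-defined unsigned arithmetic; the only signed overflow -ftrapv can see in
the program above is the later b = -b. A minimal illustrative sketch, not
part of the original thread:]

#include <stdint.h>
#include <stdio.h>

int main(void) {
        int32_t b = 0x80000000; /* conversion to int32_t is implementation-
                                   defined; with gcc it wraps to INT32_MIN */

        /* the constant itself stays unsigned (and positive) in expressions,
           the variable holding the converted value does not: */
        printf("constant > 0: %d\n", 0x80000000 > 0);   /* prints 1 */
        printf("variable > 0: %d\n", b > 0);             /* prints 0 */

        /* negating the unsigned constant just wraps back to 0x80000000u,
           so no signed overflow occurs and -ftrapv stays quiet: */
        printf("-constant == constant: %d\n", -0x80000000 == 0x80000000);
        return 0;
}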
So again, sorry if I'm talking stupid things. But here the compiler does
exactly what you required it to do. 0x80000000 as a positive constant doesn't
work already: if you for example try to print it, it's stored as a negative
value (it actually becomes equal to a, -0x80000000), which of course cannot be
negated again because it would overflow. Also, -0x80000000 is a perfectly
normal and legal value, it's documented to be OK. To me the above behaviour is
100% correct and expected. Check this version:

#include <stdint.h>
#include <stdio.h>

int main(int argc, char *argv[]){
        int32_t a = -0x80000000; /* okay, apparently */
        int32_t b = 0x80000000;
        printf("%i\n", b);
        printf("%i\n", a==b);
        if (argc > 1) b = -b; /* not okay */
        return 0;
}

The problem occurs at the original assignment to b, not at the negation of it.

> > #include <stdio.h>
> > #include <limits.h>
> > 
> > int main() {
> >   long long int j = LLONG_MAX;
> >   int check = 0;
> > 
> >   j -= 10;
> >   j += 5;
> >   j += 3;
> >   j += 2;
> >   check = (-j -1 ) == LLONG_MIN;
> > 
> >   printf("%d\n", check);
> >   return 0;
> > }
> > 
> > This small test program gives 1 with any combination of the flags below.
> 
> why shouldn't it?

Because I'm wrapping around while doing arithmetic, and even after that, the
wraparound behaviour results in the original value. No uncertainties. No
aborts with -ftrapv.

> > Hmm, this might make sense, except for my latest findings which were
> > reported on the debian-arm list.
> > 
> > First and very important, gcc 4.3 passes the tests, gcc 4.4 (the default
> > now in sid) fails the tests. This, coupled with the fact that every single
> > other architecture works fine, tells me that it's rather some kind of
> > regression in gcc 4.4 on armel, rather than following or not the standard.
> 
> i'm not saying it *isn't* a compiler error, but inserting a few printfs and
> the problem disappearing is also pretty common in other situations of
> "undefined behavior"...

I disagree here. Undefined behaviour related to integer arithmetic should only
corrupt the values in question, and a printf can't magically make them correct
again.

So yes, it might be a compiler bug, but I still don't think it's related to
64-bit arithmetic, but rather something else (storage of 64-bit values in
registers/memory? optimization of such values? etc.).

I'll run a compile with trapv, but a few small tests with
ZigZagEncode64/ZigZagDecode64 and -ftrapv show there is no overflow. Expect
results in a few hours…

Thanks for all the comments. Again, I'm not very skilled at this, so apologies
for any stupid things I said.

iustin
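[For reference, a minimal sketch of what a ZigZagEncode64/ZigZagDecode64
round-trip test under -ftrapv might look like. The helper bodies below follow
the usual ZigZag formulation and are illustrative, not copied from protobuf;
all the shifts and XORs are done on unsigned types, so -ftrapv has no signed
overflow to trap. Compile with: gcc -ftrapv zigzag_test.c]

#include <stdint.h>
#include <stdio.h>

/* map signed values to unsigned ones so that small magnitudes stay small */
static uint64_t zigzag_encode64(int64_t n) {
        return ((uint64_t)n << 1) ^ (uint64_t)(n >> 63);
}

static int64_t zigzag_decode64(uint64_t n) {
        return (int64_t)((n >> 1) ^ (~(n & 1) + 1));
}

int main(void) {
        int64_t samples[] = { 0, -1, 1, INT64_MAX, INT64_MIN };
        size_t i;

        for (i = 0; i < sizeof samples / sizeof samples[0]; i++) {
                int64_t v = samples[i];
                if (zigzag_decode64(zigzag_encode64(v)) != v) {
                        printf("round-trip failed for %lld\n", (long long)v);
                        return 1;
                }
        }
        printf("all round-trips ok\n");
        return 0;
}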