https://gcc.gnu.org/bugzilla/show_bug.cgi?id=124223

--- Comment #13 from stano at meduna dot org ---
(In reply to Jonathan Wakely from comment #12)
> That's not undefined. It very clearly says "implementation-defined", not
> undefined.
> 
> GCC defines it here:
> https://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html
> 
> "For conversion to a type of width N, the value is reduced modulo 2^N to be
> within range of the type; no signal is raised."
> 
> 
> So no, doing unsigned arithmetic and then converting to a signed type is not
> equally undefined, it's fine.

Well, implementation-defined still means it can change with any version of any
of the four compilers we are targeting, but yes, it is fine for most
definitions of fine.

The report, however, is about the whole loop being optimized away, not about
the value produced. So, am I correct that what actually happened is:

- the cast itself is well defined and produces -2147483648
- the subtraction might or might not overflow, and the compiler does not know
which (in the example it can know, but the behavior was the same when the value
was passed as an argument)
- the compiler rightfully concludes that there is no signed integer from which
subtracting that value can yield a result less than -100 without overflowing
- the whole loop can thus be optimized away and not compiled at all

That makes sense. Thanks for the clarifications.
