https://gcc.gnu.org/bugzilla/show_bug.cgi?id=124223

--- Comment #4 from stano at meduna dot org ---
The attached source code demonstrates a case where a cast of 0x80000000
to an int32_t causes the whole loop to be optimized away, even at -O1. I was
not able to find the exact optimization flag responsible: -O1 with every
option negated via -fno-... still eliminated the loop, while -O0 with every
-f option documented as enabled at -O1 still did not reproduce it.

To my knowledge, converting a value that cannot be represented in the
target type is implementation-defined behavior, and the subsequent signed
subtraction can overflow int32_t, which is undefined behavior, so strictly
speaking the code is invalid. I would nevertheless like to submit the report,
as what happened was a bit surprising, and most developers think of such
casts as bitwise copies.

Changing the offending line
  int32_t diff = (int32_t)values[i] - (int32_t)0x80000000;
to
  int32_t diff = (int32_t)(values[i] - 0x80000000);
resolves the issue.

We have reproduced it with both GCC 13.3.0 and 15.2.0, both times on an
Intel platform.
