[Bug tree-optimization/113664] False positive warnings with -fno-strict-overflow (-Warray-bounds, -Wstringop-overflow)

2024-01-30 Thread stefan at bytereef dot org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113664

--- Comment #6 from Stefan Krah  ---
Sometimes you hear that "the code should be rewritten", because squashing the
warnings supposedly makes it better.

I disagree. I've seen many segfaults introduced in projects that rush
to squash warnings.

Sometimes analyzers simply cannot cope with established idioms. clang-analyzer,
for instance, hates Knuth's algorithm D (long division). It would be strange to
rewrite that just to satisfy an analyzer.

[Bug tree-optimization/113664] False positive warnings with -fno-strict-overflow (-Warray-bounds, -Wstringop-overflow)

2024-01-30 Thread stefan at bytereef dot org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113664

--- Comment #5 from Stefan Krah  ---
> So the diagnostic messages leave a lot to be desired, but in the end they
> point to a problem in your code, namely the missing guard against a NULL 's'.

Hmm, the real code is used to print both floating point numbers and integers.
For integers, dot == NULL. In that case it is fine (and desired!) to optimize
away the if clause.

As far as I can see, it is compliant with the C standard.
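For illustration, here is a minimal sketch of the kind of idiom being described
(function and parameter names are invented; this is not the actual mpdecimal
code):

```
/* Hypothetical sketch of the idiom described above, not the actual
 * mpdecimal code.  'dot' points at the position in the output buffer
 * where the decimal point goes; for integers the caller passes
 * dot == NULL, so the guard is dead code on that path and GCC may
 * legitimately optimize it away. */
#include <stddef.h>

static char *emit_digits(char *s, const char *dot,
                         const char *digits, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (s == dot) {
            *s++ = '.';   /* only reachable when dot is non-NULL */
        }
        *s++ = digits[i];
    }
    return s;
}
```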


Even with -fno-strict-overflow one could argue that the warning is strange.
If "s" wraps around, the allocated output string was already too small, and
you have bigger problems.

It is impossible for gcc to detect whether the string size is sufficient,
so IMHO it should not warn.


In essence, since gcc-10 (12?), idioms that had been warning-free for 10 years
now tend to receive false positive warnings.

This also applies to -Warray-bounds. I think the Linux kernel disables at
least -Warray-bounds and -Wmaybe-uninitialized.

I think this is becoming a problem, because most projects do not report
false positives but just silently disable the warnings.

[Bug tree-optimization/113664] False positive warnings with -fno-strict-overflow (-Warray-bounds, -Wstringop-overflow)

2024-01-30 Thread rguenth at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113664

Richard Biener  changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
   Last reconfirmed|                            |2024-01-30
             Status|UNCONFIRMED                 |NEW
     Ever confirmed|0                           |1

--- Comment #4 from Richard Biener  ---
Confirmed.  As usual it is jump-threading related: in the -Warray-bounds case
we isolate

MEM[(char *)1B] = 48;

We inline 'f' and then, when s == dot == NULL, your code dereferences both
NULL and NULL + 1.
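
For readers without a dump handy, here is a rough sketch of that isolated path
in source terms (an illustration only, not the reporter's code and not actual
GIMPLE):

```
/* Rough illustration of the isolated path the diagnostic fires on,
 * not code from the report: with -fwrapv-pointer the threader keeps
 * a path on which s == dot == NULL, and on that path the two stores
 * below target addresses 0 and 1.  48 is ASCII '0', so the
 * MEM[(char *)1B] = 48 quoted above is presumably a digit store at
 * NULL + 1 (the exact character is an inference). */
#include <stddef.h>

static void isolated_path(void)
{
    char *s = NULL;
    char *dot = NULL;

    if (s == dot) {
        *s++ = '.';   /* dereference of NULL     */
        *s   = '0';   /* dereference of NULL + 1 */
    }
}
```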

So the diagnostic messages leave a lot to be desired, but in the end they
point to a problem in your code, namely the missing guard against a NULL 's'.

The jump threading is different with -fwrapv-pointer; in particular, without
it we just get the NULL dereference, which we seem to ignore during
array-bounds diagnostics.

We later isolate the paths as unreachable but that happens after the
diagnostic.

[Bug tree-optimization/113664] False positive warnings with -fno-strict-overflow (-Warray-bounds, -Wstringop-overflow)

2024-01-29 Thread pinskia at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113664

--- Comment #3 from Andrew Pinski  ---
https://github.com/python/cpython/issues/96821 is the issue to re-enable
strict-overflow ...

[Bug tree-optimization/113664] False positive warnings with -fno-strict-overflow (-Warray-bounds, -Wstringop-overflow)

2024-01-29 Thread stefan at bytereef dot org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113664

--- Comment #2 from Stefan Krah  ---
Thanks for the explanation!  I agree that one should not rely on
-fno-strict-overflow. In this case, my project is "vendored" in CPython and
they compile everything with -fno-strict-overflow, so it's out of my control:

https://github.com/python/cpython/issues/108562


mpdecimal itself does not need -fno-strict-overflow.

[Bug tree-optimization/113664] False positive warnings with -fno-strict-overflow (-Warray-bounds, -Wstringop-overflow)

2024-01-29 Thread pinskia at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113664

--- Comment #1 from Andrew Pinski  ---
-fno-strict-overflow turns on -fwrapv-pointer, which allows pointers to wrap.
That means that even if s was non-null, `s+1` can still be a null pointer ...

We then propagate NULL into dot (s is equal to NULL at that point), and we
still generate the code for `*s++ = '.';` in
```
if (s == dot) {
  *s++ = '.';
}
```

But due to that it is effectively `*NULL = '.';`.

In this case the warning is very sensitive to the ability to optimize away
null pointer checks. Really, -fno-strict-overflow is normally used to work
around some "undefinedness" in the code, and the code should be improved and
fixed instead. Using -fwrapv instead also helps, because then only signed
integer overflow is defined to wrap, not pointer arithmetic ...
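
A small sketch of the folding difference described above (hypothetical code,
not from the report): with the default pointer rules GCC may fold the
comparison to false because pointer wrap-around is undefined, while under
-fwrapv-pointer (implied by -fno-strict-overflow) the check has to stay and
NULL can flow into later uses of the pointer.

```
/* Hypothetical example of the point above, not code from the report.
 * With the default pointer semantics a non-null pointer plus one
 * cannot legally wrap around to NULL, so GCC may fold the return
 * expression below to 0.  With -fwrapv-pointer the wrap is defined
 * behaviour, the comparison must be kept, and a NULL result can be
 * propagated into code that later dereferences the pointer. */
#include <stddef.h>

int next_is_null(const char *s)
{
    if (s == NULL)
        return -1;
    return s + 1 == NULL;   /* foldable to 0 only without -fwrapv-pointer */
}
```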