[Bug tree-optimization/108467] false positive -Wmaybe-uninitialized warning at -O1 or higher
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108467 --- Comment #4 from Vincent Lefèvre --- (In reply to Sam James from comment #3)
> For 14/15, it seems gone with -O2, but I see it with -Og.
The warning still occurs with -O1 too.
[Bug c/114746] With FLT_EVAL_METHOD = 2, -fexcess-precision=fast reduces the precision of floating-point constants and floating-point constant expressions
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114746 --- Comment #7 from Vincent Lefèvre --- BTW, in /usr/include/math.h from the GNU libc 2.37:

# define M_PI 3.14159265358979323846 /* pi */

i.e. M_PI is defined with 21 digits in base 10, which corresponds to about 70 bits in base 2, thus with the apparent intent of being accurate in x86 extended precision (64-bit significand).
[Bug c/114746] With FLT_EVAL_METHOD = 2, -fexcess-precision=fast reduces the precision of floating-point constants and floating-point constant expressions
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114746 --- Comment #6 from Vincent Lefèvre --- (In reply to Jakub Jelinek from comment #5)
> FLT_EVAL_METHOD = 0 is on some hw like the pre-SSE2 ia32 extremely
> expensive, far more so than even the very expensive -ffloat-store. That is
> certainly not a good default. Plus I'm afraid it would suffer from double
> rounding, unless the floating point state is switched each time one needs to
> perform some floating point instruction in a different precision.
I would think that in general, users would choose a FP type and stick to it. However, there is the problem of libraries. But I would be interested to know what the actual loss would be in practice for such machines nowadays (if users want performance, there are faster processors, with SSE2 support).
[Bug c/114746] With FLT_EVAL_METHOD = 2, -fexcess-precision=fast reduces the precision of floating-point constants and floating-point constant expressions
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114746 --- Comment #4 from Vincent Lefèvre --- I actually find it more confusing that constants are not evaluated in extended precision while everything else is. The real solution to avoid confusion would be to change the behavior so that FLT_EVAL_METHOD = 0 by default; if users see an effect on performance (which may not be the case for applications that do not use floating-point types very much), they could still use an option to revert to FLT_EVAL_METHOD = 2 (if SSE is not available), in which case they should be aware of the consequences and would no longer be confused by the results. But in addition to the confusion, there is the accuracy issue with the current behavior.
[Bug c/114746] With FLT_EVAL_METHOD = 2, -fexcess-precision=fast reduces the precision of floating-point constants and floating-point constant expressions
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114746 Vincent Lefèvre changed: Summary updated (see below). --- Comment #2 from Vincent Lefèvre --- I've updated the bug title from "With FLT_EVAL_METHOD = 2, -fexcess-precision=fast reduces the precision of floating-point constants" to "With FLT_EVAL_METHOD = 2, -fexcess-precision=fast reduces the precision of floating-point constants and floating-point constant expressions" (I don't think that this deserves a separate bug).
[Bug c/114746] With FLT_EVAL_METHOD = 2, -fexcess-precision=fast reduces the precision of floating-point constants
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114746 --- Comment #1 from Vincent Lefèvre --- There is the same issue with constant floating-point expressions. Consider the following program given at https://github.com/llvm/llvm-project/issues/89128

#include <float.h>
#include <stdio.h>

static double const_init = 1.0 + (DBL_EPSILON/2) + (DBL_EPSILON/2);

int main() {
  double nonconst_init = 1.0;
  nonconst_init = nonconst_init + (DBL_EPSILON/2) + (DBL_EPSILON/2);
  printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
  printf("const: %g\n", const_init - 1.0);
  printf("nonconst: %g\n", (double)nonconst_init - 1.0);
}

With -m32 -mno-sse, one gets

FLT_EVAL_METHOD = 2
const: 0
nonconst: 2.22045e-16

instead of

FLT_EVAL_METHOD = 2
const: 2.22045e-16
nonconst: 2.22045e-16
[Bug c++/114050] Inconsistency in double/float constant evaluation between 32 and 64 bit
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114050 --- Comment #17 from Vincent Lefèvre --- (In reply to Jonathan Wakely from comment #13)
> -fexcess-precision does affect constants.
Indeed, and this is a bug, as -fexcess-precision=fast was not meant to make general programs less accurate (but to possibly keep more precision internally, avoiding costly conversions, even in cases where this is forbidden). See bug 114746.
[Bug c/114746] New: With FLT_EVAL_METHOD = 2, -fexcess-precision=fast reduces the precision of floating-point constants
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114746

Bug ID: 114746
Summary: With FLT_EVAL_METHOD = 2, -fexcess-precision=fast reduces the precision of floating-point constants
Product: gcc
Version: 14.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c
Assignee: unassigned at gcc dot gnu.org
Reporter: vincent-gcc at vinc17 dot net
Target Milestone: ---

-fexcess-precision was added to resolve bug 323, so that with -fexcess-precision=standard, after an assignment or a cast, the value is converted to its semantic type; this conversion makes the program slower, hence -fexcess-precision=fast to get faster programs *without sacrificing their accuracy* on average (except for programs that rely on such a conversion). In other words, values may be kept with more precision than normally expected. Thus it is not expected that -fexcess-precision=fast reduces the precision of a value. However, with FLT_EVAL_METHOD = 2, -fexcess-precision=fast reduces the precision of floating-point constants compared to what is specified by the standard. This is a much worse issue. Testcase:

#include <stdio.h>
#include <float.h>

int main (void)
{
  printf ("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
  printf ("%La\n%La\n", 1e-8L, (long double) 1e-8);
  printf ("%a\n%a\n", (double) 1e-8, (double) 1e-8f);
  return 0;
}

With GCC 14.0.1 20240330 (experimental) [master r14-9728-g6fc84f680d0] using -m32 -fexcess-precision=fast, I get:

FLT_EVAL_METHOD = 2
0xa.bcc77118461cefdp-30
0xa.bcc77118461dp-30
0x1.5798ee2308c3ap-27
0x1.5798eep-27

instead of the expected output (e.g. by using -fexcess-precision=standard)

FLT_EVAL_METHOD = 2
0xa.bcc77118461cefdp-30
0xa.bcc77118461cefdp-30
0x1.5798ee2308c3ap-27
0x1.5798ee2308c3ap-27
[Bug c++/114050] Inconsistency in double/float constant evaluation between 32 and 64 bit
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114050 --- Comment #16 from Vincent Lefèvre --- (In reply to Jakub Jelinek from comment #15) > There is no bug, the compiler implements what the standard says for the > FLT_EVAL_METHOD == 2 case. I agree. I meant this *invalid* bug.
[Bug c++/114050] Inconsistency in double/float constant evaluation between 32 and 64 bit
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114050 --- Comment #14 from Vincent Lefèvre --- This bug is about "double/float constant evaluation" (and it has been marked as a duplicate of a bug precisely on this subject), not about the rules that are applied *after* this evaluation.
[Bug c++/114740] i686-linux-gnu-g++ does not interpret floating point literals as double
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114740 Vincent Lefèvre changed: CC added: vincent-gcc at vinc17 dot net --- Comment #3 from Vincent Lefèvre --- (In reply to Jonathan Wakely from comment #1)
> See the first item at https://gcc.gnu.org/gcc-13/changes.html#cxx
The mention of -fexcess-precision in this item is unclear, because it does not affect constants (or this is a real bug). I suspect that it means "evaluation method" (FLT_EVAL_METHOD).
[Bug c++/114050] Inconsistency in double/float constant evaluation between 32 and 64 bit
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114050 Vincent Lefèvre changed: CC added: vincent-gcc at vinc17 dot net --- Comment #12 from Vincent Lefèvre --- (In reply to Søren Jæger Hansen from comment #10)
> -fexcess-precision=fast it is for now then, thanks again for fast feedback.
-fexcess-precision is unrelated. Its goal is to choose whether GCC conforms to the standard, i.e. reduces the precision for assignments and casts (*only* in these cases, thus constants are not concerned), or generates faster but non-conforming code.
[Bug tree-optimization/61502] == comparison on "one-past" pointer gives wrong result
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61502 --- Comment #48 from Vincent Lefèvre --- (In reply to Alexander Cherepanov from comment #35)
> DR 260 allows one to argue that representation of these pointers could
> change right between the checks but IMHO this part of DR 260 is just wrong
> as it makes copying objects byte-by-byte impossible. See
> https://bugs.llvm.org/show_bug.cgi?id=44188 for a nice illustration.
Note that the behavior of GCC with your testcase is unrelated to the LLVM issue and DR 260. And it is even reversed: here with GCC, the result of the comparison is *incorrect before* the (potentially invalid) copy of the pointer, and it is *correct after* the copy. It seems that the result is incorrect before the copy because GCC optimizes based on the fact that p and q point to distinct objects (or one past them), i.e. GCC considers that the pointers are necessarily different in such a case, without any further analysis. For repr and val2, I assume that GCC gives the correct result because it does not optimize: it just compares the representations of p and q, which are the same.
> While at it, the testcase also demonstrates that the comparison `p == q` is
> unstable.
p == p would also be unstable. What could happen is:
1. The implementation evaluates the first p.
2. The representation of p changes.
3. The implementation evaluates the second p.
4. Due to the different representations, the comparison returns false.
[Bug middle-end/113540] missing -Warray-bounds warning with malloc and a simple loop
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113540 --- Comment #2 from Vincent Lefèvre --- Thanks for the explanations, but why does one get the warning (contrary to the use of malloc) in the following case?

void foo (void)
{
  volatile char t[4];
  for (int i = 0; i <= 4; i++)
    t[i] = 0;
  return;
}
[Bug middle-end/113540] New: missing -Warray-bounds warning with malloc and a simple loop
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113540

Bug ID: 113540
Summary: missing -Warray-bounds warning with malloc and a simple loop
Product: gcc
Version: 14.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: middle-end
Assignee: unassigned at gcc dot gnu.org
Reporter: vincent-gcc at vinc17 dot net
Target Milestone: ---

Consider the following code:

#include <stdlib.h>

int main (void)
{
  volatile char *t;
  t = malloc (4);
  for (int i = 0; i <= 4; i++)
    t[i] = 0;
  return 0;
}

With -O2 -Warray-bounds, I do not get any warning. Replacing the loop by "t[4] = 0;" gives a warning "array subscript 4 is outside array bounds of 'volatile char[4]'" as expected. Or replacing the use of malloc() by "volatile char t[4];" also gives a warning. Tested with gcc (Debian 20240117-1) 14.0.1 20240117 (experimental) [master r14-8187-gb00be6f1576]. But previous versions do not give any warning either.
[Bug c/89072] -Wall -Werror should be defaults
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89072 --- Comment #12 from Vincent Lefèvre --- (In reply to Segher Boessenkool from comment #11) > Sure. If people want the pain, they can have it. But it is never okay to > cause other people to have -Werror -- they may have a different compiler > (version) that no one else has tested with, they may have different warnings > enabled, etc. For MPFR automated tests (and sometimes during my usual development too), I use -Werror in combination with -Wall plus some other useful warnings (-Wold-style-declaration -Wold-style-definition -Wmissing-parameter-type -Wmissing-prototypes -Wmissing-declarations -Wmissing-field-initializers -Wc++-compat -Wwrite-strings -Wcast-function-type -Wcast-align=strict -Wimplicit-fallthrough), and this is very useful, though this sometimes triggers GCC bugs. But of course, it's for internal use only. And the use of -Werror without "=" is valid only because I make sure that no other -W... options are used.
[Bug c/89072] -Wall -Werror should be defaults
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89072 --- Comment #6 from Vincent Lefèvre --- BTW, note that some code may be generated (instead of being written by a human). So having some code style being an error by default would be very bad.
[Bug c/89072] -Wall -Werror should be defaults
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89072 --- Comment #5 from Vincent Lefèvre --- Note that -Wall -Werror seem to be fine in general when they are used alone, but this combination can be very problematic when other options are used, such as -std=c90 -pedantic, and other warnings. So default errors need to be restricted, e.g. rather with something like -Werror=all. But in any case, there are currently warnings in -Wall that should not be turned into errors by default, e.g. those related to code style (such as -Wlogical-not-parentheses) and heuristics (such as -Wmaybe-uninitialized).
[Bug middle-end/576] gcc performs invalid optimization with float operations when different rounding mode.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=576 --- Comment #7 from Vincent Lefèvre --- (In reply to Andrew Pinski from comment #6) > That is because the code is GNU C90 and not C++ . I've used gcc, not g++. But this fails even with -std=gnu90.
[Bug middle-end/576] gcc performs invalid optimization with float operations when different rounding mode.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=576 Vincent Lefèvre changed: CC added: vincent-gcc at vinc17 dot net --- Comment #5 from Vincent Lefèvre --- The -frounding-math option should solve the issue on this particular example. But on my machine, ld gives an "undefined reference to `_up'" error. For a more general bug, see PR34678.
[Bug c/112463] ternary operator / -Wsign-compare inconsistency
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112463 --- Comment #3 from Vincent Lefèvre --- (In reply to Andrew Pinski from comment #2) > One problem with -Wsign-conversion is that it is not enabled with > -Wextra/-Wall . However, I don't understand why -Wsign-compare is enabled by -Wextra but not -Wsign-conversion, while these are similar warnings. Note that when compiling GNU MPFR, *both* give lots of false positives (which could be avoided, at least for most of them, if PR38470 were fixed).
[Bug c/112463] New: ternary operator / -Wsign-compare inconsistency
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112463

Bug ID: 112463
Summary: ternary operator / -Wsign-compare inconsistency
Product: gcc
Version: 13.2.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c
Assignee: unassigned at gcc dot gnu.org
Reporter: vincent-gcc at vinc17 dot net
Target Milestone: ---

-Wsign-compare is described in the man page as follows:

-Wsign-compare
    Warn when a comparison between signed and unsigned values could produce an incorrect result when the signed value is converted to unsigned. In C++, this warning is also enabled by -Wall. In C, it is also enabled by -Wextra.

But it can emit a warning even in the absence of comparisons between signed and unsigned values. For instance, it can appear due to the 2nd and 3rd operands of the ternary operator (these operands are not compared, just selected from the value of the first operand). This affects the warning output by -Wextra. Consider the following C code:

#include <stdio.h>

int main (void)
{
  for (int c = -1; c <= 1; c++)
    {
      long long i = c == 0 ? 0LL : (c >= 0 ? 1U : -1),
                j = c >= 0 ? (c == 0 ? 0LL : 1U) : -1;
      printf ("i = %lld\nj = %lld\n", i, j);
    }
  return 0;
}

(which shows that the ternary operator is not associative due to type conversions). With -Wextra, I get:

ternary-op.c: In function ‘main’:
ternary-op.c:7:43: warning: operand of ‘?:’ changes signedness from ‘int’ to ‘unsigned int’ due to unsignedness of other operand [-Wsign-compare]
    7 |     i = c == 0 ? 0LL : (c >= 0 ? 1U : -1),
      |                                       ^~

But the "-Wsign-compare" is incorrect as there are no comparisons between signed and unsigned values. Only -Wsign-conversion should trigger a warning.
[Bug middle-end/56281] missed VRP optimization on i for signed i << n from undefined left shift in ISO C
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56281 --- Comment #6 from Vincent Lefèvre --- (In reply to Jakub Jelinek from comment #1) > Given the amount of code in the wild that assumes 1 << 31 etc. work, I think > it would be a bad idea to try to optimize this for C99, especially when it > is now considered valid for C++. Anyway, this is documented in the GCC manual: As an extension to the C language, GCC does not use the latitude given in C99 and C11 only to treat certain aspects of signed ‘<<’ as undefined. (quoted from the GCC 13.2.0 manual). So perhaps there should be an option to allow GCC to optimize this case.
[Bug tree-optimization/102032] missed optimization on 2 equivalent expressions when -fwrapv is not used
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102032 --- Comment #4 from Vincent Lefèvre --- Note that as said in PR111560 comment 6, re-association may break CSE, e.g. if there are also a + b + d and a + c + e with my example. So, re-association for global optimal CSE, in addition to being difficult, will not allow the optimization in all cases of equivalent expressions.
[Bug sanitizer/81981] [8 Regression] -fsanitize=undefined makes a -Wmaybe-uninitialized warning disappear
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81981 --- Comment #9 from Vincent Lefèvre --- Note, however, that there is a small regression in GCC 11: the warning for t is output as expected, but if -fsanitize=undefined is given, the message for t is suboptimal, saying "*[0]" instead of "t[0]":

zira:~> gcc-11 -Wmaybe-uninitialized -O2 -c tst.c -fsanitize=undefined
tst.c: In function ‘foo’:
tst.c:12:15: warning: ‘*[0]’ may be used uninitialized in this function [-Wmaybe-uninitialized]
   12 |   return t[0] + u[0];
      |          ~^~
tst.c:12:15: warning: ‘u[0]’ may be used uninitialized in this function [-Wmaybe-uninitialized]

No such issue without -fsanitize=undefined:

zira:~> gcc-11 -Wmaybe-uninitialized -O2 -c tst.c
tst.c: In function ‘foo’:
tst.c:12:15: warning: ‘u[0]’ may be used uninitialized in this function [-Wmaybe-uninitialized]
   12 |   return t[0] + u[0];
      |          ~^~
tst.c:12:15: warning: ‘t[0]’ may be used uninitialized in this function [-Wmaybe-uninitialized]

It is impossible to say whether this is fixed in GCC 12 and later, because of PR 110896, i.e. the warning is always missing.
[Bug c/44677] Warn for variables incremented but not used (+=, ++)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=44677 Vincent Lefèvre changed: CC added: vincent-gcc at vinc17 dot net --- Comment #17 from Vincent Lefèvre --- (In reply to Martin Sebor from comment #10)
> $ cat pr95217.c && clang -S -Wall -Wextra --analyze pr95217.c
[...]
> pr95217.c:8:3: warning: Value stored to 'p' is never read
>   p += 1; // missing warning
>   ^~
> pr95217.c:13:3: warning: Value stored to 'p' is never read
>   p = p + 1; // missing warning
>   ^   ~
> 2 warnings generated.
Clang (15 and above) with -Wunused-but-set-variable now detects the issue on the "p++" and "p += 1" forms (ditto with other combined assignment operators), but not on "p = p + 1". Such forms (p++, etc.) are common, so detecting an unused variable there is very useful. Like Paul did for Emacs (comment 13), I've just fixed two issues in GNU MPFR (one cosmetic, about a useless loop variable, and one important, in the testsuite), found with Clang 16. The references:
https://gitlab.inria.fr/mpfr/mpfr/-/commit/4c110cf4773b3c07de2a33acbee591cedb083c80
https://gitlab.inria.fr/mpfr/mpfr/-/commit/b34d867fa41934d12d0d4dbaaa0242d6d3eb32c7
For the second MPFR issue, there was an "err++" for each error found by the function in the testsuite, but err was not tested at the end, so potential errors were never reported.
[Bug c/95057] missing -Wunused-but-set-variable warning on multiple assignments, not all of them used
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95057 --- Comment #6 from Vincent Lefèvre --- Well, for the ++, --, +=, -=, *=, etc. operators, that's PR44677 (though it is unclear on what it should cover).
[Bug c/95057] missing -Wunused-but-set-variable warning on multiple assignments, not all of them used
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95057 --- Comment #5 from Vincent Lefèvre --- FYI, Clang 16 does not warn either on the testcases provided in comment 0 (bug report). But contrary to GCC (tested with master r14-1713-g6631fe419c6 - Debian gcc-snapshot package 20230613-1), Clang 15 and 16 both warn on f and g below:

int h (void);

void f (void)
{
  int i = h ();
  i++;
}

void g (void)
{
  for (int i = 0 ;; i++)
    if (h ())
      break;
}

zira:~> clang-15 -c -Wunused-but-set-variable warn-inc.c
warn-inc.c:5:7: warning: variable 'i' set but not used [-Wunused-but-set-variable]
  int i = h ();
      ^
warn-inc.c:11:12: warning: variable 'i' set but not used [-Wunused-but-set-variable]
  for (int i = 0 ;; i++)
           ^
2 warnings generated.

Thanks to this detection, a test with Clang 16 found two issues in GNU MPFR (one cosmetic, about a useless loop variable, and one important, in the testsuite). So it is useful to consider such particular cases.
[Bug c/101090] incorrect -Wunused-value warning on remquo with constant values
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101090 --- Comment #3 from Vincent Lefèvre --- (In reply to Vincent Lefèvre from comment #2) > So, has this bug been fixed (and where)? This seems to be a particular case of PR106264, which was fixed in commit r13-1741-g40f6e5912288256ee8ac41474f2dce7b6881c111.
[Bug c/106264] [10/11/12/13 Regression] spurious -Wunused-value on a folded frexp, modf, and remquo calls with unused result since r9-1295-g781ff3d80e88d7d0
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106264 Vincent Lefèvre changed: CC added: vincent-gcc at vinc17 dot net --- Comment #8 from Vincent Lefèvre --- This seems to be the same issue as PR101090 ("incorrect -Wunused-value warning on remquo with constant values"), which I had reported in 2021 and was present in GCC 9 too.
[Bug c/101090] incorrect -Wunused-value warning on remquo with constant values
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101090 --- Comment #2 from Vincent Lefèvre --- On Debian, I get a warning from GCC 9 to GCC 12 (Debian 12.3.0-6), but neither with GCC 13 (Debian 13.1.0-8) nor with 14.0.0 20230612 (Debian 20230613-1). So, has this bug been fixed (and where)?
[Bug middle-end/323] optimized code gives strange floating point results
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=323 --- Comment #228 from Vincent Lefèvre --- PR64410 and PR68180 should also be removed from "See Also".
[Bug middle-end/323] optimized code gives strange floating point results
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=323 --- Comment #227 from Vincent Lefèvre --- In "See Also", there are several bugs that are related only to vectorization optimizations. What is the relation with this bug? For instance, PR89653 is "GCC (trunk and all earlier versions) fails to vectorize (SSE/AVX2/AVX-512) the following loop [...]". If this is SSE/AVX2/AVX-512, where does x86 extended precision occur?
[Bug middle-end/79173] add-with-carry and subtract-with-borrow support (x86_64 and others)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79173 --- Comment #32 from Vincent Lefèvre --- (In reply to Jakub Jelinek from comment #31)
> (In reply to Vincent Lefèvre from comment #30)
> > (In reply to Jakub Jelinek from comment #29)
> > > I mean that if the compiler can't see it is in [0, 1], it will need
> > > to use 2 additions and or the 2 carry bits together. But, because
> > > the ored carry bits are in [0, 1] range, all the higher limbs could
> > > be done using addc.
> > If the compiler can't see that carryin is in [0, 1], then it must not "or"
> > the carry bits; it needs to add them, as carryout may be 2.
> That is not how the clang builtin works, which is why I've implemented the |
> and documented it that way, as it is a compatibility builtin.
I'm confused. In Comment 14, you said that

*carry_out = c1 + c2;

was used. This is an addition, not an OR.
[Bug middle-end/79173] add-with-carry and subtract-with-borrow support (x86_64 and others)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79173 --- Comment #30 from Vincent Lefèvre --- (In reply to Jakub Jelinek from comment #29)
> (In reply to Vincent Lefèvre from comment #28)
> > What do you mean by "the first additions will be less optimized"? (If you
> > don't know anything about the initial carryin and the arguments, you can't
> > optimize at all, AFAIK.)
> I mean that if the compiler can't see it is in [0, 1], it will need to use 2
> additions and or the 2 carry bits together. But, because the ored carry
> bits are in [0, 1] range, all the higher limbs could be done using addc.
If the compiler can't see that carryin is in [0, 1], then it must not "or" the carry bits; it needs to add them, as carryout may be 2. So each part of the whole chain would need two __builtin_add_overflow calls and an addition of the carry bits. However, if the compiler can detect that at some point, the arguments cannot be both 0x at the same time (while carryin is in [0, 2]), then an optimization is possible for the rest of the chain.
[Bug middle-end/79173] add-with-carry and subtract-with-borrow support (x86_64 and others)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79173 --- Comment #28 from Vincent Lefèvre --- (In reply to Jakub Jelinek from comment #27)
> Given that the builtins exist for 10 years already, I think changing it for
> them is too late, though they don't seem to take backwards compatibility as
> seriously.
> They don't document the [0, 1] restriction and the behavior implemented in
> GCC is what I saw when trying it.
Their documentation at https://clang.llvm.org/docs/LanguageExtensions.html is currently just

unsigned sum = __builtin_addc(x, y, carryin, &carryout);

But a carry for a 2-ary addition is always 0 or 1, so the [0, 1] restriction is implicit (by the language that is used). And in their example, the carries are always 0 or 1.
> Note, in many cases it isn't that big deal, because if carry_in is in [0, 1]
> range and compiler can see it from VRP, it can still optimize it. And given
> that carry_out is always in [0, 1] range, for chained cases worst case the
> first additions will be less optimized but the chained will be already
> better.
What do you mean by "the first additions will be less optimized"? (If you don't know anything about the initial carryin and the arguments, you can't optimize at all, AFAIK.)
[Bug c/110374] New: slightly incorrect warning text "ISO C forbids forward parameter declarations"
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110374

Bug ID: 110374
Summary: slightly incorrect warning text "ISO C forbids forward parameter declarations"
Product: gcc
Version: 14.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c
Assignee: unassigned at gcc dot gnu.org
Reporter: vincent-gcc at vinc17 dot net
Target Milestone: ---

With -pedantic, on

int f(int n; int n) { return n; }

I get:

tst.c:1:1: warning: ISO C forbids forward parameter declarations [-Wpedantic]
    1 | int f(int n; int n) { return n; }
      | ^~~

Instead of "forward parameter declarations", it should be "parameter forward declarations", as https://gcc.gnu.org/onlinedocs/gcc/Variable-Length.html calls it "parameter forward declaration".
[Bug tree-optimization/106155] [12/13/14 Regression] spurious "may be used uninitialized" warning
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106155 --- Comment #12 from Vincent Lefèvre --- Here's a similar, simpler testcase:

int f1 (void);
void f2 (int);
long f3 (long);

void tst (void)
{
  int badDataSize[3] = { 1, 1, 1 };

  for (int i = 0; i < 3; i++)
    {
      int emax;
      if (i == 2)
        emax = f1 ();
      int status = f3 (badDataSize[i]);
      if (f1 ())
        f1 ();
      if (status)
        f1 ();
      if (i == 2)
        f2 (emax);
    }
}

gcc-12 (Debian 12.2.0-14) 12.2.0 warns at -O1, but not at -O2. gcc-13 (Debian 13.1.0-5) 13.1.0 is worse, as it warns at both -O1 and -O2.
[Bug target/110011] -mfull-toc (-mfp-in-toc) yields incorrect _Float128 constants on power9
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110011 --- Comment #4 from Vincent Lefèvre --- (In reply to Kewen Lin from comment #3) > Thanks for reporting, this exposes one issue that: when encoding KFmode > constant into toc, it uses the format for the current long double, it could > be wrong if the current long double is with IBM format instead of IEEE > format. I have a patch. OK, but why does an explicit `-mfull-toc` make this issue appear while this is documented to be the default?
[Bug target/110011] New: -mfull-toc yields incorrect _Float128 constants on power9
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110011

Bug ID: 110011
Summary: -mfull-toc yields incorrect _Float128 constants on power9
Product: gcc
Version: 8.3.1
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: target
Assignee: unassigned at gcc dot gnu.org
Reporter: vincent-gcc at vinc17 dot net
Target Milestone: ---

Note: I selected version 8.3.1, because this is what I had for my tests, but at least 13.1.0 is still affected (see below). We got a report of a GNU MPFR failure on an IBM POWER9 machine: https://sympa.inria.fr/sympa/arc/mpfr/2023-05/msg0.html With additional information given in a private discussion, Matthew R. Wilson found that the failure came from the use of -mfull-toc in CFLAGS (this is not visible in the report mentioned above), and he could reproduce the behavior with GCC 13.1.0, 12.2.0, as well as the Debian-provided GCC 10.2.1. As he noticed, -mfull-toc is documented to be the default, so it shouldn't have any effect.

I could reproduce the failure on gcc135 at the Compile Farm with /opt/at12.0/bin/gcc (gcc (GCC) 8.3.1 20190304 (Advance-Toolchain-at12.0) [revision 269374]), and I did some tests with it. It appears that when -mfull-toc is provided, the cause is incorrect _Float128 constants MPFR_FLOAT128_MAX and -MPFR_FLOAT128_MAX, where one has

#define MPFR_FLOAT128_MAX 0x1.p+16383f128

Indeed, I added

_Float128 m = MPFR_FLOAT128_MAX, n = -MPFR_FLOAT128_MAX;

at the beginning of the mpfr_set_float128 function in src/set_float128.c, and did the following:

1. Compile MPFR (and the testsuite) using CFLAGS="-mcpu=power9 -O0 -g -mfull-toc"
2. In the "tests" directory, gdb ./tset_float128
3. Add a breakpoint on mpfr_set_float128, then run, next, and print values:

(gdb) print m
$1 = 5.96937875341074040910051755689516189e-4947
(gdb) print n
$2 = 1.19416736664469867978830578385372193e-4946

Both are wrong. If I do the same test with CFLAGS="-mcpu=power9 -O0 -g" (i.e. without -mfull-toc), then I get

(gdb) print m
$1 = 1.18973149535723176508575932662800702e+4932
(gdb) print n
$2 = -1.18973149535723176508575932662800702e+4932

These are the expected values. Unfortunately, I couldn't reproduce the issue with a simple C program.
[Bug c/109979] -Wformat-overflow false positive for %d and non-basic expression
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109979 --- Comment #5 from Vincent Lefèvre --- (In reply to Andrew Pinski from comment #1)
> The warning should happen for both ...
OK (as the documentation says "[...] that might overflow the destination buffer").
(In reply to Richard Biener from comment #4)
> If you know the value is in a range that fits s[4] then assert that before
> the prints.
I don't think that an assert() will change anything. With MPFR, the code is in an "else" branch, already with a reduced range. However, this time, I did not use -O2 to enable VRP (I was working on a different issue, but had to use -Werror=format to change the behavior of the configure script); that was my mistake. Then I found the inconsistency between "e" and "e - 1", so I did not look further.
[Bug c/109979] New: [12 Regression] -Wformat-overflow false positive for %d and non-basic expression
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109979

Bug ID: 109979
Summary: [12 Regression] -Wformat-overflow false positive for %d and non-basic expression
Product: gcc
Version: 12.2.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c
Assignee: unassigned at gcc dot gnu.org
Reporter: vincent-gcc at vinc17 dot net
Target Milestone: ---

Consider

#include <stdio.h>

void f (int *);

void g (void)
{
  int e;
  char s[4];

  f (&e);
  sprintf (s, "%d", e);
  sprintf (s, "%d", e - 1);
}

I get on my Linux/x86_64 machine with gcc-12 (Debian 12.2.0-14) 12.2.0:

zira:~> gcc-12 -Wformat-overflow -c tst.c
tst.c: In function ‘g’:
tst.c:12:16: warning: ‘%d’ directive writing between 1 and 11 bytes into a region of size 4 [-Wformat-overflow=]
   12 |   sprintf (s, "%d", e - 1);
      |                ^~
tst.c:12:15: note: directive argument in the range [-2147483648, 2147483646]
   12 |   sprintf (s, "%d", e - 1);
      |               ^~~~
tst.c:12:3: note: ‘sprintf’ output between 2 and 12 bytes into a destination of size 4
   12 |   sprintf (s, "%d", e - 1);
      |   ^~~~

Note that the warning occurs for "e - 1" but not for "e". This bug was found when compiling GNU MPFR 4.2.0 with "-std=c90 -Werror=format -m32" (compilation failure for get_d64.c).
[Bug tree-optimization/95699] __builtin_constant_p inconsistencies
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95699 --- Comment #12 from Vincent Lefèvre --- (In reply to Andrew Pinski from comment #11) > Since GCC 11 which is correct now. I confirm. > That changed after r11-1504-g2e0f4a18bc978 for the improved minmax > optimization. The bug has been resolved as INVALID. But if I understand correctly, this should actually be FIXED. And I suppose that "Known to work" could be set to 11.0 and "Known to fail" to "9.3.0, 10.1.0".
[Bug middle-end/109578] fail to remove dead code due to division by zero
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109578 --- Comment #4 from Vincent Lefèvre --- (In reply to Andrew Pinski from comment #3) > Anyways maybe the issue with PR 29968 was a scheduling issue which was fixed > later on that GCC didn't realize divide could trap. OK, thanks, I can see your update marking PR 29968 as a duplicate of PR 41239 (fixed).
[Bug c/29968] integer division by zero with optimization
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=29968 Vincent Lefèvre changed: What|Removed |Added CC||vincent-gcc at vinc17 dot net --- Comment #4 from Vincent Lefèvre --- (In reply to Andreas Schwab from comment #2) > Your program is invoking undefined behaviour. You should not perform the > division if the divisor is zero. But PR109578 Comment 1 says that side effects should be visible before undefined behavior occurs. Thus this bug should be reopened.
[Bug middle-end/109578] fail to remove dead code due to division by zero
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109578 --- Comment #2 from Vincent Lefèvre --- (In reply to Andrew Pinski from comment #1) > We don't removing code before undefined behavior ... > That is GCC does not know that printf does not have side effects. Then GCC is incorrect in bug 29968, because it does the division *before* the printf.
[Bug middle-end/109578] New: fail to remove dead code due to division by zero
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109578

Bug ID: 109578
Summary: fail to remove dead code due to division by zero
Product: gcc
Version: 12.2.1
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: middle-end
Assignee: unassigned at gcc dot gnu.org
Reporter: vincent-gcc at vinc17 dot net
Target Milestone: ---

This is about the opposite of the invalid bug 29968:

#include <stdio.h>

int f (int i, int k)
{
  if (k == 0)
    printf ("k = 0\n");
  return i/k;
}

int main (void)
{
  return f (1, 0);
}

With gcc-12 (Debian 12.2.0-14) 12.2.0 and -O3 optimization, I get:

k = 0
zsh: illegal hardware instruction (core dumped)  ./tst

But since the case k == 0 corresponds to undefined behavior (which is the justification for regarding GCC as correct in bug 29968), the code

  if (k == 0)
    printf ("k = 0\n");

should have been removed as an optimization.
[Bug preprocessor/81745] missing warning with -pedantic when a C file does not end with a newline character [-Wnewline-eof]
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81745

--- Comment #14 from Vincent Lefèvre ---

(In reply to Andrew Pinski from comment #13)
> GCC removed the pedwarning on purpose (between GCC 4.1 and 4.4), see PR
> 14331 and PR 68994.

No, PR 14331 was just asking to remove the warning by default, not that `-pedantic` shouldn't warn: "I checked through the gcc manual, and didn't found any option to suppress the warning message "no newline at end of file"." And PR 68994 was complaining about the missing warning.

> https://gcc.gnu.org/legacy-ml/gcc-patches/2007-04/msg00651.html
> is the specific one about GCC's behavior and since it is considered
> undefined at compile time (not a runtime one) GCC behavior is correct so is
> clang

Even though GCC decides to add a newline to the logical file, so that the missing diagnostic can be regarded as correct, I think that an optional warning would be useful for portability. The message at https://gcc.gnu.org/legacy-ml/gcc-patches/2007-04/msg00651.html was suggesting to "add -W(no-)eof-newline". So why hasn't -Wno-eof-newline been added?
[Bug c/68994] GCC doesn't issue any diagnostic for missing end-of-line marker
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68994 Vincent Lefèvre changed: What|Removed |Added CC||vincent-gcc at vinc17 dot net --- Comment #2 from Vincent Lefèvre --- (In reply to l3x from comment #1) > Found a duplicate: #14331. > > *** This bug has been marked as a duplicate of bug 14331 *** PR 14331 is actually about the *opposite* behavior: the diagnostic has been removed.
[Bug analyzer/98447] incorrect -Wanalyzer-shift-count-overflow warning
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98447 --- Comment #7 from Vincent Lefèvre --- On https://godbolt.org/z/Yx7b1d this still fails with "x86-64 gcc (trunk)". Moreover, several releases are affected: 11.1, 11.2, 11.3, 12.1, 12.2.
[Bug c/108700] false _Noreturn error with -Werror=old-style-declaration
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108700 --- Comment #2 from Vincent Lefèvre --- And there's the same issue with "inline" instead of "_Noreturn" (these are the only two function specifiers).
[Bug c/108700] New: false _Noreturn error with -Werror=old-style-declaration
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108700

Bug ID: 108700
Summary: false _Noreturn error with -Werror=old-style-declaration
Product: gcc
Version: 12.2.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c
Assignee: unassigned at gcc dot gnu.org
Reporter: vincent-gcc at vinc17 dot net
Target Milestone: ---

With gcc-12 (Debian 12.2.0-14) 12.2.0 (but this error was already present in GCC 4.8):

cventin% echo 'int _Noreturn does_not_return (void) { for (;;) continue; }' | gcc-12 -xc -c -Werror=old-style-declaration -
<stdin>:1:1: error: ‘_Noreturn’ is not at beginning of declaration [-Werror=old-style-declaration]
cc1: some warnings being treated as errors

This error is incorrect. The grammar in ISO C17 is

function-definition:
    declaration-specifiers declarator declaration-list_opt compound-statement

declaration-specifiers:
    storage-class-specifier declaration-specifiers_opt
    type-specifier declaration-specifiers_opt
    type-qualifier declaration-specifiers_opt
    function-specifier declaration-specifiers_opt
    alignment-specifier declaration-specifiers_opt

where "int" is a type-specifier and "_Noreturn" is a function-specifier, so that they can appear in any order. Note that the line

int _Noreturn does_not_return (void) { for (;;) continue; }

comes from one of the autoconf tests, which fails as a consequence.
[Bug c/53232] No warning for main() without a return statement with -std=c99
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53232

--- Comment #18 from Vincent Lefèvre ---

(In reply to Jakub Jelinek from comment #17)
> Yeah, but warnings with high false positivity rates at least shouldn't be in
> -Wall.

Well, there already is -Wunused, which is included in -Wall (such warnings may typically be emitted due to #if and also in temporary code when debugging), and -Wsign-compare in C++.

Anyway, there is a first issue: the warning is nonexistent, even with -Wextra. There is a second issue: the warning is not emitted with -Wreturn-type when there is a call to main(). Solving these two issues alone would not yield a high false-positive rate with -Wall.

(That said, I think that developers should be encouraged to have an explicit "return" in main(); in particular, this is really easy to do and improves code readability, especially given the difference in behavior with other languages, such as shell scripts and Perl.)
[Bug c/53232] No warning for main() without a return statement with -std=c99
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53232 --- Comment #16 from Vincent Lefèvre --- (In reply to Jakub Jelinek from comment #15) > But much more often it is intentional than unintentional. That's the same thing for many kinds of warnings.
[Bug c/53232] No warning for main() without a return statement with -std=c99
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53232 --- Comment #14 from Vincent Lefèvre --- Anyway, as I said initially, the warning would be interesting even in C99+ mode, because the lack of a return statement may be unintentional. For instance, the developer may have forgotten a "return err;".
[Bug c/53232] No warning for main() without a return statement with -std=c99
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53232 --- Comment #11 from Vincent Lefèvre --- (In reply to Vincent Lefèvre from comment #8) > (In reply to comment #6) > > Er, if you want to find portability problems for people not using C99 then > > don't use -std=c99. Then -Wreturn-type warns about main. > > There are several reasons one may want to use -std=c99, e.g. to be able to > use C99 features when available (via autoconf and/or preprocessor tests). In any case, there does not seem to be a -std value to say that the program must be valid for all C90, C99, C11 and C17 standards (and the future C23 standard). That's what portability is about.
[Bug tree-optimization/108467] New: false positive -Wmaybe-uninitialized warning at -O1 or higher
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108467

Bug ID: 108467
Summary: false positive -Wmaybe-uninitialized warning at -O1 or higher
Product: gcc
Version: 12.2.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: tree-optimization
Assignee: unassigned at gcc dot gnu.org
Reporter: vincent-gcc at vinc17 dot net
Target Milestone: ---

Consider the following code, derived from MPFR's sub1sp.c (where the issue occurred since at least GCC 4.9.4 and the warning was silenced with the "sh = sh" trick via an INITIALIZED() macro):

extern long emin;
extern long emax;

int f(void);

int g(void)
{
  int sh, rb, sb;

  if (f())
    rb = sb = 0;
  else
    {
      sh = f();
      sb = f();
      rb = f();
    }

  (0 >= emin && 0 <= emax) || (f(), __builtin_unreachable(), 0);

  if (rb == 0 && sb == 0)
    return 0;
  else
    return sh;
}

With gcc-12 (Debian 12.2.0-14) 12.2.0, I get:

$ gcc-12 -O2 -Wmaybe-uninitialized -c tst.c
tst.c: In function ‘g’:
tst.c:23:12: warning: ‘sh’ may be used uninitialized [-Wmaybe-uninitialized]
   23 |     return sh;
      |            ^~
tst.c:8:7: note: ‘sh’ was declared here
    8 |   int sh, rb, sb;
      |       ^~

The warning also occurs at -O1 and -O3. It disappears if I slightly modify the code.

Note: During the code reduction, I also got the warning, but with a different location. However, the code was more complex, and I've already reported PR108466 about a location issue (where the -Wmaybe-uninitialized is correct). So I haven't reported an additional issue about the location.
[Bug c/108466] New: inconsistent -Wmaybe-uninitialized warning location
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108466

Bug ID: 108466
Summary: inconsistent -Wmaybe-uninitialized warning location
Product: gcc
Version: 12.2.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c
Assignee: unassigned at gcc dot gnu.org
Reporter: vincent-gcc at vinc17 dot net
Target Milestone: ---

With gcc-12 (Debian 12.2.0-14) 12.2.0, the -Wmaybe-uninitialized warning location depends on the declared type. This is inconsistent. To reproduce, consider a tst.c file:

int f(void);

int g(void)
{
  T a;

  if (f())
    a = f();
  if (f())
    return 0;
  else
    return a;
}

I get:

$ for i in int long; do echo; echo "Type $i"; gcc-12 -O2 -Wmaybe-uninitialized -DT=$i -c tst.c; done

Type int
tst.c: In function ‘g’:
tst.c:11:12: warning: ‘a’ may be used uninitialized [-Wmaybe-uninitialized]
   11 |     return a;
      |            ^
tst.c:4:5: note: ‘a’ was declared here
    4 |   T a;
      |     ^

Type long
tst.c: In function ‘g’:
tst.c:4:5: warning: ‘a’ may be used uninitialized [-Wmaybe-uninitialized]
    4 |   T a;
      |     ^
[Bug middle-end/106805] [13 Regression] Undue optimisation of floating-point comparisons
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106805 --- Comment #8 from Vincent Lefèvre --- Isn't it the same as PR56020, which is due to the fact that the STDC FENV_ACCESS pragma is not implemented and assumed to be OFF (PR34678)?
[Bug c/108128] missing -Wshift-overflow warning
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108128 --- Comment #1 from Vincent Lefèvre --- Well, with -pedantic, GCC also warns on "enum { A = 1 << 31 };".
[Bug c/108128] New: missing -Wshift-overflow warning
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108128

Bug ID: 108128
Summary: missing -Wshift-overflow warning
Product: gcc
Version: 12.2.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c
Assignee: unassigned at gcc dot gnu.org
Reporter: vincent-gcc at vinc17 dot net
Target Milestone: ---

Consider the following C program:

#include <stdio.h>

enum { A = 1 << 31 };

int main (void)
{
  printf ("%d\n", A);
  printf ("%d\n", 1 << 31);
  printf ("%d\n", 2 << 31);
  return 0;
}

In C, the 3 shifts have undefined behavior. The GCC 12 man page says

-Wshift-overflow=n
    These options control warnings about left shift overflows.

    -Wshift-overflow=1
        This is the warning level of -Wshift-overflow and is enabled by
        default in C99 and C++11 modes (and newer). This warning level does
        not warn about left-shifting 1 into the sign bit. (However, in C,
        such an overflow is still rejected in contexts where an integer
        constant expression is required.) No warning is emitted in C++20
        mode (and newer), as signed left shifts always wrap.

    -Wshift-overflow=2
        This warning level also warns about left-shifting 1 into the sign
        bit, unless C++14 mode (or newer) is active.

Nothing is said about the default, but I assume that this should be -Wshift-overflow=2 in C, because undefined behavior should be warned about. But with gcc-12 (Debian 12.2.0-10) 12.2.0, I get a warning only for 2 << 31:

cventin:~> /usr/bin/gcc-12 -std=c99 tst.c -o tst
tst.c: In function ‘main’:
tst.c:7:21: warning: result of ‘2 << 31’ requires 34 bits to represent, but ‘int’ only has 32 bits [-Wshift-overflow=]
    7 |   printf ("%d\n", 2 << 31);
      |                     ^~

BTW, according to the man page, gcc should warn on "enum { A = 1 << 31 };" even with -Wshift-overflow=1, but it doesn't. This is actually required by the standard, as constraint 6.6#4 is violated (the evaluation is not defined).
With the UB sanitizer (-fsanitize=undefined), running the program gives, as expected:

-2147483648
tst.c:6:21: runtime error: left shift of 1 by 31 places cannot be represented in type 'int'
-2147483648
tst.c:7:21: runtime error: left shift of 2 by 31 places cannot be represented in type 'int'
0

Note that the sanitizer does not emit an error for "enum { A = 1 << 31 };" since the issue occurs only at compilation (thus a warning is particularly important).
[Bug tree-optimization/107839] spurious "may be used uninitialized" warning while all uses are under "if (c)"
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107839

--- Comment #3 from Vincent Lefèvre ---

(In reply to Richard Biener from comment #2)
> it's loop invariant motion that hoists the v + v compute out of the loop
> and thus outside of its controlling condition. You can see it's careful
> to not introduce undefined overflow that is possibly conditionally
> executed only but it fails to consider the case of 'v' being conditionally
> uninitialized.
>
> It's very difficult to do the right thing here - it might be tempting to
> hoist the compute as
>
> if (c)
>   tem = v+v;
> while (1)
>   if (c)
>     f(tem);

Couldn't the -Wmaybe-uninitialized warning be disabled on hoisted code, so that the controlling condition wouldn't be needed? To make sure not to disable potential warnings, the information that v was used for tem should be kept together with tem in the loop. Something like ((void)v,tem), though GCC doesn't currently warn on that if v is uninitialized (but that's another issue that should be solved).

However...

> Maybe the simplest thing would be to never hoist v + v, or only
> hoist it when the controlling branch is not loop invariant.
>
> The original testcase is probably more "sensible", does it still have
> a loop invariant controlling condition and a loop invariant computation
> under that control?

In my tmd/binary32/hrcases.c file, there doesn't seem to be a loop invariant, so I'm wondering what the real cause is. The code looks like the following:

static inline double
cldiff (clock_t t1, clock_t t0)
{
  return (double) (t1 - t0) / CLOCKS_PER_SEC;
}

and in a function hrsearch(), whose mprog argument (named c above) is an integer that enables progress output when it is nonzero:

  if (mprog)
    {
      mctr = 0;
      nctr = 0;
      t0 = ti = clock ();
    }

  do
    {
      [...]
      if (mprog && ++mctr == mprog)
        {
          mctr = 0;
          tj = clock ();
          mpfr_fprintf (stderr, "[exponent %ld: %8.2fs %8.2fs %5lu / %lu]\n",
                        e, cldiff (tj, ti), cldiff (tj, t0), ++nctr, nprog);
          ti = tj;
        }
      [...]
    }
  while (mpfr_get_exp (x) < e + 2);

The warning I get is

In function ‘cldiff’,
    inlined from ‘hrsearch’ at hrcases.c:298:11,
    inlined from ‘main’ at hrcases.c:520:9:
hrcases.c:46:23: warning: ‘t0’ may be used uninitialized [-Wmaybe-uninitialized]
   46 |   return (double) (t1 - t0) / CLOCKS_PER_SEC;
      |                       ^
hrcases.c: In function ‘main’:
hrcases.c:128:11: note: ‘t0’ was declared here
  128 |   clock_t t0, ti, tj;
      |           ^~

So the operation on t0 is tj - t0, and as tj is set just before, I don't see how it can be used in a loop invariant. This can be simplified as follows:

int f (int);

void g (int mprog)
{
  int t0, ti, tj;

  if (mprog)
    t0 = ti = f(0);
  do
    if (mprog)
      {
        tj = f(0);
        f(tj - ti);
        f(tj - t0);
        ti = tj;
      }
  while (f(0));
}

and I get

tst.c: In function ‘g’:
tst.c:13:9: warning: ‘t0’ may be used uninitialized [-Wmaybe-uninitialized]
   13 |         f(tj - ti);
      |         ^~~~~~~~~~
tst.c:4:7: note: ‘t0’ was declared here
    4 |   int t0, ti, tj;
      |       ^~

BTW, the warning location is incorrect: I can't see t0 in "f(tj - ti);".
[Bug tree-optimization/80548] -Wmaybe-uninitialized false positive when an assignment is added
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80548 --- Comment #12 from Vincent Lefèvre --- (In reply to Jeffrey A. Law from comment #11) > As I said in my previous comment, the best way forward is to get those two > new instances filed as distinct bugs in BZ. See PR107838 and PR107839.
[Bug tree-optimization/106155] [12/13 Regression] spurious "may be used uninitialized" warning
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106155 --- Comment #10 from Vincent Lefèvre --- A similar bug (all uses of the variable are under some condition) with a simpler testcase I've just reported: PR107839.
[Bug tree-optimization/107839] New: spurious "may be used uninitialized" warning while all uses are under "if (c)"
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107839

Bug ID: 107839
Summary: spurious "may be used uninitialized" warning while all uses are under "if (c)"
Product: gcc
Version: 13.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: tree-optimization
Assignee: unassigned at gcc dot gnu.org
Reporter: vincent-gcc at vinc17 dot net
Target Milestone: ---

Consider

int f (int);

void g (int c)
{
  int v;

  if (c)
    v = f(0);
  while (1)
    if (c)
      f(v + v);
}

$ gcc-test -O -Wmaybe-uninitialized -c tst2.c
tst2.c: In function ‘g’:
tst2.c:4:7: warning: ‘v’ may be used uninitialized [-Wmaybe-uninitialized]
    4 |   int v;
      |       ^

All uses of v are under "if (c)", so the warning is incorrect. Note that replacing "v + v" by "v" makes the warning disappear.

This occurs with GCC 8.4.0 and above, up to at least 13.0.0 20220906 (experimental) from the master branch. No warnings with GCC 6.5.0 and below.

Note to myself (to check once this bug is fixed): this testcase is derived from tmd/binary32/hrcases.c (warning on variable t0).
[Bug tree-optimization/107838] New: spurious "may be used uninitialized" warning on variable initialized at the first iteration of a loop
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107838

Bug ID: 107838
Summary: spurious "may be used uninitialized" warning on variable initialized at the first iteration of a loop
Product: gcc
Version: 13.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: tree-optimization
Assignee: unassigned at gcc dot gnu.org
Reporter: vincent-gcc at vinc17 dot net
Target Milestone: ---

Consider

int f(void);

void g(int *t)
{
  int i, v;

  for (i = 0; i < 9; i++)
    {
      if (i == 0)
        v = f();
      if (v + t[i])
        f();
    }
}

$ gcc-test -O -Wmaybe-uninitialized -c tst3.c
tst3.c: In function ‘g’:
tst3.c:9:13: warning: ‘v’ may be used uninitialized [-Wmaybe-uninitialized]
    9 |       if (v + t[i])
      |           ~~^~
tst3.c:4:10: note: ‘v’ was declared here
    4 |   int i, v;
      |          ^

The variable v is initialized at the first iteration (i == 0). Therefore the warning is incorrect.

This occurs with GCC 4.8, 6.5.0, 8.4.0, 9.5.0, 12.2.0, and 13.0.0 20220906 (experimental) from the master branch. But there are no warnings with GCC 4.9, 5.5.0, 10.4.0 and 11.3.0.

Note to myself (to check once this bug is fixed): this testcase is derived from tmd/binary32/hrcases.c (warning on variable b).
[Bug tree-optimization/106754] compute_control_dep_chain over-estimates domain
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106754 Vincent Lefèvre changed: What|Removed |Added CC||vincent-gcc at vinc17 dot net --- Comment #3 from Vincent Lefèvre --- (In reply to CVS Commits from comment #1) > The patch also removes the bogus early exit from > uninit_analysis::init_use_preds, fixing a simplified version > of the PR106155 testcase. This commit also fixes the PR80548 testcase. However, I had built this testcase from more complex code, for which GCC still warns. So I'll have to find another simplified testcase for a new PR...
[Bug tree-optimization/80548] -Wmaybe-uninitialized false positive when an assignment is added
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80548

--- Comment #10 from Vincent Lefèvre ---

(In reply to Jeffrey A. Law from comment #9)
> These warnings are certainly sensitive to all kinds of things, so it's
> possible it's just gone latent. The only way to be sure would be to bisect
> all the work between gcc-12 and the trunk and pour over the dumps with a
> fine tooth comb. I would hazard a guess it was Aldy's backwards threader
> work, particularly around not bailing out too early for subpaths based on
> comments in the BZ, but one would have to bisect to be 100% sure.

The commit that made the warning disappear is actually the one fixing PR106754:

commit 0a4a2667dc115ca73b552fcabf8570620dfbe55f
Author: Richard Biener
Date:   2022-09-06 13:46:00 +0200

    tree-optimization/106754 - fix compute_control_dep_chain defect

    The following handles the situation of a loop exit along the control
    path to the PHI def or from there to the use in a different way,
    avoiding premature abort of the walks as noticed in the two cases
    where the exit is outermost (gcc.dg/uninit-pred-11.c) or wrapped in
    a condition that is on the path (gcc.dg/uninit-pred-12.c). Instead
    of handling such exits during recursion we now pick them up in the
    parent when walking post-dominators. That requires an additional
    post-dominator walk at the outermost level which is facilitated by
    splitting out the walk to a helper function and the existing wrapper
    added earlier.

    The patch also removes the bogus early exit from
    uninit_analysis::init_use_preds, fixing a simplified version
    of the PR106155 testcase.
[Bug tree-optimization/80548] -Wmaybe-uninitialized false positive when an assignment is added
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80548

--- Comment #8 from Vincent Lefèvre ---

Indeed, compared to GCC 12.2.0, the trunk no longer warns on the simple testcase I provided. However, I cannot see any change in the warnings on my original file (to myself: tmd/binary32/hrcases.c), except for the order of the warnings (on this file, I get 2 spurious -Wmaybe-uninitialized warnings, and they are now reversed). I'll try to provide another simple testcase. I'm wondering whether this bug is really fixed, or whether it just happens to have disappeared on the testcase because of a side effect of some other change in GCC, so that a small change to the testcase would make it reappear.
[Bug c/105499] inconsistency between -Werror=c++-compat and g++ in __extension__ block
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105499 --- Comment #8 from Vincent Lefèvre --- It is bad that __extension__ does two completely different things: 1. Disable warnings associated with GNU extensions, like ({ ... }). 2. Disable compatibility warnings that do not correspond to GNU extensions, like invalid conversions in C++.
[Bug c/105499] inconsistency between -Werror=c++-compat and g++ in __extension__ block
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105499

--- Comment #6 from Vincent Lefèvre ---

To be clear... I'm not sure about what kinds of compatibility warnings one can get, but it is OK to silence valid extensions, i.e. those that will not give an error. But invalid extensions, i.e. those that would give an error with the compiler implied by the compat option (like in the testcase), should not be silenced by __extension__.

The problem is that in my original testcase, __extension__ was used in order to silence the warning for the ({...}) construct, which is still valid with g++. But as a side effect, it also silences the warning for the conversion in "int *p = q;", which is invalid in C++ (and is actually *not* an extension, as it fails as shown in my bug report).
[Bug c/105499] inconsistency between -Werror=c++-compat and g++ in __extension__ block
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105499 --- Comment #5 from Vincent Lefèvre --- (In reply to Andrew Pinski from comment #4) > __extension__ disables all compatibility warnings. > > This is by design really as headers sometimes needs to be written using C > code and need to turn off these warnings. I don't understand why. If the code is designed only for C (i.e. it will not work with a C++ compiler), then the C++ compatibility option is not needed to test the code. Otherwise, the code is buggy, so the compatibility warning is useful.
[Bug c++/95148] -Wtype-limits always-false warning triggered despite comparison being avoided
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95148

Vincent Lefèvre changed:
           What    |Removed    |Added
           CC      |           |vincent-gcc at vinc17 dot net

--- Comment #2 from Vincent Lefèvre ---

Simpler code, also for C:

int f (void)
{
  unsigned int x = 5;
  return 0 && x < 0 ? 1 : 0;
}

Alternatively:

int f (void)
{
  unsigned int x = 5;
  if (0)
    return x < 0 ? 1 : 0;
  return 0;
}

With -Wtype-limits, GCC 12.2.0 warns on both.
[Bug target/106165] incorrect result when using inlined asm implementation of floor() on i686
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106165 Vincent Lefèvre changed: What|Removed |Added CC||vincent-gcc at vinc17 dot net --- Comment #6 from Vincent Lefèvre --- (In reply to xeioex from comment #5) > My question is more practical. For example while > `-fexcess-precision=standard` fixes the problem in GCC. But, I am left with > the same problem regarding other compilers. The need for -fexcess-precision=standard is due to a "bug" in GCC (more precisely, a non-conformance issue with the ISO C standard). If the other compilers conform to the C standard by default, you won't have to do anything. Note that even with -fexcess-precision=standard, you get double rounding, i.e. a first rounding to extended precision, then a second rounding to double precision. This is allowed by the C standard, but may break some algorithms or give results different from platforms with a single rounding.
[Bug bootstrap/105688] GCC 11.3 doesn't build with the GNU gold linker (version 2.37-27.fc36) 1.16: libstdc++.so.6: version `GLIBCXX_3.4.30' not found
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105688 --- Comment #39 from Vincent Lefèvre --- (In reply to Jonathan Wakely from comment #38) > (In reply to Vincent Lefèvre from comment #35) > > (I reported it in 2012, with Jonathan Nieder's patch to fix it, but after 10 > > years, there is still no reaction from the developers!) > > So don't use gold then. It is (was) installed by default on some machines. And users don't necessarily know that it is the cause of failures when building other software (like this GCC bug).
[Bug bootstrap/105688] GCC 11.3 doesn't build with the GNU gold linker (version 2.37-27.fc36) 1.16: libstdc++.so.6: version `GLIBCXX_3.4.30' not found
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105688 --- Comment #36 from Vincent Lefèvre --- An alternative solution: for programs that are known to potentially fail due to built libraries and LD_LIBRARY_PATH, GCC could define wrappers that clean up LD_LIBRARY_PATH before executing the real program.
[Bug bootstrap/105688] GCC 11.3 doesn't build with the GNU gold linker (version 2.37-27.fc36) 1.16: libstdc++.so.6: version `GLIBCXX_3.4.30' not found
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105688 --- Comment #35 from Vincent Lefèvre --- And since the title of this bug was changed to mention that this is related to the GNU gold linker, there is a runpath bug in this linker that might affect libtool (perhaps causing it to use LD_LIBRARY_PATH?): https://sourceware.org/bugzilla/show_bug.cgi?id=13764 (I reported it in 2012, with Jonathan Nieder's patch to fix it, but after 10 years, there is still no reaction from the developers!) So you may want to try this patch to see if this solves the issue.
[Bug bootstrap/105688] GCC 11.3 doesn't build with the GNU gold linker (version 2.37-27.fc36) 1.16: libstdc++.so.6: version `GLIBCXX_3.4.30' not found
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105688

--- Comment #33 from Vincent Lefèvre ---

(In reply to Andrew Pinski from comment #32)
> The runpath won't work because the libraries aren't installed yet.

This is what libtool does for GNU MPFR, and it works fine. For instance, when building test programs, I can see:

-Wl,-rpath -Wl,/home/vinc17/software/mpfr/src/.libs

so that it doesn't need to change LD_LIBRARY_PATH. (The test programs don't need to be installed, so that using the path to the build directory will not yield any issue, but AFAIK, if need be, libtool supports relinking of programs to be installed.)
[Bug bootstrap/105688] Cannot build GCC 11.3 on Fedora 36
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105688

--- Comment #26 from Vincent Lefèvre ---

(In reply to Jonathan Wakely from comment #23)
> (In reply to Vincent Lefèvre from comment #21)
> > I suppose that LD_LIBRARY_PATH is set because it is needed in order to use
> > built libraries.
>
> It is not needed except when running the testsuite, and should not be set.

Indeed, for most projects. IIRC, for some project (was it Subversion?), using built libraries was needed also during a part of the build; but that said, since this is rather specific, this is probably not a reason to change LD_LIBRARY_PATH globally, in case this is what libtool does.

(In reply to Sam James from comment #24)
> (In reply to Vincent Lefèvre from comment #21)
> > I have a similar issue under Debian/unstable with GCC old of a few months,
> > where in x86_64-pc-linux-gnu/libstdc++-v3/po, msgfmt fails with an error
> > like
> >
> > /usr/bin/msgfmt:
> > /home/vlefevre/software/gcc-build/x86_64-pc-linux-gnu/libstdc++-v3/src/.libs/
> > libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by
> > /usr/lib/x86_64-linux-gnu/libicuuc.so.71)
>
> This issue is, I think, slightly different: https://bugs.gentoo.org/843119.

I think that the cause is the same (and the fix should be the same): LD_LIBRARY_PATH is changed somewhere to point to some build directories. As mentioned in https://bugs.gentoo.org/843119#c10 this shouldn't be done there.

> It might end up being related but I've only ever seen *that* issue in the
> context of installing libstdc++ when doing the gcc build when NLS is enabled
> and using a newer GCC to do the build of an older version.

While disabling NLS (Gentoo's fix) would be OK for most testing of older GCC, I doubt that this is the right solution.
[Bug bootstrap/105688] Cannot build GCC 11.3 on Fedora 36
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105688 --- Comment #21 from Vincent Lefèvre --- I have a similar issue under Debian/unstable with a GCC snapshot a few months old, where in x86_64-pc-linux-gnu/libstdc++-v3/po, msgfmt fails with an error like

/usr/bin/msgfmt: /home/vlefevre/software/gcc-build/x86_64-pc-linux-gnu/libstdc++-v3/src/.libs/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /usr/lib/x86_64-linux-gnu/libicuuc.so.71)

A msgfmt wrapper with "printenv LD_LIBRARY_PATH" shows that LD_LIBRARY_PATH is set to

/home/vlefevre/software/gcc-build/x86_64-pc-linux-gnu/libstdc++-v3/src/.libs:/home/vlefevre/software/gcc-build/x86_64-pc-linux-gnu/libsanitizer/.libs:/home/vlefevre/software/gcc-build/x86_64-pc-linux-gnu/libvtv/.libs:/home/vlefevre/software/gcc-build/x86_64-pc-linux-gnu/libssp/.libs:/home/vlefevre/software/gcc-build/x86_64-pc-linux-gnu/libgomp/.libs:/home/vlefevre/software/gcc-build/x86_64-pc-linux-gnu/libitm/.libs:/home/vlefevre/software/gcc-build/x86_64-pc-linux-gnu/libatomic/.libs:/home/vlefevre/software/gcc-build/./gcc:/home/vlefevre/software/gcc-build/./prev-gcc

This bug is probably still present in master: I do not get any failure, probably because the built libstdc++.so.6 is recent enough to be similar to the one currently provided by Debian/unstable, but the wrapper still shows that LD_LIBRARY_PATH is set to the above string (or something similar), so the bug could reappear when the system libstdc++.so.6 gains something new. I suppose that LD_LIBRARY_PATH is set because it is needed in order to use the built libraries. But perhaps, instead, a run path should be used together with --disable-new-dtags (so that it overrides the user's LD_LIBRARY_PATH).
[Bug tree-optimization/106155] [12/13 Regression] spurious "may be used uninitialized" warning
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106155 --- Comment #1 from Vincent Lefèvre --- I detected the issue on tests/tfpif.c with the upgrade of Debian's package gcc-snapshot from 1:20220126-1 to 1:20220630-1 (it doesn't occur on tests/tfpif.c with gcc-snapshot 1:20220126-1). However, the simplified testcase I've provided fails with gcc-snapshot 1:20220126-1.
[Bug tree-optimization/106155] New: [12/13 Regression] spurious "may be used uninitialized" warning
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106155 Bug ID: 106155 Summary: [12/13 Regression] spurious "may be used uninitialized" warning Product: gcc Version: 13.0 Status: UNCONFIRMED Severity: normal Priority: P3 Component: tree-optimization Assignee: unassigned at gcc dot gnu.org Reporter: vincent-gcc at vinc17 dot net Target Milestone: ---

With "-O -Wmaybe-uninitialized", I get a spurious "may be used uninitialized" warning on the following code on an x86_64 Debian/unstable machine:

int *e;
int f1 (void);
void f2 (int);
long f3 (void *, long, int *);
void f4 (void *);
int *fh;

void tst (void)
{
  int status;
  unsigned char badData[3][3] = { { 7 }, { 16 }, { 23 } };
  int badDataSize[3] = { 1, 1, 1 };
  int i;

  for (i = 0; i < 3; i++)
    {
      int emax;
      if (i == 2)
        emax = f1 ();
      status = f3 (&badData[i][0], badDataSize[i], fh);
      if (status)
        {
          f1 ();
          f1 ();
          f1 ();
        }
      f4 (fh);
      *e = 0;
      f1 ();
      if (i == 2)
        f2 (emax);
    }
}

Note that even a small change such as changing "long" to "int" as the second parameter of f3 makes the warning disappear.

$ gcc-12 -O -Wmaybe-uninitialized -c -o tfpif.o tfpif.c
tfpif.c: In function ‘tst’:
tfpif.c:31:9: warning: ‘emax’ may be used uninitialized [-Wmaybe-uninitialized]
   31 |         f2 (emax);
      |         ^
tfpif.c:17:11: note: ‘emax’ was declared here
   17 |       int emax;
      |           ^~~~
$ gcc-12 --version
gcc-12 (Debian 12.1.0-5) 12.1.0
[...]
$ gcc-snapshot -O -Wmaybe-uninitialized -c -o tfpif.o tfpif.c
tfpif.c: In function 'tst':
tfpif.c:31:9: warning: 'emax' may be used uninitialized [-Wmaybe-uninitialized]
   31 |         f2 (emax);
      |         ^
tfpif.c:17:11: note: 'emax' was declared here
   17 |       int emax;
      |           ^~~~
$ gcc-snapshot --version
gcc (Debian 20220630-1) 13.0.0 20220630 (experimental) [master r13-1359-gaa1ae74711b]
[...]

No such issue with:
gcc-9 (Debian 9.5.0-1) 9.5.0
gcc-10 (Debian 10.4.0-1) 10.4.0
gcc-11 (Debian 11.3.0-4) 11.3.0

I detected this issue by testing GNU MPFR. The above code is derived from "tests/tfpif.c", function check_bad.
[Bug other/105548] New: -frounding-math description contains a misleading sentence
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105548 Bug ID: 105548 Summary: -frounding-math description contains a misleading sentence Product: gcc Version: 13.0 Status: UNCONFIRMED Severity: normal Priority: P3 Component: other Assignee: unassigned at gcc dot gnu.org Reporter: vincent-gcc at vinc17 dot net Target Milestone: ---

https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html has

-frounding-math
    Disable transformations and optimizations that assume default floating-point rounding behavior. This is round-to-zero for all floating point to integer conversions, and round-to-nearest for all other arithmetic truncations. This option should be specified for programs that change the FP rounding mode dynamically, or that may be executed with a non-default rounding mode. This option disables constant folding of floating-point expressions at compile time (which may be affected by rounding mode) and arithmetic transformations that are unsafe in the presence of sign-dependent rounding modes.

However, the sentence "This is round-to-zero for all floating point to integer conversions, and round-to-nearest for all other arithmetic truncations." is misleading, because even code with no rounding in the source code may be affected (e.g. PR102498, due to code generated by GCC to load particular floating-point constants). This sentence doesn't bring anything useful and should be removed (in short, if some part of the code runs in a non-default rounding mode, then -frounding-math should be used, whatever the operations involved in the source code). Note also that "round-to-zero for all floating point to integer conversions" probably doesn't concern the user (or are there languages with dynamic FP-to-integer rounding modes?). Possibly add "(round-to-nearest)" at the end of the first sentence.
[Bug target/102498] [9/10 Regression] Long double constant and non-default rounding mode on x86
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102498 --- Comment #14 from Vincent Lefèvre --- Sorry, I wasn't using -frounding-math (which matters to have the optimization disabled).
[Bug target/102498] [9/10 Regression] Long double constant and non-default rounding mode on x86
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102498 --- Comment #13 from Vincent Lefèvre --- Strange. I still get this bug with gcc-11 (Debian 11.3.0-1) 11.3.0.
[Bug c++/105499] inconsistency between -Werror=c++-compat and g++ in __extension__ block
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105499 --- Comment #2 from Vincent Lefèvre --- (In reply to Eric Gallager from comment #1) > This is probably another one of those issues with how the preprocessor works > in C++ mode in general; see for example bug 71003 and bug 87274 Note, however, that bug 71003 and bug 87274 are about parsing (complaints about an escape sequence and numeric literals), while in this PR, the error occurs at the semantic level (type issues). Does the preprocessor know about types?
[Bug c++/105499] New: inconsistency between -Werror=c++-compat and g++ in __extension__ block
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105499 Bug ID: 105499 Summary: inconsistency between -Werror=c++-compat and g++ in __extension__ block Product: gcc Version: 11.3.0 Status: UNCONFIRMED Severity: normal Priority: P3 Component: c++ Assignee: unassigned at gcc dot gnu.org Reporter: vincent-gcc at vinc17 dot net Target Milestone: ---

Consider

int *f (void *q)
{
  return __extension__ ({ int *p = q; p; });
}

With GCC 11.3.0 under Debian (Debian package), I get the following:

$ gcc -Werror=c++-compat -c tst.c
$ g++ -c tst.c
tst.c: In function ‘int* f(void*)’:
tst.c:3:36: error: invalid conversion from ‘void*’ to ‘int*’ [-fpermissive]
    3 |   return __extension__ ({ int *p = q; p; });
      |                                       ^
      |                                       |
      |                                       void*

So there are no errors with "gcc -Werror=c++-compat", but an error with g++. This is not consistent. Either this is regarded as a valid extension in C++, so that both should succeed, or this is not valid C++ code even under __extension__, so that both should fail. Same issue with various GCC versions from 4.9 to 11.3.0. AFAIK, the purpose of -Wc++-compat is to test whether code would still compile when replacing C compilation by C++ (there might be false positives or false negatives, but this should not be the case with the above example). FYI, I got the above issue while testing GNU MPFR (tested with -Werror=c++-compat first, and with g++ a bit later in a more extensive test).
[Bug tree-optimization/31178] VRP can infer a range for b in a >> b and a << b
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=31178 --- Comment #19 from Vincent Lefèvre --- (In reply to rguent...@suse.de from comment #18) > Sure, if that's what the precision is used for. The message from Andrew > sounded like 'I want the precision for the shift operand but let me > just use that of the shifted anyway' Andrew should clarify. From what I understand, he does not want the precision for the shift operand (right operand), but an upper bound (this is what this bug is about). And this upper bound is deduced from the precision of the shifted element (promoted left operand).
[Bug tree-optimization/31178] VRP can infer a range for b in a >> b and a << b
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=31178 --- Comment #17 from Vincent Lefèvre --- (In reply to Richard Biener from comment #16) > Note for shifts the precision of the shift operand does not have to match > that of the shifted operand. In your case you have vector << scalar, so you > definitely want to look at scalars precision when deriving a range for > scalar, _not_ at element_precision of vector (or the LHS)! I'm not sure I understand what you mean, but the C standard says: The integer promotions are performed on each of the operands. The type of the result is that of the promoted left operand. If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined. So, what matters is the type of the promoted *left* operand (corresponding to vector above).
[Bug middle-end/56281] missed VRP optimization on i for signed i << n from undefined left shift in ISO C
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56281 --- Comment #5 from Vincent Lefèvre --- I've clarified the bug title to say that this is a range on the first operand.
[Bug middle-end/26374] Compile failure on long double
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=26374 --- Comment #21 from Vincent Lefèvre --- (In reply to Jakub Jelinek from comment #16) > As for constant folding, even with double double gcc is able to fold some > constant arithmetics in that format, but because the emulation is only > approximate (it pretends it is 106-bit precision format while in reality it > is variable precision up to some thousands depending on the exact values). > As has been said elsewhere, the emulation would be implementable if gcc > handled double double in all the arithmetics as a pair of doubles with all > the rules for it. But e.g. mpfr/libmpc isn't able to do something like > that, so we'd need to compute everything with much bigger precision etc. Well, the C standard does not require correct rounding, and while correct rounding is important for the IEEE formats, it is rather useless for the double-double format, whose goal was just to provide more precision than double while remaining rather fast (compared to quad emulation). The main drawback would be that results could differ depending on whether an FP expression is evaluated at run time or at compile time, but unless users seek to control everything (e.g. with IEEE formats), they should get used to that (FYI, the same kind of issue arises with the contraction of FP expressions, such as FMA generation from mul-add, which GCC enables by default). So, in short, doing the compile-time evaluation at 106-bit precision or more would be acceptable IMHO, at least better than a compiler error. Note: Even though double-double can be very interesting as a compromise between performance and accuracy, there exist various algorithms, and which algorithm should be chosen depends on the context, which only the author of the program can know in general. Thus it was a bad idea to implement double-double as a native FP type (here, long double); instead, the selection of the algorithms should be left to the developer.
So the switch to IEEE quad is a good thing. But for how long will old ABIs be around?
[Bug sanitizer/104690] UBSan does not detect undefined behavior on function without a specified return value
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104690 --- Comment #2 from Vincent Lefèvre --- (In reply to Jakub Jelinek from comment #1) > It requires that the callee tells the caller that it reached end of non-void > function without return and the callee checks if the value is actually used > there. Note that the rule makes sense only within the same translation unit (otherwise, this is probably out of the scope of the standard, since functions may be written in different languages). So I think that part of the check between the caller and the callee can be done at compile time.
[Bug c/93432] variable is used uninitialized, but gcc shows no warning
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93432 Vincent Lefèvre changed: What|Removed |Added CC||vincent-gcc at vinc17 dot net --- Comment #6 from Vincent Lefèvre --- (In reply to Manuel López-Ibáñez from comment #4) > It warns in gcc 10.1. It may good idea to add this one as a testcase, since > it seems it got fixed without noticing. Has this really been fixed, or does it work now just by chance? This looks like PR18501, where the concerned variable is initialized at only one place in the loop. Here, "z = 1" is followed by "z = z + 1", which is equivalent to "z = 2". But if I make this change in the code, the warning disappears (tested with gcc-12 (Debian 12-20220222-1) 12.0.1 20220222 (experimental) [master r12-7325-g2f59f067610] and -O1, -O2, -O3).
[Bug sanitizer/104690] New: UBSan does not detect undefined behavior on function without a specified return value
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104690 Bug ID: 104690 Summary: UBSan does not detect undefined behavior on function without a specified return value Product: gcc Version: 12.0 Status: UNCONFIRMED Severity: normal Priority: P3 Component: sanitizer Assignee: unassigned at gcc dot gnu.org Reporter: vincent-gcc at vinc17 dot net CC: dodji at gcc dot gnu.org, dvyukov at gcc dot gnu.org, jakub at gcc dot gnu.org, kcc at gcc dot gnu.org, marxin at gcc dot gnu.org Target Milestone: ---

Consider the following C code:

#include <stdio.h>

static int f (void)
{
}

int main (void)
{
  printf ("%d\n", f ());
  return 0;
}

According to ISO C17 6.9.1p12, the behavior is undefined: "If the } that terminates a function is reached, and the value of the function call is used by the caller, the behavior is undefined." I don't know what "used by the caller" means exactly, but in the above code, the value is clearly used, since it is printed. However, when one compiles it with "gcc -std=c17 -fsanitize=undefined" (with or without -O), running the code does not trigger an error. (Well, I hope that UBSan doesn't think that the value isn't necessarily used because the printf may fail before printing the value.) Tested with gcc-12 (Debian 12-20220222-1) 12.0.1 20220222 (experimental) [master r12-7325-g2f59f067610] and some earlier versions.

Note: with g++, one gets a "runtime error: execution reached the end of a value-returning function without returning a value" as expected.
[Bug gcov-profile/104677] Please update documentation about the name of the .gcda files
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104677 Vincent Lefèvre changed: What|Removed |Added Summary|With -fprofile-arcs, the|Please update documentation |name of the .gcda file is |about the name of the .gcda |incorrect |files --- Comment #2 from Vincent Lefèvre --- OK. The patch should have updated the -fprofile-arcs documentation. BTW, @file{@var{sourcename}.gcda} appears a couple of times in invoke.texi (for -fbranch-probabilities and for -fprofile-dir). This should be updated too.
[Bug gcov-profile/104677] New: With -fprofile-arcs, the name of the .gcda file is incorrect
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104677 Bug ID: 104677 Summary: With -fprofile-arcs, the name of the .gcda file is incorrect Product: gcc Version: unknown Status: UNCONFIRMED Severity: normal Priority: P3 Component: gcov-profile Assignee: unassigned at gcc dot gnu.org Reporter: vincent-gcc at vinc17 dot net CC: marxin at gcc dot gnu.org Target Milestone: --- Under Debian/unstable, if I do "gcc foo.c -o foo2 -fprofile-arcs", the name of the .gcda file is "foo2-foo.gcda" instead of "foo.gcda". The behavior is correct until GCC 10.3.0, but it is incorrect with GCC 11.2.0 and gcc-12 (Debian 12-20220222-1) 12.0.1 20220222 (experimental) [master r12-7325-g2f59f067610]. The current gcc/doc/invoke.texi documentation for -fprofile-arcs contains: Each object file's @var{auxname} is generated from the name of the output file, if explicitly specified and it is not the final executable, otherwise it is the basename of the source file. but this has not changed since GCC 9 at least. Here, foo2 is the final executable (but why the word "final"?), so that one should be in the "otherwise" case.
[Bug tree-optimization/24021] VRP does not work with floating points
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=24021 --- Comment #18 from Vincent Lefèvre --- I'm wondering whether it is possible to check on actual code what is needed. For instance, assume that you have a program that always produces the same results, e.g. by running it over a fixed dataset. GCC could collect some information about actual FP values (a bit like profile generation). Then check what benefit you can get by using these data at compile time (contrary to optimization with profile use, you assume here that the obtained information is necessarily valid, which is true as long as the program is run on the chosen dataset). The difficulty is to find whether some benefit can be obtained by VRP, but this should give an upper bound on the speedup you can hope for. So perhaps this can give some useful information about what to focus on.
[Bug tree-optimization/24021] VRP does not work with floating points
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=24021 --- Comment #14 from Vincent Lefèvre --- (In reply to Jakub Jelinek from comment #11) > And also take into account different rounding modes if > user wants that to be honored. And that would eliminate the need to consider the possibility of double rounding in case of intermediate extended precision (as with x87).
[Bug tree-optimization/24021] VRP does not work with floating points
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=24021 --- Comment #10 from Vincent Lefèvre --- (In reply to Vincent Lefèvre from comment #9) > Subnormals might also need to be considered as special cases: "Whether and > in what cases subnormal numbers are treated as zeros is implementation > defined." will be added to C23 (some behaviors are dictated by the hardware, > e.g. ARM in some non-IEEE configurations), but I've asked for clarification > in the CFP mailing-list. Some details in http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2797.htm
[Bug tree-optimization/24021] VRP does not work with floating points
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=24021 --- Comment #9 from Vincent Lefèvre --- (In reply to Aldy Hernandez from comment #6) > As I've mentioned, I'm hoping some floating expert can take this across to > goal line, as my head will start spinning as soon as we start talking about > NANs and such. The range-op work will likely require floating specialized > knowledge. Subnormals might also need to be considered as special cases: "Whether and in what cases subnormal numbers are treated as zeros is implementation defined." will be added to C23 (some behaviors are dictated by the hardware, e.g. ARM in some non-IEEE configurations), but I've asked for clarification in the CFP mailing-list.