[Bug testsuite/113226] [14 Regression] testsuite/std/ranges/iota/max_size_type.cc fails for cris-elf after r14-6888-ga138b99646a555
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113226

Richard Biener changed:

           What    |Removed |Added
----------------------------------
   Target Milestone|---     |14.0
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113226

--- Comment #3 from Hans-Peter Nilsson ---
(In reply to Patrick Palka from comment #1)
> Huh, how bizarre.

Indeed. I'm *not* ruling out an actual gcc bug. Whether it's in the target
or the middle-end this time, I dare not guess; there are too few posts.

JFTR, I already mentioned this in the gcc-patches post: the only posts I
see on gcc-testresults@ that include r14-6888-ga138b99646a555 are for
64-bit targets with "-m32" multilibs, and I don't trust those to treat
that hw_type the same.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113226

--- Comment #2 from Patrick Palka ---
(In reply to Patrick Palka from comment #1)
> Huh, how bizarre.
>
> > i == 1, j == -100, i*j == 4294967196, max_type(i) == 1, max_type(i)*j ==
> > -100
>
> Here i and j are just ordinary 'long long', so I don't get why i*j is
> 4294967196 instead of -100?

Everything else, in particular that int64_t(max_type(i)*j) is -100, seems
correct/expected to me. FWIW that expression computes the product of the
corresponding promoted/sign-extended 65-bit-precision values, and the
overall check is analogous to

  int32_t i = 1, j = -100;
  assert (int64_t(i*j) == int64_t(i)*j);

except the two precisions are 64/65 bits instead of 32/64 bits. (When
shorten_p is true, the overall check is analogous to

  assert (i*j == int32_t(int64_t(i)*j));

instead.)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113226

Patrick Palka changed:

           What    |Removed |Added
----------------------------------
                 CC|        |ppalka at gcc dot gnu.org

--- Comment #1 from Patrick Palka ---
Huh, how bizarre.

> i == 1, j == -100, i*j == 4294967196, max_type(i) == 1, max_type(i)*j ==
> -100

Here i and j are just ordinary 'long long', so I don't get why i*j is
4294967196 instead of -100?