[Bug target/97457] [10/11 Regression] SVE: wrong code since r10-4752-g2d56600c
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97457

--- Comment #4 from CVS Commits ---
The master branch has been updated by Richard Sandiford:

https://gcc.gnu.org/g:54ef7701a9dec8c923a12d1983f8a051ba88a7b9

commit r11-4495-g54ef7701a9dec8c923a12d1983f8a051ba88a7b9
Author: Richard Sandiford
Date:   Wed Oct 28 19:05:49 2020 +

    value-range: Give up on POLY_INT_CST ranges [PR97457]

    This PR shows another problem with calculating value ranges for
    POLY_INT_CSTs.  We have:

        ivtmp_76 = ASSERT_EXPR POLY_INT_CST [9, 4294967294]>

    where the VQ coefficient is unsigned but is effectively acting as a
    negative number.  We wrongly give the POLY_INT_CST the range:

        [9, INT_MAX]

    and things go downhill from there: later iterations of the unrolled
    epilogue are wrongly removed as dead.

    I guess this is the final nail in the coffin for doing VRP on
    POLY_INT_CSTs.  For other similarly exotic testcases we could have
    overflow for any coefficient, not just those that could be treated
    as contextually negative.  Testing TYPE_OVERFLOW_UNDEFINED doesn't
    seem like an option because we couldn't handle warn_strict_overflow
    properly.  At this stage we're just recording a range that might or
    might not lead to strict-overflow assumptions later.

    It still feels like we should be able to do something here, but for
    now removing the code seems safest.  It's also telling that there
    are no testsuite failures on SVE from doing this.

    gcc/
            PR tree-optimization/97457
            * value-range.cc (irange::set): Don't decay POLY_INT_CST
            ranges to integer ranges.

    gcc/testsuite/
            PR tree-optimization/97457
            * gcc.dg/vect/pr97457.c: New test.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97457

rsandifo at gcc dot gnu.org changed:

           What    |Removed                       |Added
----------------------------------------------------------------------------
 Ever confirmed    |0                             |1
 Assignee          |unassigned at gcc dot gnu.org |rsandifo at gcc dot gnu.org
 CC                |                              |rsandifo at gcc dot gnu.org
 Status            |UNCONFIRMED                   |ASSIGNED
 Last reconfirmed  |                              |2020-10-27

--- Comment #3 from rsandifo at gcc dot gnu.org ---
Ugh.  This is yet another problem with calculating value ranges for
POLY_INT_CSTs.  We have:

    ivtmp_76 = ASSERT_EXPR POLY_INT_CST [9, 4294967294]>

where the VQ coefficient is unsigned but is effectively acting as a negative
number.  We wrongly give the POLY_INT_CST the range:

    [9, INT_MAX]

and things go downhill from there.

I guess this is the final nail in the coffin for doing VRP on POLY_INT_CSTs.
:-(  For other similarly exotic testcases we could have overflow for any
coefficient, not just those that could be treated as contextually negative.

Let's see what the fallout is from removing the code...
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97457

--- Comment #2 from Alex Coplan ---
For the similar testcase:

    long a;
    short b;
    signed char c(char d, char e) { return d + e; }
    int main(void) {
      a = -30;
      for (; a < 24; a = c(a, 5)) {
        short *f = &b;
        (*f)--;
      }
      if (b != -11)
        __builtin_abort();
    }

we fail to assemble it after r10-4752.  This is fixed by
r10-5304-g30f8bf3d6c072a8fce14e8a003dff485a9068a97, but we have wrong code
thereafter.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97457

--- Comment #1 from Alex Coplan ---
To be clear, the second beq .L8, which is in the body of the main loop, is not
taken either in the execution described here.  The lack of a comment there
might have suggested otherwise.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97457

Alex Coplan changed:

           What        |Removed |Added
----------------------------------------------------------------
 Known to fail          |        |11.0
 Keywords               |        |wrong-code
 Target                 |        |aarch64
 Target Milestone       |---     |10.3
 CC                     |        |richard.sandiford at arm dot com