[Bug target/113281] [14] RISC-V rv64gcv_zvl256b vector: Runtime mismatch with rv64gc
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113281

--- Comment #5 from JuzheZhong ---
(In reply to JuzheZhong from comment #4)
> Confirmed reduced case:
>
> #include <assert.h>
> unsigned char a;
>
> int main() {
>   short b = a = 0;
>   for (; a != 19; a++)
>     if (a)
>       b = 32872 >> a;
>
>   assert (b == 0);
> }
>
> with -fno-vect-cost-model -march=rv64gcv -O3:
>
> https://godbolt.org/z/joGb3e9Eb
>
> The run also fails: assertion "b == 0" failed: file "bug.c", line 10,
> function: main
>
> I suspect ARM SVE has the same failure.
>
> Hi, Andrew. Could you test this case on ARM to see whether ARM has the
> same issue as RISC-V?

The vect dump tree is quite similar between ARM and RISC-V.
--- Comment #4 from JuzheZhong ---
Confirmed reduced case:

#include <assert.h>
unsigned char a;

int main() {
  short b = a = 0;
  for (; a != 19; a++)
    if (a)
      b = 32872 >> a;

  assert (b == 0);
}

with -fno-vect-cost-model -march=rv64gcv -O3:

https://godbolt.org/z/joGb3e9Eb

The run also fails: assertion "b == 0" failed: file "bug.c", line 10,
function: main

I suspect ARM SVE has the same failure.

Hi, Andrew. Could you test this case on ARM to see whether ARM has the same
issue as RISC-V?
--- Comment #3 from JuzheZhong ---
I think there are 2 issues here:

1. We should adjust the cost model so the loop vectorizer rejects this
   unprofitable vectorization. That should be done in the RISC-V backend.
2. We should fix the wrong-code bug that shows up with -fno-vect-cost-model.

I think the first issue is simpler than the second. And I strongly suspect
the second one is not a RISC-V bug but a middle-end bug.
--- Comment #2 from Robin Dapp ---
Confirmed. Funny, we shouldn't vectorize this at all but really optimize it
to "return 0". The costing might be questionable, but we also haven't
optimized away the loop when comparing costs. Disregarding that, the
vectorization should of course still be correct.

The vect output doesn't really make sense to me, but I haven't looked very
closely yet:

  _177 = .SELECT_VL (2, POLY_INT_CST [16, 16]);
  vect_patt_82.18_166 = (vector([16,16]) unsigned short) { 17, 18, 19, ... };
  vect_patt_84.19_168 = MIN_EXPR ;
  vect_patt_85.20_170 = { 32872, ... } >> vect_patt_84.19_168;
  vect_patt_87.21_171 = VIEW_CONVERT_EXPR(vect_patt_85.20_170);
  _173 = _177 + 18446744073709551615;
  # RANGE [irange] short int [0, 16436] MASK 0x7fff VALUE 0x0
  _174 = .VEC_EXTRACT (vect_patt_87.21_171, _173);

vect_patt_85.20_170 should be all zeros and then we'd just vec_extract a 0
and return that. However, 32872 >> 15 == 1, so we return 1.
Patrick O'Neill changed:

           What    |Removed     |Added
--------------------------------------------------------------------
           Keywords|            |wrong-code
                 CC|            |juzhe.zhong at rivai dot ai,
                   |            |patrick at rivosinc dot com,
                   |            |rdapp at gcc dot gnu.org,
                   |            |vineetg at rivosinc dot com
             Target|            |riscv

--- Comment #1 from Patrick O'Neill ---
Godbolt: https://godbolt.org/z/efPhqzcr5