On Wed, Oct 21, 2015 at 11:39 AM, Bin.Cheng <amker.ch...@gmail.com> wrote:
> On Wed, Oct 21, 2015 at 5:15 PM, Richard Biener
> <richard.guent...@gmail.com> wrote:
>> On Wed, Oct 21, 2015 at 6:46 AM, Bin Cheng <bin.ch...@arm.com> wrote:
>>> Hi,
>>> As analyzed in PR67921, I think the issue is caused by fold_binary_loc,
>>> which folds:
>>>   4 - (sizetype) &c - (sizetype) ((int *) p1_8(D) + ((sizetype) a_23 * 24 + 4))
>>> into the below form:
>>>   ((sizetype) -((int *) p1_8(D) + ((sizetype) a_23 * 24 + 4)) - (sizetype) &c) + 4
>>>
>>> Note the minus sizetype expression is folded into a negated pointer
>>> expression, which seems incorrect.  Apart from this, the direct cause of
>>> the ICE is an oversight in CHREC.  In general CHREC supports NEGATE_EXPR;
>>> the only problem is that it uses pointer type, rather than sizetype, for
>>> CHREC_RIGHT when building a pointer-typed CHREC.
>>>
>>> This simple patch fixes the ICE.  Bootstrapped and tested on x86 & x86_64.
>>>
>>> Is it OK?
>>
>> Hmm, I think not - we shouldn't ever get pointer-typed multiplications.
>> Did you track down which fold transform is the bogus one (I agree the
>> result above is bogus)?  It's probably related to STRIP_NOPS stripping
>> sizetype conversions from pointers, so we might get split_tree to build
>> such a negate.  Note that split_tree strips (sign!) nops itself and thus
>> should probably simply receive op0 and op1 instead of arg0 and arg1.
> Yes, I was going to send a similar patch for the fold stuff.  I just
> thought it might be useful to support POINTER chrec in *_multiply.  I
> will drop this and let you test yours.
The patch regresses

Running target unix/
FAIL: gcc.dg/pr52578.c scan-tree-dump-times original "return 2;" 2
FAIL: gcc.dg/strict-overflow-5.c scan-tree-dump optimized "return 3"
FAIL: gcc.dg/tree-ssa/loop-15.c scan-tree-dump-times optimized "\\\\+" 0
FAIL: gcc.dg/tree-ssa/loop-15.c scan-tree-dump-times optimized "n_. \\\\* n_." 1
FAIL: gcc.dg/vect/vect-align-3.c -flto -ffat-lto-objects scan-tree-dump-not vect "vect_do_peeling_for_loop_bound"
FAIL: gcc.dg/vect/vect-align-3.c scan-tree-dump-not vect "vect_do_peeling_for_loop_bound"

and for -m32

FAIL: gcc.target/i386/pr67985-3.c scan-assembler movd[ \\t]%xmm[0-7], %eax
FAIL: gcc.target/i386/pr67985-3.c scan-assembler mulss

as well.  So it requires some more complicated handling.  It might need
the negation code in split_tree to be performed only implicitly during
associate_trees (then on the correct type).  But that requires (quite)
some refactoring.  I suggest a struct tree_with_negate { tree t; bool
negate_p; } to pass this along.

Not for me at this point (looks like I have quite a few patches to
review ;))

Richard.

> Thanks,
> bin
>>
>> I'm testing
>>
>> @@ -9505,8 +9523,8 @@ fold_binary_loc (location_t loc,
>>           then the result with variables.  This increases the chances of
>>           literals being recombined later and of generating relocatable
>>           expressions for the sum of a constant and literal.  */
>> -      var0 = split_tree (arg0, code, &con0, &lit0, &minus_lit0, 0);
>> -      var1 = split_tree (arg1, code, &con1, &lit1, &minus_lit1,
>> +      var0 = split_tree (op0, code, &con0, &lit0, &minus_lit0, 0);
>> +      var1 = split_tree (op1, code, &con1, &lit1, &minus_lit1,
>>                          code == MINUS_EXPR);
>>
>>       /* Recombine MINUS_EXPR operands by using PLUS_EXPR.  */
>>
>> which fixes the testcase for me.
>>
>> Richard.
>>
>>> Note, I do think the associate logic in fold_binary_loc needs a fix,
>>> but that should be another patch.
>>>
>>>
>>> 2015-10-20  Bin Cheng  <bin.ch...@arm.com>
>>>
>>>         PR tree-optimization/67921
>>>         * tree-chrec.c (chrec_fold_multiply): Use sizetype for
>>>         CHREC_RIGHT if type is pointer type.
>>>
>>> 2015-10-20  Bin Cheng  <bin.ch...@arm.com>
>>>
>>>         PR tree-optimization/67921
>>>         * gcc.dg/ubsan/pr67921.c: New test.