Any guidance on the next steps for this patch?
I believe that we answered all open questions, but may have missed something.

With trunk open for new development, we would like to revise and land this…

Thanks,
Philipp.

On Mon, 20 Mar 2023 at 15:02, Manolis Tsamis <manolis.tsa...@vrull.eu> wrote:
>
> On Fri, Mar 17, 2023 at 10:31 AM Richard Biener
> <richard.guent...@gmail.com> wrote:
> >
> > On Thu, Mar 16, 2023 at 4:27 PM Manolis Tsamis <manolis.tsa...@vrull.eu> 
> > wrote:
> > >
> > > For this C testcase:
> > >
> > > void g();
> > > void f(unsigned int *a)
> > > {
> > >   if (++*a == 1)
> > >     g();
> > > }
> > >
> > > GCC currently emits a comparison with 1, using the value
> > > of *a after the increment. This can be improved by comparing
> > > against 0 and using the value before the increment. As a result
> > > there is a potentially shorter dependency chain (no need to wait
> > > for the result of the +1) and on targets with compare-with-zero
> > > instructions the generated code is one instruction shorter.
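> > >
> > > At the source level the transformed test is roughly equivalent to the
> > > following sketch (illustrative only; the actual change is made on the
> > > folded GIMPLE comparison, not by rewriting the source):
> > >
> > > void g();
> > > void f(unsigned int *a)
> > > {
> > >   unsigned int old = *a;  /* value before the increment */
> > >   *a = old + 1;
> > >   if (old == 0)           /* instead of comparing the new value with 1 */
> > >     g();
> > > }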
> >
> > The downside is we now need two registers and their lifetime overlaps.
> >
> > Your patch mixes changing / inverting a parameter (which seems unneeded
> > for the actual change) with preferring compares against zero.
> >
>
> Indeed. I thought that without that change the original name wouldn't
> properly describe what the parameter actually does, which is why I
> changed it. I can undo that in the next revision.
>
> > What's the reason to specifically prefer compares against zero?  On x86
> > we have add that sets flags, so ++*a == 0 would be preferred, but
> > for your sequence we'd need a test reg, reg; branch on zero, so we do
> > not save any instruction.
> >
>
> My reasoning is that zero is treated preferentially on most if not
> all architectures. Some specifically have zero/non-zero comparison
> instructions, so we get one less instruction. x86 doesn't explicitly
> have that, but I think that test reg, reg may not always be needed
> depending on the rest of the code. From what Andrew mentions below
> there may even be optimizations for zero at the microarchitecture level.
>
> Because this is still an arch-specific thing, I initially tried to make
> it arch-dependent by invoking the target's cost functions (e.g., if I
> recall correctly aarch64 will return a lower cost for zero
> comparisons). But the code turned out complicated and messy, so I came
> up with this alternative that just treats zero preferentially.
>
> If you have a better way in mind to do this, I could try to
> implement it.
>
> > We do have quite a number of bug reports about making VRP's life
> > harder when splitting things this way.  It's easier for VRP to handle
> >
> >   _1 = _2 + 1;
> >   if (_1 == 1)
> >
> > than it is
> >
> >   _1 = _2 + 1;
> >   if (_2 == 0)
> >
> > where VRP fails to derive a range for _1 on the _2 == 0 branch.  So besides
> > the life-range issue there are other side effects as well.  Maybe ranger
> > can meanwhile handle the above case?
> >
>
> Answered by Andrew MacLeod.
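>
> For reference, a reduced, hypothetical C example of that concern: on the
> old == 0 branch the inner test is dead only if the range of the
> incremented value can still be derived through the addition.
>
> void g(void);
> void h(unsigned int *a)
> {
>   unsigned int old = *a;
>   unsigned int inc = old + 1;
>   *a = inc;
>   if (old == 0)      /* the transformed, compare-with-zero form */
>     {
>       if (inc != 1)  /* dead code if inc == 1 can be derived from old == 0 */
>         g();
>     }
> }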
>
> > What's the overall effect of the change on a larger code base?
> >
>
> I made some quick runs of SPEC2017 and got the following results
> (number of comparisons folded to a compare against zero):
>
>  gcc       2586
>  xalancbmk 1456
>  perlbench  375
>  x264       307
>  omnetpp    137
>  leela       24
>  deepsjeng   15
>  exchange2    4
>  xz           4
>
> My test runs on Aarch64 do not show any significant change in runtime.
> In some cases (e.g. gcc) the binary is smaller in size, but that can
> depend on a number of other things.
>
> Thanks,
> Manolis
>
> > Thanks,
> > Richard.
> >
> > >
> > > Example from Aarch64:
> > >
> > > Before
> > >         ldr     w1, [x0]
> > >         add     w1, w1, 1
> > >         str     w1, [x0]
> > >         cmp     w1, 1
> > >         beq     .L4
> > >         ret
> > >
> > > After
> > >         ldr     w1, [x0]
> > >         add     w2, w1, 1
> > >         str     w2, [x0]
> > >         cbz     w1, .L4
> > >         ret
> > >
> > > gcc/ChangeLog:
> > >
> > >         * tree-ssa-forwprop.cc (combine_cond_expr_cond): Replace the
> > >         INVARIANT_ONLY parameter with ALWAYS_COMBINE and prefer
> > >         comparisons against zero.
> > >         (forward_propagate_into_comparison_1): Optimize
> > >         for zero comparisons.
> > >
> > > Signed-off-by: Manolis Tsamis <manolis.tsa...@vrull.eu>
> > > ---
> > >
> > >  gcc/tree-ssa-forwprop.cc | 41 +++++++++++++++++++++++++++-------------
> > >  1 file changed, 28 insertions(+), 13 deletions(-)
> > >
> > > diff --git a/gcc/tree-ssa-forwprop.cc b/gcc/tree-ssa-forwprop.cc
> > > index e34f0888954..93d5043821b 100644
> > > --- a/gcc/tree-ssa-forwprop.cc
> > > +++ b/gcc/tree-ssa-forwprop.cc
> > > @@ -373,12 +373,13 @@ rhs_to_tree (tree type, gimple *stmt)
> > >  /* Combine OP0 CODE OP1 in the context of a COND_EXPR.  Returns
> > >     the folded result in a form suitable for COND_EXPR_COND or
> > >     NULL_TREE, if there is no suitable simplified form.  If
> > > -   INVARIANT_ONLY is true only gimple_min_invariant results are
> > > -   considered simplified.  */
> > > +   ALWAYS_COMBINE is false then only combine if the resulting
> > > +   expression is gimple_min_invariant or considered simplified
> > > +   compared to the original.  */
> > >
> > >  static tree
> > >  combine_cond_expr_cond (gimple *stmt, enum tree_code code, tree type,
> > > -                       tree op0, tree op1, bool invariant_only)
> > > +                       tree op0, tree op1, bool always_combine)
> > >  {
> > >    tree t;
> > >
> > > @@ -398,17 +399,31 @@ combine_cond_expr_cond (gimple *stmt, enum tree_code code, tree type,
> > >    /* Canonicalize the combined condition for use in a COND_EXPR.  */
> > >    t = canonicalize_cond_expr_cond (t);
> > >
> > > -  /* Bail out if we required an invariant but didn't get one.  */
> > > -  if (!t || (invariant_only && !is_gimple_min_invariant (t)))
> > > +  if (!t)
> > >      {
> > >        fold_undefer_overflow_warnings (false, NULL, 0);
> > >        return NULL_TREE;
> > >      }
> > >
> > > -  bool nowarn = warning_suppressed_p (stmt, OPT_Wstrict_overflow);
> > > -  fold_undefer_overflow_warnings (!nowarn, stmt, 0);
> > > +  if (always_combine || is_gimple_min_invariant (t))
> > > +    {
> > > +      bool nowarn = warning_suppressed_p (stmt, OPT_Wstrict_overflow);
> > > +      fold_undefer_overflow_warnings (!nowarn, stmt, 0);
> > > +      return t;
> > > +    }
> > >
> > > -  return t;
> > > +  /* If the result of folding is a zero comparison treat it preferentially.  */
> > > +  if (TREE_CODE_CLASS (TREE_CODE (t)) == tcc_comparison
> > > +      && integer_zerop (TREE_OPERAND (t, 1))
> > > +      && !integer_zerop (op1))
> > > +    {
> > > +      bool nowarn = warning_suppressed_p (stmt, OPT_Wstrict_overflow);
> > > +      fold_undefer_overflow_warnings (!nowarn, stmt, 0);
> > > +      return t;
> > > +    }
> > > +
> > > +  fold_undefer_overflow_warnings (false, NULL, 0);
> > > +  return NULL_TREE;
> > >  }
> > >
> > >  /* Combine the comparison OP0 CODE OP1 at LOC with the defining statements
> > > @@ -432,7 +447,7 @@ forward_propagate_into_comparison_1 (gimple *stmt,
> > >        if (def_stmt && can_propagate_from (def_stmt))
> > >         {
> > >           enum tree_code def_code = gimple_assign_rhs_code (def_stmt);
> > > -         bool invariant_only_p = !single_use0_p;
> > > +         bool always_combine = single_use0_p;
> > >
> > >           rhs0 = rhs_to_tree (TREE_TYPE (op1), def_stmt);
> > >
> > > @@ -442,10 +457,10 @@ forward_propagate_into_comparison_1 (gimple *stmt,
> > >                    && TREE_CODE (TREE_TYPE (TREE_OPERAND (rhs0, 0)))
> > >                       == BOOLEAN_TYPE)
> > >                   || TREE_CODE_CLASS (def_code) == tcc_comparison))
> > > -           invariant_only_p = false;
> > > +           always_combine = true;
> > >
> > >           tmp = combine_cond_expr_cond (stmt, code, type,
> > > -                                       rhs0, op1, invariant_only_p);
> > > +                                       rhs0, op1, always_combine);
> > >           if (tmp)
> > >             return tmp;
> > >         }
> > > @@ -459,7 +474,7 @@ forward_propagate_into_comparison_1 (gimple *stmt,
> > >         {
> > >           rhs1 = rhs_to_tree (TREE_TYPE (op0), def_stmt);
> > >           tmp = combine_cond_expr_cond (stmt, code, type,
> > > -                                       op0, rhs1, !single_use1_p);
> > > +                                       op0, rhs1, single_use1_p);
> > >           if (tmp)
> > >             return tmp;
> > >         }
> > > @@ -470,7 +485,7 @@ forward_propagate_into_comparison_1 (gimple *stmt,
> > >        && rhs1 != NULL_TREE)
> > >      tmp = combine_cond_expr_cond (stmt, code, type,
> > >                                   rhs0, rhs1,
> > > -                                 !(single_use0_p && single_use1_p));
> > > +                                 single_use0_p && single_use1_p);
> > >
> > >    return tmp;
> > >  }
> > > --
> > > 2.34.1
> > >
