Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2015-01-20 Thread Eric Botcazou
  It would be nice to only have to write the set+set version, and do
  some markup to say which of the clobber variants should be generated,
  yes.
 
 define_subst should be able to do that.

The Visium port uses that (but the other way around).

-- 
Eric Botcazou


Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2015-01-18 Thread Jakub Jelinek
On Sun, Jan 18, 2015 at 05:28:39PM -0600, Segher Boessenkool wrote:
 On Sat, Jan 17, 2015 at 01:18:44PM -0500, Hans-Peter Nilsson wrote:
  The current cc-first order came about more as an accidental
  opinion than an architectural decision, as I vaguely recall from
  asking.  We also have the canonical location of a *cc clobber*,
  i.e. last in a parallel.  For that reason, it then makes sense
  to have the *cc-setting* last.  Changing rebelling ports doesn't
  solve that inconsistency.
 
 Except you also have the variant of the insn pattern where the CC is
 set and the GPR is clobbered (on PowerPC we have one of those for
 every insn, and only a few where CC is clobbered).
 
  So, my vote for canonically declaring the order non-canonical
  *and* automatically generating/matching both orders.
 
 It would be nice to only have to write the set+set version, and do
 some markup to say which of the clobber variants should be generated,
 yes.

define_subst should be able to do that.
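
A minimal, untested sketch of one direction (the subst name and CC_REGNUM
are placeholders, not taken from any real port): this one starts from the
clobber form and generates the CC-setting form; the direction asked for
above (write the set+set version, generate the clobber variant) would be
the mirror image:

  (define_subst "cc_set_subst"
    [(set (match_operand 0 "" "")
          (match_operand 1 "" ""))
     (clobber (reg:CC CC_REGNUM))]
    ""
    [(set (reg:CC CC_REGNUM)
          (compare:CC (match_dup 1) (const_int 0)))
     (set (match_dup 0)
          (match_dup 1))])

Insns opt in to the substitution through an attribute declared with
define_subst_attr.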

Jakub


Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2015-01-18 Thread Segher Boessenkool
On Sat, Jan 17, 2015 at 01:18:44PM -0500, Hans-Peter Nilsson wrote:
 The current cc-first order came about more as an accidental
 opinion than an architectural decision, as I vaguely recall from
 asking.  We also have the canonical location of a *cc clobber*,
 i.e. last in a parallel.  For that reason, it then makes sense
 to have the *cc-setting* last.  Changing rebelling ports doesn't
 solve that inconsistency.

Except you also have the variant of the insn pattern where the CC is
set and the GPR is clobbered (on PowerPC we have one of those for
every insn, and only a few where CC is clobbered).
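
Schematically (not taken from any particular port), the three shapes in
play are:

  [(set (match_operand 0 "" "") (op ...))
   (clobber (reg:CC ...))]                     ; result set, CC clobbered

  [(set (reg:CC ...) (compare (op ...) (const_int 0)))
   (clobber (match_scratch ...))]              ; CC set, result clobbered

  [(set (reg:CC ...) (compare (op ...) (const_int 0)))
   (set (match_operand 0 "" "") (op ...))]     ; both set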

 So, my vote for canonically declaring the order non-canonical
 *and* automatically generating/matching both orders.

It would be nice to only have to write the set+set version, and do
some markup to say which of the clobber variants should be generated,
yes.

Having the order canonical is nice for whatever has to match it.
There are a lot more places that have to match it than places that
have to generate it.  We could of course change gen* so you can
write patterns in any order you please in your machine description
files.


Segher


Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2015-01-18 Thread Segher Boessenkool
On Sun, Jan 18, 2015 at 04:26:27PM -0500, Hans-Peter Nilsson wrote:
 For targets where most insns set condition-codes (and that don't
 use the deprecated CC0-machinery), those insns will always be
 expressed using a parallel with (most often) two members, one
 being the main part of the insn and the other either a
 (clobber (reg:CC ...)) or a (set (reg:CC ...) ...).
 
 There, it doesn't make sense to have a different canonical
 order.  For example: people have already brought up error-prone
 operand renumbering as a problem, from the perspective of
 changing *from* the compare-elim (aka. swapped) order.

You don't need to renumber operands, operand numbers can be in
any order.  A big nuisance though is having to move all your
match_dups and match_operands around (the latter should be first
always).

Although that can be fixed in gen* as well of course.

 Conversely, if it was declared canonical, you'd have to more
 often perform otherwise needless operand renumbering in bodies
 of define_expand's and define_insns (where you hopefully use
 e.g. define_subst to avoid pattern explosion), when you need to
 refer to the operands of the main, non-cc part, for both the
 set and the clobber substitution.

I wish I could use define_subst, but it a) is not generic enough
for most uses, and b) does not handle splitters at all :-(
That is something for a different discussion though.


Segher


Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2015-01-18 Thread Hans-Peter Nilsson
On Sat, 17 Jan 2015, Jakub Jelinek wrote:
 On Sat, Jan 17, 2015 at 01:18:44PM -0500, Hans-Peter Nilsson wrote:
  (Waking up an old thread with my 2 cents due to being a little
  behind on reading...)
 
  On Sat, 6 Dec 2014, Jakub Jelinek wrote:
   On Sat, Dec 06, 2014 at 09:28:57AM +0100, Uros Bizjak wrote:
 That's already what it does though, did you mean the opposite?  Or did you
 mean to write combine instead of compare?
   
The above should read ... that existing RTX *combine* pass be updated
..., thanks for pointing out!
  
   Which target actually uses the [(operation) (set (cc) ...)] order in their
   *.md patterns?  Even aarch64 and arm use the [(set (cc) ...) (operation)]
   order that combine expects, I thought compare-elim was written for those
   targets?  If the vast majority of mds use the order that combine expects,
   I think it should be easier to adjust compare-elim.c and those few targets
   that diverge.
 
  The current cc-first order came about more as an accidental
  opinion than an architectural decision, as I vaguely recall from
  asking.  We also have the canonical location of a *cc clobber*,
  i.e. last in a parallel.  For that reason, it then makes sense
  to have the *cc-setting* last.  Changing rebelling ports doesn't
  solve that inconsistency.

 Clobber is clobber, all clobbers come last, so it has nothing to do with
 whether the cc set is first or second.

You honestly don't see the benefit of using that order also
when there's a cc-setting conceptually in place of a clobber?

For targets where most insns set condition-codes (and that don't
use the deprecated CC0-machinery), those insns will always be
expressed using a parallel with (most often) two members, one
being the main part of the insn and the other either a
(clobber (reg:CC ...)) or a (set (reg:CC ...) ...).

There, it doesn't make sense to have a different canonical
order.  For example: people have already brought up error-prone
operand renumbering as a problem, from the perspective of
changing *from* the compare-elim (aka. swapped) order.
Conversely, if it was declared canonical, you'd have to more
often perform otherwise needless operand renumbering in bodies
of define_expand's and define_insns (where you hopefully use
e.g. define_subst to avoid pattern explosion), when you need to
refer to the operands of the main, non-cc part, for both the
set and the clobber substitution.

I don't insist on changing compare-elim and matching targets,
but declaring the *lack of canonical order* and having gcc cope
(by e.g. matching both), as others have suggested, makes sense.

brgds, H-P


Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2015-01-17 Thread Hans-Peter Nilsson
(Waking up an old thread with my 2 cents due to being a little
behind on reading...)

On Sat, 6 Dec 2014, Jakub Jelinek wrote:
 On Sat, Dec 06, 2014 at 09:28:57AM +0100, Uros Bizjak wrote:
   That's already what it does though, did you mean the opposite?  Or did you
   mean to write combine instead of compare?
 
  The above should read ... that existing RTX *combine* pass be updated
  ..., thanks for pointing out!

 Which target actually uses the [(operation) (set (cc) ...)] order in their
 *.md patterns?  Even aarch64 and arm use the [(set (cc) ...) (operation)]
 order that combine expects, I thought compare-elim was written for those
 targets?  If the vast majority of mds use the order that combine expects,
 I think it should be easier to adjust compare-elim.c and those few targets
 that diverge.

The current cc-first order came about more as an accidental
opinion than an architectural decision, as I vaguely recall from
asking.  We also have the canonical location of a *cc clobber*,
i.e. last in a parallel.  For that reason, it then makes sense
to have the *cc-setting* last.  Changing rebelling ports doesn't
solve that inconsistency.

So, my vote for canonically declaring the order non-canonical
*and* automatically generating/matching both orders.

brgds, H-P


Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2015-01-17 Thread Jakub Jelinek
On Sat, Jan 17, 2015 at 01:18:44PM -0500, Hans-Peter Nilsson wrote:
 (Waking up an old thread with my 2 cents due to being a little
 behind on reading...)
 
 On Sat, 6 Dec 2014, Jakub Jelinek wrote:
  On Sat, Dec 06, 2014 at 09:28:57AM +0100, Uros Bizjak wrote:
That's already what it does though, did you mean the opposite?  Or did you
mean to write combine instead of compare?
  
   The above should read ... that existing RTX *combine* pass be updated
   ..., thanks for pointing out!
 
  Which target actually uses the [(operation) (set (cc) ...)] order in their
  *.md patterns?  Even aarch64 and arm use the [(set (cc) ...) (operation)]
  order that combine expects, I thought compare-elim was written for those
  targets?  If the vast majority of mds use the order that combine expects,
  I think it should be easier to adjust compare-elim.c and those few targets
  that diverge.
 
 The current cc-first order came about more as an accidental
 opinion than an architectural decision, as I vaguely recall from
 asking.  We also have the canonical location of a *cc clobber*,
 i.e. last in a parallel.  For that reason, it then makes sense
 to have the *cc-setting* last.  Changing rebelling ports doesn't
 solve that inconsistency.

Clobber is clobber, all clobbers come last, so it has nothing to do with
whether the cc set is first or second.

Jakub


Re: [PATCH] Fix PR 61225

2015-01-14 Thread Jeff Law

On 12/10/14 06:47, Segher Boessenkool wrote:

On Tue, Dec 09, 2014 at 12:15:30PM -0700, Jeff Law wrote:

@@ -3323,7 +3396,11 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn
*i1, rtx_insn *i0,
  rtx old = newpat;
  total_sets = 1 + extra_sets;
  newpat = gen_rtx_PARALLEL (VOIDmode, rtvec_alloc (total_sets));
- XVECEXP (newpat, 0, 0) = old;
+
+ if (to_combined_insn)
+   XVECEXP (newpat, 0, --total_sets) = old;
+ else
+   XVECEXP (newpat, 0, 0) = old;
}


Is this correct?  If so, it needs a big fat comment, because it is
not exactly obvious :-)

Also, it doesn't handle at all the case where the new pattern already is
a PARALLEL; can that never happen?

I'd convinced myself it was.  But yes, a comment here would be good.

Presumably you're thinking about a PARALLEL that satisfies single_set_p?


I wasn't thinking about anything in particular; this code does not handle
a PARALLEL newpat with to_combined_insn correctly, and it doesn't say it
cannot happen.
It's situations like this where I really need to just put the damn patch 
into my tree and fire up the debugger and poke at it for a while.


Regardless, I got mail from Zhenqiang that he left ARM at the start of 
the year for other opportunities and won't be doing GCC work.


My initial thought is to attach his work to date to the BZ, we can use 
it as a starting point if we want to pursue this missed optimization 
further (it's a regression and thus suitable for stage4 if we're so 
inclined).


Thoughts?

jeff



Re: [PATCH] Fix PR 61225

2014-12-12 Thread Segher Boessenkool
On Fri, Dec 12, 2014 at 03:27:17PM +0800, Zhenqiang Chen wrote:
  Presumably you're thinking about a PARALLEL that satisfies single_set_p?
 
 No. It has nothing to do with single_set_p. I just want to reuse the code to
 match the instruction pattern.
 
 In general, the new PARALLEL is like
 
   Parallel
 newpat from I3
 newpat from I2 // if present
 newpat from I1 // if present
 newpat from I0 // if present
 
 For to_combined_insn, i0 is NULL and there should be no
 
 newpat from I1
 
 When handling I1 -> I2 -> I3, with the normal order, it will get
   Parallel
 newpat from I3
 
 After I2 -> to_combined_insn, the parallel will be
   Parallel
 newpat from I3
 newpat from to_combined_insn.
 
 But this cannot match the insn pattern.  So I swap the order to:
   Parallel
 newpat from to_combined_insn.
 newpat from I3

Maybe I wasn't clear, sorry.  My concern is you only handle a SET as
newpat, not a PARALLEL.  It can be a PARALLEL just fine, even if it
satisfies single_set (it can have a clobber, it can have multiple sets,
all but one dead).


Thanks for the other changes, much appreciated.


Segher


RE: [PATCH] Fix PR 61225

2014-12-11 Thread Zhenqiang Chen


 -Original Message-
 From: Jeff Law [mailto:l...@redhat.com]
 Sent: Wednesday, December 10, 2014 3:16 AM
 To: Segher Boessenkool; Zhenqiang Chen
 Cc: gcc-patches@gcc.gnu.org
 Subject: Re: [PATCH] Fix PR 61225
 
 On 12/09/14 12:07, Segher Boessenkool wrote:
  On Tue, Dec 09, 2014 at 05:49:18PM +0800, Zhenqiang Chen wrote:
  Do you need to verify SETA and SETB satisfy single_set?  Or has that
  already been done elsewhere?
 
  A is NEXT_INSN (insn)
  B is prev_nonnote_nondebug_insn (insn),
 
  For I1 -> I2 -> B; I2 -> A;
  LOG_LINK can make sure I1 and I2 are single_set,
 
  It cannot, not anymore anyway.  LOG_LINKs can point to an insn with
  multiple SETs; multiple LOG_LINKs can point to such an insn.
 So let's go ahead and put a single_set test in this function.
 
  Is this fragment really needed?  Does it ever trigger?  I'd think that
  for > 2 uses punting would be fine.  Do we really commonly have
  cases with > 2 uses, but where they're all in SETA and SETB?
 
  Can't you just check for a death note on the second insn?  Together
  with reg_used_between_p?
 Yea, that'd accomplish the same thing I think Zhenqiang is trying to catch
 and is simpler than walking the lists.

Updated.  Checking for a death note is enough since b is
prev_nonnote_nondebug_insn (a).
 
 
  +  /* Try to combine a compare insn that sets CC
  + with a preceding insn that can set CC, and maybe with its
  + logical predecessor as well.
  + We need this special code because data flow connections
  + do not get entered in LOG_LINKS.  */
 
  I think you mean not _all_ data flow connections?
 I almost said something about this comment, but figured I was nitpicking
 too much :-)

Updated. 

  So you've got two new combine cases here, but I think the testcase
  only tests one of them.  Can you include a testcase for both of the
  major paths above (I1->I2->I3; I2->insn and I2->I3; I2->INSN)?
 
  pr61225.c is the case to cover I1->I2->I3; I2->insn.
 
  For I2->I3; I2->insn, I tried my test cases and found peephole2
  can also handle them.  So I removed the code from the patch.
 
  Why?  The simpler case has much better chances of being used.
 The question is does it actually catch anything not already handled?  I guess
 you could argue that doing it in combine is better than peep2 and I'd agree
 with that.
 
 
  In fact, there are many more cases you could handle:
 
  You handle
 
  I1 -> I2 -> I3; I2 -> insn
 I2 -> I3; I2 -> insn
 
  but there are also
 
  I1,I2 -> I3; I2 -> insn
 
  and the many 4-insn combos, too.
 Yes, but I wonder how much of this is really necessary in practice.  We
 could do exhaustive testing here, but I suspect the payoff isn't all
 that great.  Thus I'm comfortable with faulting in the cases we actually
 find are useful in practice.
 
 
  +/* A is a compare (reg1, 0) and B is SINGLE_SET which SET_SRC is reg2.
  +   It returns TRUE, if reg1 == reg2, and no other refer of reg1
  +   except A and B.  */
 
  That sounds like the only correct inputs are such a compare etc., but the
  routine tests whether that is true.
 Correct, the RTL has to have a specific form and that is tested for.
 Comment updates can't hurt.
 
Updated.
 
 
  +static bool
  +can_reuse_cc_set_p (rtx_insn *a, rtx_insn *b)
  +{
  +  rtx seta = single_set (a);
  +  rtx setb = single_set (b);
  +
  +  if (BLOCK_FOR_INSN (a) != BLOCK_FOR_INSN (b)
 
  Neither the comment nor the function name mention this.  This test is
  better placed in the caller of this function, anyway.
 Didn't consider it terribly important.  Moving it to the caller doesn't
 change anything significantly, though I would agree it's marginally
 cleaner.

Updated.
 
 
  @@ -3323,7 +3396,11 @@ try_combine (rtx_insn *i3, rtx_insn *i2,
 rtx_insn
  *i1, rtx_insn *i0,
   rtx old = newpat;
   total_sets = 1 + extra_sets;
   newpat = gen_rtx_PARALLEL (VOIDmode, rtvec_alloc (total_sets));
  -XVECEXP (newpat, 0, 0) = old;
  +
  +if (to_combined_insn)
  +  XVECEXP (newpat, 0, --total_sets) = old;
  +else
  +  XVECEXP (newpat, 0, 0) = old;
 }
 
  Is this correct?  If so, it needs a big fat comment, because it is
  not exactly obvious :-)
 
  Also, it doesn't handle at all the case where the new pattern already is
  a PARALLEL; can that never happen?
 I'd convinced myself it was.  But yes, a comment here would be good.

The following comments are added.

+   /* This is a hack to match the i386 instruction pattern, which
+   is like
+   (parallel [
+   (set (reg:CCZ 17 flags)
+   ...)
+   (set ...)])
+   we have to swap the newpat order of I3 and TO_COMBINED_INSN.  */

 Presumably you're thinking about a PARALLEL that satisfies single_set_p?

No. It has nothing to do with single_set_p. I just want to reuse the code to
match the instruction pattern.

In general, the new PARALLEL is like

  Parallel

Re: [PATCH] Fix PR 61225

2014-12-10 Thread Segher Boessenkool
On Tue, Dec 09, 2014 at 12:15:30PM -0700, Jeff Law wrote:
 @@ -3323,7 +3396,11 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn
 *i1, rtx_insn *i0,
   rtx old = newpat;
   total_sets = 1 + extra_sets;
   newpat = gen_rtx_PARALLEL (VOIDmode, rtvec_alloc (total_sets));
 - XVECEXP (newpat, 0, 0) = old;
 +
 + if (to_combined_insn)
 +   XVECEXP (newpat, 0, --total_sets) = old;
 + else
 +   XVECEXP (newpat, 0, 0) = old;
 }
 
 Is this correct?  If so, it needs a big fat comment, because it is
 not exactly obvious :-)
 
 Also, it doesn't handle at all the case where the new pattern already is
 a PARALLEL; can that never happen?
 I'd convinced myself it was.  But yes, a comment here would be good.
 
 Presumably you're thinking about a PARALLEL that satisfies single_set_p?

I wasn't thinking about anything in particular; this code does not handle
a PARALLEL newpat with to_combined_insn correctly, and it doesn't say it
cannot happen.

But yes, I don't see why it could not happen?  E.g. a parallel of multiple
sets with all but one of those dead?

Why should it be single_set here anyway?  (Maybe I need more coffee, sorry
if so).


Segher


RE: [PATCH] Fix PR 61225

2014-12-09 Thread Zhenqiang Chen

 -Original Message-
 From: Jeff Law [mailto:l...@redhat.com]
 Sent: Tuesday, December 09, 2014 5:29 AM
 To: Zhenqiang Chen
 Cc: Steven Bosscher; gcc-patches@gcc.gnu.org; Jakub Jelinek
 Subject: Re: [PATCH] Fix PR 61225
 
 On 12/04/14 01:43, Zhenqiang Chen wrote:
   
 Part of PR rtl-optimization/61225
 * combine.c (refer_same_reg_p): New function.
 (combine_instructions): Handle I1 -> I2 -> I3; I2 -> insn.
 (try_combine): Add one more parameter TO_COMBINED_INSN, which
 is used to create a new insn parallel (TO_COMBINED_INSN, I3).
   
   testsuite/ChangeLog:
   2014-08-04  Zhenqiang Chen  zhenqiang.c...@linaro.org
   
 * gcc.target/i386/pr61225.c: New test.
 Thanks for the updates and clarifications.  Just a few minor things and
 while it's a bit of a hack, I'll approve:
 
 
 
  +
  +/* A is a compare (reg1, 0) and B is SINGLE_SET which SET_SRC is reg2.
  +   It returns TRUE, if reg1 == reg2, and no other refer of reg1
  +   except A and B.  */
  +
  +static bool
  +refer_same_reg_p (rtx_insn *a, rtx_insn *b)
  +{
  +  rtx seta = single_set (a);
  +  rtx setb = single_set (b);
  +
  +  if (BLOCK_FOR_INSN (a) != BLOCK_FOR_INSN (b)
  +  || !seta
  +  || !setb)
  +return false;
  
  +  if (GET_CODE (SET_SRC (seta)) != COMPARE
  +  || GET_MODE_CLASS (GET_MODE (SET_DEST (seta))) != MODE_CC
  +  || !REG_P (XEXP (SET_SRC (seta), 0))
  +  || XEXP (SET_SRC (seta), 1) != const0_rtx
  +  || !REG_P (SET_SRC (setb))
  +  || REGNO (SET_SRC (setb)) != REGNO (XEXP (SET_SRC (seta), 0)))
  +return false;
 Do you need to verify SETA and SETB satisfy single_set?  Or has that
 already been done elsewhere?

A is NEXT_INSN (insn)
B is prev_nonnote_nondebug_insn (insn),

For I1 -> I2 -> B; I2 -> A;
LOG_LINK can make sure I1 and I2 are single_set, but not A and B. And I did
find code in function try_combine which can make sure B (or I3) is
single_set.

So I think the check can skip failed cases at an early stage.
 
 The name refer_same_reg_p seems wrong -- your function is verifying the
 underlying RTL store as well as the existence of a dependency between
 the insns.  Can you try to come up with a better name?

Change it to can_reuse_cc_set_p.

 Please use CONST0_RTX (mode).  IIRC that'll allow this to work regardless
 of the size of the modes relative to the host word size.
 
Updated. 
 
  +
  +  if (DF_REG_USE_COUNT (REGNO (SET_SRC (setb))) > 2)
  +{
  +  df_ref use;
  +  rtx insn;
  +  unsigned int i = REGNO (SET_SRC (setb));
  +
  +  for (use = DF_REG_USE_CHAIN (i); use; use = DF_REF_NEXT_REG (use))
  +{
  + insn = DF_REF_INSN (use);
  + if (insn != a && insn != b && !(NOTE_P (insn) || DEBUG_INSN_P (insn)))
  +   return false;
  +   }
  +}
  +
  +  return true;
  +}
 Is this fragment really needed?  Does it ever trigger?  I'd think that
 for > 2 uses punting would be fine.  Do we really commonly have cases
 with > 2 uses, but where they're all in SETA and SETB?

The check is there to ensure correctness.  Here is a case:

int 
f1 (int *x)
{
  int t = --*x;
  if (!t)
foo (x);
  return t;
}

  _4 = *x_3(D);
  _5 = _4 + -1;
  *x_3(D) = _5;
  # DEBUG t = _5
  if (_5 == 0)
   ...
  bb 4:
  return _5;

_5 is used in return _5, so we cannot remove _5 = _4 + -1.
 
}
}
 
  + /* Try to combine a compare insn that sets CC
  +with a preceding insn that can set CC, and maybe with its
  +logical predecessor as well.
  +We need this special code because data flow connections
  +do not get entered in LOG_LINKS.  */
  + if ((prev = prev_nonnote_nondebug_insn (insn)) != NULL_RTX
  +  && refer_same_reg_p (insn, prev)
  +  && max_combine >= 4)
  +   {
  +   struct insn_link *next1;
  +   FOR_EACH_LOG_LINK (next1, prev)
  + {
  +   rtx_insn *link1 = next1->insn;
  +   if (NOTE_P (link1))
  + continue;
  +   /* I1 -> I2 -> I3; I2 -> insn;
  +  output parallel (insn, I3).  */
  +   FOR_EACH_LOG_LINK (nextlinks, link1)
  + if ((next = try_combine (prev, link1,
  +  nextlinks->insn, NULL,
  +  new_direct_jump_p,
  +  last_combined_insn, insn)) != 0)
  +
  +   {
  + delete_insn (insn);
  + insn = next;
  + statistics_counter_event (cfun, "four-insn combine", 1);
  + goto retry;
  +   }
  +   /* I2 -> I3; I2 -> insn
  +  output next = parallel (insn, I3).  */
  +   if ((next = try_combine (prev, link1,
  +NULL, NULL,
  +new_direct_jump_p

Re: [PATCH] Fix PR 61225

2014-12-09 Thread Jeff Law

On 12/09/14 02:49, Zhenqiang Chen wrote:

Do you need to verify SETA and SETB satisfy single_set?  Or has that
already been done elsewhere?


A is NEXT_INSN (insn)
B is prev_nonnote_nondebug_insn (insn),

For I1 -> I2 -> B; I2 -> A;
LOG_LINK can make sure I1 and I2 are single_set, but not A and B. And I did
find code in function try_combine which can make sure B (or I3) is
single_set.

So I think the check can skip failed cases at an early stage.

Thanks for doing the research on this.



The check is there to ensure correctness.  Here is a case:

int
f1 (int *x)
{
   int t = --*x;
   if (!t)
 foo (x);
   return t;
}

   _4 = *x_3(D);
   _5 = _4 + -1;
   *x_3(D) = _5;
   # DEBUG t = _5
   if (_5 == 0)
...
   bb 4:
   return _5;

_5 is used in return _5, so we cannot remove _5 = _4 + -1.
Right, but ISTM that if the # uses > 2, then we could just return false
rather than bothering to see if all the uses are consumed by A or B.
It's not a big deal, I just have a hard time seeing that doing something
more complex than if (# uses > 2) return false; makes sense.






So you've got two new combine cases here, but I think the testcase only
tests one of them.  Can you include a testcase for both of the major
paths above (I1->I2->I3; I2->insn and I2->I3; I2->INSN)?


pr61225.c is the case to cover I1->I2->I3; I2->insn.

For I2->I3; I2->insn, I tried my test cases and found peephole2 can also
handle them.  So I removed the code from the patch.

Seems like the reasonable thing to do.



Here is the final patch.
Bootstrap and no make check regression on X86-64.

ChangeLog:
2014-11-09  Zhenqiang Chen  zhenqiang.c...@linaro.org

Part of PR rtl-optimization/61225
* combine.c (can_reuse_cc_set_p): New function.
(combine_instructions): Handle I1 -> I2 -> I3; I2 -> insn.
(try_combine): Add one more parameter TO_COMBINED_INSN, which
is used to create a new insn parallel (TO_COMBINED_INSN, I3).

testsuite/ChangeLog:
2014-11-09  Zhenqiang Chen  zhenqiang.c...@linaro.org

* gcc.target/i386/pr61225.c: New test.



OK for the trunk.

jeff


Re: [PATCH] Fix PR 61225

2014-12-09 Thread Segher Boessenkool
On Tue, Dec 09, 2014 at 05:49:18PM +0800, Zhenqiang Chen wrote:
  Do you need to verify SETA and SETB satisfy single_set?  Or has that
  already been done elsewhere?
 
 A is NEXT_INSN (insn)
 B is prev_nonnote_nondebug_insn (insn),
 
 For I1 -> I2 -> B; I2 -> A;
 LOG_LINK can make sure I1 and I2 are single_set,

It cannot, not anymore anyway.  LOG_LINKs can point to an insn with multiple
SETs; multiple LOG_LINKs can point to such an insn.

The only thing a LOG_LINK from Y to X tells you is that Y is the earliest
insn after X that uses some register set by X (and it knows which register,
too, nowadays).

   +  if (DF_REG_USE_COUNT (REGNO (SET_SRC (setb))) > 2)
   +{
   +  df_ref use;
   +  rtx insn;
   +  unsigned int i = REGNO (SET_SRC (setb));
   +
   +  for (use = DF_REG_USE_CHAIN (i); use; use = DF_REF_NEXT_REG (use))
   +{
   +   insn = DF_REF_INSN (use);
   +   if (insn != a && insn != b && !(NOTE_P (insn) || DEBUG_INSN_P (insn)))
   + return false;
   + }
   +}
   +
   +  return true;
   +}
  Is this fragment really needed?  Does it ever trigger?  I'd think that
  for > 2 uses punting would be fine.  Do we really commonly have cases
  with > 2 uses, but where they're all in SETA and SETB?

Can't you just check for a death note on the second insn?  Together with
reg_used_between_p?

   +   /* Try to combine a compare insn that sets CC
   +  with a preceding insn that can set CC, and maybe with its
   +  logical predecessor as well.
   +  We need this special code because data flow connections
   +  do not get entered in LOG_LINKS.  */

I think you mean not _all_ data flow connections?

   +   if ((prev = prev_nonnote_nondebug_insn (insn)) != NULL_RTX
   +&& refer_same_reg_p (insn, prev)
   +&& max_combine >= 4)
   + {
   + struct insn_link *next1;
   + FOR_EACH_LOG_LINK (next1, prev)
   +   {
   + rtx_insn *link1 = next1->insn;
   + if (NOTE_P (link1))
   +   continue;
   + /* I1 -> I2 -> I3; I2 -> insn;
   +output parallel (insn, I3).  */
   + FOR_EACH_LOG_LINK (nextlinks, link1)
   +   if ((next = try_combine (prev, link1,
   +nextlinks->insn, NULL,
   +new_direct_jump_p,
   +last_combined_insn, insn)) != 0)
   +
   + {
   +   delete_insn (insn);
   +   insn = next;
   +   statistics_counter_event (cfun, "four-insn combine", 1);
   +   goto retry;
   + }
   + /* I2 -> I3; I2 -> insn
   +output next = parallel (insn, I3).  */
   + if ((next = try_combine (prev, link1,
   +  NULL, NULL,
   +  new_direct_jump_p,
   +  last_combined_insn, insn)) != 0)
   +
   +   {
   + delete_insn (insn);
   + insn = next;
   + statistics_counter_event (cfun, "three-insn combine", 1);
   + goto retry;
   +   }
   +   }
   + }
  So you've got two new combine cases here, but I think the testcase only
  tests one of them.  Can you include a testcase for both of the major
  paths above (I1->I2->I3; I2->insn and I2->I3; I2->INSN)?
 
 pr61225.c is the case to cover I1->I2->I3; I2->insn.
 
 For I2->I3; I2->insn, I tried my test cases and found peephole2 can also
 handle them.  So I removed the code from the patch.

Why?  The simpler case has much better chances of being used.

In fact, there are many more cases you could handle:

You handle

I1 -> I2 -> I3; I2 -> insn
  I2 -> I3; I2 -> insn

but there are also

   I1,I2 -> I3; I2 -> insn

and the many 4-insn combos, too.
But that's not all: instead of just dealing with I2->insn, you can just as
well have I1->insn or I0->insn, and if you could handle the SET not dying
in the resulting insn, I3->insn.

In fact, in that case you really only need to handle I3->insn (no other
instructions involved), as a simple 2-insn combination that combines into
the earlier insn instead of the later, to get the effect you want.

Just like your patch, that would pull insn earlier, but it would do it
much more explicitly.


Some comments on the patch...

 +/* A is a compare (reg1, 0) and B is SINGLE_SET which SET_SRC is reg2.
 +   It returns TRUE, if reg1 == reg2, and no other refer of reg1
 +   except A and B.  */

That sounds like the only correct inputs are such a compare etc., but the
routine tests whether that is true.

 +static bool
 +can_reuse_cc_set_p (rtx_insn *a, rtx_insn *b)
 +{
 +  rtx seta = single_set (a);
 +  rtx setb = single_set (b);
 +
 +  if (BLOCK_FOR_INSN (a) != BLOCK_FOR_INSN (b)

Neither the comment nor the function name mention this.  This test is
better placed in the 

Re: [PATCH] Fix PR 61225

2014-12-09 Thread Jeff Law

On 12/09/14 12:07, Segher Boessenkool wrote:

On Tue, Dec 09, 2014 at 05:49:18PM +0800, Zhenqiang Chen wrote:

Do you need to verify SETA and SETB satisfy single_set?  Or has that
already been done elsewhere?


A is NEXT_INSN (insn)
B is prev_nonnote_nondebug_insn (insn),

For I1 - I2 - B; I2 - A;
LOG_LINK can make sure I1 and I2 are single_set,


It cannot, not anymore anyway.  LOG_LINKs can point to an insn with multiple
SETs; multiple LOG_LINKs can point to such an insn.

So let's go ahead and put a single_set test in this function.


Is this fragment really needed?  Does it ever trigger?  I'd think that

for > 2 uses punting would be fine.  Do we really commonly have cases
with > 2 uses, but where they're all in SETA and SETB?


Can't you just check for a death note on the second insn?  Together with
reg_used_between_p?
Yea, that'd accomplish the same thing I think Zhenqiang is trying to 
catch and is simpler than walking the lists.





+ /* Try to combine a compare insn that sets CC
+with a preceding insn that can set CC, and maybe with its
+logical predecessor as well.
+We need this special code because data flow connections
+do not get entered in LOG_LINKS.  */


I think you mean not _all_ data flow connections?
I almost said something about this comment, but figured I was nitpicking 
too much :-)



So you've got two new combine cases here, but I think the testcase only
tests one of them.  Can you include a testcase for both of the major
paths above (I1->I2->I3; I2->insn and I2->I3; I2->INSN)?


pr61225.c is the case to cover I1->I2->I3; I2->insn.

For I2->I3; I2->insn, I tried my test cases and found peephole2 can also
handle them.  So I removed the code from the patch.


Why?  The simpler case has much better chances of being used.
The question is does it actually catch anything not already handled?  I 
guess you could argue that doing it in combine is better than peep2 and 
I'd agree with that.




In fact, there are many more cases you could handle:

You handle

I1 -> I2 -> I3; I2 -> insn
   I2 -> I3; I2 -> insn

but there are also

I1,I2 -> I3; I2 -> insn

and the many 4-insn combos, too.
Yes, but I wonder how much of this is really necessary in practice.  We 
could do exhaustive testing here, but I suspect the payoff isn't all 
that great.  Thus I'm comfortable with faulting in the cases we actually 
find are useful in practice.





+/* A is a compare (reg1, 0) and B is SINGLE_SET which SET_SRC is reg2.
+   It returns TRUE, if reg1 == reg2, and no other refer of reg1
+   except A and B.  */


That sounds like the only correct inputs are such a compare etc., but the
routine tests whether that is true.
Correct, the RTL has to have a specific form and that is tested for. 
Comment updates can't hurt.






+static bool
+can_reuse_cc_set_p (rtx_insn *a, rtx_insn *b)
+{
+  rtx seta = single_set (a);
+  rtx setb = single_set (b);
+
+  if (BLOCK_FOR_INSN (a) != BLOCK_FOR_INSN (b)


Neither the comment nor the function name mention this.  This test is
better placed in the caller of this function, anyway.
Didn't consider it terribly important.  Moving it to the caller doesn't 
change anything significantly, though I would agree it's marginally cleaner.





@@ -3323,7 +3396,11 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn
*i1, rtx_insn *i0,
  rtx old = newpat;
  total_sets = 1 + extra_sets;
  newpat = gen_rtx_PARALLEL (VOIDmode, rtvec_alloc (total_sets));
- XVECEXP (newpat, 0, 0) = old;
+
+ if (to_combined_insn)
+   XVECEXP (newpat, 0, --total_sets) = old;
+ else
+   XVECEXP (newpat, 0, 0) = old;
}


Is this correct?  If so, it needs a big fat comment, because it is
not exactly obvious :-)

Also, it doesn't handle at all the case where the new pattern already is
a PARALLEL; can that never happen?

I'd convinced myself it was.  But yes, a comment here would be good.

Presumably you're thinking about a PARALLEL that satisfies single_set_p?

jeff



Re: [PATCH] Fix PR 61225

2014-12-08 Thread Jeff Law

On 12/05/14 17:16, Segher Boessenkool wrote:

On Fri, Dec 05, 2014 at 03:31:54PM -0700, Jeff Law wrote:

Combine does not consider combining 9 into 7 because there is no LOG_LINK
between them (the link for r88 is between 8 and 7 already).

OK, yea, that's a long standing design decision.  We don't feed a single
def into multiple use sites.


There is no real reason not to do that.  It doesn't increase computational
complexity, although it is of course more expensive than what combine does
today (it is more work, after all).  And combining with a later use does
not have too big a chance to succeed (since it has to keep the result of
the earlier insn around always).
No fundamental reason, it's just always been that way.  One could argue 
that a bridge pattern often makes this unnecessary and bridges have been 
a well known way to work around combine's failings for a long time.



GCC 6 or later ;-)

Yea, I think so :)

jeff


Re: [PATCH] Fix PR 61225

2014-12-08 Thread Jeff Law

On 12/04/14 01:43, Zhenqiang Chen wrote:

 
   Part of PR rtl-optimization/61225
   * combine.c (refer_same_reg_p): New function.
   (combine_instructions): Handle I1 -> I2 -> I3; I2 -> insn.
   (try_combine): Add one more parameter TO_COMBINED_INSN, which
   is used to create a new insn parallel (TO_COMBINED_INSN, I3).
 
 testsuite/ChangeLog:
2014-08-04  Zhenqiang Chen  zhenqiang.c...@linaro.org
 
   * gcc.target/i386/pr61225.c: New test.
Thanks for the updates and clarifications.  Just a few minor things and
while it's a bit of a hack, I'll approve:





+
+/* A is a compare (reg1, 0) and B is SINGLE_SET which SET_SRC is reg2.
+   It returns TRUE, if reg1 == reg2, and no other refer of reg1
+   except A and B.  */
+
+static bool
+refer_same_reg_p (rtx_insn *a, rtx_insn *b)
+{
+  rtx seta = single_set (a);
+  rtx setb = single_set (b);
+
+  if (BLOCK_FOR_INSN (a) != BLOCK_FOR_INSN (b)
+  || !seta
+  || !setb)
+return false;
+
+  if (GET_CODE (SET_SRC (seta)) != COMPARE
+  || GET_MODE_CLASS (GET_MODE (SET_DEST (seta))) != MODE_CC
+  || !REG_P (XEXP (SET_SRC (seta), 0))
+  || XEXP (SET_SRC (seta), 1) != const0_rtx
+  || !REG_P (SET_SRC (setb))
+  || REGNO (SET_SRC (setb)) != REGNO (XEXP (SET_SRC (seta), 0)))
+return false;
Do you need to verify SETA and SETB satisfy single_set?  Or has that 
already been done elsewhere?


The name refer_same_reg_p seems wrong -- your function is verifying the 
underlying RTL store as well as the existence of a dependency between 
the insns.  Can you try to come up with a better name?


Please use CONST0_RTX (mode).  IIRC that'll allow this to work regardless 
of the size of the modes relative to the host word size.





+
+  if (DF_REG_USE_COUNT (REGNO (SET_SRC (setb))) > 2)
+{
+  df_ref use;
+  rtx insn;
+  unsigned int i = REGNO (SET_SRC (setb));
+
+  for (use = DF_REG_USE_CHAIN (i); use; use = DF_REF_NEXT_REG (use))
+{
+ insn = DF_REF_INSN (use);
+ if (insn != a && insn != b && !(NOTE_P (insn) || DEBUG_INSN_P (insn)))
+   return false;
+   }
+}
+
+  return true;
+}
Is this fragment really needed?  Does it ever trigger?  I'd think that 
for > 2 uses punting would be fine.  Do we really commonly have cases 
with > 2 uses, but where they're all in SETA and SETB?



  }
  }

+ /* Try to combine a compare insn that sets CC
+with a preceding insn that can set CC, and maybe with its
+logical predecessor as well.
+We need this special code because data flow connections
+do not get entered in LOG_LINKS.  */
+ if ((prev = prev_nonnote_nondebug_insn (insn)) != NULL_RTX
+  && refer_same_reg_p (insn, prev)
+  && max_combine >= 4)
+   {
+   struct insn_link *next1;
+   FOR_EACH_LOG_LINK (next1, prev)
+ {
+   rtx_insn *link1 = next1->insn;
+   if (NOTE_P (link1))
+ continue;
+   /* I1 -> I2 -> I3; I2 -> insn;
+  output parallel (insn, I3).  */
+   FOR_EACH_LOG_LINK (nextlinks, link1)
+ if ((next = try_combine (prev, link1,
+  nextlinks->insn, NULL,
+  new_direct_jump_p,
+  last_combined_insn, insn)) != 0)
+
+   {
+ delete_insn (insn);
+ insn = next;
+ statistics_counter_event (cfun, "four-insn combine", 1);
+ goto retry;
+   }
+   /* I2 -> I3; I2 -> insn
+  output next = parallel (insn, I3).  */
+   if ((next = try_combine (prev, link1,
+NULL, NULL,
+new_direct_jump_p,
+last_combined_insn, insn)) != 0)
+
+ {
+   delete_insn (insn);
+   insn = next;
+   statistics_counter_event (cfun, "three-insn combine", 1);
+   goto retry;
+ }
+ }
+   }
So you've got two new combine cases here, but I think the testcase only
tests one of them.  Can you include a testcase for both of the major
paths above (I1->I2->I3; I2->insn and I2->I3; I2->INSN)?


Please make those changes and repost for final approval.

jeff


Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2014-12-08 Thread Richard Henderson
On 12/06/2014 12:56 AM, Jakub Jelinek wrote:
 So, any other md than rx and mn10300 that uses the non-standard order?
 

Not that I'm aware of.


r~


Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2014-12-06 Thread Uros Bizjak
On Fri, Dec 5, 2014 at 6:15 PM, Eric Botcazou ebotca...@adacore.com wrote:
 --quote--
 If we want to use this pass for x86, then for 4.8 we should also fix the
 discrepancy between the compare-elim canonical

   [(operate)
(set-cc)]

 and the combine canonical

   [(set-cc)
(operate)]

 (Because of the simplicity of the substitution in compare-elim, I prefer
 the former as the canonical canonical.)
 --/quote--

 I agree with the above.

 There were some patches flowing around [2], [3] that enhanced
 compare-elim pass for x86 needs, but the target never switched to new
 pass, mostly because compare-elim pass did not catch all cases that
 traditional RTX combine pass did.

 Does [2] really work with the mode mismatch?  See the pending patch at
   https://gcc.gnu.org/ml/gcc-patches/2014-11/msg03458.html

 Due to the above, I would like to propose that existing RTX compare
 pass be updated to handle [(operate)(set-cc)] patterns (exclusively?).

 That's already what it does though, did you mean the opposite?  Or did you
 mean to write combine instead of compare?

The above should read ... that existing RTX *combine* pass be updated
..., thanks for pointing out!

Uros.


Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2014-12-06 Thread Jakub Jelinek
On Sat, Dec 06, 2014 at 09:28:57AM +0100, Uros Bizjak wrote:
  That's already what it does though, did you mean the opposite?  Or did you
  mean to write combine instead of compare?
 
 The above should read ... that existing RTX *combine* pass be updated
 ..., thanks for pointing out!

Which target actually uses the [(operation) (set (cc) ...)] order in their
*.md patterns?  Even aarch64 and arm use the [(set (cc) ...) (operation)]
order that combine expects, I thought compare-elim was written for those
targets?  If the vast majority of mds use the order that combine expects,
I think it should be easier to adjust compare-elim.c and those few targets
that diverge.

Jakub


Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2014-12-06 Thread Jakub Jelinek
On Sat, Dec 06, 2014 at 09:38:43AM +0100, Jakub Jelinek wrote:
 On Sat, Dec 06, 2014 at 09:28:57AM +0100, Uros Bizjak wrote:
   That's already what it does though, did you mean the opposite?  Or did you
   mean to write combine instead of compare?
  
  The above should read ... that existing RTX *combine* pass be updated
  ..., thanks for pointing out!
 
 Which target actually uses the [(operation) (set (cc) ...)] order in their
 *.md patterns?  Even aarch64 and arm use the [(set (cc) ...) (operation)]
 order that combine expects, I thought compare-elim was written for those
 targets?  If the vast majority of mds use the order that combine expects,
 I think it should be easier to adjust compare-elim.c and those few targets
 that diverge.

So, any other md than rx and mn10300 that uses the non-standard order?

Jakub


Re: [PATCH] Fix PR 61225

2014-12-06 Thread Segher Boessenkool
On Fri, Dec 05, 2014 at 06:09:11PM -0600, Segher Boessenkool wrote:
 On Fri, Dec 05, 2014 at 03:36:01PM -0700, Jeff Law wrote:
  Zhenqiang, can you look at what happens if you provide a pattern for 
 6+7+8 (probably via a define_insn_and_split)?
 
 I tried this out yesterday.  There are a few options (a bridge pattern
 for 6+7+8, or one for 7+8).  I went with 6+7+8.
 
 So the code combine is asked to optimise is
 
 6  A = M
 7  T = A + B
 8  M = T
 9  C = cmp T, 0

... and combine will never combine a write to memory (8 here) into a
later insn (see can_combine_p).  So this won't ever fly.

I see no reasonably simple way combine can be convinced to do this.
There are various possible schemes to pull insn 9 to before 8, but when
does this help and when does it hurt?  It all depends on the target :-(


Segher


Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2014-12-05 Thread Eric Botcazou
 --quote--
 If we want to use this pass for x86, then for 4.8 we should also fix the
 discrepancy between the compare-elim canonical
 
   [(operate)
(set-cc)]
 
 and the combine canonical
 
   [(set-cc)
(operate)]
 
 (Because of the simplicity of the substitution in compare-elim, I prefer
 the former as the canonical canonical.)
 --/quote--

I agree with the above.

 There were some patches flowing around [2], [3] that enhanced
 compare-elim pass for x86 needs, but the target never switched to new
 pass, mostly because compare-elim pass did not catch all cases that
 traditional RTX combine pass did.

Does [2] really work with the mode mismatch?  See the pending patch at
  https://gcc.gnu.org/ml/gcc-patches/2014-11/msg03458.html

 Due to the above, I would like to propose that existing RTX compare
 pass be updated to handle [(operate)(set-cc)] patterns (exclusively?).

That's already what it does though, did you mean the opposite?  Or did you 
mean to write combine instead of compare?

 There is also hidden benefit for other, compare-elim only targets.
 Having this pass enabled on a wildly popular target would help
 catching eventual bugs in the pass.

FWIW we're about to submit a port that makes a heavy use of it.

-- 
Eric Botcazou


Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2014-12-05 Thread Jeff Law

On 12/04/14 00:41, Uros Bizjak wrote:

Hello!


I also wonder if compare-elim ought to be helping here.  Isn't that the
point here, to eliminate the comparison and instead get it for free as
part of the arithmetic?  If so, is it the fact that we have memory
references that prevents compare-elim from kicking in?


Yes, compare-elim doesn't work with memory references but, more radically, it
is not enabled for x86 (it is only enabled for aarch64, mn10300 and rx).


I did experiment a bit with a compare-elim pass on x86. However, as
rth said in [1]:

--quote--
If we want to use this pass for x86, then for 4.8 we should also fix the
discrepancy between the compare-elim canonical

   [(operate)
(set-cc)]

and the combine canonical

   [(set-cc)
(operate)]

(Because of the simplicity of the substitution in compare-elim, I prefer
the former as the canonical canonical.)
--/quote--

There were some patches flowing around [2], [3] that enhanced
compare-elim pass for x86 needs, but the target never switched to new
pass, mostly because compare-elim pass did not catch all cases that
traditional RTX combine pass did. However, compare-elim pass can cross
BB boundaries, where traditional RTX combine doesn't (and IIRC it even
has a comment why it doesn't try too hard to do so).

The reason why x86 doesn't use both passes is simply due to the fact
quoted above. compare-elim pass substitutes the clobber in the
PARALLEL RTX with a new set-cc in-place, so all relevant patterns in
i386.md (and a couple of support functions in i386.c) would have to be
swapped around. Unfortunately, simply changing i386.md insn patterns
would disable existing RTX combiner functionality, leading to various
missed-optimization regressions.

Due to the above, I would like to propose that existing RTX compare
pass be updated to handle [(operate)(set-cc)] patterns (exclusively?).
 From my experience, compare-elim post-reload pass would catch a bunch
of remaining cross-BB opportunities, left by RTX combine pass, so
compare-elim pass would be effective on x86 also after RTX combiner
does its job. While target-dependent changes would be fairly trivial,
I don't know about the amount of work in combine.c to handle new
canonical patterns. Maybe an RTL maintainer can chime in (hint, hint, wink
wink ;)

There is also hidden benefit for other, compare-elim only targets.
Having this pass enabled on a wildly popular target would help
catching eventual bugs in the pass.

[1] https://gcc.gnu.org/ml/gcc-patches/2012-02/msg00251.html
[2] https://gcc.gnu.org/ml/gcc-patches/2012-02/msg00466.html
[3] https://gcc.gnu.org/ml/gcc-patches/2012-04/msg01487.html
My first thought would be to allow both and have combine swap the order 
in the vector if recog doesn't recognize the pattern.  One could argue 
we could go through a full permutation of ordering in the vector, but 
that's probably above and beyond the call of duty.


Or maybe have the genfoo programs generate the multiple permutations of 
patterns for different ordererings of the elements within the parallel.


Jeff


Uros.





Re: [PATCH] Fix PR 61225

2014-12-05 Thread Jeff Law

On 12/04/14 13:49, Segher Boessenkool wrote:

On Thu, Dec 04, 2014 at 04:43:34PM +0800, Zhenqiang Chen wrote:

C code:

 if (!--*p)

rtl code:

 6: r91:SI=[r90:SI]
 7: {r88:SI=r91:SI-0x1;clobber flags:CC;}
 8: [r90:SI]=r88:SI
 9: flags:CCZ=cmp(r88:SI,0)

expected output:

 8: {flags:CCZ=cmp([r90:SI]-0x1,0);[r90:SI]=[r90:SI]-0x1;}

in assemble, it is

   decl (%eax)


Combine does not consider combining 9 into 7 because there is no LOG_LINK
between them (the link for r88 is between 8 and 7 already).
OK, yea, that's a long standing design decision.  We don't feed a single 
def into multiple use sites.


jeff




Re: [PATCH] Fix PR 61225

2014-12-05 Thread Jeff Law

On 12/04/14 13:57, Segher Boessenkool wrote:


So combine tries to combine 6+7+8; the RTL it comes up with is a parallel
of the memory decrement (without cc clobber, but that is fine), and setting
r88 to the mem minus one.  There is no such pattern in the target, and
combine cannot break the parallel into two sets (because the first modifies
the mem used by the second), so 6+7+8 doesn't combine.

Adding a bridge pattern in the target would work; or you can enhance combine
so it can break up this parallel correctly.
I think myself or someone suggested a bridge pattern in the past, but I 
can't find it, perhaps it was one of the other threads WRT limitations 
of the combiner.


Zhenqiang, can you look at what happens if you provide a pattern for 
 6+7+8 (probably via a define_insn_and_split)?


Jeff



Re: [PATCH] Fix PR 61225

2014-12-05 Thread Segher Boessenkool
On Fri, Dec 05, 2014 at 03:36:01PM -0700, Jeff Law wrote:
 So combine tries to combine 6+7+8; the RTL it comes up with is a parallel
 of the memory decrement (without cc clobber, but that is fine), and setting
 r88 to the mem minus one.  There is no such pattern in the target, and
 combine cannot break the parallel into two sets (because the first modifies
 the mem used by the second), so 6+7+8 doesn't combine.
 
 Adding a bridge pattern in the target would work; or you can enhance 
 combine
 so it can break up this parallel correctly.
 I think myself or someone suggested a bridge pattern in the past, but I 
 can't find it, perhaps it was one of the other threads WRT limitations 
 of the combiner.
 
 Zhenqiang, can you look at what happens if you provide a pattern for 
 6+7+8 (probably via a define_and_split)?

I tried this out yesterday.  There are a few options (a bridge pattern
for 6+7+8, or one for 7+8).  I went with 6+7+8.

So the code combine is asked to optimise is

6  A = M
7  T = A + B
8  M = T
9  C = cmp T, 0

and the bridge pattern I added is

M = M + B  ::  T = M + B

(I made it to split to  M = M + B ; T = M  which is probably not optimal,
but irrelevant for the rest here).
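
In RTL terms the bridge pattern matches a two-set parallel along these
lines (modes, predicates and any flags clobber are glossed over here, so
treat this as a sketch only):

  [(set (match_operand:SI 0 "memory_operand")
        (plus:SI (match_dup 0) (match_operand:SI 1 "register_operand")))
   (set (match_operand:SI 2 "register_operand")
        (plus:SI (match_dup 0) (match_dup 1)))]

Since all inputs of a PARALLEL are read before any output is written, both
sets see the old value of the memory location, which matches the
M = M + B  ::  T = M + B semantics above.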

So combine happily combines 6+7+8 to the bridge pattern.  But then it
forgets to make a link from 9.  I suppose it just doesn't know how to
make a link to a parallel (it wouldn't ever be useful before my recent
patches).

Investigating...


Segher


Re: [PATCH] Fix PR 61225

2014-12-05 Thread Segher Boessenkool
On Fri, Dec 05, 2014 at 03:31:54PM -0700, Jeff Law wrote:
 Combine does not consider combining 9 into 7 because there is no LOG_LINK
 between them (the link for r88 is between 8 and 7 already).
 OK, yea, that's a long standing design decision.  We don't feed a single 
 def into multiple use sites.

There is no real reason not to do that.  It doesn't increase computational
complexity, although it is of course more expensive than what combine does
today (it is more work, after all).  And combining with a later use does
not have too big a chance to succeed (since it has to keep the result of
the earlier insn around always).

GCC 6 or later ;-)


Segher


Re: [PATCH] Fix PR 61225

2014-12-05 Thread Segher Boessenkool
On Thu, Dec 04, 2014 at 02:57:56PM -0600, Segher Boessenkool wrote:
 Adding a bridge pattern in the target would work; or you can enhance combine
 so it can break up this parallel correctly.

I also investigated that second option.  The enhancement transforms
the combine result

M = XXX  ::  T = XXX

into

M = XXX
T = M

and then the set of T can combine with its later use (the compare), but
it won't ever combine that with the store to M: there is never a link
for memory, only for registers.

Never mind that this is unsuitable for many targets anyway (it creates
a read-after-write hazard).


Segher


Re: Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2014-12-05 Thread Segher Boessenkool
On Fri, Dec 05, 2014 at 01:40:56PM -0700, Jeff Law wrote:
 My first thought would be to allow both and have combine swap the order 
 in the vector if recog doesn't recognize the pattern.  One could argue 
 we could go through a full permutation of ordering in the vector, but 
 that's probably above and beyond the call of duty.

Combine also expects a certain ordering for its *inputs*.  It might not
be the only pass that does this either.  All targets (where this works)
have the compare first in all their patterns.  This goes back, what,
20+ years?  It might not be officially canonical, but it's the only
ordering that works everywhere.

Why can the compare-elim pass not simply swap the two elts of the parallel
around?  The alternative is to 1) modify all machine descriptions: this
is used many hundreds of times, if not thousands, and modifications are
not trivial (match_dups change location, for example); and 2) read all RTL
code to identify all the places where these implicit assumptions are made.


Segher


RE: [PATCH] Fix PR 61225

2014-12-04 Thread Zhenqiang Chen

 -Original Message-
 From: gcc-patches-ow...@gcc.gnu.org [mailto:gcc-patches-
 ow...@gcc.gnu.org] On Behalf Of Jeff Law
 Sent: Tuesday, December 02, 2014 6:11 AM
 To: Zhenqiang Chen
 Cc: Steven Bosscher; gcc-patches@gcc.gnu.org; Jakub Jelinek
 Subject: Re: [PATCH] Fix PR 61225
 
 On 08/04/14 02:24, Zhenqiang Chen wrote:
 
 
  ChangeLog:
  2014-05-22  Zhenqiang Chen  zhenqiang.c...@linaro.org
 
Part of PR rtl-optimization/61225
* config/i386/i386-protos.h (ix86_peephole2_rtx_equal_p): New proto.
* config/i386/i386.c (ix86_peephole2_rtx_equal_p): New function.
* regcprop.c (replace_oldest_value_reg): Add REG_EQUAL note when
propagating to SET.
 
  I can't help but wonder why the new 4 insn combination code isn't
  presenting this as a nice big fat insn to the x86 backend which would
  eliminate the need for the peep2.
 
  But, assuming there's a fundamental reason why that's not kicking in...
 
  Current combine pass can only handle
 
  I0 -> I1 -> I2 -> I3.
  I0, I1 -> I2, I2 -> I3.
  I0 -> I2; I1, I2 -> I3.
  I0 -> I1; I1, I2 -> I3.
 
  For this case, it is
  I1 -> I2 -> I3; I2 -> INSN
 
  I3 and INSN look unrelated.  But INSN is a COMPARE to set CC
  and I3 can also set CC.  I3 and INSN can be combined together as one
  instruction to set CC.
 Presumably there's no dataflow between I3 and INSN because they both set
 CC (doesn't that make them anti-dependent?)
 
 Can you show me the RTL corresponding to I1, I2, I3 and INSN; I simply
 find it easier to look at RTL rather than guess why we don't have the
 appropriate linkage and thus aren't attempting the combinations we want.
 
 Pseudo code for the resulting I3 and INSN would help -- as I work through
 this there are some inconsistencies in how I'm interpreting a few things,
 and RTL and pseudo-rtl for the desired output RTL would help a lot.

C code:

if (!--*p)

rtl code:

6: r91:SI=[r90:SI]
7: {r88:SI=r91:SI-0x1;clobber flags:CC;}
8: [r90:SI]=r88:SI
9: flags:CCZ=cmp(r88:SI,0)

expected output:

8: {flags:CCZ=cmp([r90:SI]-0x1,0);[r90:SI]=[r90:SI]-0x1;}

in assemble, it is

  decl (%eax)

 
  ChangeLog
  2014-08-04  Zhenqiang Chen  zhenqiang.c...@linaro.org
 
   Part of PR rtl-optimization/61225
   * combine.c (refer_same_reg_p): New function.
   (combine_instructions): Handle I1 -> I2 -> I3; I2 -> insn.
   (try_combine): Add one more parameter TO_COMBINED_INSN, which
   is used to create a new insn parallel (TO_COMBINED_INSN, I3).
 
  testsuite/ChangeLog:
  2014-08-04  Zhenqiang Chen  zhenqiang.c...@linaro.org
 
   * gcc.target/i386/pr61225.c: New test.
 
  diff --git a/gcc/combine.c b/gcc/combine.c
  index 53ac1d6..42098ab 100644
  --- a/gcc/combine.c
  +++ b/gcc/combine.c
  @@ -412,7 +412,7 @@ static int cant_combine_insn_p (rtx);
static int can_combine_p (rtx, rtx, rtx, rtx, rtx, rtx, rtx *, rtx *);
static int combinable_i3pat (rtx, rtx *, rtx, rtx, rtx, int, int, rtx
*);
static int contains_muldiv (rtx);
  -static rtx try_combine (rtx, rtx, rtx, rtx, int *, rtx);
  +static rtx try_combine (rtx, rtx, rtx, rtx, int *, rtx, rtx);
static void undo_all (void);
static void undo_commit (void);
 static rtx *find_split_point (rtx *, rtx, bool);
 @@ -1099,6 +1099,46 @@ insn_a_feeds_b (rtx a, rtx b)
#endif
  return false;
}
  +
  +/* A is a compare (reg1, 0) and B is SINGLE_SET which SET_SRC is reg2.
  +   It returns TRUE, if reg1 == reg2, and no other refer of reg1
  +   except A and B.  */
  +
  +static bool
  +refer_same_reg_p (rtx a, rtx b)
  +{
  +  rtx seta = single_set (a);
  +  rtx setb = single_set (b);
  +
  +  if (BLOCK_FOR_INSN (a) != BLOCK_FOR_INSN (b)
  + || !seta || !setb)
  +return false;
 Go ahead and use
  || !seta
  || !setb
 
 It's a bit more vertical space, but I believe closer in line with our
 coding standards.

Updated.
 
  +
  +  if (GET_CODE (SET_SRC (seta)) != COMPARE
  +  || GET_MODE_CLASS (GET_MODE (SET_DEST (seta))) != MODE_CC
  +  || !REG_P (XEXP (SET_SRC (seta), 0))
  +  || !const0_rtx
  +  || !REG_P (SET_SRC (setb))
  +  || REGNO (SET_SRC (setb)) != REGNO (XEXP (SET_SRC (seta), 0)))
  +return false;
 What's the !const0_rtx test here?  Don't you want to test some object
 from SETA against const0_rtx?  Also note that you may need to test
 against CONST0_RTX (mode)

It's my fault. It should be

XEXP (SET_SRC (seta), 1) != const0_rtx

The updated patch is attached.

Thanks!
-Zhenqiang

   @@ -1431,6 +1468,50 @@ combine_instructions (rtx f, unsigned int nregs)
 }
 }
 
  + /* Try to combine a compare insn that sets CC
  +with a preceding insn that can set CC, and maybe with its
  +logical predecessor as well.
  +We need this special code because data flow connections
  +do not get entered in LOG_LINKS.  */
 So you'd want to be more

Re: [PATCH] Fix PR 61225

2014-12-04 Thread Segher Boessenkool
On Thu, Dec 04, 2014 at 04:43:34PM +0800, Zhenqiang Chen wrote:
 C code:
 
 if (!--*p)
 
 rtl code:
 
 6: r91:SI=[r90:SI]
 7: {r88:SI=r91:SI-0x1;clobber flags:CC;}
 8: [r90:SI]=r88:SI
 9: flags:CCZ=cmp(r88:SI,0)
 
 expected output:
 
 8: {flags:CCZ=cmp([r90:SI]-0x1,0);[r90:SI]=[r90:SI]-0x1;}
 
 in assembly, it is
 
   decl (%eax)

Combine does not consider combining 9 into 7 because there is no LOG_LINK
between them (the link for r88 is between 8 and 7 already).
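
A rough sketch of the links combine gets to see for that stream
(illustrative only, not an actual dump):

  insn 7 <- insn 6   (for r91)
  insn 8 <- insn 7   (for r88)
  insn 9 <- nothing  (the r88 link already went to insn 8, so
                      combining 7 into 9 is never attempted)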


Segher


Re: [PATCH] Fix PR 61225

2014-12-04 Thread Segher Boessenkool
On Thu, Dec 04, 2014 at 02:49:56PM -0600, Segher Boessenkool wrote:
 On Thu, Dec 04, 2014 at 04:43:34PM +0800, Zhenqiang Chen wrote:
  C code:
  
  if (!--*p)
  
  rtl code:
  
  6: r91:SI=[r90:SI]
  7: {r88:SI=r91:SI-0x1;clobber flags:CC;}
  8: [r90:SI]=r88:SI
  9: flags:CCZ=cmp(r88:SI,0)
  
  expected output:
  
  8: {flags:CCZ=cmp([r90:SI]-0x1,0);[r90:SI]=[r90:SI]-0x1;}
  
  in assembly, it is
  
decl (%eax)
 
 Combine does not consider combining 9 into 7 because there is no LOG_LINK
 between them (the link for r88 is between 8 and 7 already).

So combine tries to combine 6+7+8; the RTL it comes up with is a parallel
of the memory decrement (without cc clobber, but that is fine), and setting
r88 to the mem minus one.  There is no such pattern in the target, and
combine cannot break the parallel into two sets (because the first modifies
the mem used by the second), so 6+7+8 doesn't combine.
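
In the dump notation used above, the parallel combine builds for 6+7+8
would look roughly like this (a sketch, not actual combine output):

  {[r90:SI]=[r90:SI]-0x1;r88:SI=[r90:SI]-0x1;}

Inside a parallel both sets read the old [r90:SI], which is fine; but once
split into two sequential sets, the store clobbers the memory the second
set still needs, which is why the parallel cannot be broken up.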

Adding a bridge pattern in the target would work; or you can enhance combine
so it can break up this parallel correctly.


Segher


Compare-elim pass (was: Re: [PATCH] Fix PR 61225)

2014-12-03 Thread Uros Bizjak
Hello!

 I also wonder if compare-elim ought to be helping here.  Isn't that the
 point here, to eliminate the comparison and instead get it for free as
 part of the arithmetic?  If so, is it the fact that we have memory
 references that prevents compare-elim from kicking in?

 Yes, compare-elim doesn't work with memory references but, more radically, it
 is not enabled for x86 (it is only enabled for aarch64, mn10300 and rx).

I did experiment a bit with a compare-elim pass on x86. However, as
rth said in [1]:

--quote--
If we want to use this pass for x86, then for 4.8 we should also fix the
discrepancy between the compare-elim canonical

  [(operate)
   (set-cc)]

and the combine canonical

  [(set-cc)
   (operate)]

(Because of the simplicity of the substitution in compare-elim, I prefer
the former as the canonical canonical.)
--/quote--
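
For the decl example in this thread, the two orders differ only in which
set comes first inside the parallel (sketched in the same dump notation
used earlier, not actual dumps):

compare-elim canonical:
  {[r90:SI]=[r90:SI]-0x1;flags:CCZ=cmp([r90:SI]-0x1,0);}

combine canonical:
  {flags:CCZ=cmp([r90:SI]-0x1,0);[r90:SI]=[r90:SI]-0x1;}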

There were some patches floating around [2], [3] that enhanced the
compare-elim pass for x86 needs, but the target never switched to the new
pass, mostly because the compare-elim pass did not catch all the cases
that the traditional RTX combine pass did. However, the compare-elim pass
can cross BB boundaries, where the traditional RTX combine pass doesn't
(and IIRC it even has a comment on why it doesn't try too hard to do so).

The reason why x86 doesn't use both passes is simply due to the fact
quoted above. compare-elim pass substitutes the clobber in the
PARALLEL RTX with a new set-cc in-place, so all relevant patterns in
i386.md (and a couple of support functions in i386.c) would have to be
swapped around. Unfortunately, simply changing i386.md insn patterns
would disable existing RTX combiner functionality, leading to various
missed-optimization regressions.

Due to the above, I would like to propose that the existing RTX combine
pass be updated to handle [(operate) (set-cc)] patterns (exclusively?).
From my experience, the post-reload compare-elim pass would catch a bunch
of remaining cross-BB opportunities left by the RTX combine pass, so
compare-elim would also be effective on x86 after the RTX combiner does
its job. While the target-dependent changes would be fairly trivial, I
don't know about the amount of work needed in combine.c to handle the new
canonical patterns. Maybe an RTL maintainer can chime in (hint, hint, wink
wink ;)

There is also a hidden benefit for the other, compare-elim-only targets.
Having this pass enabled on a wildly popular target would help catch
any latent bugs in the pass.

[1] https://gcc.gnu.org/ml/gcc-patches/2012-02/msg00251.html
[2] https://gcc.gnu.org/ml/gcc-patches/2012-02/msg00466.html
[3] https://gcc.gnu.org/ml/gcc-patches/2012-04/msg01487.html

Uros.


Re: [PATCH] Fix PR 61225

2014-12-01 Thread Jeff Law

On 08/04/14 02:24, Zhenqiang Chen wrote:



ChangeLog:
2014-05-22  Zhenqiang Chen  zhenqiang.c...@linaro.org

  Part of PR rtl-optimization/61225
  * config/i386/i386-protos.h (ix86_peephole2_rtx_equal_p): New
proto.
  * config/i386/i386.c (ix86_peephole2_rtx_equal_p): New function.
  * regcprop.c (replace_oldest_value_reg): Add REG_EQUAL note when
  propagating to SET.


I can't help but wonder why the new 4 insn combination code isn't presenting
this as a nice big fat insn to the x86 backend which would eliminate the
need for the peep2.

But, assuming there's a fundamental reason why that's not kicking in...


Current combine pass can only handle

I0 -> I1 -> I2 -> I3.
I0, I1 -> I2, I2 -> I3.
I0 -> I2; I1, I2 -> I3.
I0 -> I1; I1, I2 -> I3.

For the case, it is
I1 -> I2 -> I3; I2 -> INSN

I3 and INSN look unrelated.  But INSN is a COMPARE that sets CC, and
I3 can also set CC, so I3 and INSN can be combined into one
instruction that sets CC.
Presumably there's no dataflow between I3 and INSN because they both set
CC (doesn't that make them anti-dependent?)


Can you show me the RTL corresponding to I1, I2, I3 and INSN?  I simply
find it easier to look at RTL rather than guess why we don't have the
appropriate linkage and thus aren't attempting the combinations we want.


Pseudo code for the resulting I3 and INSN would help -- as I work
through this there are some inconsistencies in how I'm interpreting a few
things, and RTL and pseudo-RTL for the desired output would help a lot.





ChangeLog
2014-08-04  Zhenqiang Chen  zhenqiang.c...@linaro.org

 Part of PR rtl-optimization/61225
 * combine.c (refer_same_reg_p): New function.
 (combine_instructions): Handle I1 -> I2 -> I3; I2 -> insn.
 (try_combine): Add one more parameter TO_COMBINED_INSN, which is
 used to create a new insn parallel (TO_COMBINED_INSN, I3).

testsuite/ChangeLog:
2014-08-04  Zhenqiang Chen  zhenqiang.c...@linaro.org

 * gcc.target/i386/pr61225.c: New test.

diff --git a/gcc/combine.c b/gcc/combine.c
index 53ac1d6..42098ab 100644
--- a/gcc/combine.c
+++ b/gcc/combine.c
@@ -412,7 +412,7 @@ static int cant_combine_insn_p (rtx);
  static int can_combine_p (rtx, rtx, rtx, rtx, rtx, rtx, rtx *, rtx *);
  static int combinable_i3pat (rtx, rtx *, rtx, rtx, rtx, int, int, rtx *);
  static int contains_muldiv (rtx);
-static rtx try_combine (rtx, rtx, rtx, rtx, int *, rtx);
+static rtx try_combine (rtx, rtx, rtx, rtx, int *, rtx, rtx);
  static void undo_all (void);
  static void undo_commit (void);
  static rtx *find_split_point (rtx *, rtx, bool);
@@ -1099,6 +1099,46 @@ insn_a_feeds_b (rtx a, rtx b)
  #endif
return false;
  }
+
+/* A is a compare (reg1, 0) and B is a SINGLE_SET whose SET_SRC is reg2.
+   Return TRUE if reg1 == reg2 and there is no other reference to reg1
+   except in A and B.  */
+
+static bool
+refer_same_reg_p (rtx a, rtx b)
+{
+  rtx seta = single_set (a);
+  rtx setb = single_set (b);
+
+  if (BLOCK_FOR_INSN (a) != BLOCK_FOR_INSN (b)
+ || !seta || !setb)
+return false;

Go ahead and use
  || !seta
  || !setb

It's a bit more vertical space, but I believe closer in line with our 
coding standards.




+
+  if (GET_CODE (SET_SRC (seta)) != COMPARE
+  || GET_MODE_CLASS (GET_MODE (SET_DEST (seta))) != MODE_CC
+  || !REG_P (XEXP (SET_SRC (seta), 0))
+  || !const0_rtx
+  || !REG_P (SET_SRC (setb))
+  || REGNO (SET_SRC (setb)) != REGNO (XEXP (SET_SRC (seta), 0)))
+return false;
What's the !const0_rtx test here?  Don't you want to test some object 
from SETA against const0_rtx?  Also note that you may need to test 
against CONST0_RTX (mode)



@@ -1431,6 +1468,50 @@ combine_instructions (rtx f, unsigned int nregs)
   }
   }

+ /* Try to combine a compare insn that sets CC
+with a preceding insn that can set CC, and maybe with its
+logical predecessor as well.
+We need this special code because data flow connections
+do not get entered in LOG_LINKS.  */
So you'd want to be more specific about what dataflow connections are 
not in the LOG_LINKS that we want.


It feels to me like we're missing the anti-dependence links on CC and 
that there's a general aspect to combine missing here.  But I want to 
hold off on final judgement until I know more.


I also wonder if compare-elim ought to be helping here.  Isn't that the 
point here, to eliminate the comparison and instead get it for free as 
part of the arithmetic?  If so, is it the fact that we have memory 
references that prevents compare-elim from kicking in?


jeff



Re: [PATCH] Fix PR 61225

2014-12-01 Thread Eric Botcazou
 I also wonder if compare-elim ought to be helping here.  Isn't that the
 point here, to eliminate the comparison and instead get it for free as
 part of the arithmetic?  If so, is it the fact that we have memory
 references that prevents compare-elim from kicking in?

Yes, compare-elim doesn't work with memory references but, more radically, it 
is not enabled for x86 (it is only enabled for aarch64, mn10300 and rx).

-- 
Eric Botcazou


Re: [PATCH] Fix PR 61225

2014-08-04 Thread Zhenqiang Chen
On 17 July 2014 11:10, Jeff Law l...@redhat.com wrote:
 On 05/22/14 03:52, Zhenqiang Chen wrote:

 On 21 May 2014 20:43, Steven Bosscher stevenb@gmail.com wrote:

 On Wed, May 21, 2014 at 11:58 AM, Zhenqiang Chen wrote:

 Hi,

 The patch fixes the gcc.target/i386/pr49095.c FAIL in PR61225. The
 test case intends to check a peephole2 optimization, which optimizes the
 following sequence

  2: bx:SI=ax:SI
  25: ax:SI=[bx:SI]
  7: {ax:SI=ax:SI-0x1;clobber flags:CC;}
  8: [bx:SI]=ax:SI
  9: flags:CCZ=cmp(ax:SI,0)
 to
 2: bx:SI=ax:SI
 41: {flags:CCZ=cmp([bx:SI]-0x1,0);[bx:SI]=[bx:SI]-0x1;}

 The enhanced shrink-wrapping, which calls copyprop_hardreg_forward
 changes the INSN 25 to

  25: ax:SI=[ax:SI]

 Then peephole2 cannot optimize it since the two memory_operands look
 different.

 To fix it, the patch adds another peephole2 rule to read one more
 insn. From the register copy, it knows the address is the same.


 That is one complex peephole2 to deal with a transformation like this.
 It seems to me like it's a too-specific solution for a bigger problem.

 Could you please try one of the following solutions instead:

 1. Track register values for peephole2 and try different alternatives
 based on known register equivalences? E.g. in your example, perhaps
 there is already a REG_EQUAL/REG_EQUIV note available on insn 25 after
 copyprop_hardreg_forward, to annotate that [ax:SI] is equivalent to
 [bx:SI] at that point (or if that information is not available, it is
 not very difficult to make it available). Then you could try applying
 peephole2 on the original pattern but also on patterns modified with
 the known equivalences (i.e. try peephole2 on multiple equivalent
 patterns for the same insn). This may expose other peephole2
 opportunities, not just the specific one your patch addresses.


 Patch is updated according to the comment. There is no REG_EQUAL note, so
 I add one in replace_oldest_value_reg.

 ChangeLog:
 2014-05-22  Zhenqiang Chen  zhenqiang.c...@linaro.org

  Part of PR rtl-optimization/61225
  * config/i386/i386-protos.h (ix86_peephole2_rtx_equal_p): New
 proto.
  * config/i386/i386.c (ix86_peephole2_rtx_equal_p): New function.
  * regcprop.c (replace_oldest_value_reg): Add REG_EQUAL note when
  propagating to SET.

 I can't help but wonder why the new 4 insn combination code isn't presenting
 this as a nice big fat insn to the x86 backend which would eliminate the
 need for the peep2.

 But, assuming there's a fundamental reason why that's not kicking in...

Current combine pass can only handle

I0 -> I1 -> I2 -> I3.
I0, I1 -> I2, I2 -> I3.
I0 -> I2; I1, I2 -> I3.
I0 -> I1; I1, I2 -> I3.

For the case, it is
I1 -> I2 -> I3; I2 -> INSN

I3 and INSN look unrelated.  But INSN is a COMPARE that sets CC, and
I3 can also set CC, so I3 and INSN can be combined into one
instruction that sets CC.

The following patch enhances combine pass to handle the case.

A new parameter is added for try_combine to accept INSN as
TO_COMBINED_INSN. It reuses the 3-insn combine method to combine I1 ->
I2 -> I3. If there is a TO_COMBINED_INSN, combine I2 ->
TO_COMBINED_INSN, then create a new insn parallel (TO_COMBINED_INSN,
I3). refer_same_reg_p performs some checks to make sure the change is safe.
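
Concretely, for the decl example above, insns 8 and 9 end up merged into
the single parallel already shown as the expected output (sketch):

8: {flags:CCZ=cmp([r90:SI]-0x1,0);[r90:SI]=[r90:SI]-0x1;}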

Bootstrap and no make check regression on X86-64 and i686.

X86-64 bootstrap logs show 358 cases were combined by the patch.

Ok for trunk?

Thanks!
-Zhenqiang

ChangeLog
2014-08-04  Zhenqiang Chen  zhenqiang.c...@linaro.org

Part of PR rtl-optimization/61225
* combine.c (refer_same_reg_p): New function.
(combine_instructions): Handle I1 -> I2 -> I3; I2 -> insn.
(try_combine): Add one more parameter TO_COMBINED_INSN, which is
used to create a new insn parallel (TO_COMBINED_INSN, I3).

testsuite/ChangeLog:
2014-08-04  Zhenqiang Chen  zhenqiang.c...@linaro.org

* gcc.target/i386/pr61225.c: New test.

diff --git a/gcc/combine.c b/gcc/combine.c
index 53ac1d6..42098ab 100644
--- a/gcc/combine.c
+++ b/gcc/combine.c
@@ -412,7 +412,7 @@ static int cant_combine_insn_p (rtx);
 static int can_combine_p (rtx, rtx, rtx, rtx, rtx, rtx, rtx *, rtx *);
 static int combinable_i3pat (rtx, rtx *, rtx, rtx, rtx, int, int, rtx *);
 static int contains_muldiv (rtx);
-static rtx try_combine (rtx, rtx, rtx, rtx, int *, rtx);
+static rtx try_combine (rtx, rtx, rtx, rtx, int *, rtx, rtx);
 static void undo_all (void);
 static void undo_commit (void);
 static rtx *find_split_point (rtx *, rtx, bool);
@@ -1099,6 +1099,46 @@ insn_a_feeds_b (rtx a, rtx b)
 #endif
   return false;
 }
+
+/* A is a compare (reg1, 0) and B is a SINGLE_SET whose SET_SRC is reg2.
+   Return TRUE if reg1 == reg2 and there is no other reference to reg1
+   except in A and B.  */
+
+static bool
+refer_same_reg_p (rtx a, rtx b)
+{
+  rtx seta = single_set (a);
+  rtx setb = single_set (b);
+
+  if (BLOCK_FOR_INSN (a) != BLOCK_FOR_INSN (b)
+ || !seta || !setb)
+return false;
+
+  if 

Re: [PATCH] Fix PR 61225

2014-07-17 Thread Jeff Law

On 05/22/14 03:52, Zhenqiang Chen wrote:

On 21 May 2014 20:43, Steven Bosscher stevenb@gmail.com wrote:

On Wed, May 21, 2014 at 11:58 AM, Zhenqiang Chen wrote:

Hi,

The patch fixes the gcc.target/i386/pr49095.c FAIL in PR61225. The
test case intends to check a peephole2 optimization, which optimizes the
following sequence

 2: bx:SI=ax:SI
 25: ax:SI=[bx:SI]
 7: {ax:SI=ax:SI-0x1;clobber flags:CC;}
 8: [bx:SI]=ax:SI
 9: flags:CCZ=cmp(ax:SI,0)
to
2: bx:SI=ax:SI
41: {flags:CCZ=cmp([bx:SI]-0x1,0);[bx:SI]=[bx:SI]-0x1;}

The enhanced shrink-wrapping, which calls copyprop_hardreg_forward
changes the INSN 25 to

 25: ax:SI=[ax:SI]

Then peephole2 cannot optimize it since the two memory_operands look
different.

To fix it, the patch adds another peephole2 rule to read one more
insn. From the register copy, it knows the address is the same.


That is one complex peephole2 to deal with a transformation like this.
It seems to me like it's a too-specific solution for a bigger problem.

Could you please try one of the following solutions instead:

1. Track register values for peephole2 and try different alternatives
based on known register equivalences? E.g. in your example, perhaps
there is already a REG_EQUAL/REG_EQUIV note available on insn 25 after
copyprop_hardreg_forward, to annotate that [ax:SI] is equivalent to
[bx:SI] at that point (or if that information is not available, it is
not very difficult to make it available). Then you could try applying
peephole2 on the original pattern but also on patterns modified with
the known equivalences (i.e. try peephole2 on multiple equivalent
patterns for the same insn). This may expose other peephole2
opportunities, not just the specific one your patch addresses.


Patch is updated according to the comment. There is no REG_EQUAL note, so
I add one in replace_oldest_value_reg.

ChangeLog:
2014-05-22  Zhenqiang Chen  zhenqiang.c...@linaro.org

 Part of PR rtl-optimization/61225
 * config/i386/i386-protos.h (ix86_peephole2_rtx_equal_p): New proto.
 * config/i386/i386.c (ix86_peephole2_rtx_equal_p): New function.
 * regcprop.c (replace_oldest_value_reg): Add REG_EQUAL note when
 propagating to SET.
I can't help but wonder why the new 4 insn combination code isn't 
presenting this as a nice big fat insn to the x86 backend which would 
eliminate the need for the peep2.


But, assuming there's a fundamental reason why that's not kicking in...

In replace_oldest_value_reg, why not use reg_overlap_mentioned_p to 
determine if the REGNO of NEW_RTX is modified by INSN?  I'd look to 
avoid some of those calls to single_set (insn).  Just call it once and 
reuse the value.


Shouldn't you be ensuring the REG_EQUAL note is unique?  I think we have 
a routine to avoid creating a note that already exists.


Don't you have to ensure that the value in the REG_EQUAL note has not 
changed?  A REG_EQUAL note denotes an equivalence that holds at the 
single insn where it appears.  If you want to use the value elsewhere 
you'd have to ensure the value hasn't been changed.  If RTX referred to 
by the REG_EQUAL note is a MEM, this can be relatively difficult due to 
aliasing issues.
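
(For reference, the existing routine alluded to above is presumably
set_unique_reg_note; a minimal sketch of its use, purely illustrative --
equiv_rtx here is a hypothetical name for the equivalent expression:

  /* Add a REG_EQUAL note recording the equivalence, reusing any
     existing note instead of creating a duplicate.  */
  set_unique_reg_note (insn, REG_EQUAL, copy_rtx (equiv_rtx));
)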


Jeff






Re: [PATCH] Fix PR 61225

2014-05-22 Thread Zhenqiang Chen
On 21 May 2014 20:43, Steven Bosscher stevenb@gmail.com wrote:
 On Wed, May 21, 2014 at 11:58 AM, Zhenqiang Chen wrote:
 Hi,

 The patch fixes the gcc.target/i386/pr49095.c FAIL in PR61225. The
 test case intends to check a peephole2 optimization, which optimizes the
 following sequence

 2: bx:SI=ax:SI
 25: ax:SI=[bx:SI]
 7: {ax:SI=ax:SI-0x1;clobber flags:CC;}
 8: [bx:SI]=ax:SI
 9: flags:CCZ=cmp(ax:SI,0)
 to
2: bx:SI=ax:SI
41: {flags:CCZ=cmp([bx:SI]-0x1,0);[bx:SI]=[bx:SI]-0x1;}

 The enhanced shrink-wrapping, which calls copyprop_hardreg_forward
 changes the INSN 25 to

 25: ax:SI=[ax:SI]

 Then peephole2 cannot optimize it since the two memory_operands look
 different.

 To fix it, the patch adds another peephole2 rule to read one more
 insn. From the register copy, it knows the address is the same.

 That is one complex peephole2 to deal with a transformation like this.
 It seems to me like it's a too-specific solution for a bigger problem.

 Could you please try one of the following solutions instead:

 1. Track register values for peephole2 and try different alternatives
 based on known register equivalences? E.g. in your example, perhaps
 there is already a REG_EQUAL/REG_EQUIV note available on insn 25 after
 copyprop_hardreg_forward, to annotate that [ax:SI] is equivalent to
 [bx:SI] at that point (or if that information is not available, it is
 not very difficult to make it available). Then you could try applying
 peephole2 on the original pattern but also on patterns modified with
 the known equivalences (i.e. try peephole2 on multiple equivalent
 patterns for the same insn). This may expose other peephole2
 opportunities, not just the specific one your patch addresses.

Patch is updated according to the comment. There is no REG_EQUAL note, so
I add one in replace_oldest_value_reg.

ChangeLog:
2014-05-22  Zhenqiang Chen  zhenqiang.c...@linaro.org

Part of PR rtl-optimization/61225
* config/i386/i386-protos.h (ix86_peephole2_rtx_equal_p): New proto.
* config/i386/i386.c (ix86_peephole2_rtx_equal_p): New function.
* regcprop.c (replace_oldest_value_reg): Add REG_EQUAL note when
propagating to SET.

diff --git a/gcc/config/i386/i386-protos.h b/gcc/config/i386/i386-protos.h
index 39462bd..0c4a2b9 100644
--- a/gcc/config/i386/i386-protos.h
+++ b/gcc/config/i386/i386-protos.h
@@ -42,6 +42,7 @@ extern enum calling_abi ix86_function_type_abi (const_tree);

 extern void ix86_reset_previous_fndecl (void);

+extern bool ix86_peephole2_rtx_equal_p (rtx, rtx, rtx, rtx);
 #ifdef RTX_CODE
 extern int standard_80387_constant_p (rtx);
 extern const char *standard_80387_constant_opcode (rtx);
diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
index 6ffb788..583ebe8 100644
--- a/gcc/config/i386/i386.c
+++ b/gcc/config/i386/i386.c
@@ -46856,6 +46856,29 @@ ix86_atomic_assign_expand_fenv (tree *hold, tree *clear, tree *update)
atomic_feraiseexcept_call);
 }

+/* OP0 is the SET_DEST of INSN and OP1 is the SET_SRC of INSN.
+   Check whether OP1 and OP6 are equal.  */
+
+bool
+ix86_peephole2_rtx_equal_p (rtx insn, rtx op0, rtx op1, rtx op6)
+{
+  rtx note;
+
+  if (!reg_overlap_mentioned_p (op0, op1) && rtx_equal_p (op1, op6))
+return true;
+
+  gcc_assert (single_set (insn)
+	      && op0 == SET_DEST (single_set (insn))
+	      && op1 == SET_SRC (single_set (insn)));
+
+  note = find_reg_note (insn, REG_EQUAL, NULL_RTX);
+  if (note
+      && !reg_overlap_mentioned_p (op0, XEXP (note, 0))
+      && rtx_equal_p (XEXP (note, 0), op6))
+return true;
+
+  return false;
+}
 /* Initialize the GCC target structure.  */
 #undef TARGET_RETURN_IN_MEMORY
 #define TARGET_RETURN_IN_MEMORY ix86_return_in_memory
diff --git a/gcc/config/i386/i386.md b/gcc/config/i386/i386.md
index 44e80ec..b57fc86 100644
--- a/gcc/config/i386/i386.md
+++ b/gcc/config/i386/i386.md
@@ -16996,11 +16996,12 @@
 	     [(match_dup 0)
 	      (match_operand:SWI 2 "<nonmemory_operand>")]))
 	      (clobber (reg:CC FLAGS_REG))])
-   (set (match_dup 1) (match_dup 0))
+   (set (match_operand:SWI 6 "memory_operand") (match_dup 0))
    (set (reg FLAGS_REG) (compare (match_dup 0) (const_int 0)))]
   "(TARGET_READ_MODIFY_WRITE || optimize_insn_for_size_p ())
    && peep2_reg_dead_p (4, operands[0])
-   && !reg_overlap_mentioned_p (operands[0], operands[1])
+   && ix86_peephole2_rtx_equal_p (peep2_next_insn (0), operands[0],
+				  operands[1], operands[6])
    && !reg_overlap_mentioned_p (operands[0], operands[2])
    && (<MODE>mode != QImode
        || immediate_operand (operands[2], QImode)
diff --git a/gcc/regcprop.c b/gcc/regcprop.c
index 7a5a4f6..4e09724 100644
--- a/gcc/regcprop.c
+++ b/gcc/regcprop.c
@@ -510,6 +510,22 @@ replace_oldest_value_reg (rtx *loc, enum reg_class cl, rtx insn,
 	  fprintf (dump_file, "insn %u: replaced reg %u with %u\n",
 		   INSN_UID (insn), REGNO (*loc), REGNO 

Re: [PATCH] Fix PR 61225

2014-05-21 Thread Steven Bosscher
On Wed, May 21, 2014 at 11:58 AM, Zhenqiang Chen wrote:
 Hi,

 The patch fixes the gcc.target/i386/pr49095.c FAIL in PR61225. The
 test case intends to check a peephole2 optimization, which optimizes the
 following sequence

 2: bx:SI=ax:SI
 25: ax:SI=[bx:SI]
 7: {ax:SI=ax:SI-0x1;clobber flags:CC;}
 8: [bx:SI]=ax:SI
 9: flags:CCZ=cmp(ax:SI,0)
 to
2: bx:SI=ax:SI
41: {flags:CCZ=cmp([bx:SI]-0x1,0);[bx:SI]=[bx:SI]-0x1;}

 The enhanced shrink-wrapping, which calls copyprop_hardreg_forward
 changes the INSN 25 to

 25: ax:SI=[ax:SI]

 Then peephole2 cannot optimize it since the two memory_operands look
 different.

 To fix it, the patch adds another peephole2 rule to read one more
 insn. From the register copy, it knows the address is the same.

That is one complex peephole2 to deal with a transformation like this.
It seems to me like it's a too-specific solution for a bigger problem.

Could you please try one of the following solutions instead:

1. Track register values for peephole2 and try different alternatives
based on known register equivalences? E.g. in your example, perhaps
there is already a REG_EQUAL/REG_EQUIV note available on insn 25 after
copyprop_hardreg_forward, to annotate that [ax:SI] is equivalent to
[bx:SI] at that point (or if that information is not available, it is
not very difficult to make it available). Then you could try applying
peephole2 on the original pattern but also on patterns modified with
the known equivalences (i.e. try peephole2 on multiple equivalent
patterns for the same insn). This may expose other peephole2
opportunities, not just the specific one your patch addresses (a rough
sketch of this idea follows after suggestion 2 below).

2. Avoid copy-prop'ing ax:SI into [bx:SI] to begin with. At insn 7,
both ax:SI and bx:SI are live, so insn 2 is not dead (i.e. cannot be
eliminated) and there is no benefit in this transformation. It only
hides (or at least makes it harder to see) that [ax:SI] in insn 25 is
the same memory reference as [bx:SI] in insn 8. So perhaps the copy
propagation should simply not be done unless it turns at least one
instruction into dead code.
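
A minimal sketch of what suggestion 1 could look like on the target side,
assuming the peephole2 condition calls a helper that also accepts a
REG_EQUAL note as proof that two memory operands name the same address
(the helper name below is made up; the follow-up patch in this thread adds
a very similar ix86_peephole2_rtx_equal_p):

  /* Sketch only: OP1 is the SET_SRC of the load insn matched by the
     peephole2, OP6 the memory operand of the later store.  Accept them
     as equal either directly or via a REG_EQUAL note on INSN.  */
  static bool
  mems_equal_maybe_via_note_p (rtx insn, rtx op1, rtx op6)
  {
    rtx note;

    if (rtx_equal_p (op1, op6))
      return true;

    note = find_reg_note (insn, REG_EQUAL, NULL_RTX);
    return note && rtx_equal_p (XEXP (note, 0), op6);
  }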


Any reason why this transformation isn't done much earlier, e.g. in combine?

Ciao!
Steven


Re: [PATCH] Fix PR 61225

2014-05-21 Thread Zhenqiang Chen
On 21 May 2014 20:43, Steven Bosscher stevenb@gmail.com wrote:
 On Wed, May 21, 2014 at 11:58 AM, Zhenqiang Chen wrote:
 Hi,

 The patch fixes the gcc.target/i386/pr49095.c FAIL in PR61225. The
 test case intends to check a peephole2 optimization, which optimizes the
 following sequence

 2: bx:SI=ax:SI
 25: ax:SI=[bx:SI]
 7: {ax:SI=ax:SI-0x1;clobber flags:CC;}
 8: [bx:SI]=ax:SI
 9: flags:CCZ=cmp(ax:SI,0)
 to
2: bx:SI=ax:SI
41: {flags:CCZ=cmp([bx:SI]-0x1,0);[bx:SI]=[bx:SI]-0x1;}

 The enhanced shrink-wrapping, which calls copyprop_hardreg_forward
 changes the INSN 25 to

 25: ax:SI=[ax:SI]

 Then peephole2 cannot optimize it since the two memory_operands look
 different.

 To fix it, the patch adds another peephole2 rule to read one more
 insn. From the register copy, it knows the address is the same.

 That is one complex peephole2 to deal with a transformation like this.
 It seems to me like it's a too-specific solution for a bigger problem.

 Could you please try one of the following solutions instead:

Thanks for the comments.

 1. Track register values for peephole2 and try different alternatives
 based on known register equivalences? E.g. in your example, perhaps
 there is already a REG_EQUAL/REG_EQUIV note available on insn 25 after
 copyprop_hardreg_forward, to annotate that [ax:SI] is equivalent to
 [bx:SI] at that point (or if that information is not available, it is
 not very difficult to make it available). Then you could try applying
 peephole2 on the original pattern but also on patterns modified with
 the known equivalences (i.e. try peephole2 on multiple equivalent
 patterns for the same insn). This may expose other peephole2
 opportunities, not just the specific one your patch addresses.

I will try this one.

 2. Avoid copy-prop'ing ax:SI into [bx:SI] to begin with. At insn 7,
 both ax:SI and bx:SI are live, so insn 2 is not dead (i.e. cannot be
 eliminated) and there is no benefit in this transformation. It only
 hides (or at least makes it harder to see) that [ax:SI] in insn 25 is
 the same memory reference as [bx:SI] in insn 8. So perhaps the copy
 propagation should simply not be done unless it turns at least one
 instruction into dead code.

This is a good heuristic, but it would add a lot of complexity to
copy-prop. The copy-prop pass scans INSNs one by one to do the
propagation. If there are multiple referencing INSNs, you cannot make the
decision until the last reference has been changed.

 Any reason why this transformation isn't done much earlier, e.g. in combine?

I do not know. A similar peephole2 rule was added to fix pr49095.
I will try it if solution 1 does not work.

Thanks!
-Zhenqiang