On Wed, 12 Oct 2016, Marc Glisse wrote:

> On Wed, 12 Oct 2016, Richard Biener wrote:
> 
> > So with doing the same on GENERIC I hit
> > 
> > FAIL: g++.dg/cpp1y/constexpr-array4.C  -std=c++14 (test for excess errors)
> > 
> > with the pattern
> > 
> >  /* (T)(P + A) - (T)P -> (T) A */
> >  (for add (plus pointer_plus)
> >   (simplify
> >    (minus (convert (add @0 @1))
> >     (convert @0))
> 
> Ah, grep missed that one because it is on 2 lines :-(
> 
> >    ...
> >     (convert @1))))
> > 
> > 
> > no longer applying to
> > 
> > (long int) ((int *) &ar + 12) - (long int) &ar
> > 
> > because while (int *) &ar is equal to (long int) &ar (operand_equal_p
> > does STRIP_NOPS) they obviously do not have the same type.
> 
> I believe we are comparing (int *) &ar to &ar, not to (long int) &ar. But yes,
> indeed.
> 
> > So on GENERIC we have to consider that we feed operand_equal_p with
> > non-ATOMs (arbitrary expressions).  The pattern above is "safe" as it
> > doesn't reference @0 in the result (not sure if we should take advantage of
> > that in genmatch).
> 
> Since we are in the process of defining an
> operand_equal_for_(generic|gimple)_match_p, we could tweak it to check the
> type only for INTEGER_CST, or to still STRIP_NOPS, or similar.
> 
> Or we could remain very strict and refine the pattern, allowing a convert on
> the pointer (we might want to split the plus and pointer_plus versions then,
> for clarity). There are not many optimizations that are mandated by front-ends
> and for which this is an issue.
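
To make that concrete, the refined pointer_plus variant could look
something like this (untested sketch, just spelling out the "convert on
the pointer" idea; the real pattern's conditions are omitted):

 /* (T)(P p+ A) - (T)P -> (T) A, allowing an extra conversion on P so
    exact-type matching still works on GENERIC.  */
 (simplify
  (minus (convert (pointer_plus (convert? @0) @1))
   (convert @0))
  (convert @1))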
> 
> > The other FAILs with doing the same on GENERIC are
> > 
> > FAIL: g++.dg/gomp/declare-simd-3.C  -std=gnu++11 (test for excess errors)
> > FAIL: g++.dg/torture/pr71448.C   -O0  (test for excess errors)
> > FAIL: g++.dg/vect/simd-clone-6.cc  -std=c++11 (test for excess errors)
> > 
> > the simd ones are 'warning: ignoring large linear step' and the pr71448.C
> > case is very similar to the above.
> 
> Yes, I expect they all come from the same 1 or 2 transformations.
> 
> > > > > If we stick to the old behavior, maybe we could have some genmatch
> > > > > magic to help with the constant capture weirdness. With matching
> > > > > captures, we could select which operand (among those supposed to be
> > > > > equivalent) is actually captured more cleverly, either with an
> > > > > explicit marker, or by giving priority to the one that is not
> > > > > immediately below convert? in the pattern.
> > > > 
> > > > This route is a difficult one to take
> > > 
> > > The simplest version I was thinking about was @0 for a true capture,
> > > and @@0 for something that just has to be checked for equality with @0.
> > 
> > Hmm, ok.  So you'd have @@0 having to match @0 and we'd get the @0 for
> > the result in a reliable way?  Sounds like a reasonable idea; I'll see
> > how that works out (we could auto-detect '@@' if the capture is not
> > used in the result, see above).
> 
> It probably doesn't bring much compared to the @0@4 syntax you were 
> suggesting below, so if that is easier...

Will figure that out ...
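
For comparison, the @0@4 spelling applied to the pattern in question might
look like this (hypothetical syntax, conditions omitted; @3 and @4 are the
extra per-occurrence names):

 /* Hypothetical: @3 and @4 name the two occurrences of @0 so that a
    condition or the result could refer to a specific tree.  */
 (for add (plus pointer_plus)
  (simplify
   (minus (convert (add @0@3 @1))
    (convert @0@4))
   (convert @1)))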

> > > > -- what would be possible is to
> > > > capture a specific operand.  Like allow one to write
> > > > 
> > > > (op (op @0@4 @1) (op @0@3 @2))
> > > > 
> > > > and thus actually have three "names" for @0.  We have this internally
> > > > already when you write
> > > > 
> > > > (convert?@0 @1)
> > > > 
> > > > for the case where there is no conversion.  @0 and @1 are the same
> > > > in this case.
> > > 
> > > Looks nice and convenient (assuming lax constant matching).
> > 
> > Yes, w/o lax matching it has of course little value.
> > 
> > > > Not sure if this helps enough cases.
> > > 
> > > IIRC, in all cases where we had trouble with operand_equal_p, choosing
> > > which operand to capture would have solved the issue.
> > 
> > Yes.  We'd still need to actually catch all those cases...
> 
> Being forced to choose which operand we capture (say with @ vs @@, making 2
> occurrences of @0 a syntax error) might help, but it would also require
> updating many patterns for which this was never an issue (easy but boring).

We can even have today

 (plus (plus (minus @0 @1) (plus @0 @2)) @0)

thus three matching @0.  Now if we have @@ telling us to use value equality
rather than "node equality" _and_ making the @@ select the canonical
operand to capture, then we'd have at most one @@ per ID.  Somewhat tricky,
but not impossible to implement, I think.
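
With that, the pattern from the start of this mail might simply be written
as (again hypothetical syntax, conditions omitted):

 /* @@0 would request value equality (operand_equal_p) between the @0
    occurrences and mark this one as the operand we get back as @0.  */
 (for add (plus pointer_plus)
  (simplify
   (minus (convert (add @@0 @1))
    (convert @0))
   (convert @1)))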

> > > > I still think that having two things matched that are not really
> > > > the same is weird (and a possible source of errors, as in ICEs,
> > > > rather than missed optimizations).
> > > 
> > > Ok. Let's go with the strict matching, it is indeed less confusing.
> > 
> > I'll hold off with the GENERIC strict matching and will investigate
> > the @@ idea (plus auto-detecting it).
> > 
> > > > > And if we move to stricter matching, maybe genmatch could be updated
> > > > > so convert could also match integer constants in some cases.
> > > > 
> > > > You mean when trying to match @0 ... (convert @0) and @0 is an
> > > > INTEGER_CST, then allow the (convert ...) case to match an
> > > > INTEGER_CST of a different type?
> > > > That's an interesting idea (not sure how to implement that though).
> > > 
> > > Yes, though I am not sure of the exact semantics that would work best.
> > > 
> > > On a related note, seeing duplicated patterns for constants, I tried
> > > several variants of
> > > 
> > > (match (mynot @0)
> > >  (bit_not @0))
> > > (match (mynot @0)
> > >  INTEGER_CST@0
> > >  (if (@0 == wide_int_to_tree (TREE_TYPE (@0), wi::bit_not (@0)))))
> > > 
> > > (simplify
> > >  (minus (bit_and:cs @0 (mynot @1)) (bit_and:cs @0 @1))
> > >   (minus (bit_xor @0 @1) @1))
> > > 
> > > This kind of hack feels wrong, but I don't see a proper way to write it.
> > 
> > Yes.  The above is very much a hack.  Must have been me inventing it
> > just to avoid duplicating the pattern.
> 
> Uh, no, don't worry, we don't have that hack in match.pd; we have two patterns.
> The hack is just me trying to remove the duplication. If you thought you had
> written it, that means it may not have been such an absurd way to go about it
> ;-)

Ah ...

;)
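
FWIW, spelled out without such a predicate the duplication amounts to a
pair along these lines (a sketch derived from the mynot variant above,
not copied verbatim from match.pd):

 (simplify
  (minus (bit_and:cs @0 (bit_not @1)) (bit_and:cs @0 @1))
  (minus (bit_xor @0 @1) @1))
 (simplify
  (minus (bit_and:cs @0 INTEGER_CST@2) (bit_and:cs @0 INTEGER_CST@1))
  (if (wi::bit_not (@2) == @1)
   (minus (bit_xor @0 @1) @1)))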
