2014-03-04 14:14 GMT+01:00 Richard Biener <rguent...@suse.de>:
> On Mon, 3 Mar 2014, Kai Tietz wrote:
>
>> 2014-03-03 12:33 GMT+01:00 Richard Biener <rguent...@suse.de>:
>> > On Fri, 28 Feb 2014, Kai Tietz wrote:
>> >
>> >> Hmm, this all reminds me of the approach Andrew Pinski and I came
>> >> up with two years ago.
>> >
>> > You are talking about the gimple folding interface?  Yes, but it's
>> > more similar to what I proposed before that.
>>
>> Well, this interface was for rtl, gimple, and tree AFAIR.
>>
>>
>> >> So I doubt that we want to keep fold-const patterns similar to gimple
>> >> (forward-prop) ones.
>> >> Wouldn't it make more sense to move fold-const patterns completely
>> >> into gimple, and have a facility in the FE to ask gimple to *pre*-fold
>> >> a given tree to see if a constant expression can be achieved?
>> >
>> > That was proposed by somebody, yes.  The FE would hand off an
>> > expression to 1) the gimplifier to gimplify it, then 2) to the
>> > gimple folder to simplify it.  Not sure if that's a good design
>> > but yes, it mimics the awkward thing we do now (genericize for
>> > folding in fold_stmt), just the other way around - and it makes
>> > it very costly.
>>
>> Right, if we redid steps one and two each time we visit the same
>> statement again, then of course we would produce a pretty high load.
>> By hashing this *pre*-computed gimple expression I think the load of
>> such an approach would drop considerably.  Of course it is true that
>> we move gimplification costs to the FE.  Nevertheless the average
>> costs should in general be the same as we have now.
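To make the hashing idea concrete, here is a minimal toy sketch
(deliberately not GCC code; every type and name below is invented for
illustration only) of keying an already-folded result by a structural
hash of the expression, so revisiting the same statement does not pay
the gimplify-and-simplify cost again:

// Toy sketch, not GCC code: cache the result of folding an expression,
// keyed by a structural hash, so that the expensive gimplify + simplify
// step runs only once per distinct expression.
#include <cstddef>
#include <memory>
#include <string>
#include <unordered_map>

struct Expr {
  std::string op;                    // e.g. "plus", "const"
  long value = 0;                    // payload for constants
  std::shared_ptr<Expr> lhs, rhs;    // operands, themselves already folded
};

// Structural hash over operator, constant value and operand hashes.
static std::size_t expr_hash (const Expr &e)
{
  std::size_t h = std::hash<std::string>{} (e.op) ^ std::hash<long>{} (e.value);
  if (e.lhs) h = h * 31 + expr_hash (*e.lhs);
  if (e.rhs) h = h * 31 + expr_hash (*e.rhs);
  return h;
}

// Cache from expression hash to folded result.  A real implementation
// would also compare structurally to guard against hash collisions.
static std::unordered_map<std::size_t, std::shared_ptr<Expr>> fold_cache;

static std::shared_ptr<Expr> fold_cached (const std::shared_ptr<Expr> &e)
{
  std::size_t key = expr_hash (*e);
  auto it = fold_cache.find (key);
  if (it != fold_cache.end ())
    return it->second;               // reuse the pre-computed folding
  std::shared_ptr<Expr> folded = e;  // placeholder for the real folder
  // ... run the gimplify + simplify steps once here ...
  fold_cache.emplace (key, folded);
  return folded;
}

The point is only that the FE would pay the gimplification cost once
per distinct expression, not on every revisit.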
>>
>>
>> > Having a single meta-description of simplifications makes it
>> > possible to have the best of both worlds (no need to GENERICIZE
>> > back from GIMPLE and no need to GIMPLIFY from GENERIC) with
>> > a single point of maintenance.
>>
>> True.  I fully agree with the positive effects of a single
>> meta-description for this area.  It is certainly worth avoiding
>> redoing the same folding for GENERIC/GIMPLE again and again.
>>
>> > [the possibility to use offline verification tools for the
>> > transforms comes to my mind as well]
>> This is actually a pretty interesting idea, as it would allow us to
>> test this area without side effects from high-level passes,
>> target properties, etc.
>>
>> > If you think the .md style pattern definitions are too limiting,
>> > can you think of something more powerful without removing the
>> > possibility of implementing the matching with a state machine to
>> > make it O(1)?
>>
>> Well, I am not opposed to the use of .md style pattern definitions at
>> all.  I just see some weaknesses in the current tree-folding
>> mechanism.
>> The AST folder tries to fold statements by recursing into them.  This
>> causes a pretty high load in different areas, like stack growth,
>> unnecessary repetition of operations on inner statements, and complex
>> patterns expressing associative/distributive/commutative rules for
>> the current operation code.
>
> Actually it doesn't recurse - it avoids recursion by requiring
> each sub-pattern to be already folded.

Right, in general that is true.  Sorry, I expressed myself poorly.  I
meant things like fold_truth_andor.
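For illustration, one transform of that flavour merges two comparisons,
e.g. a == 0 && b == 0 into (a | b) == 0 (meant as the kind of rewrite
done in that area, not as the exact fold_truth_andor code path).  A
minimal toy sketch, again with invented names and nothing taken from
GCC, of matching that as one flat pattern over operands that are
assumed to be folded already, instead of recursing into them:

// Toy sketch, not GCC code: match "a == 0 && b == 0" in one flat step,
// relying on the operands being simplified before the enclosing "and"
// is looked at, and rewrite it to "(a | b) == 0".
#include <memory>
#include <string>

struct Node {
  std::string op;                    // "and", "eq", "or", "var", "const0"
  std::shared_ptr<Node> lhs, rhs;
};

static bool is_eq_zero (const Node &n)
{
  return n.op == "eq" && n.rhs && n.rhs->op == "const0";
}

// Single, non-recursive match over already-folded operands.
static std::shared_ptr<Node> simplify_and (const std::shared_ptr<Node> &n)
{
  if (n->op == "and" && is_eq_zero (*n->lhs) && is_eq_zero (*n->rhs))
    {
      auto ior  = std::make_shared<Node> (Node{"or", n->lhs->lhs, n->rhs->lhs});
      auto zero = std::make_shared<Node> (Node{"const0", nullptr, nullptr});
      return std::make_shared<Node> (Node{"eq", ior, zero});
    }
  return n;                          // nothing matched, keep as-is
}

Expressed as a pattern this is exactly one rule; it is the repeated
recursion into the operands that I would like to avoid paying.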

>> I am thinking about a model where we use the .md-style pattern
>> definitions just for the base fold operations.  On top of this model
>> we put a layer implementing associative/commutative/distributive
>> properties for statements in an optimized form.
>
> Sure, that would be a pass using the match-and-simplify infrastructure.
> In fact the infrastructure alone only provides enough to do the
> bare folding.

Well, I am not sure it is the best approach to handle
commutative/associative/distributive rules only within a pass.  Sure,
the forward-propagation pass would be a candidate for this.
Nevertheless, shouldn't we fold generated conditions (and other
generated code patterns) at the time of generation, including handling
those laws?
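To illustrate what I mean by folding at generation time, here is a toy
sketch (invented names, not a proposal for a concrete GCC interface) of
a build helper that already applies constant folding and a commutative
canonicalization while the expression is constructed, instead of
leaving all of that to a later pass:

// Toy sketch, not GCC code: simplify while building, so generated
// conditions come out folded instead of waiting for forward-prop.
#include <memory>
#include <string>
#include <utility>

struct Tree {
  std::string code;                  // "plus", "cst", "var"
  long cst = 0;                      // constant payload
  std::shared_ptr<Tree> op0, op1;
};

static std::shared_ptr<Tree> build_cst (long v)
{
  auto t = std::make_shared<Tree> ();
  t->code = "cst"; t->cst = v;
  return t;
}

// Build "a + b", applying a few rules at generation time.
static std::shared_ptr<Tree> build_plus (std::shared_ptr<Tree> a,
                                         std::shared_ptr<Tree> b)
{
  if (a->code == "cst" && b->code == "cst")   // constant folding
    return build_cst (a->cst + b->cst);
  if (a->code == "cst")                       // commutative canonicalization:
    std::swap (a, b);                         // constant operand goes last
  if (b->code == "cst" && b->cst == 0)        // x + 0 -> x
    return a;
  auto t = std::make_shared<Tree> ();
  t->code = "plus"; t->op0 = a; t->op1 = b;
  return t;
}

The rules themselves would still come from the shared pattern
description; the question is only whether the generator calls into them
directly or leaves the cleanup to a pass.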

Kai
