On Wed, Sep 14, 2016 at 9:49 PM, Jeff Law <l...@redhat.com> wrote:
> On 09/14/2016 01:29 PM, Richard Biener wrote:
>> It's what match-and-simplify does as well.
>> I question the need to build GENERIC here though. M-a-s happily gets you
>> a simplified expression as sequence of GIMPLE statements. (But does not yet
>> provide a way to build a simplified GENERIC expression from GIMPLE IL)
> At a particular site we're going to have an expression where we want to
> evaluate the operands (either to a constant or symbolically).
> For each operand in the expression we recursively raise the query "what is
> this operand". The result (potentially another expression) gets substituted
> in for the operand and the process continues recursively. Recursion stops
> when you get a constant or when you've walked back "too far" or the
> expression gets "too complex".
> Once you get something like (x+y)+z you're no longer GIMPLE. But we need
> to store that expression in some form.
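The recursive walk described in the quote above can be sketched with toy types (illustrative only -- these are not GCC's tree or gimple structures, and all names here are made up):

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Toy expression: a constant, a named operand, or a binary op.
struct Expr {
  enum Kind { CONST, NAME, BINOP } kind;
  long value = 0;            // CONST
  std::string name;          // NAME
  char op = 0;               // BINOP
  std::shared_ptr<Expr> lhs, rhs;
};
using ExprP = std::shared_ptr<Expr>;

static ExprP mk_const(long v)
{ auto e = std::make_shared<Expr>(); e->kind = Expr::CONST; e->value = v; return e; }
static ExprP mk_name(std::string n)
{ auto e = std::make_shared<Expr>(); e->kind = Expr::NAME; e->name = std::move(n); return e; }
static ExprP mk_binop(char op, ExprP l, ExprP r)
{ auto e = std::make_shared<Expr>(); e->kind = Expr::BINOP; e->op = op; e->lhs = l; e->rhs = r; return e; }

// "What is this operand?"  Recursively substitute each name's defining
// expression, stopping at constants, unknown names, or a depth limit
// (the "too far"/"too complex" cut-off).
static ExprP resolve(const ExprP &e, const std::map<std::string, ExprP> &defs,
                     int depth)
{
  if (depth == 0 || e->kind == Expr::CONST)
    return e;
  if (e->kind == Expr::NAME) {
    auto it = defs.find(e->name);
    return it == defs.end() ? e : resolve(it->second, defs, depth - 1);
  }
  // BINOP: resolve both operands; fold if both became constants.
  ExprP l = resolve(e->lhs, defs, depth - 1);
  ExprP r = resolve(e->rhs, defs, depth - 1);
  if (l->kind == Expr::CONST && r->kind == Expr::CONST && e->op == '+')
    return mk_const(l->value + r->value);
  return mk_binop(e->op, l, r);
}
```

Here the depth limit plays the role of the "too far"/"too complex" cut-off; when no constant falls out, the partially substituted expression (e.g. (x+y)+z) is exactly the non-GIMPLE thing that needs some representation.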
I'm not sure you absolutely need to store it -- it is present in the IL
(in unoptimized form) after all.
One reason current VRP symbolic ranges are "limited" is to avoid expensive stuff
such as folding (because of all the GC memory churn). Yes -- if the expression
simplifies to a constant we'd like to know that, and m-a-s lets you restrict
results to constants (so it avoids building up any larger expressions). You
also have the IL you are walking, which should contain any intermediate
simplifications to constants. You _may_ miss some more advanced simplifications
that require simplified intermediate expressions, but folding never was able to
catch all opportunities (think of the re-association necessary to cancel
everything to zero in a very large expression). For some areas in the compiler
(IVO) we have the tree-affine stuff, which has its own mini-IL capable of some
more "advanced" simplifications (basically to catch exactly such cancellations).
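The tree-affine idea -- keep an expression as a linear combination of terms so that opposite terms cancel structurally -- can be sketched like this (a toy mini-IL for illustration, not GCC's actual tree-affine code):

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative affine mini-IL: an expression is a constant offset plus a
// map from symbolic terms to integer coefficients.  Addition merges
// coefficients, so x - x cancels to zero "for free", without folding.
struct affine {
  long offset = 0;
  std::map<std::string, long> coeffs;

  static affine term(const std::string &name, long coeff = 1)
  { affine a; a.coeffs[name] = coeff; return a; }

  static affine constant(long c)
  { affine a; a.offset = c; return a; }

  affine operator+(const affine &o) const {
    affine r = *this;
    r.offset += o.offset;
    for (const auto &kv : o.coeffs) {
      long c = (r.coeffs[kv.first] += kv.second);
      if (c == 0)
        r.coeffs.erase(kv.first);  // cancelled term drops out
    }
    return r;
  }

  bool is_zero() const { return offset == 0 && coeffs.empty(); }
};
```

The point is that a large sum whose terms all cancel collapses to zero in one linear pass over the coefficient map, which tree-based folding may never discover.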
> That could be expressed as a vector of operators/operands. It could be
> expressed as a generic tree. Whatever. Generic trees are a reasonably
> natural way to store the expression as it's built, but calling fold() on
> those arbitrary expressions doesn't work. m-a-s may be the answer here, I
> haven't really thought about it yet.
The question is always full generality / complexity vs. working on a production
compiler where compile-time and memory use matter. If you can create a
testcase, even an artificial one, where we blow up worse than N log N, that is
bad. [current VRP equivalences are such an example]
I believe that VRP for symbolic ranges is not terribly important beyond
"simple" ranges like those you can derive from a < b and c != d.
Oh, and if you are re-writing the VRP range representation, please get rid of
anti-ranges -- one "simple" extension of current VRP would get rid of
them by changing the current representation of a single range/anti-range to an
implicit union of two ranges (with that you can represent an anti-range).
Make the number of unioned ranges parametric and you end up with
something quite general.
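As an illustration of that suggestion (a sketch under my own naming, not an actual GCC data structure): a value range becomes a sorted union of sub-ranges, and the anti-range ~[lo, hi] is just the two-element union [MIN, lo-1] U [hi+1, MAX]:

```cpp
#include <cassert>
#include <limits>
#include <vector>

// Illustrative multi-range representation: a sorted, disjoint union of
// closed intervals.  Capping the number of sub-ranges keeps it cheap;
// a cap of 2 already expresses any single anti-range.
struct ranges {
  struct sub { long lo, hi; };
  std::vector<sub> subs;
  static constexpr long MIN = std::numeric_limits<long>::min();
  static constexpr long MAX = std::numeric_limits<long>::max();

  // ~[lo, hi] as an explicit union of two ranges.
  static ranges anti(long lo, long hi) {
    ranges r;
    if (lo > MIN) r.subs.push_back({MIN, lo - 1});
    if (hi < MAX) r.subs.push_back({hi + 1, MAX});
    return r;
  }

  bool contains(long v) const {
    for (const sub &s : subs)
      if (s.lo <= v && v <= s.hi)
        return true;
    return false;
  }
};
```

Making the cap a parameter then generalizes this smoothly: intersection and union of two such sets stay simple sweeps over sorted intervals, and when the result exceeds the cap you conservatively merge adjacent sub-ranges.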
And symbolic ranges should IMHO be represented by maintaining
a set of active true/false predicates rather than trying to come up
with a "range" representation. And for predicates we'd like to have
a mini-IL like tree-affine -- this would be generally useful elsewhere
in the compiler as well.
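A toy version of such a predicate set might look like this (all names hypothetical; a real implementation would want a proper predicate IL and a real implication closure rather than the one hard-coded rule shown):

```cpp
#include <cassert>
#include <set>
#include <string>
#include <tuple>

// Illustrative: track symbolic facts as a set of known-true predicates
// instead of encoding them into a symbolic "range".
struct predicate_set {
  // (op, lhs, rhs), e.g. ("<", "a", "b") records that a < b holds.
  std::set<std::tuple<std::string, std::string, std::string>> facts;

  void add_fact(const std::string &op,
                const std::string &l, const std::string &r) {
    facts.insert({op, l, r});
    // Close over one trivial implication as a demonstration:
    // a < b (or a > b) implies a != b.
    if (op == "<" || op == ">")
      facts.insert({"!=", l, r});
  }

  bool known(const std::string &op,
             const std::string &l, const std::string &r) const {
    return facts.count({op, l, r}) != 0;
  }
};
```

Queries like "can a equal b here?" then become lookups (plus implication rules) over the active facts on the current control-flow path, with no anti-range encoding needed.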