On 8/16/22 05:18, Richard Biener wrote:
On Mon, 15 Aug 2022, Aldy Hernandez wrote:

On Mon, Aug 15, 2022 at 9:24 PM Andrew MacLeod <amacl...@redhat.com> wrote:
heh. or just


+      int_range<2> r;
+      if (!fold_range (r, const_cast <gcond *> (cond_stmt))
+	  || !r.singleton_p (&val))


if you do not provide a range_query to any of the fold_using_range code,
it defaults to:

fur_source::fur_source (range_query *q)
{
  if (q)
    m_query = q;
  else if (cfun)
    m_query = get_range_query (cfun);
  else
    m_query = get_global_range_query ();
  m_gori = NULL;
}

Sweet.  Even better!
So when I do the following incremental change ontop of the posted
patch then I see that the path query is able to simplify more
"single BB paths" than the global range folding.

diff --git a/gcc/tree-ssa-threadbackward.cc b/gcc/tree-ssa-threadbackward.cc
index 669098e4ec3..777e778037f 100644
--- a/gcc/tree-ssa-threadbackward.cc
+++ b/gcc/tree-ssa-threadbackward.cc
@@ -314,6 +314,12 @@ back_threader::find_taken_edge_cond (const vec<basic_block> &path,
 {
   int_range_max r;
+  int_range<2> rf;
+  if (path.length () == 1)
+    {
+      fold_range (rf, cond);
+    }
+
   m_solver->compute_ranges (path, m_imports);
   m_solver->range_of_stmt (r, cond);
@@ -325,6 +331,8 @@ back_threader::find_taken_edge_cond (const vec<basic_block> &path,
   if (r == true_range || r == false_range)
     {
+      if (path.length () == 1)
+	gcc_assert (r == rf);
       edge e_true, e_false;
       basic_block bb = gimple_bb (cond);
       extract_true_false_edges_from_block (bb, &e_true, &e_false);

Even doing the following (not sure what's the difference, and in
particular the expense, over the path range query) results in missed
simplifications (checking my set of cc1 files).

diff --git a/gcc/tree-ssa-threadbackward.cc b/gcc/tree-ssa-threadbackward.cc
index 669098e4ec3..1d43a179d08 100644
--- a/gcc/tree-ssa-threadbackward.cc
+++ b/gcc/tree-ssa-threadbackward.cc
@@ -99,6 +99,7 @@ private:
   back_threader_registry m_registry;
   back_threader_profitability m_profit;
+  gimple_ranger *m_ranger;
   path_range_query *m_solver;
   // Current path being analyzed.
@@ -146,12 +147,14 @@ back_threader::back_threader (function *fun, unsigned flags, bool first)
   // The path solver needs EDGE_DFS_BACK in resolving mode.
   if (flags & BT_RESOLVE)
     mark_dfs_back_edges ();
-  m_solver = new path_range_query (flags & BT_RESOLVE);
+  m_ranger = new gimple_ranger;
+  m_solver = new path_range_query (flags & BT_RESOLVE, m_ranger);
 }
 back_threader::~back_threader ()
 {
   delete m_solver;
+  delete m_ranger;
   loop_optimizer_finalize ();
 }
@@ -314,6 +317,12 @@ back_threader::find_taken_edge_cond (const vec<basic_block> &path,
 {
   int_range_max r;
+  int_range<2> rf;
+  if (path.length () == 1)
+    {
+      fold_range (rf, cond, m_ranger);
+    }
+
   m_solver->compute_ranges (path, m_imports);
   m_solver->range_of_stmt (r, cond);
@@ -325,6 +334,8 @@ back_threader::find_taken_edge_cond (const vec<basic_block> &path,
   if (r == true_range || r == false_range)
     {
+      if (path.length () == 1)
+	gcc_assert (r == rf);
       edge e_true, e_false;
       basic_block bb = gimple_bb (cond);
       extract_true_false_edges_from_block (bb, &e_true, &e_false);

one example is

<bb 176> [local count: 14414059]:
_128 = node_177(D)->typed.type;
pretmp_413 = MEM[(const union tree_node *)_128].base.code;
_431 = pretmp_413 + 65519;
if (_128 == 0B)
   goto <bb 199>; [18.09%]
else
   goto <bb 177>; [81.91%]

where m_imports for the path is just _128 and the range computed is
false while the ranger query returns VARYING.  But
path_range_query::range_defined_in_block does

   if (bb && POINTER_TYPE_P (TREE_TYPE (name)))
     m_ranger->m_cache.m_exit.maybe_adjust_range (r, name, bb);
This is the coarse-grained "side effect applies somewhere in the block" mechanism; there is no understanding of where in the block it happens.

which adjusts the range to ~[0, 0], probably because of the
dereference in the following stmt.

Why does fold_range not do this when folding the exit test?  Is there
a way to make it do so?  It looks like the only routine using this
in gimple-range.cc is range_on_edge and there it's used for e->src
after calling range_on_exit for e->src (why's it not done in
range_on_exit?).  A testcase for this is

fold_range doesn't do this because it is simply folding another statement.  It makes no attempt to understand the context in which you are folding something; you could be folding that stmt from a different location (i.e. recomputing it).  If your context is that you are looking for the range after the last statement has been executed, then one needs to check whether there are any side effects.

Ranger uses it for range_on_edge () because it knows all the statements in the block have been executed, so it's safe to apply anything seen in the block.  It does it right after range_on_exit () is called internally.

Once upon a time it was integrated with range_on_exit, but it turned out there were certain times when it was causing problems.  There have been some cleanups since then; it's probably safe now to return that call to range_on_exit, but that doesn't buy us a whole lot by itself... except of course I have now OKd using range_on_entry/exit generally :-)

The cache also uses it when walking blocks to pick up inferred values during an on-entry cache fill.


int foo (int **p, int i)
{
   int *q = *p;
   int res = *q + i;
   if (q)
     return res;
   return -1;
}

which we "thread" with the path and with the above ICEs because
fold_range doesn't get that if (q) is always true.  Without the

It's a known limitation that, unless you are doing a walk, on-demand requests will "miss" some inferred ranges, because they are only maintained at the basic block level.  (I.e., we will know that q is non-null in BB2, but we don't know where, so we can make no assumptions about q at the exit condition in this case.)  The path_query is invoked in on-demand mode because it won't walk the entire IL, so the first time you ask for the range of an ssa-name, it will quickly zip over the immediate-use list and "fill" the on-exit structure for any blocks in which a non-null reference is seen.  This allows the threader to pick up non-null from blocks outside the path that haven't been examined.

VRP does a walk, and during the walk it adjusts ranges on the fly for the current block via the gimple_ranger::register_inferred_ranges () routine, which is really just a wrapper around ranger_cache::apply_inferred_ranges () (in gimple-range-cache.cc).

This is called after every statement; it is where we take care of the bookkeeping for adjusting values and adding them to the block's list.

If the path query is walking those statements, it could also "adjust" the range of q on the fly... but it has to have walked those statements.  In that routine, the relevant bits use the gimple-infer class to find any inferred ranges from the statement, and would look something like:

  gimple_infer_range infer (s);
  for (unsigned x = 0; x < infer.num (); x++)
    {
      tree name = infer.name (x);
      if (!interesting_p (name))
	continue;
      get_current_path_range (r, name);
      if (r.intersect (infer.range (x)))
	set_current_path_range (name, r);
    }

That would adjust the value of q to be non-zero after "int res = *q + i;".

But you need to have walked the statements before you get to the condition.  As long as they are all in your list of interesting statements to look at, then you'd be golden.

I don't know if, when, or in what direction things are examined.

Andrew

