Re: [PATCH] [RFC] New early __builtin_unreachable processing.

2023-09-18 Thread Andrew MacLeod via Gcc-patches



On 9/18/23 02:53, Richard Biener wrote:

On Fri, Sep 15, 2023 at 4:45 PM Andrew MacLeod  wrote:

I've been looking at __builtin_unreachable () regressions.  The
fundamental problem seems to be a lack of consistent expectation for
when we remove it earlier than the final pass of VRP.  After looking
through them, I think this provides a sensible approach.

Ranger is pretty good at providing ranges in blocks dominated by the
__builtin_unreachable branch, so removing it isn't quite as critical as
it once was.  It's also pretty good at identifying what in the block can
be affected by the branch.

This patch provides an alternate removal algorithm for earlier passes.
It looks at *all* the exports from the block, and if the branch
dominates every use of all the exports, AND none of those values access
memory, VRP will remove the unreachable call, rewrite the branch, update
all the values globally, and finally perform simple DCE on the
branch's ssa-name.   This is kind of what it did before, but it wasn't
as stringent about the requirements.

The memory access check is required because there are a couple of test
cases for PRE in which there is a series of instructions leading to an
unreachable call, and none of those ssa-names are ever used in the IL
again. The whole chunk is dead, and we update globals, however
pointlessly.  However, one of the ssa-names loads from memory, a later
pass commons this value with a later load, and then the unreachable
call provides additional information about the later load.   This is
evident in tree-ssa/ssa-pre-34.c.   The only way I see to avoid this
situation is to not remove the unreachable call if there is a load feeding it.

What this does is a more sophisticated version of what DOM does in
all_uses_feed_or_dominated_by_stmt.  The feeding instructions don't have
to be single use, but they do have to be dominated by the branch or be
single use within the branch's block.

If there are multiple uses in the same block as the branch, this does
not remove the unreachable call.  If we could be sure there are no
intervening calls or side effects, it would be allowable, but this is a
more expensive checking operation.  Ranger gets the ranges right anyway,
so with more passes using ranger, I'm not sure we'd see much benefit from
the additional analysis.   It could always be added later.

This fixes at least 110249 and 110080 (and probably others).  The only
regression is 93917 for which I changed the testcase to adjust
expectations:

// PR 93917
void f1(int n)
{
if(n<0)
  __builtin_unreachable();
f3(n);
}

void f2(int*n)
{
if(*n<0)
  __builtin_unreachable();
f3 (*n);
}

We were removing both unreachable calls in VRP1, but only updating the
global values in the first case, meaning we lose information.   With the
change in semantics, we only update the global in the first case, but we
leave the unreachable call in the second case now (due to the load from
memory).  Ranger still calculates the contextual range correctly as [0,
+INF] in the second case, it just doesn't set the global value until
VRP2 when it is removed.

Does this seem reasonable?

I wonder how this addresses the fundamental issue we always faced,
in that when we apply the range, the range info in itself allows the
branch to the __builtin_unreachable () to be statically determined,
so when the first VRP pass sets the range, the next pass evaluating
the condition will remove it (and the guarded __builtin_unreachable ()).

In principle there's nothing wrong with that if we don't lose the range
info during optimizations, but that unfortunately happens more often
than wanted and with the __builtin_unreachable () gone we've lost
the ability to re-compute them.

I think it's good to explicitly remove the branch at the point we want
rather than relying on the "next" visitor to pick up the global range.

As I read the patch we now remove __builtin_unreachable () explicitly
as soon as possible but don't really address the fundamental issue
in any way?



I think it pretty much addresses the issue completely.  No globals are 
updated by the unreachable branch unless it is removed.  We remove the 
unreachable early ONLY if every use of all the exports is dominated by 
the branch, with the exception of a single use in the branch's block 
used to define a different export, and those in turn must have no other 
uses which are not dominated.  i.e.:


  <bb 2> [local count: 1073741824]:
  y_2 = x_1(D) >> 1;
  t_3 = y_2 + 1;
  if (t_3 > 100)
    goto <bb 3>; [0.00%]
  else
    goto <bb 4>; [100.00%]

  <bb 3> [count: 0]:
  __builtin_unreachable ();

  <bb 4> [local count: 1073741824]:
  func (x_1(D), y_2, t_3);


In this case we will remove the unreachable call because we can provide 
an accurate global range for all values used in the definition chain for 
the program.


Global Exported (via early unreachable): x_1(D) = [irange] unsigned int 
[0, 199] MASK 0xff VALUE 0x0
Global Exported (via early unreachable): y_2 = [irange] unsigned int [0, 
99] MASK 
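The gimple above corresponds roughly to C source of this shape (a reconstruction for illustration; the original source is not included in the mail, and the helper names are invented):

```c
/* Reconstruction: y and t are defined from x, all three are exported
   from the block, and every use of each export is dominated by the
   branch guarding the unreachable block (or is the single in-block use
   defining another export).  */
unsigned last_x, last_y, last_t;

void func (unsigned x, unsigned y, unsigned t)
{
  last_x = x; last_y = y; last_t = t;
}

void caller (unsigned x)
{
  unsigned y = x >> 1;          /* y_2 = x_1(D) >> 1 */
  unsigned t = y + 1;           /* t_3 = y_2 + 1 */
  if (t > 100)                  /* t_3 > 100 branches to the unreachable */
    __builtin_unreachable ();
  func (x, y, t);               /* all uses dominated by the branch */
}
```

With t <= 100, t = y + 1 bounds y to [0, 99], and y = x >> 1 bounds x to [0, 199], matching the exported globals shown above.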

[PATCH] [RFC] New early __builtin_unreachable processing.

2023-09-15 Thread Andrew MacLeod via Gcc-patches
I've been looking at __builtin_unreachable () regressions.  The 
fundamental problem seems to be a lack of consistent expectation for 
when we remove it earlier than the final pass of VRP.    After looking 
through them, I think this provides a sensible approach.


Ranger is pretty good at providing ranges in blocks dominated by the 
__builtin_unreachable branch, so removing it isn't quite as critical as 
it once was.  It's also pretty good at identifying what in the block can 
be affected by the branch.


This patch provides an alternate removal algorithm for earlier passes.  
It looks at *all* the exports from the block, and if the branch 
dominates every use of all the exports, AND none of those values access 
memory, VRP will remove the unreachable call, rewrite the branch, update 
all the values globally, and finally perform simple DCE on the 
branch's ssa-name.   This is kind of what it did before, but it wasn't 
as stringent about the requirements.


The memory access check is required because there are a couple of test 
cases for PRE in which there is a series of instructions leading to an 
unreachable call, and none of those ssa-names are ever used in the IL 
again. The whole chunk is dead, and we update globals, however 
pointlessly.  However, one of the ssa-names loads from memory, a later 
pass commons this value with a later load, and then the unreachable 
call provides additional information about the later load.    This is 
evident in tree-ssa/ssa-pre-34.c.   The only way I see to avoid this 
situation is to not remove the unreachable call if there is a load feeding it.
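The shape being described can be sketched in C (a hypothetical reduction, not the actual ssa-pre-34.c testcase; the names are invented):

```c
/* Hypothetical reduction of the hazard described above: the value
   feeding the unreachable call is a load, and although 't' has no
   further uses, PRE can common the later '*p' with 't', at which point
   the unreachable call implies the commoned load is >= 0.  Removing
   the unreachable early would lose that information.  */
int g (int v)
{
  return v;
}

int f (int *p)
{
  int t = *p;                   /* load from memory feeds the branch */
  if (t < 0)
    __builtin_unreachable ();
  /* t is never used again in the IL...  */
  return g (*p);                /* ...but PRE commons *p with t */
}
```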


What this does is a more sophisticated version of what DOM does in 
all_uses_feed_or_dominated_by_stmt.  The feeding instructions don't have 
to be single use, but they do have to be dominated by the branch or be 
single use within the branch's block.


If there are multiple uses in the same block as the branch, this does 
not remove the unreachable call.  If we could be sure there are no 
intervening calls or side effects, it would be allowable, but this is a 
more expensive checking operation.  Ranger gets the ranges right anyway, 
so with more passes using ranger, I'm not sure we'd see much benefit from 
the additional analysis.   It could always be added later.


This fixes at least 110249 and 110080 (and probably others).  The only 
regression is 93917 for which I changed the testcase to adjust 
expectations:


// PR 93917
void f1(int n)
{
  if(n<0)
    __builtin_unreachable();
  f3(n);
}

void f2(int*n)
{
  if(*n<0)
    __builtin_unreachable();
  f3 (*n);
}

We were removing both unreachable calls in VRP1, but only updating the 
global values in the first case, meaning we lose information.   With the 
change in semantics, we only update the global in the first case, but we 
leave the unreachable call in the second case now (due to the load from 
memory).  Ranger still calculates the contextual range correctly as [0, 
+INF] in the second case, it just doesn't set the global value until 
VRP2 when it is removed.


Does this seem reasonable?

Bootstraps on x86_64-pc-linux-gnu with no regressions.  OK?

Andrew


From 87072ebfcd4f51276fc6ed1fb0557257d51ec446 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 13 Sep 2023 11:52:15 -0400
Subject: [PATCH 3/3] New early __builtin_unreachable processing.

In VRP passes before the final one, in which __builtin_unreachable MUST be
removed, only remove it if all exports affected by the unreachable can have
their global values updated and do not involve loads from memory.

	PR tree-optimization/110080
	PR tree-optimization/110249
	gcc/
	* tree-vrp.cc (remove_unreachable::final_p): New.
	(remove_unreachable::maybe_register): Rename from
	maybe_register_block and call early or final routine.
	(fully_replaceable): New.
	(remove_unreachable::handle_early): New.
	(remove_unreachable::remove_and_update_globals): Remove
	non-final processing.
	(rvrp_folder::rvrp_folder): Add final flag to constructor.
	(rvrp_folder::post_fold_bb): Remove unreachable registration.
	(rvrp_folder::pre_fold_stmt): Move unreachable processing to here.
	(execute_ranger_vrp): Adjust some call parameters.

	gcc/testsuite/
	* g++.dg/pr110249.C: New.
	* gcc.dg/pr110080.c: New.
	* gcc.dg/pr93917.c: Adjust.

Tweak vuse case

Adjusted testcase 93917
---
 gcc/testsuite/g++.dg/pr110249.C |  16 +++
 gcc/testsuite/gcc.dg/pr110080.c |  27 +
 gcc/testsuite/gcc.dg/pr93917.c  |   7 +-
 gcc/tree-vrp.cc | 203 ++--
 4 files changed, 214 insertions(+), 39 deletions(-)
 create mode 100644 gcc/testsuite/g++.dg/pr110249.C
 create mode 100644 gcc/testsuite/gcc.dg/pr110080.c

diff --git a/gcc/testsuite/g++.dg/pr110249.C b/gcc/testsuite/g++.dg/pr110249.C
new file mode 100644
index 000..2b737618bdb
--- /dev/null
+++ b/gcc/testsuite/g++.dg/pr110249.C
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-vrp1-alias" } */
+
+#include 
+#include 
+
+uint64_t read64r(const uint64_t ) {
+if 

[COMMITTED 2/2] Always do PHI analysis before loop analysis.

2023-09-15 Thread Andrew MacLeod via Gcc-patches
Originally, phi_analysis was only invoked if there was no 
loop information available.  I have found situations where phi analysis 
enhances existing loop information, and as such this patch moves the phi 
analysis block to before loop analysis is invoked (in case a query is 
made from within that area), and does it unconditionally.  There is 
minimal impact on compilation time.
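As a sketch of the kind of PHI this benefits (a hypothetical example; the mail does not include one), a PHI with a known initial value and a single modifier statement can have its range refined by phi analysis even when loop information is also available:

```c
/* The loop PHI for i has initial value 0 and a single modifier
   statement (i + 3); phi analysis can bound i's range, and with this
   patch that bound is computed before, and in addition to, whatever
   SCEV's loop analysis contributes.  */
unsigned f (unsigned n)
{
  unsigned i = 0;      /* initial value of the PHI group */
  while (i < n)
    i = i + 3;         /* modifier statement */
  return i;
}
```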


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 5d5f90ec3b4a939cae5ce4f33b76849f6b08e3a9 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 13 Sep 2023 10:09:16 -0400
Subject: [PATCH 2/3] Always do PHI analysis before loop analysis.

PHI analysis wasn't being done if loop analysis found a value.  Always
do the PHI analysis, and run it for an initial value before invoking
loop analysis.

	* gimple-range-fold.cc (fold_using_range::range_of_phi): Always
	run phi analysis, and do it before loop analysis.
---
 gcc/gimple-range-fold.cc | 53 
 1 file changed, 26 insertions(+), 27 deletions(-)

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 03805d88d9b..d1945ccb554 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -939,7 +939,32 @@ fold_using_range::range_of_phi (vrange &r, gphi *phi, fur_source &src)
 	}
 }
 
-  bool loop_info_p = false;
+  // If PHI analysis is available, see if there is an initial range.
+  if (phi_analysis_available_p ()
+  && irange::supports_p (TREE_TYPE (phi_def)))
+{
+  phi_group *g = (phi_analysis())[phi_def];
+  if (g && !(g->range ().varying_p ()))
+	{
+	  if (dump_file && (dump_flags & TDF_DETAILS))
+	{
+	  fprintf (dump_file, "PHI GROUP query for ");
+	  print_generic_expr (dump_file, phi_def, TDF_SLIM);
+	  fprintf (dump_file, " found : ");
+	  g->range ().dump (dump_file);
+	  fprintf (dump_file, " and adjusted original range from :");
+	  r.dump (dump_file);
+	}
+	  r.intersect (g->range ());
+	  if (dump_file && (dump_flags & TDF_DETAILS))
+	{
+	  fprintf (dump_file, " to :");
+	  r.dump (dump_file);
+	  fprintf (dump_file, "\n");
+	}
+	}
+}
+
   // If SCEV is available, query if this PHI has any known values.
   if (scev_initialized_p ()
   && !POINTER_TYPE_P (TREE_TYPE (phi_def)))
@@ -962,32 +987,6 @@ fold_using_range::range_of_phi (vrange &r, gphi *phi, fur_source &src)
 		  fprintf (dump_file, "\n");
 		}
 	  r.intersect (loop_range);
-	  loop_info_p = true;
-	}
-	}
-}
-
-  if (!loop_info_p && phi_analysis_available_p ()
-  && irange::supports_p (TREE_TYPE (phi_def)))
-{
-  phi_group *g = (phi_analysis())[phi_def];
-  if (g && !(g->range ().varying_p ()))
-	{
-	  if (dump_file && (dump_flags & TDF_DETAILS))
-	{
-	  fprintf (dump_file, "PHI GROUP query for ");
-	  print_generic_expr (dump_file, phi_def, TDF_SLIM);
-	  fprintf (dump_file, " found : ");
-	  g->range ().dump (dump_file);
-	  fprintf (dump_file, " and adjusted original range from :");
-	  r.dump (dump_file);
-	}
-	  r.intersect (g->range ());
-	  if (dump_file && (dump_flags & TDF_DETAILS))
-	{
-	  fprintf (dump_file, " to :");
-	  r.dump (dump_file);
-	  fprintf (dump_file, "\n");
 	}
 	}
 }
-- 
2.41.0



[COMMITTED 1/2] Fix indentation in range_of_phi.

2023-09-15 Thread Andrew MacLeod via Gcc-patches
Somewhere along the way, a large sequence of code in range_of_phi() ended 
up with the same indentation as the preceding loop; this simply fixes it.


committed as obvious.

Andrew
From e35c3b5335879afb616c6ead0f41bf6c275ee941 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 13 Sep 2023 09:58:39 -0400
Subject: [PATCH 1/3] Fix indentation.

No functional change; indentation was incorrect.

	* gimple-range-fold.cc (fold_using_range::range_of_phi): Fix
	indentation.
---
 gcc/gimple-range-fold.cc | 80 
 1 file changed, 40 insertions(+), 40 deletions(-)

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 8ebff7f5980..03805d88d9b 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -898,46 +898,46 @@ fold_using_range::range_of_phi (vrange &r, gphi *phi, fur_source &src)
 	break;
 }
 
-// If all arguments were equivalences, use the equivalence ranges as no
-// arguments were processed.
-if (r.undefined_p () && !equiv_range.undefined_p ())
-  r = equiv_range;
-
-// If the PHI boils down to a single effective argument, look at it.
-if (single_arg)
-  {
-	// Symbolic arguments can be equivalences.
-	if (gimple_range_ssa_p (single_arg))
-	  {
-	// Only allow the equivalence if the PHI definition does not
-	// dominate any incoming edge for SINGLE_ARG.
-	// See PR 108139 and 109462.
-	basic_block bb = gimple_bb (phi);
-	if (!dom_info_available_p (CDI_DOMINATORS))
-	  single_arg = NULL;
-	else
-	  for (x = 0; x < gimple_phi_num_args (phi); x++)
-		if (gimple_phi_arg_def (phi, x) == single_arg
-		&& dominated_by_p (CDI_DOMINATORS,
-	gimple_phi_arg_edge (phi, x)->src,
-	bb))
-		  {
-		single_arg = NULL;
-		break;
-		  }
-	if (single_arg)
-	  src.register_relation (phi, VREL_EQ, phi_def, single_arg);
-	  }
-	else if (src.get_operand (arg_range, single_arg)
-		 && arg_range.singleton_p ())
-	  {
-	// Numerical arguments that are a constant can be returned as
-	// the constant. This can help fold later cases where even this
-	// constant might have been UNDEFINED via an unreachable edge.
-	r = arg_range;
-	return true;
-	  }
-  }
+  // If all arguments were equivalences, use the equivalence ranges as no
+  // arguments were processed.
+  if (r.undefined_p () && !equiv_range.undefined_p ())
+r = equiv_range;
+
+  // If the PHI boils down to a single effective argument, look at it.
+  if (single_arg)
+{
+  // Symbolic arguments can be equivalences.
+  if (gimple_range_ssa_p (single_arg))
+	{
+	  // Only allow the equivalence if the PHI definition does not
+	  // dominate any incoming edge for SINGLE_ARG.
+	  // See PR 108139 and 109462.
+	  basic_block bb = gimple_bb (phi);
+	  if (!dom_info_available_p (CDI_DOMINATORS))
+	single_arg = NULL;
+	  else
+	for (x = 0; x < gimple_phi_num_args (phi); x++)
+	  if (gimple_phi_arg_def (phi, x) == single_arg
+		  && dominated_by_p (CDI_DOMINATORS,
+  gimple_phi_arg_edge (phi, x)->src,
+  bb))
+		{
+		  single_arg = NULL;
+		  break;
+		}
+	  if (single_arg)
+	src.register_relation (phi, VREL_EQ, phi_def, single_arg);
+	}
+  else if (src.get_operand (arg_range, single_arg)
+	   && arg_range.singleton_p ())
+	{
+	  // Numerical arguments that are a constant can be returned as
+	  // the constant. This can help fold later cases where even this
+	  // constant might have been UNDEFINED via an unreachable edge.
+	  r = arg_range;
+	  return true;
+	}
+}
 
   bool loop_info_p = false;
   // If SCEV is available, query if this PHI has any known values.
-- 
2.41.0



Re: [PATCH] Checking undefined_p before using the vr

2023-09-15 Thread Andrew MacLeod via Gcc-patches



On 9/14/23 22:07, Jiufu Guo wrote:


undefined is a perfectly acceptable range.  It can be used to
represent either values which have not been initialized, or more
frequently it identifies values that cannot occur due to
conflicting/unreachable code.  VARYING means it can be any range;
UNDEFINED means this is unusable, so treat it accordingly.  It's
propagated like any other range.

"undefined" means the ranger is unusable. So, for this ranger, it
seems only "undefined_p ()" can be checked, and it seems no other
functions of this ranger can be called.


Not at all.  It means ranger has determined that there is no valid range 
for the item you are asking about, probably due to conflicting 
conditions, which imparts important information about the range.. or 
lack of range :-)


Quite frequently it means you are looking at a block of code that ranger 
knows is unreachable, but a pass of the compiler which removes such 
blocks has not been run yet.. so the awareness imparted is that there 
isn't much point in doing optimizations on it because it's probably going 
to get thrown away by a following pass.
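A minimal sketch of that situation (a hypothetical example): any name defined inside a block ranger can prove unreachable comes back UNDEFINED when queried there:

```c
/* The condition is contradictory, so the guarded block is unreachable;
   until a CFG-cleanup pass deletes it, range queries for names defined
   inside it (e.g. the value of x * 2 there) yield UNDEFINED.  */
int f (int x)
{
  if (x > 10 && x < 5)      /* can never be true */
    return x * 2;           /* queries here see UNDEFINED ranges */
  return 0;
}
```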




I'm thinking that it may be ok to let "range_of_expr" return false
if the "vr" is "undefined_p".  I know this may change the meaning
of "range_of_expr" slightly :)


No.  That would be like saying NULL is not a valid value for a pointer.  
undefined_p has a very specific meaning that we use; it just has no type.


Andrew



Re: [PATCH] Checking undefined_p before using the vr

2023-09-13 Thread Andrew MacLeod via Gcc-patches



On 9/12/23 21:42, Jiufu Guo wrote:

Hi,

Richard Biener  writes:


On Thu, 7 Sep 2023, Jiufu Guo wrote:


Hi,

As discussed in PR111303:

For the pattern "(X + C) / N": "(div (plus@3 @0 INTEGER_CST@1) INTEGER_CST@2)",
even if "X" has a value range and "X + C" does not overflow, "@3" may still
be undefined.  Like the example below:

_3 = _2 + -5;
if (0 != 0)
   goto <bb 3>; [34.00%]
else
   goto <bb 4>; [66.00%]
;;  succ:   3
;;  4

;; basic block 3, loop depth 0
;;  pred:   2
_5 = _3 / 5;
;;  succ:   4

The whole pattern "(_2 + -5 ) / 5" is in "bb 3", but "bb 3" would be
unreachable (because "if (0 != 0)" is always false).
And "get_range_query (cfun)->range_of_expr (vr3, @3)" is checked in
"bb 3", "range_of_expr" gets an "undefined vr3". Where "@3" is "_5".

So, before using "vr3", it would be safe to check "!vr3.undefined_p ()".

Bootstrap & regtest pass on ppc64{,le} and x86_64.
Is this ok for trunk?

OK, but I wonder why ->range_of_expr () doesn't return false for
undefined_p ()?  While "undefined" technically means we can treat
it as nonnegative_p (or not, maybe but maybe not both), we seem to
not want to do that.  So why expose it at all to ranger users
(yes, internally we in some places want to handle undefined).

I guess, currently, it returns true and then lets the user check
undefined_p, maybe because it tries to only return false if the
type of EXPR is unsupported.


False is returned if no range can be calculated for any reason.  The most 
common ones are unsupported types or, in some cases, statements that are 
not understood.  FALSE means you cannot use the range being passed in.




Letting "range_of_expr" return false for undefined_p would save checking
undefined_p again when using the APIs.

undefined is a perfectly acceptable range.  It can be used to represent 
either values which have not been initialized, or more frequently it 
identifies values that cannot occur due to conflicting/unreachable 
code.  VARYING means it can be any range; UNDEFINED means this is 
unusable, so treat it accordingly.  It's propagated like any other range.


The only reason you are having issues is that you are then asking for the 
type of the range, and an undefined range currently has no type, for 
historical reasons.
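So the caller-side pattern that falls out of this discussion looks roughly like the following (a sketch against the ranger API as used inside GCC, not a standalone snippet; `expr` and `stmt` stand for whatever tree and statement the caller has in hand):

```cpp
// Sketch: a true return from range_of_expr does not mean the range is
// usable as a typed range; UNDEFINED must be checked separately, since
// an undefined range currently has no type.
int_range_max vr;
if (get_range_query (cfun)->range_of_expr (vr, expr, stmt)
    && !vr.undefined_p ())
  {
    // vr is usable here; e.g. check that expr is known nonnegative.
    if (!vr.varying_p ()
        && !wi::neg_p (vr.lower_bound (), TYPE_SIGN (TREE_TYPE (expr))))
      ; /* expr is known nonnegative at stmt */
  }
```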


Andrew

Andrew




[COMMITTED] PR tree-optimization/110875 - Some ssa-names get incorrectly marked as always_current.

2023-09-07 Thread Andrew MacLeod via Gcc-patches
When range_of_stmt invokes prefill_name to evaluate unvisited 
dependencies, it should not mark visited names as always_current.


When ranger_cache::get_global_range() is invoked with the optional 
"current_p" flag, it triggers additional functionality. This call is 
meant to be from within ranger, and it is understood that if the current 
value is not current, set_global_range will always be called later with 
a value.  Thus it sets the always_current flag in the temporal cache to 
avoid computation cycles.


The prefill_stmt_dependencies () mechanism within ranger is intended to 
emulate the behaviour of range_of_stmt on an arbitrarily long series of 
unresolved dependencies without triggering the overhead of huge call 
chains from the range_of_expr/range_on_entry/range_on_exit routines.  
Rather, it creates a stack of unvisited names, and invokes range_of_stmt 
on them directly in order to get initial cache values for each ssa-name.


The issue in this PR was that the routine was incorrectly invoking 
get_global_range to determine whether there was a global value.  If 
there was, it would move on to the next dependency without invoking 
set_global_range to clear the always_current flag.


What it should have been doing was simply checking if there was a global 
value, and if there was not, add the name for processing and THEN invoke 
get_global_range to do all the special processing.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew




From e9be59f7d2dc6b302cf85ad69b0a77dee89ec809 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Thu, 7 Sep 2023 11:15:50 -0400
Subject: [PATCH] Some ssa-names get incorrectly marked as always_current.

When range_of_stmt invokes prefill_name to evaluate unvisited dependencies
it should not mark already visited names as always_current.

	PR tree-optimization/110875
	gcc/
	* gimple-range.cc (gimple_ranger::prefill_name): Only invoke
	cache-prefilling routine when the ssa-name has no global value.

	gcc/testsuite/
	* gcc.dg/pr110875.c: New.
---
 gcc/gimple-range.cc | 10 +++---
 gcc/testsuite/gcc.dg/pr110875.c | 34 +
 2 files changed, 41 insertions(+), 3 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr110875.c

diff --git a/gcc/gimple-range.cc b/gcc/gimple-range.cc
index 01173c58f02..13c3308d537 100644
--- a/gcc/gimple-range.cc
+++ b/gcc/gimple-range.cc
@@ -351,10 +351,14 @@ gimple_ranger::prefill_name (vrange &r, tree name)
   if (!gimple_range_op_handler::supported_p (stmt) && !is_a <gphi *> (stmt))
 return;
 
-  bool current;
   // If this op has not been processed yet, then push it on the stack
-  if (!m_cache.get_global_range (r, name, current))
-m_stmt_list.safe_push (name);
+  if (!m_cache.get_global_range (r, name))
+{
+  bool current;
+  // Set the global cache value and mark as always_current.
+  m_cache.get_global_range (r, name, current);
+  m_stmt_list.safe_push (name);
+}
 }
 
 // This routine will seed the global cache with most of the dependencies of
diff --git a/gcc/testsuite/gcc.dg/pr110875.c b/gcc/testsuite/gcc.dg/pr110875.c
new file mode 100644
index 000..4d6ecbca0c8
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr110875.c
@@ -0,0 +1,34 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-vrp2" } */
+
+void foo(void);
+static int a, b;
+static int *c = &a, *d;
+static unsigned e;
+static short f;
+static unsigned g(unsigned char h, char i) { return h + i; }
+int main() {
+d = &b;
+int *j = d;
+e = -27;
+for (; e > 18; e = g(e, 6)) {
+a = 0;
+for (; a != -3; a--) {
+if (0 != a ^ *j)
+for (; b; b++) f = -f;
+else if (*c) {
+foo();
+break;
+}
+if (!(((e) >= 235) && ((e) <= 4294967269))) {
+__builtin_unreachable();
+}
+b = 0;
+}
+}
+}
+
+
+/* { dg-final { scan-tree-dump-not "foo" "vrp2" } } */
+
+
-- 
2.41.0



Re: [PATCH 2/2] VR-VALUES: Rewrite test_for_singularity using range_op_handler

2023-09-07 Thread Andrew MacLeod via Gcc-patches



On 9/1/23 02:40, Andrew Pinski wrote:

On Fri, Aug 11, 2023 at 8:08 AM Andrew MacLeod via Gcc-patches
 wrote:


If this is only going to work with integers, you might want to check
that somewhere or switch to irange and int_range_max..

You can make it work with any kind (if you know op1 is a constant) by
simply doing

Value_Range op1_range (TREE_TYPE (op1))
get_global_range_query->range_of_expr (op1_range, op1)

That will convert trees to the appropriate range...  This is also true
for integer constants... but you can also just do the WI conversion like
you do.

The routine also gets confusing to read because it passes in op0 and
op1, but of course ranger uses op1 and op2 nomenclature, and it looks a
bit confusing :-P   I'd change the operands passed in to op1 and op2 if
we are rewriting the routine.

Ranger using the nomenclature of op1/op2 and gimple is inconsistent
with trees and other parts of GCC.
It seems like we have to live with this inconsistency now too.
Renaming things in this one function to op1/op2 might be ok but the
rest of the file uses op0/op1 too; most likely because it was
originally written before gimple.

I think it would be good to have this written in the coding style,
which way should we have it for new code; if we start at 0 or 1 for
operands. It might reduce differences based on who wrote which part
(and even to some extent when). I don't really care which one is
picked as long as we pick one.

Thanks,
Andrew Pinski

I certainly won't argue that it would be good to be consistent, but of course 
it's quite prevalent. Perhaps we should rewrite vr-values.cc to change 
the terminology in one patch?


Long term, some of it is likely to get absorbed into range-ops, and what 
isn't could/should be made vrange/irange aware...  no one has gotten to 
it yet. We could change the terminology as the routines are reworked too...


Andrew




[COMMITTED 2/2] tree-optimization/110918 - Phi analyzer - Initialize with a range instead of a tree.

2023-08-23 Thread Andrew MacLeod via Gcc-patches
Ranger's PHI analyzer currently only allows a single initializing value 
for a group.  This patch changes that to use an initialization range, which is
cumulative of all integer constants, plus a single symbolic value.  
There were many times when there were multiple constants feeding into 
PHIs, and there is no reason to disqualify those from determining if 
there is a better starting range for a PHI.
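For instance (a hypothetical example, not taken from the patch), a PHI fed by two different constants previously disqualified the group; with an initialization range, both constants feed in:

```c
/* The PHI for i has two constant initializers (5 and 10) plus the
   modifier i + 4; cumulating the constants into an initial range of
   [5, 10] still lets the analyzer compute a useful starting range,
   where a single-initializer scheme had to give up.  */
unsigned f (int sel, unsigned n)
{
  unsigned i = sel ? 5 : 10;    /* two constant initial values */
  while (i < n)
    i = i + 4;                  /* single modifier statement */
  return i;
}
```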


This patch also changes the way PHI groups are printed so they show up 
in the listing as they are encountered, rather than as a list at the 
end.  It was quite difficult to see what was going on when it simply 
dumped the groups at the end of processing.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From bd50bbfa95e51edf51392f147e9a860adb5f495e Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Thu, 17 Aug 2023 12:34:59 -0400
Subject: [PATCH 2/4] Phi analyzer - Initialize with range instead of a tree.

Ranger's PHI analyzer currently only allows a single initializer for a group.
This patch changes that to use an initialization range, which is
cumulative of all integer constants, plus a single symbolic value.  There is
no other change to group functionality.

This patch also changes the way PHI groups are printed so they show up in the
listing as they are encountered, rather than as a list at the end.  It
was more difficult to see what was going on previously.

	PR tree-optimization/110918 - Initialize with range instead of a tree.
	gcc/
	* gimple-range-fold.cc (fold_using_range::range_of_phi): Tweak output.
	* gimple-range-phi.cc (phi_group::phi_group): Remove unused members.
	Initialize using a range instead of value and edge.
	(phi_group::calculate_using_modifier): Use initializer value and
	process for relations after trying for iteration convergence.
	(phi_group::refine_using_relation): Use initializer range.
	(phi_group::dump): Rework the dump output.
	(phi_analyzer::process_phi): Allow multiple constant initilizers.
	Dump groups immediately as created.
	(phi_analyzer::dump): Tweak output.
	* gimple-range-phi.h (phi_group::phi_group): Adjust prototype.
	(phi_group::initial_value): Delete.
	(phi_group::refine_using_relation): Adjust prototype.
	(phi_group::m_initial_value): Delete.
	(phi_group::m_initial_edge): Delete.
	(phi_group::m_vr): Use int_range_max.
	* tree-vrp.cc (execute_ranger_vrp): Don't dump phi groups.

	gcc/testsuite/
	* gcc.dg/pr102983.c: Adjust output expectations.
	* gcc.dg/pr110918.c: New.
---
 gcc/gimple-range-fold.cc|   6 +-
 gcc/gimple-range-phi.cc | 186 
 gcc/gimple-range-phi.h  |   9 +-
 gcc/testsuite/gcc.dg/pr102983.c |   2 +-
 gcc/testsuite/gcc.dg/pr110918.c |  26 +
 gcc/tree-vrp.cc |   5 +-
 6 files changed, 129 insertions(+), 105 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr110918.c

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 7fa5a27cb12..8ebff7f5980 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -953,7 +953,7 @@ fold_using_range::range_of_phi (vrange &r, gphi *phi, fur_source &src)
 	{
 	  if (dump_file && (dump_flags & TDF_DETAILS))
 		{
-		  fprintf (dump_file, "   Loops range found for ");
+		  fprintf (dump_file, "Loops range found for ");
 		  print_generic_expr (dump_file, phi_def, TDF_SLIM);
 		  fprintf (dump_file, ": ");
 		  loop_range.dump (dump_file);
@@ -975,9 +975,9 @@ fold_using_range::range_of_phi (vrange &r, gphi *phi, fur_source &src)
 	{
 	  if (dump_file && (dump_flags & TDF_DETAILS))
 	{
-	  fprintf (dump_file, "   PHI group range found for ");
+	  fprintf (dump_file, "PHI GROUP query for ");
 	  print_generic_expr (dump_file, phi_def, TDF_SLIM);
-	  fprintf (dump_file, ": ");
+	  fprintf (dump_file, " found : ");
 	  g->range ().dump (dump_file);
 	  fprintf (dump_file, " and adjusted original range from :");
 	  r.dump (dump_file);
diff --git a/gcc/gimple-range-phi.cc b/gcc/gimple-range-phi.cc
index a94b90a4660..9884a0ebbb0 100644
--- a/gcc/gimple-range-phi.cc
+++ b/gcc/gimple-range-phi.cc
@@ -79,39 +79,33 @@ phi_analyzer &phi_analysis ()
 phi_group::phi_group (const phi_group &g)
 {
   m_group = g.m_group;
-  m_initial_value = g.m_initial_value;
-  m_initial_edge = g.m_initial_edge;
   m_modifier = g.m_modifier;
   m_modifier_op = g.m_modifier_op;
   m_vr = g.m_vr;
 }
 
-// Create a new phi_group with members BM, initialvalue INIT_VAL, modifier
-// statement MOD, and resolve values using query Q.
-// Calculate the range for the gropup if possible, otherwise set it to
-// VARYING.
+// Create a new phi_group with members BM, initial range INIT_RANGE, modifier
+// statement MOD on edge MOD_EDGE, and resolve values using query Q.  Calculate
+// the range for the group if possible, otherwise set it to VARYING.
 
-phi_group::phi_group (bitmap bm, tree init_val, edge e, gimple *mod,
+phi_group::phi_group (bitmap bm, irange &init_range, gimple *mod,
 		  range_query *q)
 {

[COMMITTED 1/2] Phi analyzer - Do not create phi groups with a single phi.

2023-08-23 Thread Andrew MacLeod via Gcc-patches
Ranger's Phi Analyzer was creating a group consisting of a single PHI, 
which was problematic.  It didn't really help anything, and it prevented 
larger groups from including those PHIs and stopped some useful things 
from happening.


Bootstrapped on x86_64-pc-linux-gnu  with no regressions. Pushed.

Andrew
From 9855b3f0a2869d456f0ee34a94a1231eb6d44c4a Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 16 Aug 2023 13:23:06 -0400
Subject: [PATCH 1/4] Don't process phi groups with one phi.

The phi analyzer should not create a phi group containing a single phi.

	* gimple-range-phi.cc (phi_analyzer::operator[]): Return NULL if
	no group was created.
	(phi_analyzer::process_phi): Do not create groups of one phi node.
---
 gcc/gimple-range-phi.cc | 16 +++-
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/gcc/gimple-range-phi.cc b/gcc/gimple-range-phi.cc
index ffb4691d06b..a94b90a4660 100644
--- a/gcc/gimple-range-phi.cc
+++ b/gcc/gimple-range-phi.cc
@@ -344,9 +344,10 @@ phi_analyzer::operator[] (tree name)
  process_phi (as_a <gphi *> (SSA_NAME_DEF_STMT (name)));
   if (bitmap_bit_p (m_simple, v))
 	return  NULL;
-  // If m_simple bit isn't set, then process_phi allocated the table
-  // and should have a group.
-  gcc_checking_assert (v < m_tab.length ());
+  // If the m_simple bit isn't set and process_phi didn't allocate the
+  // table, no group was created, so return NULL.
+  if (v >= m_tab.length ())
+	return NULL;
 }
   return m_tab[v];
 }
@@ -363,6 +364,7 @@ phi_analyzer::process_phi (gphi *phi)
   unsigned x;
   m_work.truncate (0);
   m_work.safe_push (gimple_phi_result (phi));
+  unsigned phi_count = 1;
   bitmap_clear (m_current);
 
   // We can only have 2 externals: an initial value and a modifier.
@@ -407,6 +409,7 @@ phi_analyzer::process_phi (gphi *phi)
 	  gimple *arg_stmt = SSA_NAME_DEF_STMT (arg);
	  if (arg_stmt && is_a <gphi *> (arg_stmt))
 		{
+		  phi_count++;
 		  m_work.safe_push (arg);
 		  continue;
 		}
@@ -430,9 +433,12 @@ phi_analyzer::process_phi (gphi *phi)
 	}
 }
 
-  // If there are no names in the group, we're done.
-  if (bitmap_empty_p (m_current))
+  // If there are fewer than 2 names, just return.  This PHI may still be
+  // included by another PHI, and making it simple or a group of one would
+  // prevent a larger group from being formed.
+  if (phi_count < 2)
 return;
+  gcc_checking_assert (!bitmap_empty_p (m_current));
 
   phi_group *g = NULL;
   if (cycle_p)
-- 
2.41.0



[COMMITTED] PR tree-optimization/111009 - Fix range-ops operator_addr.

2023-08-17 Thread Andrew MacLeod via Gcc-patches
operator_addr was simply calling fold_range() to implement op1_range, 
but it turns out op1_range needs to be more restrictive.


take for example  from the PR :

   _13 = &dso->maj

when folding,  getting a value of 0 for op1 means dso->maj resolved to a 
value of [0,0].  fold_using_range::range_of_address will have processed 
the symbolics, or at least we know that op1 is 0.  Likewise if it is 
non-zero, we can also conclude the LHS is non-zero.


however, when working from the LHS, we cannot make the same 
conclusions.  GORI has no concept of symbolics, so knowing the expression is


[0,0]  = & 

we cannot conclude that op1 is also 0.  In particular, &dso->maj wouldn't 
be unless dso was zero and maj was also at a zero offset.
Likewise, if the LHS is [1,1] we can't be sure op1 is nonzero unless we 
know the type cannot wrap.


This patch simply implements op1_range with these rules instead of 
calling fold_range.


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From dc48d1d1d4458773f89f21b2f019f66ddf88f2e5 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Thu, 17 Aug 2023 11:13:14 -0400
Subject: [PATCH] Fix range-ops operator_addr.

Lack of symbolic information prevents op1_range from being able to draw
the same conclusions as fold_range can.

PR tree-optimization/111009
gcc/
* range-op.cc (operator_addr_expr::op1_range): Be more restrictive.

gcc/testsuite/
* gcc.dg/pr111009.c: New.
---
 gcc/range-op.cc | 12 ++-
 gcc/testsuite/gcc.dg/pr111009.c | 38 +
 2 files changed, 49 insertions(+), 1 deletion(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr111009.c

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 086c6c19735..268f6b6f025 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -4325,7 +4325,17 @@ operator_addr_expr::op1_range (irange , tree type,
   const irange ,
   relation_trio) const
 {
-  return operator_addr_expr::fold_range (r, type, lhs, op2);
+  if (empty_range_varying (r, type, lhs, op2))
+    return true;
+
+  // Return a non-null pointer of the LHS type (passed in op2), but only
+  // if we can't overflow, otherwise a non-zero offset could wrap to zero.
+  // See PR 111009.
+  if (!contains_zero_p (lhs) && TYPE_OVERFLOW_UNDEFINED (type))
+    r = range_nonzero (type);
+  else
+    r.set_varying (type);
+  return true;
 }
 
 // Initialize any integral operators to the primary table
diff --git a/gcc/testsuite/gcc.dg/pr111009.c b/gcc/testsuite/gcc.dg/pr111009.c
new file mode 100644
index 000..3accd9ac063
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr111009.c
@@ -0,0 +1,38 @@
+/* PR tree-optimization/111009 */
+/* { dg-do run } */
+/* { dg-options "-O3 -fno-strict-overflow" } */
+
+struct dso {
+ struct dso * next;
+ int maj;
+};
+
+__attribute__((noipa)) static void __dso_id__cmp_(void) {}
+
+__attribute__((noipa))
+static int bug(struct dso * d, struct dso *dso)
+{
+ struct dso **p = &d;
+ struct dso *curr = 0;
+
+ while (*p) {
+  curr = *p;
+  // prevent null deref below
+  if (!dso) return 1;
+  if (dso == curr) return 1;
+
+  int *a = &dso->maj;
+  // null deref
+  if (!(a && *a)) __dso_id__cmp_();
+
+  p = &curr->next;
+ }
+ return 0;
+}
+
+__attribute__((noipa))
+int main(void) {
+struct dso d = { 0, 0, };
+bug(&d, 0);
+}
+
-- 
2.41.0



Re: [PATCH 2/2] VR-VALUES: Rewrite test_for_singularity using range_op_handler

2023-08-11 Thread Andrew MacLeod via Gcc-patches



On 8/11/23 05:51, Richard Biener wrote:

On Fri, Aug 11, 2023 at 11:17 AM Andrew Pinski via Gcc-patches
 wrote:

So it turns out there was a simpler way of starting to
improve VRP to start to fix PR 110131, PR 108360, and PR 108397.
That was rewrite test_for_singularity to use range_op_handler
and Value_Range.

This patch implements that and

OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.

I'm hoping Andrew/Aldy can have a look here.

Richard.


gcc/ChangeLog:

 * vr-values.cc (test_for_singularity): Add edge argument
 and rewrite using range_op_handler.
 (simplify_compare_using_range_pairs): Use Value_Range
 instead of value_range and update test_for_singularity call.

gcc/testsuite/ChangeLog:

 * gcc.dg/tree-ssa/vrp124.c: New test.
 * gcc.dg/tree-ssa/vrp125.c: New test.
---
  gcc/testsuite/gcc.dg/tree-ssa/vrp124.c | 44 +
  gcc/testsuite/gcc.dg/tree-ssa/vrp125.c | 44 +
  gcc/vr-values.cc   | 91 --
  3 files changed, 114 insertions(+), 65 deletions(-)
  create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/vrp124.c
  create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/vrp125.c

diff --git a/gcc/testsuite/gcc.dg/tree-ssa/vrp124.c 
b/gcc/testsuite/gcc.dg/tree-ssa/vrp124.c
new file mode 100644
index 000..6ccbda35d1b
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/vrp124.c
@@ -0,0 +1,44 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+/* Should be optimized to a == -100 */
+int g(int a)
+{
+  if (a == -100 || a >= 0)
+;
+  else
+return 0;
+  return a < 0;
+}
+
+/* Should optimize to a == 0 */
+int f(int a)
+{
+  if (a == 0 || a > 100)
+;
+  else
+return 0;
+  return a < 50;
+}
+
+/* Should be optimized to a == 0. */
+int f2(int a)
+{
+  if (a == 0 || a > 100)
+;
+  else
+return 0;
+  return a < 100;
+}
+
+/* Should optimize to a == 100 */
+int f1(int a)
+{
+  if (a < 0 || a == 100)
+;
+  else
+return 0;
+  return a > 50;
+}
+
+/* { dg-final { scan-tree-dump-not "goto " "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/vrp125.c 
b/gcc/testsuite/gcc.dg/tree-ssa/vrp125.c
new file mode 100644
index 000..f6c2f8e35f1
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/vrp125.c
@@ -0,0 +1,44 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+/* Should be optimized to a == -100 */
+int g(int a)
+{
+  if (a == -100 || a == -50 || a >= 0)
+;
+  else
+return 0;
+  return a < -50;
+}
+
+/* Should optimize to a == 0 */
+int f(int a)
+{
+  if (a == 0 || a == 50 || a > 100)
+;
+  else
+return 0;
+  return a < 50;
+}
+
+/* Should be optimized to a == 0. */
+int f2(int a)
+{
+  if (a == 0 || a == 50 || a > 100)
+;
+  else
+return 0;
+  return a < 25;
+}
+
+/* Should optimize to a == 100 */
+int f1(int a)
+{
+  if (a < 0 || a == 50 || a == 100)
+;
+  else
+return 0;
+  return a > 50;
+}
+
+/* { dg-final { scan-tree-dump-not "goto " "optimized" } } */
diff --git a/gcc/vr-values.cc b/gcc/vr-values.cc
index a4fddd62841..7004b0224bd 100644
--- a/gcc/vr-values.cc
+++ b/gcc/vr-values.cc
@@ -907,66 +907,30 @@ simplify_using_ranges::simplify_bit_ops_using_ranges
 a known value range VR.

 If there is one and only one value which will satisfy the
-   conditional, then return that value.  Else return NULL.
-
-   If signed overflow must be undefined for the value to satisfy
-   the conditional, then set *STRICT_OVERFLOW_P to true.  */
+   conditional on the EDGE, then return that value.
+   Else return NULL.  */

  static tree
  test_for_singularity (enum tree_code cond_code, tree op0,
- tree op1, const value_range *vr)
+ tree op1, Value_Range vr, bool edge)


VR should be a "vrange &".  This is the top-level base class for all 
ranges of all types/kinds, and what we usually pass values around as if 
we want them to be any kind.   If this is integer only, we'd pass an 
'irange &'.


Value_Range is the opposite.  It's the sink that contains one of each kind 
of range and can switch around between them as needed.  You do not want 
to pass that by value!   The generic engine uses these so it can support 
floats, ints, pointers, whatever...



  {
-  tree min = NULL;
-  tree max = NULL;
-
-  /* Extract minimum/maximum values which satisfy the conditional as it was
- written.  */
-  if (cond_code == LE_EXPR || cond_code == LT_EXPR)
+  /* This is already a singularity.  */
+  if (cond_code == NE_EXPR || cond_code == EQ_EXPR)
+return NULL;
+  auto range_op = range_op_handler (cond_code);
+  int_range<2> op1_range (TREE_TYPE (op0));
+  wide_int w = wi::to_wide (op1);
+  op1_range.set (TREE_TYPE (op1), w, w);


If this is only going to work with integers, you might want to check 
that somewhere or switch to irange and int_range_max.


You can make it work with any kind (if you know op1 is a constant) by 

[COMMITTED] Add operand ranges to op1_op2_relation API.

2023-08-03 Thread Andrew MacLeod via Gcc-patches
We're looking to add the unordered relations for floating point, and as 
a result, we can no longer determine the relation between op1 and op2 in 
a statement based purely on the LHS... we also need to know the type of 
the operands on the RHS.


This patch adjusts op1_op2_relation to fit the same mold as 
fold_range... ie, takes 3 vrange instead of just a LHS.


It also copies the functionality of the integral relations to the 
floating point counterparts, and when the unordered relations are added, 
those floating point routines can be adjusted to do the right thing.


This results in no current functional changes.

Bootstraps on  x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew
From de7ae277f497ed5b533af877fe26d8f133760f8b Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 1 Aug 2023 14:33:09 -0400
Subject: [PATCH 3/3] Add operand ranges to op1_op2_relation API.

With additional floating point relations in the pipeline, we can no
longer tell based on the LHS what the relation of X < Y is without knowing
the type of X and Y.

	* gimple-range-fold.cc (fold_using_range::range_of_range_op): Add
	ranges to the call to relation_fold_and_or.
	(fold_using_range::relation_fold_and_or): Add op1 and op2 ranges.
	(fur_source::register_outgoing_edges): Add op1 and op2 ranges.
	* gimple-range-fold.h (relation_fold_and_or): Adjust params.
	* gimple-range-gori.cc (gori_compute::compute_operand_range): Add
	a varying op1 and op2 to call.
	* range-op-float.cc (range_operator::op1_op2_relation): New defaults.
	(operator_equal::op1_op2_relation): New float version.
	(operator_not_equal::op1_op2_relation): Ditto.
	(operator_lt::op1_op2_relation): Ditto.
	(operator_le::op1_op2_relation): Ditto.
	(operator_gt::op1_op2_relation): Ditto.
	(operator_ge::op1_op2_relation): Ditto.
	* range-op-mixed.h (operator_equal::op1_op2_relation): New float
	prototype.
	(operator_not_equal::op1_op2_relation): Ditto.
	(operator_lt::op1_op2_relation): Ditto.
	(operator_le::op1_op2_relation): Ditto.
	(operator_gt::op1_op2_relation): Ditto.
	(operator_ge::op1_op2_relation): Ditto.
	* range-op.cc (range_op_handler::op1_op2_relation): Dispatch new
	variations.
	(range_operator::op1_op2_relation): Add extra params.
	(operator_equal::op1_op2_relation): Ditto.
	(operator_not_equal::op1_op2_relation): Ditto.
	(operator_lt::op1_op2_relation): Ditto.
	(operator_le::op1_op2_relation): Ditto.
	(operator_gt::op1_op2_relation): Ditto.
	(operator_ge::op1_op2_relation): Ditto.
	* range-op.h (range_operator): New prototypes.
	(range_op_handler): Ditto.
---
 gcc/gimple-range-fold.cc |  26 +---
 gcc/gimple-range-fold.h  |   3 +-
 gcc/gimple-range-gori.cc |   5 +-
 gcc/range-op-float.cc| 129 ++-
 gcc/range-op-mixed.h |  30 +++--
 gcc/range-op.cc  |  41 +
 gcc/range-op.h   |  15 -
 7 files changed, 216 insertions(+), 33 deletions(-)

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index ab2d996c4eb..7fa5a27cb12 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -700,7 +700,7 @@ fold_using_range::range_of_range_op (vrange &r,
    relation_trio::op1_op2 (rel)))
 	r.set_varying (type);
 	  if (irange::supports_p (type))
-	relation_fold_and_or (as_a <irange> (r), s, src);
+	relation_fold_and_or (as_a <irange> (r), s, src, range1, range2);
 	  if (lhs)
 	{
 	  if (src.gori ())
@@ -1103,7 +1103,8 @@ fold_using_range::range_of_ssa_name_with_loop_info (vrange &r, tree name,
 
 void
 fold_using_range::relation_fold_and_or (irange& lhs_range, gimple *s,
-	fur_source &src)
+	fur_source &src, vrange &op1,
+	vrange &op2)
 {
   // No queries or already folded.
   if (!src.gori () || !src.query ()->oracle () || lhs_range.singleton_p ())
@@ -1164,9 +1165,8 @@ fold_using_range::relation_fold_and_or (irange& lhs_range, gimple *s,
 return;
 
   int_range<2> bool_one = range_true ();
-
-  relation_kind relation1 = handler1.op1_op2_relation (bool_one);
-  relation_kind relation2 = handler2.op1_op2_relation (bool_one);
+  relation_kind relation1 = handler1.op1_op2_relation (bool_one, op1, op2);
+  relation_kind relation2 = handler2.op1_op2_relation (bool_one, op1, op2);
   if (relation1 == VREL_VARYING || relation2 == VREL_VARYING)
 return;
 
@@ -1201,7 +1201,8 @@ fold_using_range::relation_fold_and_or (irange& lhs_range, gimple *s,
 // Register any outgoing edge relations from a conditional branch.
 
 void
-fur_source::register_outgoing_edges (gcond *s, irange &lhs_range, edge e0, edge e1)
+fur_source::register_outgoing_edges (gcond *s, irange &lhs_range,
+ edge e0, edge e1)
 {
   int_range<2> e0_range, e1_range;
   tree name;
@@ -1236,17 +1237,20 @@ fur_source::register_outgoing_edges (gcond *s, irange &lhs_range, edge e0, edge
   // if (a_2 < b_5)
   tree ssa1 = gimple_range_ssa_p (handler.operand1 ());
   tree ssa2 = gimple_range_ssa_p (handler.operand2 ());
+  Value_Range r1, r2;
   if (ssa1 && ssa2)
 {
+  r1.set_varying (TREE_TYPE 

[COMMITTED] Provide a routine for NAME == NAME relation.

2023-08-03 Thread Andrew MacLeod via Gcc-patches
We've been assuming x == x is always VREL_EQ in GORI, but this is not 
always going to be true with floating point.  Provide an API to return 
the relation.


Bootstraps on  x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew
From 430ff4f3e670e02185991190a5e2d90e61b39e07 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 2 Aug 2023 10:58:37 -0400
Subject: [PATCH 2/3] Provide a routine for NAME == NAME relation.

We've been assuming x == x is VREL_EQ in GORI, but this is not always going to
be true with floating point.  Provide an API to return the relation.

	* gimple-range-gori.cc (gori_compute::compute_operand1_range):
	Use identity relation.
	(gori_compute::compute_operand2_range): Ditto.
	* value-relation.cc (get_identity_relation): New.
	* value-relation.h (get_identity_relation): New prototype.
---
 gcc/gimple-range-gori.cc | 10 --
 gcc/value-relation.cc| 14 ++
 gcc/value-relation.h |  3 +++
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index 6dc15a0ce3f..c37e54bcf84 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -1142,7 +1142,10 @@ gori_compute::compute_operand1_range (vrange &r,
 
   // If op1 == op2, create a new trio for just this call.
   if (op1 == op2 && gimple_range_ssa_p (op1))
-	trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), VREL_EQ);
+	{
+	  relation_kind k = get_identity_relation (op1, op1_range);
+	  trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), k);
+	}
   if (!handler.calc_op1 (r, lhs, op2_range, trio))
 	return false;
 }
@@ -1218,7 +1221,10 @@ gori_compute::compute_operand2_range (vrange &r,
 
   // If op1 == op2, create a new trio for this stmt.
   if (op1 == op2 && gimple_range_ssa_p (op1))
-trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), VREL_EQ);
+{
+  relation_kind k = get_identity_relation (op1, op1_range);
+  trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), k);
+}
   // Intersect with range for op2 based on lhs and op1.
   if (!handler.calc_op2 (r, lhs, op1_range, trio))
 return false;
diff --git a/gcc/value-relation.cc b/gcc/value-relation.cc
index 7df2cd6e961..f2c668a0193 100644
--- a/gcc/value-relation.cc
+++ b/gcc/value-relation.cc
@@ -183,6 +183,20 @@ relation_transitive (relation_kind r1, relation_kind r2)
   return relation_kind (rr_transitive_table[r1][r2]);
 }
 
+// When operands of a statement are identical ssa_names, return the
+// appropriate relation between operands for NAME == NAME, given RANGE.
+//
+relation_kind
+get_identity_relation (tree name, vrange &range ATTRIBUTE_UNUSED)
+{
+  // Return VREL_UNEQ when it is supported for floats as appropriate.
+  if (frange::supports_p (TREE_TYPE (name)))
+return VREL_EQ;
+
+  // Otherwise return VREL_EQ.
+  return VREL_EQ;
+}
+
 // This vector maps a relation to the equivalent tree code.
 
 static const tree_code relation_to_code [VREL_LAST] = {
diff --git a/gcc/value-relation.h b/gcc/value-relation.h
index be6e277421b..f00f84f93b6 100644
--- a/gcc/value-relation.h
+++ b/gcc/value-relation.h
@@ -91,6 +91,9 @@ inline bool relation_equiv_p (relation_kind r)
 
 void print_relation (FILE *f, relation_kind rel);
 
+// Return relation for NAME == NAME with RANGE.
+relation_kind get_identity_relation (tree name, vrange &range);
+
 class relation_oracle
 {
 public:
-- 
2.40.1



[COMMITTED] Automatically set type in certain Value_Range routines.

2023-08-03 Thread Andrew MacLeod via Gcc-patches
When you use a Value_Range, you need to set its type first so it knows 
whether it will be an irange or an frange or whatever.


There are a few set routines which take a type, and you shouldn't need 
to set the type first in those cases.  For instance, set_varying() takes 
a type, so it seems pointless to specify the type twice, i.e.


Value_Range r1 (TREE_TYPE (name));
r1.set_varying (TREE_TYPE (name));

this patch automatically sets the kind based on the type in the routines 
set_varying(), set_zero(), and set_nonzero(), all of which take a type 
parameter.  Now it is simply:


Value_Range r1;
r1.set_varying (TREE_TYPE (name));

Bootstraps on  x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew
From 1fbde4cc5fb7ad4b08f0f7ae1f247f9b35124f99 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 2 Aug 2023 17:46:58 -0400
Subject: [PATCH 1/3] Automatically set type in certain Value_Range routines.

Set routines which take a type shouldn't have to pre-set the type of the
underlying range as it is specified as a parameter already.

	* value-range.h (Value_Range::set_varying): Set the type.
	(Value_Range::set_zero): Ditto.
	(Value_Range::set_nonzero): Ditto.
---
 gcc/value-range.h | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/gcc/value-range.h b/gcc/value-range.h
index d8af6fca7d7..622b68863d2 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -679,15 +679,16 @@ public:
   tree type () { return m_vrange->type (); }
   bool varying_p () const { return m_vrange->varying_p (); }
   bool undefined_p () const { return m_vrange->undefined_p (); }
-  void set_varying (tree type) { m_vrange->set_varying (type); }
+  void set_varying (tree type) { init (type); m_vrange->set_varying (type); }
   void set_undefined () { m_vrange->set_undefined (); }
   bool union_ (const vrange &r) { return m_vrange->union_ (r); }
   bool intersect (const vrange &r) { return m_vrange->intersect (r); }
   bool contains_p (tree cst) const { return m_vrange->contains_p (cst); }
   bool singleton_p (tree *result = NULL) const
 { return m_vrange->singleton_p (result); }
-  void set_zero (tree type) { return m_vrange->set_zero (type); }
-  void set_nonzero (tree type) { return m_vrange->set_nonzero (type); }
+  void set_zero (tree type) { init (type); return m_vrange->set_zero (type); }
+  void set_nonzero (tree type)
+{ init (type); return m_vrange->set_nonzero (type); }
   bool nonzero_p () const { return m_vrange->nonzero_p (); }
   bool zero_p () const { return m_vrange->zero_p (); }
   wide_int lower_bound () const; // For irange/prange comparability.
-- 
2.40.1



Re: [PATCH V5 1/2] Add overflow API for plus minus mult on range

2023-08-03 Thread Andrew MacLeod via Gcc-patches

This is OK.


On 8/2/23 22:18, Jiufu Guo wrote:

Hi,

I would like to have a ping on this patch.

BR,
Jeff (Jiufu Guo)


Jiufu Guo  writes:


Hi,

As discussed in previous reviews, adding overflow APIs to range-op
would be useful. Those APIs could help to check if overflow happens
when operating between two 'range's, like: plus, minus, and mult.

Previous discussions are here:
https://gcc.gnu.org/pipermail/gcc-patches/2023-July/624067.html
https://gcc.gnu.org/pipermail/gcc-patches/2023-July/624701.html

Bootstrap & regtest pass on ppc64{,le} and x86_64.
Is this patch ok for trunk?

BR,
Jeff (Jiufu Guo)

gcc/ChangeLog:

* range-op-mixed.h (operator_plus::overflow_free_p): New declare.
(operator_minus::overflow_free_p): New declare.
(operator_mult::overflow_free_p): New declare.
* range-op.cc (range_op_handler::overflow_free_p): New function.
(range_operator::overflow_free_p): New default function.
(operator_plus::overflow_free_p): New function.
(operator_minus::overflow_free_p): New function.
(operator_mult::overflow_free_p): New function.
* range-op.h (range_op_handler::overflow_free_p): New declare.
(range_operator::overflow_free_p): New declare.
* value-range.cc (irange::nonnegative_p): New function.
(irange::nonpositive_p): New function.
* value-range.h (irange::nonnegative_p): New declare.
(irange::nonpositive_p): New declare.

---
  gcc/range-op-mixed.h |  11 
  gcc/range-op.cc  | 124 +++
  gcc/range-op.h   |   5 ++
  gcc/value-range.cc   |  12 +
  gcc/value-range.h|   2 +
  5 files changed, 154 insertions(+)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 6944742ecbc..42157ed9061 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -383,6 +383,10 @@ public:
  relation_kind rel) const final override;
   void update_bitmask (irange &r, const irange &lh,
		   const irange &rh) const final override;
+
+  virtual bool overflow_free_p (const irange &lh, const irange &rh,
+				relation_trio = TRIO_VARYING) const;
+
   private:
   void wi_fold (irange &r, tree type, const wide_int &lh_lb,
		const wide_int &lh_ub, const wide_int &rh_lb,
@@ -446,6 +450,10 @@ public:
relation_kind rel) const final override;
   void update_bitmask (irange &r, const irange &lh,
		   const irange &rh) const final override;
+
+  virtual bool overflow_free_p (const irange &lh, const irange &rh,
+				relation_trio = TRIO_VARYING) const;
+
   private:
   void wi_fold (irange &r, tree type, const wide_int &lh_lb,
		const wide_int &lh_ub, const wide_int &rh_lb,
@@ -525,6 +533,9 @@ public:
	const REAL_VALUE_TYPE &lh_lb, const REAL_VALUE_TYPE &lh_ub,
	const REAL_VALUE_TYPE &rh_lb, const REAL_VALUE_TYPE &rh_ub,
	relation_kind kind) const final override;
+  virtual bool overflow_free_p (const irange &lh, const irange &rh,
+				relation_trio = TRIO_VARYING) const;
+
  };
  
  class operator_addr_expr : public range_operator

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index cb584314f4c..632b044331b 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -366,6 +366,22 @@ range_op_handler::op1_op2_relation (const vrange &lhs) const
  }
  }
  
+bool
+range_op_handler::overflow_free_p (const vrange &lh,
+				   const vrange &rh,
+				   relation_trio rel) const
+{
+  gcc_checking_assert (m_operator);
+  switch (dispatch_kind (lh, lh, rh))
+{
+  case RO_III:
+	return m_operator->overflow_free_p (as_a <irange> (lh),
+					    as_a <irange> (rh),
+					    rel);
+  default:
+   return false;
+}
+}
  
  // Convert irange bitmasks into a VALUE MASK pair suitable for calling CCP.
  
@@ -688,6 +704,13 @@ range_operator::op1_op2_relation_effect (irange &lhs_range ATTRIBUTE_UNUSED,

return false;
  }
  
+bool
+range_operator::overflow_free_p (const irange &, const irange &,
+				 relation_trio) const
+{
+  return false;
+}
+
  // Apply any known bitmask updates based on this operator.
  
  void

@@ -4311,6 +4334,107 @@ range_op_table::initialize_integral_ops ()
  
  }
  
+bool
+operator_plus::overflow_free_p (const irange &lh, const irange &rh,
+				relation_trio) const
+{
+  if (lh.undefined_p () || rh.undefined_p ())
+return false;
+
+  tree type = lh.type ();
+  if (TYPE_OVERFLOW_UNDEFINED (type))
+return true;
+
+  wi::overflow_type ovf;
+  signop sgn = TYPE_SIGN (type);
+  wide_int wmax0 = lh.upper_bound ();
+  wide_int wmax1 = rh.upper_bound ();
+  wi::add (wmax0, wmax1, sgn, &ovf);
+  if (ovf != wi::OVF_NONE)
+return false;
+
+  if (TYPE_UNSIGNED (type))
+return true;
+
+  

[COMMITTED] PR tree-optimization/110582 - fur_list should not use the range vector for non-ssa, operands.

2023-07-31 Thread Andrew MacLeod via Gcc-patches
The fold_using_range operand fetching mechanism has a variety of modes.  
The "normal" mechanism simply invokes the current or supplied 
range_query to satisfy fetching current range info for any ssa-names 
used during the evaluation of the statement.


I also added support for fur_list which allows a list of ranges to be 
supplied which is used to satisfy ssa-names as they appear in the stmt.  
Once the list is exhausted, then it reverts to using the range query.


This allows us to fold a stmt using whatever values we want. ie,

a_2 = b_3 + c_4


I can call fold_stmt (r, stmt, [1,2],  [4,5])

and a_2 would be calculated using [1,2] for the first ssa_name, and 
[4,5] for the second encountered name.  This allows us to manually fold 
stmts when we desire.


There was a bug in the implementation of fur_list where it was using the 
supplied values for *any* encountered operand, not just ssa_names.


The PHI analyzer is the first consumer of the fur_list API, and was 
tripping over this.



     [local count: 1052266993]:
  # a_lsm.12_29 = PHI 
  iftmp.1_15 = 3 / a_lsm.12_29;

   [local count: 1063004408]:
  # iftmp.1_11 = PHI 
  # ivtmp_2 = PHI 
  ivtmp_36 = ivtmp_2 - 1;
  if (ivtmp_36 != 0)
    goto ; [98.99%]
  else
    goto ; [1.01%]

It determined that the initial value of iftmp.1_11 was [2, 2] (from the 
edge 2->4), and that the only modifying statement is

iftmp.1_15 = 3 / a_lsm.12_29;

One of the things it tries to do is determine whether a few iterations 
feeding the initial value and combining it with the result of the 
statement converge, thus providing a complete initial range.  It uses 
fold_range, supplying the value for the ssa-operand directly, but 
tripped over the bug.


So for the first iteration, instead of calculating   _15 = 3 / [2,2]  
and coming up with [1,1], it was instead calculating [2,2] / VARYING, 
and coming up with [-2, 2].  The next pass of the iteration checker then 
erroneously calculated [-2,2] / VARYING, the result was [-2,2], and 
convergence was achieved, so the initial value of the PHI was set to [-2, 2] 
... incorrectly, and of course bad things happened.


This patch fixes fur_list::get_operand to check for an ssa-name before 
pulling a value from the supplied list.  With this, no particularly 
good starting value for the PHI node can be determined.


Andrew

From 914fa35a7f7db76211ca259606578193773a254e Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Mon, 31 Jul 2023 10:08:51 -0400
Subject: [PATCH] fur_list should not use the range vector for non-ssa
 operands.

	gcc/
	PR tree-optimization/110582
	* gimple-range-fold.cc (fur_list::get_operand): Do not use the
	range vector for non-ssa names.

	gcc/testsuite/
	* gcc.dg/pr110582.c: New.
---
 gcc/gimple-range-fold.cc|  3 ++-
 gcc/testsuite/gcc.dg/pr110582.c | 18 ++
 2 files changed, 20 insertions(+), 1 deletion(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr110582.c

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index d07246008f0..ab2d996c4eb 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -262,7 +262,8 @@ fur_list::fur_list (unsigned num, vrange **list, range_query *q)
 bool
fur_list::get_operand (vrange &r, tree expr)
 {
-  if (m_index >= m_limit)
+  // Do not use the vector for non-ssa-names, or if it has been emptied.
+  if (TREE_CODE (expr) != SSA_NAME || m_index >= m_limit)
 return m_query->range_of_expr (r, expr);
   r = *m_list[m_index++];
   gcc_checking_assert (range_compatible_p (TREE_TYPE (expr), r.type ()));
diff --git a/gcc/testsuite/gcc.dg/pr110582.c b/gcc/testsuite/gcc.dg/pr110582.c
new file mode 100644
index 000..ae0650d3ae7
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr110582.c
@@ -0,0 +1,18 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-vrp2" } */
+
+int a, b;
+int main() {
+  char c = a = 0;
+  for (; c != -3; c++) {
+int d = 2;
+d ^= 2 && a;
+b = a == 0 ? d : d / a;
+a = b;
+  }
+  for (; (1 + 95 << 24) + b + 1 + 686658714L + b - 2297271457;)
+;
+}
+
+/* { dg-final { scan-tree-dump-not "Folding predicate" "vrp2" } } */
+
-- 
2.40.1



[COMMITTED] Remove value_query, push into sub class.

2023-07-28 Thread Andrew MacLeod via Gcc-patches
When we first introduced range_query, we provided a base class for 
constants rather than range queries.  Then we inherited from that and 
modified the value queries for a range-specific engine.  At the time, 
we figured there would be other consumers of the value_query class.


When all the dust settled, it turned out that substitute_and_fold is the 
only consumer, and all the other places we perceived there to be value 
clients actually use substitute_and_fold.


This patch simplifies everything by providing only a range-query class, 
and moving the old value_query functionality into substitute_and_fold, 
the only place that uses it.


Bootstrapped on x86_64-pc-linux-gnu with no regressions. Pushed.

Andrew

From 619641397a558bf65c24b99a4c52878bd940fcbe Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sun, 16 Jul 2023 12:46:00 -0400
Subject: [PATCH 2/3] Remove value_query, push into sub class

	* tree-ssa-propagate.cc (substitute_and_fold_engine::value_on_edge):
	Move from value-query.cc.
	(substitute_and_fold_engine::value_of_stmt): Ditto.
	(substitute_and_fold_engine::range_of_expr): New.
	* tree-ssa-propagate.h (substitute_and_fold_engine): Inherit from
	range_query.  New prototypes.
	* value-query.cc (value_query::value_on_edge): Relocate.
	(value_query::value_of_stmt): Ditto.
	* value-query.h (class value_query): Remove.
	(class range_query): Remove base class.  Adjust prototypes.
---
 gcc/tree-ssa-propagate.cc | 28 
 gcc/tree-ssa-propagate.h  |  8 +++-
 gcc/value-query.cc| 21 -
 gcc/value-query.h | 30 --
 4 files changed, 39 insertions(+), 48 deletions(-)

diff --git a/gcc/tree-ssa-propagate.cc b/gcc/tree-ssa-propagate.cc
index 174d19890f9..cb68b419b8c 100644
--- a/gcc/tree-ssa-propagate.cc
+++ b/gcc/tree-ssa-propagate.cc
@@ -532,6 +532,34 @@ struct prop_stats_d
 
 static struct prop_stats_d prop_stats;
 
+// range_query default methods to drive from a value_of_expr() rather than
+// range_of_expr.
+
+tree
+substitute_and_fold_engine::value_on_edge (edge, tree expr)
+{
+  return value_of_expr (expr);
+}
+
+tree
+substitute_and_fold_engine::value_of_stmt (gimple *stmt, tree name)
+{
+  if (!name)
+    name = gimple_get_lhs (stmt);
+
+  gcc_checking_assert (!name || name == gimple_get_lhs (stmt));
+
+  if (name)
+    return value_of_expr (name);
+  return NULL_TREE;
+}
+
+bool
+substitute_and_fold_engine::range_of_expr (vrange &, tree, gimple *)
+{
+  return false;
+}
+
 /* Replace USE references in statement STMT with the values stored in
PROP_VALUE. Return true if at least one reference was replaced.  */
 
diff --git a/gcc/tree-ssa-propagate.h b/gcc/tree-ssa-propagate.h
index be4cb457873..29bde37add9 100644
--- a/gcc/tree-ssa-propagate.h
+++ b/gcc/tree-ssa-propagate.h
@@ -96,11 +96,17 @@ class ssa_propagation_engine
   void simulate_block (basic_block);
 };
 
-class substitute_and_fold_engine : public value_query
+class substitute_and_fold_engine : public range_query
 {
  public:
   substitute_and_fold_engine (bool fold_all_stmts = false)
 : fold_all_stmts (fold_all_stmts) { }
+
+  virtual tree value_of_expr (tree expr, gimple * = NULL) = 0;
+  virtual tree value_on_edge (edge, tree expr) override;
+  virtual tree value_of_stmt (gimple *, tree name = NULL) override;
+  virtual bool range_of_expr (vrange &r, tree expr, gimple * = NULL);
+
   virtual ~substitute_and_fold_engine (void) { }
   virtual bool fold_stmt (gimple_stmt_iterator *) { return false; }
 
diff --git a/gcc/value-query.cc b/gcc/value-query.cc
index adef93415b7..0870d6c60a6 100644
--- a/gcc/value-query.cc
+++ b/gcc/value-query.cc
@@ -33,27 +33,6 @@ along with GCC; see the file COPYING3.  If not see
 #include "gimple-range.h"
 #include "value-range-storage.h"
 
-// value_query default methods.
-
-tree
-value_query::value_on_edge (edge, tree expr)
-{
-  return value_of_expr (expr);
-}
-
-tree
-value_query::value_of_stmt (gimple *stmt, tree name)
-{
-  if (!name)
-    name = gimple_get_lhs (stmt);
-
-  gcc_checking_assert (!name || name == gimple_get_lhs (stmt));
-
-  if (name)
-    return value_of_expr (name);
-  return NULL_TREE;
-}
-
 // range_query default methods.
 
 bool
diff --git a/gcc/value-query.h b/gcc/value-query.h
index d10c3eac1e2..429446b32eb 100644
--- a/gcc/value-query.h
+++ b/gcc/value-query.h
@@ -37,28 +37,6 @@ along with GCC; see the file COPYING3.  If not see
 // Proper usage of the correct query in passes will enable other
 // valuation mechanisms to produce more precise results.
 
-class value_query
-{
-public:
-  value_query () { }
-  // Return the singleton expression for EXPR at a gimple statement,
-  // or NULL if none found.
-  virtual tree value_of_expr (tree expr, gimple * = NULL) = 0;
-  // Return the singleton expression for EXPR at an edge, or NULL if
-  // none found.
-  virtual tree value_on_edge (edge, tree expr);
-  // Return the singleton expression for the LHS of a gimple
-  // statement, 

[COMMITTED] Add a merge_range to ssa_cache and use it.

2023-07-28 Thread Andrew MacLeod via Gcc-patches

This adds some tweaks to the ssa-range cache.

1)  Adds a new merge_range which works like set_range, except if there 
is already a value, the two values are merged via intersection and 
stored.  This avoids having the client check whether there is a value, 
load it, intersect it, and then store the result.  There is one usage 
pattern in the code base (with more to come); change it to use the new 
method.


2)  The range_of_expr() method in ssa_cache does not set the stmt to a 
default of NULL.  Correct that oversight.


3)  The method empty_p() is added to the ssa_lazy_cache class so we can 
detect whether the lazy cache has any active elements in it.


Bootstrapped on x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew

From 72fb44ca53fda15024e0c272052b74b1f32735b1 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 28 Jul 2023 11:00:57 -0400
Subject: [PATCH 3/3] Add a merge_range to ssa_cache and use it.  add empty_p
 and param tweaks.

	* gimple-range-cache.cc (ssa_cache::merge_range): New.
	(ssa_lazy_cache::merge_range): New.
	* gimple-range-cache.h (class ssa_cache): Adjust prototypes.
	(class ssa_lazy_cache): Ditto.
	* gimple-range.cc (assume_query::calculate_op): Use merge_range.
---
 gcc/gimple-range-cache.cc | 45 +++
 gcc/gimple-range-cache.h  |  6 --
 gcc/gimple-range.cc   |  6 ++
 3 files changed, 51 insertions(+), 6 deletions(-)

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 52165d2405b..5b74681b61a 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -605,6 +605,32 @@ ssa_cache::set_range (tree name, const vrange &r)
   return m != NULL;
 }
 
+// If NAME has a range, intersect it with R, otherwise set it to R.
+// Return TRUE if there was already a range set, otherwise false.
+
+bool
+ssa_cache::merge_range (tree name, const vrange &r)
+{
+  unsigned v = SSA_NAME_VERSION (name);
+  if (v >= m_tab.length ())
+    m_tab.safe_grow_cleared (num_ssa_names + 1);
+
+  vrange_storage *m = m_tab[v];
+  if (m)
+    {
+      Value_Range curr (TREE_TYPE (name));
+      m->get_vrange (curr, TREE_TYPE (name));
+      curr.intersect (r);
+      if (m->fits_p (curr))
+	m->set_vrange (curr);
+      else
+	m_tab[v] = m_range_allocator->clone (curr);
+    }
+  else
+    m_tab[v] = m_range_allocator->clone (r);
+  return m != NULL;
+}
+
 // Set the range for NAME to R in the ssa cache.
 
 void
@@ -689,6 +715,25 @@ ssa_lazy_cache::set_range (tree name, const vrange &r)
   return false;
 }
 
+// If NAME has a range, intersect it with R, otherwise set it to R.
+// Return TRUE if there was already a range set, otherwise false.
+
+bool
+ssa_lazy_cache::merge_range (tree name, const vrange &r)
+{
+  unsigned v = SSA_NAME_VERSION (name);
+  if (!bitmap_set_bit (active_p, v))
+    {
+      // There is already an entry, simply merge it.
+      gcc_checking_assert (v < m_tab.length ());
+      return ssa_cache::merge_range (name, r);
+    }
+  if (v >= m_tab.length ())
+    m_tab.safe_grow (num_ssa_names + 1);
+  m_tab[v] = m_range_allocator->clone (r);
+  return false;
+}
+
 // Return TRUE if NAME has a range, and return it in R.
 
 bool
diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index a0f436b5723..bbb9b18a10c 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -61,11 +61,11 @@ public:
   virtual bool has_range (tree name) const;
   virtual bool get_range (vrange &r, tree name) const;
   virtual bool set_range (tree name, const vrange &r);
+  virtual bool merge_range (tree name, const vrange &r);
   virtual void clear_range (tree name);
   virtual void clear ();
   void dump (FILE *f = stderr);
-  virtual bool range_of_expr (vrange &r, tree expr, gimple *stmt);
-
+  virtual bool range_of_expr (vrange &r, tree expr, gimple *stmt = NULL);
 protected:
   vec<vrange_storage *> m_tab;
   vrange_allocator *m_range_allocator;
@@ -80,8 +80,10 @@ class ssa_lazy_cache : public ssa_cache
 public:
   inline ssa_lazy_cache () { active_p = BITMAP_ALLOC (NULL); }
   inline ~ssa_lazy_cache () { BITMAP_FREE (active_p); }
+  inline bool empty_p () const { return bitmap_empty_p (active_p); }
   virtual bool has_range (tree name) const;
   virtual bool set_range (tree name, const vrange &r);
+  virtual bool merge_range (tree name, const vrange &r);
   virtual bool get_range (vrange &r, tree name) const;
   virtual void clear_range (tree name);
   virtual void clear ();
diff --git a/gcc/gimple-range.cc b/gcc/gimple-range.cc
index 01e62d3ff39..01173c58f02 100644
--- a/gcc/gimple-range.cc
+++ b/gcc/gimple-range.cc
@@ -809,10 +809,8 @@ assume_query::calculate_op (tree op, gimple *s, vrange &lhs, fur_source &src)
   if (m_gori.compute_operand_range (op_range, s, lhs, op, src)
   && !op_range.varying_p ())
 {
-      Value_Range range (TREE_TYPE (op));
-      if (global.get_range (range, op))
-	op_range.intersect (range);
-      global.set_range (op, op_range);
+      // Set the global range, merging if there is already a range.
+      global.merge_range (op, op_range);

[COMMITTED] PR tree-optimization/110205 -Fix some warnings

2023-07-28 Thread Andrew MacLeod via Gcc-patches

This patch simply fixes the code up a little to remove potential warnings.

Bootstrapped on x86_64-pc-linux-gnu with no regressions. Pushed.

Andrew

From 7905c071c35070fff3397b1e24f140c128c08e64 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Mon, 10 Jul 2023 13:58:22 -0400
Subject: [PATCH 1/3] Fix some warnings

	PR tree-optimization/110205
	* gimple-range-cache.h (ranger_cache::m_estimate): Delete.
	* range-op-mixed.h (operator_bitwise_xor::op1_op2_relation_effect):
	Add final override.
	* range-op.cc (operator_lshift): Add missing final overrides.
	(operator_rshift): Ditto.
---
 gcc/gimple-range-cache.h |  1 -
 gcc/range-op-mixed.h |  2 +-
 gcc/range-op.cc  | 44 ++--
 3 files changed, 21 insertions(+), 26 deletions(-)

diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index 93d16294d2e..a0f436b5723 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -137,7 +137,6 @@ private:
   void exit_range (vrange &r, tree expr, basic_block bb, enum rfd_mode);
   bool edge_range (vrange &r, edge e, tree name, enum rfd_mode);
 
-  phi_analyzer *m_estimate;
   vec<basic_block> m_workback;
   class update_list *m_update;
 };
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 3cb904f9d80..b623a88cc71 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -574,7 +574,7 @@ public:
 	tree type,
 	const irange &op1_range,
 	const irange &op2_range,
-	relation_kind rel) const;
+	relation_kind rel) const final override;
   void update_bitmask (irange &r, const irange &lh,
 		       const irange &rh) const final override;
 private:
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 615e5fe0036..19fdff0eb64 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -2394,22 +2394,21 @@ class operator_lshift : public cross_product_operator
   using range_operator::fold_range;
   using range_operator::op1_range;
 public:
-  virtual bool op1_range (irange &r, tree type,
-			  const irange &lhs,
-			  const irange &op2,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual bool fold_range (irange &r, tree type,
-			   const irange &op1,
-			   const irange &op2,
-			   relation_trio rel = TRIO_VARYING) const;
+  virtual bool op1_range (irange &r, tree type, const irange &lhs,
+			  const irange &op2, relation_trio rel = TRIO_VARYING)
+    const final override;
+  virtual bool fold_range (irange &r, tree type, const irange &op1,
+			   const irange &op2, relation_trio rel = TRIO_VARYING)
+    const final override;
 
   virtual void wi_fold (irange &r, tree type,
 			const wide_int &lh_lb, const wide_int &lh_ub,
-			const wide_int &rh_lb, const wide_int &rh_ub) const;
+			const wide_int &rh_lb,
+			const wide_int &rh_ub) const final override;
   virtual bool wi_op_overflows (wide_int &res,
 				tree type,
 				const wide_int &,
-				const wide_int &) const;
+				const wide_int &) const final override;
   void update_bitmask (irange &r, const irange &lh,
 		       const irange &rh) const final override
     { update_known_bitmask (r, LSHIFT_EXPR, lh, rh); }
@@ -2421,27 +2420,24 @@ class operator_rshift : public cross_product_operator
   using range_operator::op1_range;
   using range_operator::lhs_op1_relation;
 public:
-  virtual bool fold_range (irange &r, tree type,
-			   const irange &op1,
-			   const irange &op2,
-			   relation_trio rel = TRIO_VARYING) const;
+  virtual bool fold_range (irange &r, tree type, const irange &op1,
+			   const irange &op2, relation_trio rel = TRIO_VARYING)
+   const final override;
   virtual void wi_fold (irange &r, tree type,
 			const wide_int &lh_lb,
 			const wide_int &lh_ub,
 			const wide_int &rh_lb,
-			const wide_int &rh_ub) const;
+			const wide_int &rh_ub) const final override;
   virtual bool wi_op_overflows (wide_int &res,
 				tree type,
 				const wide_int &w0,
-				const wide_int &w1) const;
-  virtual bool op1_range (irange &, tree type,
-			  const irange &lhs,
-			  const irange &op2,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual relation_kind lhs_op1_relation (const irange &lhs,
-					  const irange &op1,
-					  const irange &op2,
-					  relation_kind rel) const;
+				const wide_int &w1) const final override;
+  virtual bool op1_range (irange &, tree type, const irange &lhs,
+			  const irange &op2, relation_trio rel = TRIO_VARYING)
+    const final override;
+  virtual relation_kind lhs_op1_relation (const irange &lhs, const irange &op1,
+					  const irange &op2, relation_kind rel)
+    const final override;
   void update_bitmask (irange &r, const irange &lh,
 		       const irange &rh) const final override
     { update_known_bitmask (r, RSHIFT_EXPR, lh, rh); }
-- 
2.40.1



Re: [PATCH] [GCC13] PR tree-optimization/110315 - Add auto-resizing capability to irange's

2023-07-24 Thread Andrew MacLeod via Gcc-patches



On 7/24/23 12:49, Richard Biener wrote:



On 24.07.2023 at 16:40, Andrew MacLeod via Gcc-patches wrote:

Aldy has ported his irange reduction patch to GCC 13.  It resolves this PR.

I have bootstrapped it and it passes regression tests.

Do we want to check it into the GCC 13 branch?  The patch has all his comments 
in it.

Please wait until the branch is open again, then yes, I think we want this 
there.  Was there any work reducing the recursion depth that’s worth 
backporting as well?


I think most of the recursion depth work is in GCC 13.  I'm looking at a 
few more tweaks for GCC 14, but they are fairly minor at the moment.  
The reduction in the size of the stack was the huge win.




[PATCH] [GCC13] PR tree-optimization/110315 - Add auto-resizing capability to irange's

2023-07-24 Thread Andrew MacLeod via Gcc-patches

Aldy has ported his irange reduction patch to GCC 13.  It resolves this PR.

I have bootstrapped it and it passes regression tests.

Do we want to check it into the GCC 13 branch?  The patch has all his 
comments in it.


Andrew
From 777aa930b106fea2dd6ed9fe22b42a2717f1472d Mon Sep 17 00:00:00 2001
From: Aldy Hernandez 
Date: Mon, 15 May 2023 12:25:58 +0200
Subject: [PATCH] [GCC13] Add auto-resizing capability to irange's [PR109695]

Backport the following from trunk.

	Note that the patch has been adapted to trees.

	The numbers for various sub-ranges on GCC13 are:
		< 2> =  64 bytes, -3.02% for VRP.
		< 3> =  80 bytes, -2.67% for VRP.
		< 8> = 160 bytes, -2.46% for VRP.
		<16> = 288 bytes, -2.40% for VRP.


We can now have int_range<N, RESIZABLE=false> for automatically
resizable ranges.  int_range_max is now int_range<3, true>
for a 69X reduction in size from current trunk, and 6.9X reduction from
GCC12.  This incurs a 5% performance penalty for VRP that is more than
covered by our > 13% improvements recently.


int_range_max is the temporary range object we use in the ranger for
integers.  With the conversion to wide_int, this structure bloated up
significantly because wide_ints are huge (80 bytes a piece) and are
about 10 times as big as a plain tree.  Since the temporary object
requires 255 sub-ranges, that's 255 * 80 * 2, plus the control word.
This means the structure grew from 4112 bytes to 40912 bytes.

This patch adds the ability to resize ranges as needed, defaulting to
no resizing, while int_range_max now defaults to 3 sub-ranges (instead
of 255) and grows to 255 when the range being calculated does not fit.

For example:

int_range<1> foo;	// 1 sub-range with no resizing.
int_range<5> foo;	// 5 sub-ranges with no resizing.
int_range<5, true> foo;	// 5 sub-ranges with resizing.

I ran some tests and found that 3 sub-ranges cover 99% of cases, so
I've set the int_range_max default to that:

	typedef int_range<3, /*RESIZABLE=*/true> int_range_max;

We don't bother growing incrementally, since the default covers most
cases and we have a 255 hard-limit.  This hard limit could be reduced
to 128, since my tests never saw a range needing more than 124, but we
could do that as a follow-up if needed.

With 3-subranges, int_range_max is now 592 bytes versus 40912 for
trunk, and versus 4112 bytes for GCC12!  The penalty is 5.04% for VRP
and 3.02% for threading, with no noticeable change in overall
compilation (0.27%).  This is more than covered by our 13.26%
improvements for the legacy removal + wide_int conversion.

I think this approach is a good alternative, while providing us with
flexibility going forward.  For example, we could try defaulting to a
8 sub-ranges for a noticeable improvement in VRP.  We could also use
large sub-ranges for switch analysis to avoid resizing.

Another approach I tried was always resizing.  With this, we could
drop the whole int_range nonsense, and have irange just hold a
resizable range.  This simplified things, but incurred a 7% penalty on
ipa_cp.  This was hard to pinpoint, and I'm not entirely convinced
this wasn't some artifact of valgrind.  However, until we're sure,
let's avoid massive changes, especially since IPA changes are coming
up.

For the curious, a particular hot spot for IPA in this area was:

ipcp_vr_lattice::meet_with_1 (const value_range *other_vr)
{
...
...
  value_range save (m_vr);
  m_vr.union_ (*other_vr);
  return m_vr != save;
}

The problem isn't the resizing (since we do that at most once) but the
fact that for some functions with lots of callers we end up a huge
range that gets copied and compared for every meet operation.  Maybe
the IPA algorithm could be adjusted somehow?

Anywhooo... for now there is nothing to worry about, since value_range
still has 2 subranges and is not resizable.  But we should probably
think what if anything we want to do here, as I envision IPA using
infinite ranges here (well, int_range_max) and handling frange's, etc.

gcc/ChangeLog:

	PR tree-optimization/109695
	* value-range.cc (irange::operator=): Resize range.
	(irange::union_): Same.
	(irange::intersect): Same.
	(irange::invert): Same.
	(int_range_max): Default to 3 sub-ranges and resize as needed.
	* value-range.h (irange::maybe_resize): New.
	(~int_range): New.
	(int_range::int_range): Adjust for resizing.
	(int_range::operator=): Same.
---
 gcc/value-range-storage.h |  2 +-
 gcc/value-range.cc| 15 ++
 gcc/value-range.h | 96 +++
 3 files changed, 83 insertions(+), 30 deletions(-)

diff --git a/gcc/value-range-storage.h b/gcc/value-range-storage.h
index 6da377ebd2e..1ed6f1ccd61 100644
--- a/gcc/value-range-storage.h
+++ b/gcc/value-range-storage.h
@@ -184,7 +184,7 @@ vrange_allocator::alloc_irange (unsigned num_pairs)
   // Allocate the irange and required memory for the vector.
   void *r = alloc (sizeof (irange));
   tree *mem = static_cast  (alloc (nbytes));
-  return new (r) irange (mem, num_pairs);
+  return new (r) irange (mem, num_pairs, /*resizable=*/false);

Re: [PATCH V4] Optimize '(X - N * M) / N' to 'X / N - M' if valid

2023-07-17 Thread Andrew MacLeod via Gcc-patches


On 7/17/23 09:45, Jiufu Guo wrote:



Should we decide we would like it in general, it wouldn't be hard to add to
irange.  wi_fold() currently returns null; it could easily return a bool
indicating whether an overflow happened, and wi_fold_in_parts and fold_range would
simply OR together the results of the component wi_fold() calls.  It would
require updating/auditing a number of range-op entries and adding an
overflowed_p() query to irange.

Ah, yeah - the folding APIs would be a good fit I guess.  I was
also looking to have the "new" helpers to be somewhat consistent
with the ranger API.

So if we had a fold_range overload with either an output argument
or a flag that makes it return false on possible overflow that
would work I guess?  Since we have a virtual class setup we
might be able to provide a default failing method and implement
workers for plus and mult (as needed for this patch) as the need
arises?

Thanks for your comments!
Here is a concern.  The patterns in match.pd may be supported by
'vrp' passes. At that time, the range info would be computed (via
the value-range machinery) and cached for each SSA_NAME. In the
patterns, when range_of_expr is called for a capture, the range
info is retrieved from the cache, and no need to fold_range again.
This means the overflow info may also need to be cached together
with other range info.  There may be additional memory and time
cost.



I've been thinking about this a little bit, and how to make the info 
available in a useful way.


I wonder if maybe we just add another entry point to range-ops that 
looks a bit like fold_range...


  Attached is an (untested) patch which adds overflow_free_p (op1, op2, 
relation) to range-ops.  It defaults to returning false.  If you want 
to implement it for, say, plus, you'd add to operator_plus in 
range-op.cc something like


operator_plus::overflow_free_p (irange &op1, irange &op2, relation_kind)
{
   // stuff you do in plus_without_overflow
}

I added relation_kind as a param, but you can ignore it.  Maybe it won't 
ever help, but it seems that if we know there is a relation between op1 
and op2, we might someday be able to determine something else.  If not, 
remove it.


Then all you need to do to access it is to go through range_op_handler, 
so for instance:


range_op_handler (PLUS_EXPR).overflow_free_p (op1, op2)

It'll work for all types and all tree codes.  The dispatch machinery will 
return false unless both op1 and op2 are integral ranges; when they are, it 
will invoke the appropriate handler, which defaults to returning false.


I am also not a fan of the get_range routine.  It would be better to 
generally just call range_of_expr, get the results, then handle 
undefined in the new overflow_free_p() routine and return false.  
Varying should not need anything special since it will trigger the 
overflow when you do the calculation.


The auxiliary routines could go in vr-values.{h,cc}.  They seem like 
things that simplify_using_ranges could utilize, and when we get to 
integrating simplify_using_ranges better, what you are doing may end up 
there anyway.


Does that work?

Andrew
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index d1c735ee6aa..f2a863db286 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -366,6 +366,24 @@ range_op_handler::op1_op2_relation (const vrange &lhs) const
 }
 }
 
+bool
+range_op_handler::overflow_free_p (const vrange &lh,
+				   const vrange &rh,
+				   relation_trio rel) const
+{
+  gcc_checking_assert (m_operator);
+  switch (dispatch_kind (lh, lh, rh))
+    {
+      case RO_III:
+	return m_operator->overflow_free_p (as_a <irange> (lh),
+					    as_a <irange> (rh),
+					    rel);
+      default:
+	return false;
+    }
+}
+
+
 
 // Convert irange bitmasks into a VALUE MASK pair suitable for calling CCP.
 
@@ -688,6 +706,13 @@ range_operator::op1_op2_relation_effect (irange &lhs_range ATTRIBUTE_UNUSED,
   return false;
 }
 
+bool
+range_operator::overflow_free_p (const irange &, const irange &,
+				 relation_trio) const
+{
+  return false;
+}
+
 // Apply any known bitmask updates based on this operator.
 
 void
diff --git a/gcc/range-op.h b/gcc/range-op.h
index af94c2756a7..db3b03f28a5 100644
--- a/gcc/range-op.h
+++ b/gcc/range-op.h
@@ -147,6 +147,9 @@ public:
 
   virtual relation_kind op1_op2_relation (const irange &lhs) const;
   virtual relation_kind op1_op2_relation (const frange &lhs) const;
+
+  virtual bool overflow_free_p (const irange &lh, const irange &rh,
+				relation_trio = TRIO_VARYING) const;
 protected:
   // Perform an integral operation between 2 sub-ranges and return it.
   virtual void wi_fold (irange &r, tree type,
@@ -214,6 +217,8 @@ public:
 				const vrange &op2_range,
 				relation_kind = VREL_VARYING) const;
   relation_kind op1_op2_relation (const vrange &lhs) const;
+  bool overflow_free_p (const vrange &lh, const vrange &rh,
+			relation_trio = TRIO_VARYING) const;
 protected:
   unsigned dispatch_kind (const vrange &lhs, const vrange &op1,
 			  const vrange& op2) const;


Re: [PATCH V4] Optimize '(X - N * M) / N' to 'X / N - M' if valid

2023-07-14 Thread Andrew MacLeod via Gcc-patches



On 7/14/23 09:37, Richard Biener wrote:

On Fri, 14 Jul 2023, Aldy Hernandez wrote:


I don't know what you're trying to accomplish here, as I haven't been
following the PR, but adding all these helper functions to the ranger header
file seems wrong, especially since there's only one use of them. I see you're
tweaking the irange API, adding helper functions to range-op (which is only
for code dealing with implementing range operators for tree codes), etc etc.

If you need these helper functions, I suggest you put them closer to their
uses (i.e. wherever the match.pd support machinery goes).

Note I suggested the opposite beacuse I thought these kind of helpers
are closer to value-range support than to match.pd.



Probably vr-values.{cc,h} and the simplify_using_ranges paradigm would be 
the most sensible place to put these kinds of auxiliary routines?





But I take away from your answer that there's nothing close in the
value-range machinery that answers the question whether A op B may
overflow?


We don't track it in ranges themselves.  During calculation of a range 
we obviously know, but propagating that generally when we rarely care 
doesn't seem worthwhile.  The very first generation of irange 6 years 
ago had an overflow_p() flag, but it was removed as not being worth 
keeping.  It's easier to simply ask the question when it matters.


As the routines show, it's pretty easy to figure out when the need arises, 
so I think that should suffice.  At least for now.


Should we decide we would like it in general, it wouldn't be hard to add 
to irange.  wi_fold() currently returns null; it could easily return a 
bool indicating whether an overflow happened, and wi_fold_in_parts and 
fold_range would simply OR together the results of the component 
wi_fold() calls.  It would require updating/auditing a number of 
range-op entries and adding an overflowed_p() query to irange.


Andrew



[COMMITTED 5/5] Make compute_operand_range a tail call.

2023-07-05 Thread Andrew MacLeod via Gcc-patches
This simply tweaks compute_operand_range a little so the recursion is a 
tail call.


With this, the patchset produces a modest speedup of 0.2% in VRP and 
0.4% in threading.  It will also have a much smaller stack profile.


Bootstraps on  x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew

From 51ed3a6ce432e7e6226bb62125ef8a09b2ebf60c Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 5 Jul 2023 14:26:00 -0400
Subject: [PATCH 5/6] Make compute_operand_range a tail call.

Tweak the routine so it is making a tail call.

	* gimple-range-gori.cc (compute_operand_range): Convert to a tail
	call.
---
 gcc/gimple-range-gori.cc | 34 --
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index b036ed56f02..6dc15a0ce3f 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -725,36 +725,34 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
 			     op1_trange, op1_frange, op2_trange, op2_frange);
       if (idx)
 	tracer.trailer (idx, "compute_operand", res, name, r);
+      return res;
     }
   // Follow the appropriate operands now.
-  else if (op1_in_chain && op2_in_chain)
-    res = compute_operand1_and_operand2_range (r, handler, lhs, name, src,
-					       vrel_ptr);
-  else if (op1_in_chain)
+  if (op1_in_chain && op2_in_chain)
+    return compute_operand1_and_operand2_range (r, handler, lhs, name, src,
+						vrel_ptr);
+  Value_Range vr;
+  gimple *src_stmt;
+  if (op1_in_chain)
     {
-      Value_Range vr (TREE_TYPE (op1));
+      vr.set_type (TREE_TYPE (op1));
       if (!compute_operand1_range (vr, handler, lhs, src, vrel_ptr))
 	return false;
-      gimple *src_stmt = SSA_NAME_DEF_STMT (op1);
-      gcc_checking_assert (src_stmt);
-      // Then feed this range back as the LHS of the defining statement.
-      return compute_operand_range (r, src_stmt, vr, name, src, vrel_ptr);
+      src_stmt = SSA_NAME_DEF_STMT (op1);
     }
-  else if (op2_in_chain)
+  else
     {
-      Value_Range vr (TREE_TYPE (op2));
+      gcc_checking_assert (op2_in_chain);
+      vr.set_type (TREE_TYPE (op2));
       if (!compute_operand2_range (vr, handler, lhs, src, vrel_ptr))
 	return false;
-      gimple *src_stmt = SSA_NAME_DEF_STMT (op2);
-      gcc_checking_assert (src_stmt);
-      // Then feed this range back as the LHS of the defining statement.
-      return compute_operand_range (r, src_stmt, vr, name, src, vrel_ptr);
+      src_stmt = SSA_NAME_DEF_STMT (op2);
     }
-  else
-    gcc_unreachable ();
 
+  gcc_checking_assert (src_stmt);
+  // Then feed this range back as the LHS of the defining statement.
+  return compute_operand_range (r, src_stmt, vr, name, src, vrel_ptr);
   // If neither operand is derived, this statement tells us nothing.
-  return res;
 }
 
 
-- 
2.40.1



[COMMITTED 4/5] Make compute_operand2_range a leaf call.

2023-07-05 Thread Andrew MacLeod via Gcc-patches
Now operand2 alone is resolved and returned as the result.  Much 
cleaner, and it removes this frame from the recursion stack.


compute_operand_range() will decide if further evaluation is required.

Bootstraps on  x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew
From 298952bcf05d298892e99adba1f4a75af17bc65a Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 5 Jul 2023 13:52:21 -0400
Subject: [PATCH 4/6] Make compute_operand2_range a leaf call.

Rather than creating long call chains, put the onus for finishing
the evaluation on the caller.

	* gimple-range-gori.cc (compute_operand_range): After calling
	compute_operand2_range, recursively call self if needed.
	(compute_operand2_range): Turn into a leaf function.
	(gori_compute::compute_operand1_and_operand2_range): Finish
	operand2 calculation.
	* gimple-range-gori.h (compute_operand2_range): Remove name param.
---
 gcc/gimple-range-gori.cc | 52 +++-
 gcc/gimple-range-gori.h  |  2 +-
 2 files changed, 26 insertions(+), 28 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index b66b9b0398c..b036ed56f02 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -639,7 +639,7 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
   if (op1 == name)
     return compute_operand1_range (r, handler, lhs, src, vrel_ptr);
   if (op2 == name)
-    return compute_operand2_range (r, handler, lhs, name, src, vrel_ptr);
+    return compute_operand2_range (r, handler, lhs, src, vrel_ptr);
 
   // NAME is not in this stmt, but one of the names in it ought to be
   // derived from it.
@@ -741,7 +741,15 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
       return compute_operand_range (r, src_stmt, vr, name, src, vrel_ptr);
     }
   else if (op2_in_chain)
-    res = compute_operand2_range (r, handler, lhs, name, src, vrel_ptr);
+    {
+      Value_Range vr (TREE_TYPE (op2));
+      if (!compute_operand2_range (vr, handler, lhs, src, vrel_ptr))
+	return false;
+      gimple *src_stmt = SSA_NAME_DEF_STMT (op2);
+      gcc_checking_assert (src_stmt);
+      // Then feed this range back as the LHS of the defining statement.
+      return compute_operand_range (r, src_stmt, vr, name, src, vrel_ptr);
+    }
   else
     gcc_unreachable ();
 
@@ -1188,7 +1196,7 @@ gori_compute::compute_operand1_range (vrange &r,
 bool
 gori_compute::compute_operand2_range (vrange &r,
 				      gimple_range_op_handler &handler,
-				      const vrange &lhs, tree name,
+				      const vrange &lhs,
 				      fur_source &src, value_relation *rel)
 {
   gimple *stmt = handler.stmt ();
 
   Value_Range op1_range (TREE_TYPE (op1));
   Value_Range op2_range (TREE_TYPE (op2));
-  Value_Range tmp (TREE_TYPE (op2));
 
   src.get_operand (op1_range, op1);
   src.get_operand (op2_range, op2);
@@ -1215,7 +1222,7 @@ gori_compute::compute_operand2_range (vrange &r,
   if (op1 == op2 && gimple_range_ssa_p (op1))
     trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), VREL_EQ);
   // Intersect with range for op2 based on lhs and op1.
-  if (!handler.calc_op2 (tmp, lhs, op1_range, trio))
+  if (!handler.calc_op2 (r, lhs, op1_range, trio))
     return false;
 
   unsigned idx;
@@ -1237,31 +1244,16 @@ gori_compute::compute_operand2_range (vrange &r,
       tracer.print (idx, "Computes ");
       print_generic_expr (dump_file, op2, TDF_SLIM);
       fprintf (dump_file, " = ");
-      tmp.dump (dump_file);
+      r.dump (dump_file);
       fprintf (dump_file, " intersect Known range : ");
       op2_range.dump (dump_file);
       fputc ('\n', dump_file);
     }
   // Intersect the calculated result with the known result and return if done.
-  if (op2 == name)
-    {
-      tmp.intersect (op2_range);
-      r = tmp;
-      if (idx)
-	tracer.trailer (idx, " produces ", true, NULL_TREE, r);
-      return true;
-    }
-  // If the calculation continues, we're using op2_range as the new LHS.
-  op2_range.intersect (tmp);
-
+  r.intersect (op2_range);
   if (idx)
-    tracer.trailer (idx, " produces ", true, op2, op2_range);
-  gimple *src_stmt = SSA_NAME_DEF_STMT (op2);
-  gcc_checking_assert (src_stmt);
-//  gcc_checking_assert (!is_import_p (op2, find.bb));
-
-  // Then feed this range back as the LHS of the defining statement.
-  return compute_operand_range (r, src_stmt, op2_range, name, src, rel);
+    tracer.trailer (idx, " produces ", true, op2, r);
+  return true;
 }
 
 // Calculate a range for NAME from both operand positions of S
@@ -1279,15 +1271,21 @@ gori_compute::compute_operand1_and_operand2_range (vrange &r,
 {
   Value_Range op_range (TREE_TYPE (name));
 
+  Value_Range vr (TREE_TYPE (handler.operand2 ()));
   // Calculate a good a range through op2.
-  if (!compute_operand2_range (r, handler, lhs, name, src, rel))
+  if (!compute_operand2_range (vr, handler, lhs, src, rel))
+return false;
+  gimple *src_stmt = SSA_NAME_DEF_STMT (handler.operand2 

[COMMITTED 3/5] Make compute_operand1_range a leaf call.

2023-07-05 Thread Andrew MacLeod via Gcc-patches
Now operand1 alone is resolved and returned as the result.  This is much 
cleaner, and removes it from the recursion stack.


compute_operand_range() will decide if further evaluation is required.

Bootstraps on  x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew

From 912b5ac49677160aada7a2d862273251406dfca5 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 5 Jul 2023 13:41:50 -0400
Subject: [PATCH 3/6] Make compute_operand1_range a leaf call.

Rather than creating long call chains, put the onus for finishing
the evaluation on the caller.

	* gimple-range-gori.cc (compute_operand_range): After calling
	compute_operand1_range, recursively call self if needed.
	(compute_operand1_range): Turn into a leaf function.
	(gori_compute::compute_operand1_and_operand2_range): Finish
	operand1 calculation.
	* gimple-range-gori.h (compute_operand1_range): Remove name param.
---
 gcc/gimple-range-gori.cc | 49 
 gcc/gimple-range-gori.h  |  2 +-
 2 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index 5429c6e3c1a..b66b9b0398c 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -637,7 +637,7 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
 
   // Handle end of lookup first.
   if (op1 == name)
-return compute_operand1_range (r, handler, lhs, name, src, vrel_ptr);
+return compute_operand1_range (r, handler, lhs, src, vrel_ptr);
   if (op2 == name)
 return compute_operand2_range (r, handler, lhs, name, src, vrel_ptr);
 
@@ -731,7 +731,15 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
 res = compute_operand1_and_operand2_range (r, handler, lhs, name, src,
 	   vrel_ptr);
   else if (op1_in_chain)
-res = compute_operand1_range (r, handler, lhs, name, src, vrel_ptr);
+{
+  Value_Range vr (TREE_TYPE (op1));
+  if (!compute_operand1_range (vr, handler, lhs, src, vrel_ptr))
+	return false;
+  gimple *src_stmt = SSA_NAME_DEF_STMT (op1);
+  gcc_checking_assert (src_stmt);
+  // Then feed this range back as the LHS of the defining statement.
+  return compute_operand_range (r, src_stmt, vr, name, src, vrel_ptr);
+}
   else if (op2_in_chain)
 res = compute_operand2_range (r, handler, lhs, name, src, vrel_ptr);
   else
@@ -1099,7 +1107,7 @@ gori_compute::refine_using_relation (tree op1, vrange &op1_range,
 bool
 gori_compute::compute_operand1_range (vrange &r,
   gimple_range_op_handler &handler,
-  const vrange &lhs, tree name,
+  const vrange &lhs,
   fur_source &src, value_relation *rel)
 {
   gimple *stmt = handler.stmt ();
@@ -1112,7 +1120,6 @@ gori_compute::compute_operand1_range (vrange &r,
 trio = rel->create_trio (lhs_name, op1, op2);
 
   Value_Range op1_range (TREE_TYPE (op1));
-  Value_Range tmp (TREE_TYPE (op1));
   Value_Range op2_range (op2 ? TREE_TYPE (op2) : TREE_TYPE (op1));
 
   // Fetch the known range for op1 in this block.
@@ -1130,7 +1137,7 @@ gori_compute::compute_operand1_range (vrange &r,
   // If op1 == op2, create a new trio for just this call.
   if (op1 == op2 && gimple_range_ssa_p (op1))
 	trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), VREL_EQ);
-  if (!handler.calc_op1 (tmp, lhs, op2_range, trio))
+  if (!handler.calc_op1 (r, lhs, op2_range, trio))
 	return false;
 }
   else
@@ -1138,7 +1145,7 @@ gori_compute::compute_operand1_range (vrange &r,
   // We pass op1_range to the unary operation.  Normally it's a
   // hidden range_for_type parameter, but sometimes having the
   // actual range can result in better information.
-  if (!handler.calc_op1 (tmp, lhs, op1_range, trio))
+  if (!handler.calc_op1 (r, lhs, op1_range, trio))
 	return false;
 }
 
@@ -1161,30 +1168,16 @@ gori_compute::compute_operand1_range (vrange &r,
   tracer.print (idx, "Computes ");
   print_generic_expr (dump_file, op1, TDF_SLIM);
   fprintf (dump_file, " = ");
-  tmp.dump (dump_file);
+  r.dump (dump_file);
   fprintf (dump_file, " intersect Known range : ");
   op1_range.dump (dump_file);
   fputc ('\n', dump_file);
 }
-  // Intersect the calculated result with the known result and return if done.
-  if (op1 == name)
-{
-  tmp.intersect (op1_range);
-  r = tmp;
-  if (idx)
-	tracer.trailer (idx, "produces ", true, name, r);
-  return true;
-}
-  // If the calculation continues, we're using op1_range as the new LHS.
-  op1_range.intersect (tmp);
 
+  r.intersect (op1_range);
   if (idx)
-tracer.trailer (idx, "produces ", true, op1, op1_range);
-  gimple *src_stmt = SSA_NAME_DEF_STMT (op1);
-  gcc_checking_assert (src_stmt);
-
-  // Then feed this range back as the LHS of the defining statement.
-  return compute_operand_range (r, src_stmt, op1_range, name, src, rel);
+tracer.trailer (idx, "produces ", true, op1, r);
+  return true;
 }
 
 
@@ -1291,7 +1284,13 @@ 

[COMMITTED 2/5] Simplify compute_operand_range for op1 and op2 case.

2023-07-05 Thread Andrew MacLeod via Gcc-patches
This patch simplifies compute_operand1_and_operand2() such that it only 
calls each routine once.  This will simplify the next couple of patches.


It also moves the determination that op1 and op2 have an 
interdependence into compute_operand_range().


Bootstraps on  x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew
From 7276248946d3eae83e5e08fc023163614c9ea9ab Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 5 Jul 2023 13:36:27 -0400
Subject: [PATCH 2/6] Simplify compute_operand_range for op1 and op2 case.

Move the check for co-dependency between 2 operands into
compute_operand_range, resulting in a much cleaner
compute_operand1_and_operand2_range routine.

	* gimple-range-gori.cc (compute_operand_range): Check for
	operand interdependence when both op1 and op2 are computed.
	(compute_operand1_and_operand2_range): No checks required now.
---
 gcc/gimple-range-gori.cc | 25 +++--
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index b0d13a8ac53..5429c6e3c1a 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -650,6 +650,17 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
   if (!op1_in_chain && !op2_in_chain)
 return false;
 
+  // If either operand is in the def chain of the other (or they are equal), it
+  // will be evaluated twice and can result in an exponential time calculation.
+  // Instead just evaluate the one operand.
+  if (op1_in_chain && op2_in_chain)
+{
+  if (in_chain_p (op1, op2) || op1 == op2)
+	op1_in_chain = false;
+  else if (in_chain_p (op2, op1))
+	op2_in_chain = false;
+}
+
   bool res = false;
   // If the lhs doesn't tell us anything only a relation can possibly enhance
   // the result.
@@ -1275,24 +1286,10 @@ gori_compute::compute_operand1_and_operand2_range (vrange &r,
 {
   Value_Range op_range (TREE_TYPE (name));
 
-  // If op1 is in the def chain of op2, we'll do the work twice to evalaute
-  // op1.  This can result in an exponential time calculation.
-  // Instead just evaluate op2, which will eventualy get to op1.
-  if (in_chain_p (handler.operand1 (), handler.operand2 ()))
-return compute_operand2_range (r, handler, lhs, name, src, rel);
-
-  // Likewise if op2 is in the def chain of op1.
-  if (in_chain_p (handler.operand2 (), handler.operand1 ()))
-return compute_operand1_range (r, handler, lhs, name, src, rel);
-
   // Calculate a good a range through op2.
   if (!compute_operand2_range (r, handler, lhs, name, src, rel))
 return false;
 
-  // If op1 == op2 there is again no need to go further.
-  if (handler.operand1 () == handler.operand2 ())
-return true;
-
   // Now get the range thru op1.
   if (!compute_operand1_range (op_range, handler, lhs, name, src, rel))
 return false;
-- 
2.40.1



[COMMITTED 1/5] Move relation discovery into compute_operand_range

2023-07-05 Thread Andrew MacLeod via Gcc-patches

This is a set of 5 patches which clean up GORI's compute_operand routines.

This is the mechanism GORI uses to calculate ranges from the bottom of 
the routine back thru definitions in the block to the name that is 
requested.


Currently, compute_operand_range() is called on a stmt, and it divides 
the work based on which operands are used to get back to the requested 
name.  It calls compute_operand1_range or compute_operand2_range or 
compute_operand1_and_operand2_range. If the specified name is not on 
this statement, then a call back to compute_operand_range on the 
definition statement is made.


This means the call chain is recursive, but involves alternating 
functions.  This patch changes compute_operand1_range and 
compute_operand2_range into leaf functions; compute_operand_range is 
still recursive, but has a much smaller stack footprint, and the 
recursive call also becomes a tail call.


I tried removing the recursion, but at this point, removing the 
recursion is a performance hit :-P   stay tuned on that one.


This patch moves some common code for relation discovery from 
compute_operand[12]range into compute_operand_range.


Bootstraps on  x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew
From 290798faef706c335bd346b13771f977ddedb415 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 4 Jul 2023 11:28:52 -0400
Subject: [PATCH 1/6] Move relation discovery into compute_operand_range

compute_operand1_range and compute_operand2_range were both doing
relation discovery between the 2 operands... move it into a common area.

	* gimple-range-gori.cc (compute_operand_range): Check for
	a relation between op1 and op2 and use that instead.
	(compute_operand1_range): Don't look for a relation override.
	(compute_operand2_range): Ditto.
---
 gcc/gimple-range-gori.cc | 42 +---
 1 file changed, 13 insertions(+), 29 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index 4ee0ae36014..b0d13a8ac53 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -623,6 +623,18 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
   tree op1 = gimple_range_ssa_p (handler.operand1 ());
   tree op2 = gimple_range_ssa_p (handler.operand2 ());
 
+  // If there is a relation betwen op1 and op2, use it instead as it is
+  // likely to be more applicable.
+  if (op1 && op2)
+{
+  relation_kind k = handler.op1_op2_relation (lhs);
+  if (k != VREL_VARYING)
+	{
+	  vrel.set_relation (k, op1, op2);
+	  vrel_ptr = &vrel;
+	}
+}
+
   // Handle end of lookup first.
   if (op1 == name)
 return compute_operand1_range (r, handler, lhs, name, src, vrel_ptr);
@@ -1079,7 +1091,6 @@ gori_compute::compute_operand1_range (vrange &r,
   const vrange &lhs, tree name,
   fur_source &src, value_relation *rel)
 {
-  value_relation local_rel;
   gimple *stmt = handler.stmt ();
   tree op1 = handler.operand1 ();
   tree op2 = handler.operand2 ();
@@ -1088,7 +1099,6 @@ gori_compute::compute_operand1_range (vrange &r,
   relation_trio trio;
   if (rel)
 trio = rel->create_trio (lhs_name, op1, op2);
-  relation_kind op_op = trio.op1_op2 ();
 
   Value_Range op1_range (TREE_TYPE (op1));
   Value_Range tmp (TREE_TYPE (op1));
@@ -1102,19 +1112,7 @@ gori_compute::compute_operand1_range (vrange &r,
 {
   src.get_operand (op2_range, op2);
 
-  // If there is a relation betwen op1 and op2, use it instead.
-  // This allows multiple relations to be processed in compound logicals.
-  if (gimple_range_ssa_p (op1) && gimple_range_ssa_p (op2))
-	{
-	  relation_kind k = handler.op1_op2_relation (lhs);
-	  if (k != VREL_VARYING)
-	{
-	  op_op = k;
-	  local_rel.set_relation (op_op, op1, op2);
-	  rel = &local_rel;
-	}
-	}
-
+  relation_kind op_op = trio.op1_op2 ();
   if (op_op != VREL_VARYING)
 	refine_using_relation (op1, op1_range, op2, op2_range, src, op_op);
 
@@ -1189,7 +1187,6 @@ gori_compute::compute_operand2_range (vrange &r,
   const vrange &lhs, tree name,
   fur_source &src, value_relation *rel)
 {
-  value_relation local_rel;
   gimple *stmt = handler.stmt ();
   tree op1 = handler.operand1 ();
   tree op2 = handler.operand2 ();
@@ -1207,19 +1204,6 @@ gori_compute::compute_operand2_range (vrange &r,
 trio = rel->create_trio (lhs_name, op1, op2);
   relation_kind op_op = trio.op1_op2 ();
 
-  // If there is a relation betwen op1 and op2, use it instead.
-  // This allows multiple relations to be processed in compound logicals.
-  if (gimple_range_ssa_p (op1) && gimple_range_ssa_p (op2))
-{
-  relation_kind k = handler.op1_op2_relation (lhs);
-  if (k != VREL_VARYING)
-	{
-	  op_op = k;
-	  local_rel.set_relation (op_op, op1, op2);
-	  rel = &local_rel;
-	}
-}
-
   if (op_op != VREL_VARYING)
 refine_using_relation (op1, op1_range, op2, op2_range, src, op_op);
 
-- 
2.40.1



Re: Enable ranger for ipa-prop

2023-06-27 Thread Andrew MacLeod via Gcc-patches



On 6/27/23 12:24, Jan Hubicka wrote:

On 6/27/23 09:19, Jan Hubicka wrote:

Hi,
as shown in the testcase (which would eventually be useful for
optimizing std::vector's push_back), ipa-prop can use context dependent ranger
queries for better value range info.

Bootstrapped/regtested x86_64-linux, OK?

Quick question.

When you call enable_ranger(), it gives you a ranger back, but it also sets
the range query for the specified context to that same instance.  So from
that point forward all existing calls to get_range_query(fun) will now use
the context ranger

enable_ranger (struct function *fun, bool use_imm_uses)
<...>
   gcc_checking_assert (!fun->x_range_query);
   r = new gimple_ranger (use_imm_uses);
   fun->x_range_query = r;
   return r;

So you probably don't have to pass a ranger around?  Or is that ranger you
are passing for a different context?

I don't need to pass the ranger around - I just did not know that.  I thought
the default one is the context-insensitive one; I will simplify the
patch.  I need to look more into how ranger works.



No need. Its magic!

Andrew


PS. well, we tried to provide an interface to make it as seamless as 
possible with the whole range-query thing.

10,000 foot view:

The range_query object (value-range.h) replaces the old 
SSA_NAME_RANGE_INFO macros.  It adds the ability to provide an optional 
context in the form of a stmt or edge to any query.  If no context is 
provided, it simply provides the global value. There are basically 3 
queries:


  virtual bool range_of_expr (vrange &r, tree expr, gimple * = NULL);
  virtual bool range_on_edge (vrange &r, edge, tree expr);
  virtual bool range_of_stmt (vrange &r, gimple *, tree name = NULL);

- range_of_stmt evaluates the DEF of the stmt, but can also evaluate 
things like "if (x < y)" that have an implicit boolean LHS.  If NAME is 
provided, it needs to match the DEF.  That's mostly flexibility for 
dealing with something like multiple defs; you can specify which def.
- range_on_edge provides the range of an ssa-name as it would be valued 
on a specific edge.
- range_of_expr is used to ask for the range of any ssa_name or tree 
expression as it occurs on entry to a specific stmt.  Normally we use 
this to ask for the range of an ssa-name as it's used on a stmt, but it 
can evaluate expression trees as well.


These requests are not limited to names which occur on the stmt.  We can 
also recompute values by asking for the range of a value as it occurs at 
other locations in the IL.  For example:

x_2 = b_3 + 5
<...>
if (b_3 > 7)
   blah (x_2)
When we ask for the range of x_2 at the call to blah, ranger actually 
recomputes x_2 = b_3 + 5 at the call site by asking for the range of b_3 
on the outgoing edge leading to the block with the call to blah, and 
thus uses b_3 == [8, +INF] to re-evaluate x_2.


Internally, ranger uses the exact same API to evaluate everything that 
external clients use.



The default query object is global_range_query, which ignores any 
location (stmt or edge) information provided, and simply returns the 
global value.  This amounts to an identical result to the old 
SSA_NAME_RANGE_INFO request, and when get_range_query () is called, this 
is the default range_query that is provided.


When a pass calls enable_ranger(), the default query is changed to this 
new instance (which supports context information), and any further calls 
to get_range_query() will now invoke ranger instead of the 
global_range_query.  It uses its on-demand support to go and answer the 
range question by looking at only what it needs to in order to answer 
the question.  This is the exact same ranger code base that all the VRP 
passes use, so you get almost the same level of power to answer 
questions.  There are just a couple of little things that VRP enables 
because it does a DOM walk, but they are fairly minor for most cases.


if you use the range_query API, and do not provide a stmt or an edge, 
then we can't provide contextual range information, and you'll go back 
to getting just global information again.


I think Aldy has converted everything to the new range_query API...  
which means any pass that could benefit from contextual range 
information , in theory, only needs to enable_ranger() and provide a 
context stmt or edge on the range query call.


Just remember to disable it when done :-)

Andrew



Re: Enable ranger for ipa-prop

2023-06-27 Thread Andrew MacLeod via Gcc-patches



On 6/27/23 09:19, Jan Hubicka wrote:

Hi,
as shown in the testcase (which would eventually be useful for
optimizing std::vector's push_back), ipa-prop can use context dependent ranger
queries for better value range info.

Bootstrapped/regtested x86_64-linux, OK?


Quick question.

When you call enable_ranger(), it gives you a ranger back, but it also 
sets the range query for the specified context to that same instance.  
So from that point forward all existing calls to get_range_query(fun) 
will now use the context ranger


enable_ranger (struct function *fun, bool use_imm_uses)
<...>
  gcc_checking_assert (!fun->x_range_query);
  r = new gimple_ranger (use_imm_uses);
  fun->x_range_query = r;
  return r;

So you probably don't have to pass a ranger around?  Or is that ranger 
you are passing for a different context?



Andrew




[COMMITTED] PR tree-optimization/110251 - Avoid redundant GORI calcuations.

2023-06-26 Thread Andrew MacLeod via Gcc-patches
When calculating ranges, GORI evaluates the chain of definitions until 
it finds the desired name.


  _4 = (short unsigned int) c.2_1;
  _5 = _4 + 65535;
  a_lsm.19_30 = a;
  _49 = _4 + 65534;
  _12 = _5 & _49;
  _46 = _12 + 65535;
  _48 = _12 & _46;    <<--
  if (_48 != 0)

When evaluating c.2_1 on the true edge, GORI starts with _48 with a 
range of [1, +INF]


Looking at _48's operands (_12 and _46), note that it depends on both 
_12 and _46.  Also note that _46 is also dependent on _12.


GORI currently simply calculates c.2_1 through both operands.  This means 
_12 will be evaluated back thru to c.2_1, and then _46 will do the same, 
and the results will be combined.  That means the statements from _12 
back to c.2_1 are actually calculated twice.


This PR produces a sequence of code which is quite long, with cascading 
chains of dependencies like this that feed each other.  This becomes 
geometric/exponential growth in calculation time, over and over.


This patch identifies the situation where one operand depends on the 
other, and simply evaluates only the one which includes the other.  In 
the above case, it simply winds back thru _46, ignoring the _12 operand 
in the definition of _48.  During the process of evaluating _46, we 
eventually get to evaluating _12 anyway, so we don't lose much, if 
anything.  This results in a much more consistently linear-time 
evaluation.


Bootstraps on x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew







commit 6246ee062062b53275c229daf8676ccaa535f419
Author: Andrew MacLeod 
Date:   Thu Jun 22 10:00:12 2023 -0400

Avoid redundant GORI calcuations.

When GORI evaluates a statement, if operand 1 and 2 are both in the
dependency chain, GORI evaluates the name through both operands sequentially
and combines the results.

If either operand is in the dependency chain of the other, this
evaluation will do the same work twice, for questionable gain.
Instead, simple evaluate only the operand which depends on the other
and keep the evaluation linear in time.

* gimple-range-gori.cc (compute_operand1_and_operand2_range):
Check for interdependence between operands 1 and 2.

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index abc70cd54ee..4ee0ae36014 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -1291,13 +1291,26 @@ gori_compute::compute_operand1_and_operand2_range (vrange &r,
 {
   Value_Range op_range (TREE_TYPE (name));
 
-  // Calculate a good a range for op2.  Since op1 == op2, this will
-  // have already included whatever the actual range of name is.
-  if (!compute_operand2_range (op_range, handler, lhs, name, src, rel))
+  // If op1 is in the def chain of op2, we'll do the work twice to evalaute
+  // op1.  This can result in an exponential time calculation.
+  // Instead just evaluate op2, which will eventualy get to op1.
+  if (in_chain_p (handler.operand1 (), handler.operand2 ()))
+return compute_operand2_range (r, handler, lhs, name, src, rel);
+
+  // Likewise if op2 is in the def chain of op1.
+  if (in_chain_p (handler.operand2 (), handler.operand1 ()))
+return compute_operand1_range (r, handler, lhs, name, src, rel);
+
+  // Calculate a good a range through op2.
+  if (!compute_operand2_range (r, handler, lhs, name, src, rel))
 return false;
 
+  // If op1 == op2 there is again no need to go further.
+  if (handler.operand1 () == handler.operand2 ())
+return true;
+
   // Now get the range thru op1.
-  if (!compute_operand1_range (r, handler, lhs, name, src, rel))
+  if (!compute_operand1_range (op_range, handler, lhs, name, src, rel))
 return false;
 
   // Both operands have to be simultaneously true, so perform an intersection.


[PATCH] PR tree-optimization/110266 - Check for integer only complex

2023-06-15 Thread Andrew MacLeod via Gcc-patches
With the expanded capabilities of range-op dispatch, floating point 
complex objects can appear when folding, which they couldn't before.  In 
the processing for extracting integers from complex ints, make sure it 
actually is an integer complex.


Bootstraps on x86_64-pc-linux-gnu.  Regtesting currently under way.  
Assuming there are no issues, I will push this.


Andrew

From 2ba20a9e7b41fbcf1f03d5447e14b9b7b174fead Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Thu, 15 Jun 2023 11:59:55 -0400
Subject: [PATCH] Check for integer only complex.

With the expanded capabilities of range-op dispatch, floating point
complex objects can appear when folding, which they couldn't before.
In the processing for extracting integers from complex ints, make sure it
is an integer complex.

	PR tree-optimization/110266
	gcc/
	* gimple-range-fold.cc (adjust_imagpart_expr): Check for integer
	complex type.
	(adjust_realpart_expr): Ditto.

	gcc/testsuite/
	* gcc.dg/pr110266.c: New.
---
 gcc/gimple-range-fold.cc|  6 --
 gcc/testsuite/gcc.dg/pr110266.c | 20 
 2 files changed, 24 insertions(+), 2 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr110266.c

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 173d9f386c5..b4018d08d2b 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -506,7 +506,8 @@ adjust_imagpart_expr (vrange &res, const gimple *stmt)
   && gimple_assign_rhs_code (def_stmt) == COMPLEX_CST)
 {
   tree cst = gimple_assign_rhs1 (def_stmt);
-  if (TREE_CODE (cst) == COMPLEX_CST)
+  if (TREE_CODE (cst) == COMPLEX_CST
+	  && TREE_CODE (TREE_TYPE (TREE_TYPE (cst))) == INTEGER_TYPE)
 	{
 	  wide_int w = wi::to_wide (TREE_IMAGPART (cst));
 	  int_range<1> imag (TREE_TYPE (TREE_IMAGPART (cst)), w, w);
@@ -533,7 +534,8 @@ adjust_realpart_expr (vrange &res, const gimple *stmt)
   && gimple_assign_rhs_code (def_stmt) == COMPLEX_CST)
 {
   tree cst = gimple_assign_rhs1 (def_stmt);
-  if (TREE_CODE (cst) == COMPLEX_CST)
+  if (TREE_CODE (cst) == COMPLEX_CST
+	  && TREE_CODE (TREE_TYPE (TREE_TYPE (cst))) == INTEGER_TYPE)
 	{
 	  wide_int imag = wi::to_wide (TREE_REALPART (cst));
 	  int_range<2> tmp (TREE_TYPE (TREE_REALPART (cst)), imag, imag);
diff --git a/gcc/testsuite/gcc.dg/pr110266.c b/gcc/testsuite/gcc.dg/pr110266.c
new file mode 100644
index 000..0b2acb5a791
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr110266.c
@@ -0,0 +1,20 @@
+/* { dg-do compile } */
+/* { dg-options "-O2" } */
+
+#include 
+
+int Hann_i, PsyBufferUpdate_psyInfo_0, PsyBufferUpdate_i;
+double *mdct_data;
+double PsyBufferUpdate_sfreq;
+void PsyBufferUpdate() {
+  if (PsyBufferUpdate_psyInfo_0 == 4)
+for (; Hann_i;)
+  ;
+  {
+double xr_0 = cos(PsyBufferUpdate_psyInfo_0);
+PsyBufferUpdate_sfreq = sin(PsyBufferUpdate_psyInfo_0);
+for (; PsyBufferUpdate_psyInfo_0; PsyBufferUpdate_i++)
+  mdct_data[PsyBufferUpdate_i] = xr_0 * PsyBufferUpdate_sfreq;
+  }
+}
+
-- 
2.40.1



[COMMITTED 12/17] - Add a hybrid MAX_EXPR operator for integer and pointer.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Add a hybrid operator to choose between integer and pointer versions at 
runtime.


This is the last use of the pointer table, so it is also removed.

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From cd194f582c5be3cc91e025e304e2769f61ceb6b6 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:35:18 -0400
Subject: [PATCH 12/17] Add a hybrid MAX_EXPR operator for integer and pointer.

This adds an operator to the unified table for MAX_EXPR which will
select either the pointer or integer version based on the type passed
to the method.   This is for use until we have a separate PRANGE class.

This also removes the pointer table which is no longer needed.

	* range-op-mixed.h (operator_max): Remove final.
	* range-op-ptr.cc (pointer_table::pointer_table): Remove MAX_EXPR.
	(pointer_table::pointer_table): Remove.
	(class hybrid_max_operator): New.
	(range_op_table::initialize_pointer_ops): Add hybrid_max_operator.
	* range-op.cc (pointer_tree_table): Remove.
	(unified_table::unified_table): Comment out MAX_EXPR.
	(get_op_handler): Remove check of pointer table.
	* range-op.h (class pointer_table): Remove.
---
 gcc/range-op-mixed.h |  6 +++---
 gcc/range-op-ptr.cc  | 30 --
 gcc/range-op.cc  | 10 ++
 gcc/range-op.h   |  9 -
 4 files changed, 25 insertions(+), 30 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index a65935435c2..bdc488b8754 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -636,10 +636,10 @@ class operator_max : public range_operator
 {
 public:
   void update_bitmask (irange &r, const irange &lh,
-  const irange &rh) const final override;
-private:
+  const irange &rh) const override;
+protected:
   void wi_fold (irange &r, tree type, const wide_int &lh_lb,
 		const wide_int &lh_ub, const wide_int &rh_lb,
-		const wide_int &rh_ub) const final override;
+		const wide_int &rh_ub) const override;
 };
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op-ptr.cc b/gcc/range-op-ptr.cc
index 483e43ca994..ea66fe9056b 100644
--- a/gcc/range-op-ptr.cc
+++ b/gcc/range-op-ptr.cc
@@ -157,7 +157,6 @@ pointer_min_max_operator::wi_fold (irange &r, tree type,
 r.set_varying (type);
 }
 
-
 class pointer_and_operator : public range_operator
 {
 public:
@@ -265,14 +264,6 @@ operator_pointer_diff::op1_op2_relation_effect (irange &lhs_range, tree type,
 	rel);
 }
 
-// When PRANGE is implemented, these are all the opcodes which are currently
-// expecting routines with PRANGE signatures.
-
-pointer_table::pointer_table ()
-{
-  set (MAX_EXPR, op_ptr_min_max);
-}
-
 // --
 // Hybrid operators for the 4 operations which integer and pointers share,
 // but which have different implementations.  Simply check the type in
@@ -404,8 +395,26 @@ public:
 }
 } op_hybrid_min;
 
+class hybrid_max_operator : public operator_max
+{
+public:
+  void update_bitmask (irange &r, const irange &lh,
+		   const irange &rh) const final override
+{
+  if (!r.undefined_p () && INTEGRAL_TYPE_P (r.type ()))
+	operator_max::update_bitmask (r, lh, rh);
+}
 
-
+  void wi_fold (irange &r, tree type, const wide_int &lh_lb,
+		const wide_int &lh_ub, const wide_int &rh_lb,
+		const wide_int &rh_ub) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_max::wi_fold (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
+  else
+	return op_ptr_min_max.wi_fold (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
+}
+} op_hybrid_max;
 
 // Initialize any pointer operators to the primary table
 
@@ -417,4 +426,5 @@ range_op_table::initialize_pointer_ops ()
   set (BIT_AND_EXPR, op_hybrid_and);
   set (BIT_IOR_EXPR, op_hybrid_or);
   set (MIN_EXPR, op_hybrid_min);
+  set (MAX_EXPR, op_hybrid_max);
 }
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 481f3b1324d..046b7691bb6 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -49,8 +49,6 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-ssa-ccp.h"
 #include "range-op-mixed.h"
 
-pointer_table pointer_tree_table;
-
 // Instantiate a range_op_table for unified operations.
 class unified_table : public range_op_table
 {
@@ -124,18 +122,14 @@ unified_table::unified_table ()
   // set (BIT_AND_EXPR, op_bitwise_and);
   // set (BIT_IOR_EXPR, op_bitwise_or);
   // set (MIN_EXPR, op_min);
-  set (MAX_EXPR, op_max);
+  // set (MAX_EXPR, op_max);
 }
 
 // The tables are hidden and accessed via a simple extern function.
 
 range_operator *
-get_op_handler (enum tree_code code, tree type)
+get_op_handler (enum tree_code code, tree)
 {
-  // If this is pointer type and there is pointer specifc routine, use it.
-  if (POINTER_TYPE_P (type) && pointer_tree_table[code])
-return pointer_tree_table[code];
-
   return unified_tree_table[code];
 }
 
diff --git a/gcc/range-op.h b/gcc/range-op.h
index 08c51bace40..15c45137af2 100644
--- a/gcc/range-op.h
+++ b/gcc/range-op.h
@@ -299,15 +299,6 @@ 

[COMMITTED 17/17] PR tree-optimization/110205 - Add some overrides.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Add some missing overrides, and add the dispatch pattern for FII which 
will be used for integer to float conversion.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 1bed4b49302e2fd7bf89426117331ae89ebdc90b Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Mon, 12 Jun 2023 09:47:43 -0400
Subject: [PATCH 17/17] Add some overrides.

	PR tree-optimization/110205
	* range-op-float.cc (range_operator::fold_range): Add default FII
	fold routine.
	(Class operator_gt): Add missing final overrides.
	* range-op.cc (range_op_handler::fold_range): Add RO_FII case.
	(operator_lshift ::update_bitmask): Add final override.
	(operator_rshift ::update_bitmask): Add final override.
	* range-op.h (range_operator::fold_range): Add FII prototype.
---
 gcc/range-op-float.cc | 10 ++
 gcc/range-op-mixed.h  |  9 +
 gcc/range-op.cc   | 10 --
 gcc/range-op.h|  4 
 4 files changed, 27 insertions(+), 6 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 24f2235884f..f5c0cec75c4 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -157,6 +157,16 @@ range_operator::fold_range (irange  ATTRIBUTE_UNUSED,
   return false;
 }
 
+bool
+range_operator::fold_range (frange  ATTRIBUTE_UNUSED,
+			tree type ATTRIBUTE_UNUSED,
+			const irange  ATTRIBUTE_UNUSED,
+			const irange  ATTRIBUTE_UNUSED,
+			relation_trio) const
+{
+  return false;
+}
+
 bool
 range_operator::op1_range (frange  ATTRIBUTE_UNUSED,
  tree type ATTRIBUTE_UNUSED,
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index bdc488b8754..6944742ecbc 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -239,26 +239,27 @@ public:
   using range_operator::op1_op2_relation;
   bool fold_range (irange , tree type,
 		   const irange , const irange ,
-		   relation_trio = TRIO_VARYING) const;
+		   relation_trio = TRIO_VARYING) const final override;
   bool fold_range (irange , tree type,
 		   const frange , const frange ,
 		   relation_trio = TRIO_VARYING) const final override;
 
   bool op1_range (irange , tree type,
 		  const irange , const irange ,
-		  relation_trio = TRIO_VARYING) const;
+		  relation_trio = TRIO_VARYING) const final override;
   bool op1_range (frange , tree type,
 		  const irange , const frange ,
 		  relation_trio = TRIO_VARYING) const final override;
 
   bool op2_range (irange , tree type,
 		  const irange , const irange ,
-		  relation_trio = TRIO_VARYING) const;
+		  relation_trio = TRIO_VARYING) const final override;
   bool op2_range (frange , tree type,
 		  const irange , const frange ,
 		  relation_trio = TRIO_VARYING) const final override;
   relation_kind op1_op2_relation (const irange ) const final override;
-  void update_bitmask (irange , const irange , const irange ) const;
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override;
 };
 
 class operator_ge :  public range_operator
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 8a661fdb042..f0dff53ec1e 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -219,6 +219,10 @@ range_op_handler::fold_range (vrange , tree type,
 	return m_operator->fold_range (as_a  (r), type,
    as_a  (lh),
    as_a  (rh), rel);
+  case RO_FII:
+	return m_operator->fold_range (as_a  (r), type,
+   as_a  (lh),
+   as_a  (rh), rel);
   default:
 	return false;
 }
@@ -2401,7 +2405,8 @@ public:
 tree type,
 const wide_int &,
 const wide_int &) const;
-  void update_bitmask (irange , const irange , const irange ) const
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override
 { update_known_bitmask (r, LSHIFT_EXPR, lh, rh); }
 } op_lshift;
 
@@ -2432,7 +2437,8 @@ public:
 	   const irange ,
 	   const irange ,
 	   relation_kind rel) const;
-  void update_bitmask (irange , const irange , const irange ) const
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override
 { update_known_bitmask (r, RSHIFT_EXPR, lh, rh); }
 } op_rshift;
 
diff --git a/gcc/range-op.h b/gcc/range-op.h
index 3602bc4e123..af94c2756a7 100644
--- a/gcc/range-op.h
+++ b/gcc/range-op.h
@@ -72,6 +72,10 @@ public:
 			   const frange ,
 			   const frange ,
 			   relation_trio = TRIO_VARYING) const;
+  virtual bool fold_range (frange , tree type,
+			   const irange ,
+			   const irange ,
+			   relation_trio = TRIO_VARYING) const;
 
   // Return the range for op[12] in the general case.  LHS is the range for
   // the LHS of the expression, OP[12]is the range for the other
-- 
2.40.1



[COMMITTED 10/17] - Add a hybrid BIT_IOR_EXPR operator for integer and pointer.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Add a hybrid operator to choose between integer and pointer versions at 
runtime.
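The hybrid approach can be sketched with a minimal self-contained model (hypothetical names, not GCC's actual classes): one table entry whose methods check the type at the call site and forward to either the integer or the pointer implementation.

```cpp
#include <cassert>

// Hypothetical model of the hybrid-operator pattern, not GCC's real code.
enum type_kind { INTEGRAL, POINTER };

struct bitwise_or_op
{
  // Integer semantics; real range math would go here.
  virtual int fold (type_kind, int a, int b) const { return a | b; }
  virtual ~bitwise_or_op () {}
};

struct pointer_or_op : bitwise_or_op
{
  // Pointer semantics differ (e.g. null/nonzero tracking), modeled trivially.
  int fold (type_kind, int a, int b) const override { return (a || b) ? 1 : 0; }
};

// Hybrid: one registered operator that dispatches on the type at runtime.
struct hybrid_or_op : bitwise_or_op
{
  pointer_or_op ptr_impl;
  int fold (type_kind t, int a, int b) const override
  {
    if (t == INTEGRAL)
      return bitwise_or_op::fold (t, a, b);
    return ptr_impl.fold (t, a, b);
  }
};
```

Once a separate pointer range class exists, the hybrid class disappears and each table simply registers its own operator.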


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 80f402e832a2ce402ee1562030d5c67ebc276f7c Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:33:17 -0400
Subject: [PATCH 10/17] Add a hybrid BIT_IOR_EXPR operator for integer and
 pointer.

This adds an operator to the unified table for BIT_IOR_EXPR which will
select either the pointer or integer version based on the type passed
to the method.   This is for use until we have a separate PRANGE class.

	* range-op-mixed.h (operator_bitwise_or): Remove final.
	* range-op-ptr.cc (pointer_table::pointer_table): Remove BIT_IOR_EXPR.
	(class hybrid_or_operator): New.
	(range_op_table::initialize_pointer_ops): Add hybrid_or_operator.
	* range-op.cc (unified_table::unified_table): Comment out BIT_IOR_EXPR.
---
 gcc/range-op-mixed.h | 10 -
 gcc/range-op-ptr.cc  | 52 ++--
 gcc/range-op.cc  |  4 ++--
 3 files changed, 57 insertions(+), 9 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 4177818e4b9..e4852e974c4 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -609,16 +609,16 @@ public:
   using range_operator::op2_range;
   bool op1_range (irange , tree type,
 		  const irange , const irange ,
-		  relation_trio rel = TRIO_VARYING) const final override;
+		  relation_trio rel = TRIO_VARYING) const override;
   bool op2_range (irange , tree type,
 		  const irange , const irange ,
-		  relation_trio rel = TRIO_VARYING) const final override;
+		  relation_trio rel = TRIO_VARYING) const override;
   void update_bitmask (irange , const irange ,
-		   const irange ) const final override;
-private:
+		   const irange ) const override;
+protected:
   void wi_fold (irange , tree type, const wide_int _lb,
 		const wide_int _ub, const wide_int _lb,
-		const wide_int _ub) const final override;
+		const wide_int _ub) const override;
 };
 
 class operator_min : public range_operator
diff --git a/gcc/range-op-ptr.cc b/gcc/range-op-ptr.cc
index 941026994ed..7b22d0bf05b 100644
--- a/gcc/range-op-ptr.cc
+++ b/gcc/range-op-ptr.cc
@@ -184,9 +184,9 @@ pointer_and_operator::wi_fold (irange , tree type,
 
 class pointer_or_operator : public range_operator
 {
+public:
   using range_operator::op1_range;
   using range_operator::op2_range;
-public:
   virtual bool op1_range (irange , tree type,
 			  const irange ,
 			  const irange ,
@@ -270,7 +270,6 @@ operator_pointer_diff::op1_op2_relation_effect (irange _range, tree type,
 
 pointer_table::pointer_table ()
 {
-  set (BIT_IOR_EXPR, op_pointer_or);
   set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
 }
@@ -334,6 +333,54 @@ public:
 }
 } op_hybrid_and;
 
+// Temporary class which dispatches routines to either the INT version or
+// the pointer version depending on the type.  Once PRANGE is a range
+// class, we can remove the hybrid.
+
+class hybrid_or_operator : public operator_bitwise_or
+{
+public:
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::lhs_op1_relation;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_bitwise_or::op1_range (r, type, lhs, op2, rel);
+  else
+	return op_pointer_or.op1_range (r, type, lhs, op2, rel);
+}
+  bool op2_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_bitwise_or::op2_range (r, type, lhs, op1, rel);
+  else
+	return op_pointer_or.op2_range (r, type, lhs, op1, rel);
+}
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override
+{
+  if (!r.undefined_p () && INTEGRAL_TYPE_P (r.type ()))
+	operator_bitwise_or::update_bitmask (r, lh, rh);
+}
+
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_bitwise_or::wi_fold (r, type, lh_lb, lh_ub,
+	  rh_lb, rh_ub);
+  else
+	return op_pointer_or.wi_fold (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
+}
+} op_hybrid_or;
+
+
 
 // Initialize any pointer operators to the primary table
 
@@ -343,4 +390,5 @@ range_op_table::initialize_pointer_ops ()
   set (POINTER_PLUS_EXPR, op_pointer_plus);
   set (POINTER_DIFF_EXPR, op_pointer_diff);
   set (BIT_AND_EXPR, op_hybrid_and);
+  set (BIT_IOR_EXPR, op_hybrid_or);
 }
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index dcb922143ce..0a9a3297de7 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -121,8 +121,8 @@ unified_table::unified_table ()
   // is used until there is a pointer range class.  Then we can simply
   // uncomment 

[COMMITTED 15/17] - Provide a default range_operator via range_op_handler.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
This provides range_op_handler with a default range_operator, so you no 
longer need to check if it has a valid handler or not.


The valid check now turns into an "is this something other than the default 
operator" check.  It means you can now simply invoke fold without 
checking, i.e. instead of


range_op_handler handler(CONVERT_EXPR);
if (handler && handler.fold_range (..))

we can simply write
if (range_op_handler(CONVERT_EXPR).fold_range (..))

The new method range_op() will return a pointer to the custom 
range_operator, or NULL if it is the default.  This allows 
range_op_handler() to behave as if you were indexing a range table, if 
that happens to be needed.
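The default-operator scheme can be modeled with a short self-contained sketch (illustrative names and simplified signatures, not GCC's actual classes): lookups never return null, fold_range on the default simply reports "nothing known", and validity means "something other than the default is installed".

```cpp
#include <cassert>

// Hypothetical model of the default-operator pattern, not GCC's real code.
struct range_operator
{
  // Default behavior: no range information can be computed.
  virtual bool fold_range (int &r, int a, int b) const { return false; }
  virtual ~range_operator () {}
};

struct plus_operator : range_operator
{
  bool fold_range (int &r, int a, int b) const override
  { r = a + b; return true; }
};

static range_operator default_operator;
static plus_operator op_plus;

struct range_op_handler
{
  range_operator *m_operator;
  // A real table lookup would go here; a flag stands in for "entry exists".
  explicit range_op_handler (bool have_entry)
    : m_operator (have_entry ? (range_operator *) &op_plus
			     : &default_operator) {}
  // Valid only when something other than the default is installed.
  explicit operator bool () const { return m_operator != &default_operator; }
  bool fold_range (int &r, int a, int b) const
  { return m_operator->fold_range (r, a, b); }
};
```

Because the default safely returns false, callers can chain construction and fold_range in one expression without a separate validity check.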


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 3c4399657d35a0b5bf7caeb88c6ddc0461322d3f Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:59:38 -0400
Subject: [PATCH 15/17] Provide a default range_operator via range_op_handler.

range_op_handler now provides a default range_operator for any opcode,
so there is no longer a need to check for a valid operator.

	* gimple-range-op.cc (gimple_range_op_handler): Set m_operator
	manually as there is no access to the default operator.
	(cfn_copysign::fold_range): Don't check for validity.
	(cfn_ubsan::fold_range): Ditto.
	(gimple_range_op_handler::maybe_builtin_call): Don't set to NULL.
	* range-op.cc (default_operator): New.
	(range_op_handler::range_op_handler): Use default_operator
	instead of NULL.
	(range_op_handler::operator bool): Move from header, compare
	against default operator.
	(range_op_handler::range_op): New.
	* range-op.h (range_op_handler::operator bool): Move.
---
 gcc/gimple-range-op.cc | 28 +---
 gcc/range-op.cc| 32 ++--
 gcc/range-op.h |  3 ++-
 3 files changed, 45 insertions(+), 18 deletions(-)

diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
index 4cbc981ee04..021a9108ecf 100644
--- a/gcc/gimple-range-op.cc
+++ b/gcc/gimple-range-op.cc
@@ -120,21 +120,22 @@ gimple_range_op_handler::supported_p (gimple *s)
 // Construct a handler object for statement S.
 
 gimple_range_op_handler::gimple_range_op_handler (gimple *s)
-  : range_op_handler (get_code (s))
 {
+  range_op_handler oper (get_code (s));
   m_stmt = s;
   m_op1 = NULL_TREE;
   m_op2 = NULL_TREE;
 
-  if (m_operator)
+  if (oper)
 switch (gimple_code (m_stmt))
   {
 	case GIMPLE_COND:
 	  m_op1 = gimple_cond_lhs (m_stmt);
 	  m_op2 = gimple_cond_rhs (m_stmt);
 	  // Check that operands are supported types.  One check is enough.
-	  if (!Value_Range::supports_type_p (TREE_TYPE (m_op1)))
-	m_operator = NULL;
+	  if (Value_Range::supports_type_p (TREE_TYPE (m_op1)))
+	m_operator = oper.range_op ();
+	  gcc_checking_assert (m_operator);
 	  return;
 	case GIMPLE_ASSIGN:
 	  m_op1 = gimple_range_base_of_assignment (m_stmt);
@@ -153,7 +154,9 @@ gimple_range_op_handler::gimple_range_op_handler (gimple *s)
 	m_op2 = gimple_assign_rhs2 (m_stmt);
 	  // Check that operands are supported types.  One check is enough.
 	  if ((m_op1 && !Value_Range::supports_type_p (TREE_TYPE (m_op1
-	m_operator = NULL;
+	return;
+	  m_operator = oper.range_op ();
+	  gcc_checking_assert (m_operator);
 	  return;
 	default:
 	  gcc_unreachable ();
@@ -165,6 +168,7 @@ gimple_range_op_handler::gimple_range_op_handler (gimple *s)
 maybe_builtin_call ();
   else
 maybe_non_standard ();
+  gcc_checking_assert (m_operator);
 }
 
 // Calculate what we can determine of the range of this unary
@@ -364,11 +368,10 @@ public:
 			   const frange , relation_trio) const override
   {
 frange neg;
-range_op_handler abs_op (ABS_EXPR);
-range_op_handler neg_op (NEGATE_EXPR);
-if (!abs_op || !abs_op.fold_range (r, type, lh, frange (type)))
+if (!range_op_handler (ABS_EXPR).fold_range (r, type, lh, frange (type)))
   return false;
-if (!neg_op || !neg_op.fold_range (neg, type, r, frange (type)))
+if (!range_op_handler (NEGATE_EXPR).fold_range (neg, type, r,
+		frange (type)))
   return false;
 
 bool signbit;
@@ -1073,14 +1076,11 @@ public:
   virtual bool fold_range (irange , tree type, const irange ,
 			   const irange , relation_trio rel) const
   {
-range_op_handler handler (m_code);
-gcc_checking_assert (handler);
-
 bool saved_flag_wrapv = flag_wrapv;
 // Pretend the arithmetic is wrapping.  If there is any overflow,
 // we'll complain, but will actually do wrapping operation.
 flag_wrapv = 1;
-bool result = handler.fold_range (r, type, lh, rh, rel);
+bool result = range_op_handler (m_code).fold_range (r, type, lh, rh, rel);
 flag_wrapv = saved_flag_wrapv;
 
 // If for both arguments vrp_valueize returned non-NULL, this should
@@ -1230,8 +1230,6 @@ gimple_range_op_handler::maybe_builtin_call ()
 	m_operator = _cfn_constant_p;
   else if (frange::supports_p (TREE_TYPE (m_op1)))
 	

[COMMITTED 9/17] - Add a hybrid BIT_AND_EXPR operator for integer and pointer.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Add a hybrid operator to choose between integer and pointer versions at 
runtime.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 8adb8b2fd5797706e9fbb353d52fda123545431d Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:28:40 -0400
Subject: [PATCH 09/17] Add a hybrid BIT_AND_EXPR operator for integer and
 pointer.

This adds an operator to the unified table for BIT_AND_EXPR which will
select either the pointer or integer version based on the type passed
to the method.   This is for use until we have a separate PRANGE class.

	* range-op-mixed.h (operator_bitwise_and): Remove final.
	* range-op-ptr.cc (pointer_table::pointer_table): Remove BIT_AND_EXPR.
	(class hybrid_and_operator): New.
	(range_op_table::initialize_pointer_ops): Add hybrid_and_operator.
	* range-op.cc (unified_table::unified_table): Comment out BIT_AND_EXPR.
---
 gcc/range-op-mixed.h | 12 -
 gcc/range-op-ptr.cc  | 62 +++-
 gcc/range-op.cc  |  9 ---
 3 files changed, 73 insertions(+), 10 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index b188f5a516e..4177818e4b9 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -584,19 +584,19 @@ public:
   using range_operator::lhs_op1_relation;
   bool op1_range (irange , tree type,
 		  const irange , const irange ,
-		  relation_trio rel = TRIO_VARYING) const final override;
+		  relation_trio rel = TRIO_VARYING) const override;
   bool op2_range (irange , tree type,
 		  const irange , const irange ,
-		  relation_trio rel = TRIO_VARYING) const final override;
+		  relation_trio rel = TRIO_VARYING) const override;
   relation_kind lhs_op1_relation (const irange ,
   const irange , const irange ,
-  relation_kind) const final override;
+  relation_kind) const override;
   void update_bitmask (irange , const irange ,
-		   const irange ) const final override;
-private:
+		   const irange ) const override;
+protected:
   void wi_fold (irange , tree type, const wide_int _lb,
 		const wide_int _ub, const wide_int _lb,
-		const wide_int _ub) const final override;
+		const wide_int _ub) const override;
   void simple_op1_range_solver (irange , tree type,
 const irange ,
 const irange ) const;
diff --git a/gcc/range-op-ptr.cc b/gcc/range-op-ptr.cc
index 55c37cc8c86..941026994ed 100644
--- a/gcc/range-op-ptr.cc
+++ b/gcc/range-op-ptr.cc
@@ -270,12 +270,71 @@ operator_pointer_diff::op1_op2_relation_effect (irange _range, tree type,
 
 pointer_table::pointer_table ()
 {
-  set (BIT_AND_EXPR, op_pointer_and);
   set (BIT_IOR_EXPR, op_pointer_or);
   set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
 }
 
+// --
+// Hybrid operators for the 4 operations which integer and pointers share,
+// but which have different implementations.  Simply check the type in
+// the call and choose the appropriate method.
+// Once there is a PRANGE signature, simply add the appropriate
+// prototypes in the rmixed range class, and remove these hybrid classes.
+
+class hybrid_and_operator : public operator_bitwise_and
+{
+public:
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::lhs_op1_relation;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_bitwise_and::op1_range (r, type, lhs, op2, rel);
+  else
+	return false;
+}
+  bool op2_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_bitwise_and::op2_range (r, type, lhs, op1, rel);
+  else
+	return false;
+}
+  relation_kind lhs_op1_relation (const irange ,
+  const irange , const irange ,
+  relation_kind rel) const final override
+{
+  if (!lhs.undefined_p () && INTEGRAL_TYPE_P (lhs.type ()))
+	return operator_bitwise_and::lhs_op1_relation (lhs, op1, op2, rel);
+  else
+	return VREL_VARYING;
+}
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override
+{
+  if (!r.undefined_p () && INTEGRAL_TYPE_P (r.type ()))
+	operator_bitwise_and::update_bitmask (r, lh, rh);
+}
+
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_bitwise_and::wi_fold (r, type, lh_lb, lh_ub,
+	  rh_lb, rh_ub);
+  else
+	return op_pointer_and.wi_fold (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
+}
+} op_hybrid_and;
+
+
 // Initialize any pointer operators to the primary table
 
 void
@@ -283,4 +342,5 @@ range_op_table::initialize_pointer_ops ()
 {
   set (POINTER_PLUS_EXPR, op_pointer_plus);
   

[COMMITTED 16/17] - Provide interface for non-standard operators.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
This patch removes the hack introduced late last year for the 
non-standard range-op support.


Instead of adding a pointer to a range_operator in the header file, 
and then setting the operator from another file via that pointer, the 
table itself is extended and we provide new #defines to declare new 
operators.
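The extended-table idea can be sketched in a self-contained model (hypothetical names, not GCC's actual code): the internal opcodes are appended after the last tree code, the table is sized and indexed with a plain unsigned, and the extra operators are registered like any other entry.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical model of extending an opcode-indexed table, not GCC's code.
enum
{
  PLUS_EXPR, MULT_EXPR, MAX_TREE_CODES,
  // Internal opcodes appended after the language's tree codes.
  OP_WIDEN_MULT_SIGNED = MAX_TREE_CODES,
  OP_WIDEN_MULT_UNSIGNED,
  RANGE_OP_TABLE_SIZE
};

typedef long (*fold_fn) (long, long);

static long do_plus (long a, long b) { return a + b; }
static long do_widen_mult (long a, long b) { return a * b; }

struct range_op_table
{
  fold_fn m_range_tree[RANGE_OP_TABLE_SIZE];
  range_op_table () : m_range_tree ()
  {
    set (PLUS_EXPR, do_plus);
    set (OP_WIDEN_MULT_SIGNED, do_widen_mult);
  }
  // Unsigned index accepts both tree codes and the appended opcodes.
  void set (unsigned code, fold_fn fn) { m_range_tree[code] = fn; }
  fold_fn operator[] (unsigned code) const
  {
    assert (code < RANGE_OP_TABLE_SIZE);
    return m_range_tree[code];
  }
};
```

This removes the need for exported operator pointers: a non-standard operation is just another index into the same table.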


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 6d3b6847bcb36221185a6259d19d743f4cfe1b5a Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 17:06:36 -0400
Subject: [PATCH 16/17] Provide interface for non-standard operators.

This removes the hack introduced for WIDEN_MULT, which exported a pointer
to the operator so that gimple-range-op.cc could set the operator to this
pointer when it was appropriate.

Instead, we simply change the range-op table to be unsigned indexed,
and add new opcodes to the end of the table, allowing them to be indexed
directly via range_op_handler::range_op.

	* gimple-range-op.cc (gimple_range_op_handler::maybe_non_standard):
	Use range_op_handler directly.
	* range-op.cc (range_op_handler::range_op_handler): Unsigned
	param instead of tree-code.
	(ptr_op_widen_plus_signed): Delete.
	(ptr_op_widen_plus_unsigned): Delete.
	(ptr_op_widen_mult_signed): Delete.
	(ptr_op_widen_mult_unsigned): Delete.
	(range_op_table::initialize_integral_ops): Add new opcodes.
	* range-op.h (range_op_handler): Use unsigned.
	(OP_WIDEN_MULT_SIGNED): New.
	(OP_WIDEN_MULT_UNSIGNED): New.
	(OP_WIDEN_PLUS_SIGNED): New.
	(OP_WIDEN_PLUS_UNSIGNED): New.
	(RANGE_OP_TABLE_SIZE): New.
	(range_op_table::operator []): Use unsigned.
	(range_op_table::set): Use unsigned.
	(m_range_tree): Make unsigned.
	(ptr_op_widen_mult_signed): Remove.
	(ptr_op_widen_mult_unsigned): Remove.
	(ptr_op_widen_plus_signed): Remove.
	(ptr_op_widen_plus_unsigned): Remove.
---
 gcc/gimple-range-op.cc | 11 +++
 gcc/range-op.cc| 11 ++-
 gcc/range-op.h | 26 --
 3 files changed, 29 insertions(+), 19 deletions(-)

diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
index 021a9108ecf..72c7b866f90 100644
--- a/gcc/gimple-range-op.cc
+++ b/gcc/gimple-range-op.cc
@@ -1168,8 +1168,11 @@ public:
 void
 gimple_range_op_handler::maybe_non_standard ()
 {
-  range_operator *signed_op = ptr_op_widen_mult_signed;
-  range_operator *unsigned_op = ptr_op_widen_mult_unsigned;
+  range_op_handler signed_op (OP_WIDEN_MULT_SIGNED);
+  gcc_checking_assert (signed_op);
+  range_op_handler unsigned_op (OP_WIDEN_MULT_UNSIGNED);
+  gcc_checking_assert (unsigned_op);
+
   if (gimple_code (m_stmt) == GIMPLE_ASSIGN)
 switch (gimple_assign_rhs_code (m_stmt))
   {
@@ -1195,9 +1198,9 @@ gimple_range_op_handler::maybe_non_standard ()
 	std::swap (m_op1, m_op2);
 
 	  if (signed1 || signed2)
-	m_operator = signed_op;
+	m_operator = signed_op.range_op ();
 	  else
-	m_operator = unsigned_op;
+	m_operator = unsigned_op.range_op ();
 	  break;
 	}
 	default:
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index a271e00fa07..8a661fdb042 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -135,7 +135,7 @@ range_op_handler::range_op_handler ()
// Create a range_op_handler for CODE.  Use a default operator if CODE
 // does not have an entry.
 
-range_op_handler::range_op_handler (tree_code code)
+range_op_handler::range_op_handler (unsigned code)
 {
   m_operator = operator_table[code];
   if (!m_operator)
@@ -1726,7 +1726,6 @@ public:
 			const wide_int _lb,
 			const wide_int _ub) const;
 } op_widen_plus_signed;
-range_operator *ptr_op_widen_plus_signed = _widen_plus_signed;
 
 void
 operator_widen_plus_signed::wi_fold (irange , tree type,
@@ -1760,7 +1759,6 @@ public:
 			const wide_int _lb,
 			const wide_int _ub) const;
 } op_widen_plus_unsigned;
-range_operator *ptr_op_widen_plus_unsigned = _widen_plus_unsigned;
 
 void
 operator_widen_plus_unsigned::wi_fold (irange , tree type,
@@ -2184,7 +2182,6 @@ public:
 			const wide_int _ub)
 const;
 } op_widen_mult_signed;
-range_operator *ptr_op_widen_mult_signed = _widen_mult_signed;
 
 void
 operator_widen_mult_signed::wi_fold (irange , tree type,
@@ -2217,7 +2214,6 @@ public:
 			const wide_int _ub)
 const;
 } op_widen_mult_unsigned;
-range_operator *ptr_op_widen_mult_unsigned = _widen_mult_unsigned;
 
 void
 operator_widen_mult_unsigned::wi_fold (irange , tree type,
@@ -4298,6 +4294,11 @@ range_op_table::initialize_integral_ops ()
   set (IMAGPART_EXPR, op_unknown);
   set (REALPART_EXPR, op_unknown);
   set (ABSU_EXPR, op_absu);
+  set (OP_WIDEN_MULT_SIGNED, op_widen_mult_signed);
+  set (OP_WIDEN_MULT_UNSIGNED, op_widen_mult_unsigned);
+  set (OP_WIDEN_PLUS_SIGNED, op_widen_plus_signed);
+  set (OP_WIDEN_PLUS_UNSIGNED, op_widen_plus_unsigned);
+
 }
 
 #if CHECKING_P
diff --git a/gcc/range-op.h b/gcc/range-op.h
index 8243258eea5..3602bc4e123 100644
--- a/gcc/range-op.h
+++ b/gcc/range-op.h
@@ -185,7 +185,7 @@ class range_op_handler
 {
 public:
   range_op_handler ();
- 

[COMMITTED 14/17] - Switch from unified table to range_op_table. There can be only one.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Now that the unified table is the only one,  remove it and simply use 
range_op_table as the class instead of inheriting from it.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 5bb9d2acd1987f788a52a2be9bca10c47033020a Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:56:06 -0400
Subject: [PATCH 14/17] Switch from unified table to range_op_table.  There can
 be only one.

Now that there is only a single range_op_table, make the base table the
only table.

	* range-op.cc (unified_table): Delete.
	(range_op_table operator_table): Instantiate.
	(range_op_table::range_op_table): Rename from unified_table.
	(range_op_handler::range_op_handler): Use range_op_table.
	* range-op.h (range_op_table::operator []): Inline.
	(range_op_table::set): Inline.
---
 gcc/range-op.cc | 14 +-
 gcc/range-op.h  | 33 +++--
 2 files changed, 16 insertions(+), 31 deletions(-)

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 3e8b1222b1c..382f5d50ffa 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -49,13 +49,6 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-ssa-ccp.h"
 #include "range-op-mixed.h"
 
-// Instantiate a range_op_table for unified operations.
-class unified_table : public range_op_table
-{
-  public:
-unified_table ();
-} unified_tree_table;
-
 // Instantiate the operators which apply to multiple types here.
 
 operator_equal op_equal;
@@ -80,9 +73,12 @@ operator_bitwise_or op_bitwise_or;
 operator_min op_min;
 operator_max op_max;
 
+// Instantiate a range operator table.
+range_op_table operator_table;
+
 // Invoke the initialization routines for each class of range.
 
-unified_table::unified_table ()
+range_op_table::range_op_table ()
 {
   initialize_integral_ops ();
   initialize_pointer_ops ();
@@ -134,7 +130,7 @@ range_op_handler::range_op_handler ()
 
 range_op_handler::range_op_handler (tree_code code)
 {
-  m_operator = unified_tree_table[code];
+  m_operator = operator_table[code];
 }
 
 // Create a dispatch pattern for value range discriminators LHS, OP1, and OP2.
diff --git a/gcc/range-op.h b/gcc/range-op.h
index 295e5116dd1..328910d0ec5 100644
--- a/gcc/range-op.h
+++ b/gcc/range-op.h
@@ -266,35 +266,24 @@ extern void wi_set_zero_nonzero_bits (tree type,
 class range_op_table
 {
 public:
-  range_operator *operator[] (enum tree_code code);
-  void set (enum tree_code code, range_operator );
+  range_op_table ();
+  inline range_operator *operator[] (enum tree_code code)
+{
+  gcc_checking_assert (code >= 0 && code < MAX_TREE_CODES);
+  return m_range_tree[code];
+}
 protected:
+  inline void set (enum tree_code code, range_operator )
+{
+  gcc_checking_assert (m_range_tree[code] == NULL);
+  m_range_tree[code] = 
+}
   range_operator *m_range_tree[MAX_TREE_CODES];
   void initialize_integral_ops ();
   void initialize_pointer_ops ();
   void initialize_float_ops ();
 };
 
-
-// Return a pointer to the range_operator instance, if there is one
-// associated with tree_code CODE.
-
-inline range_operator *
-range_op_table::operator[] (enum tree_code code)
-{
-  gcc_checking_assert (code >= 0 && code < MAX_TREE_CODES);
-  return m_range_tree[code];
-}
-
-// Add OP to the handler table for CODE.
-
-inline void
-range_op_table::set (enum tree_code code, range_operator )
-{
-  gcc_checking_assert (m_range_tree[code] == NULL);
-  m_range_tree[code] = 
-}
-
 extern range_operator *ptr_op_widen_mult_signed;
 extern range_operator *ptr_op_widen_mult_unsigned;
 extern range_operator *ptr_op_widen_plus_signed;
-- 
2.40.1



[COMMITTED 6/17] - Move operator_min to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 508645fd461ceb8b743837e24411df2e17bd3950 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:09:58 -0400
Subject: [PATCH 06/17] Move operator_min to the unified range-op table.

	* range-op-mixed.h (class operator_min): Move from...
	* range-op.cc (unified_table::unified_table): Add MIN_EXPR.
	(class operator_min): Move from here.
	(integral_table::integral_table): Remove MIN_EXPR.
---
 gcc/range-op-mixed.h | 11 +++
 gcc/range-op.cc  | 18 +++---
 2 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 8a11d61220c..7bd9b5e1129 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -596,4 +596,15 @@ private:
 		const wide_int _ub) const final override;
 };
 
+class operator_min : public range_operator
+{
+public:
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override;
+private:
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 07e0c88e209..a777fb0d8a3 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -80,6 +80,7 @@ operator_bitwise_not op_bitwise_not;
 operator_bitwise_xor op_bitwise_xor;
 operator_bitwise_and op_bitwise_and;
 operator_bitwise_or op_bitwise_or;
+operator_min op_min;
 
 // Invoke the initialization routines for each class of range.
 
@@ -119,6 +120,7 @@ unified_table::unified_table ()
  // specific version is provided.
   set (BIT_AND_EXPR, op_bitwise_and);
   set (BIT_IOR_EXPR, op_bitwise_or);
+  set (MIN_EXPR, op_min);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -1980,17 +1982,12 @@ operator_pointer_diff::op1_op2_relation_effect (irange _range, tree type,
 }
 
 
-class operator_min : public range_operator
+void
+operator_min::update_bitmask (irange , const irange ,
+			  const irange ) const
 {
-public:
-  virtual void wi_fold (irange , tree type,
-		const wide_int _lb,
-		const wide_int _ub,
-		const wide_int _lb,
-		const wide_int _ub) const;
-  void update_bitmask (irange , const irange , const irange ) const
-{ update_known_bitmask (r, MIN_EXPR, lh, rh); }
-} op_min;
+  update_known_bitmask (r, MIN_EXPR, lh, rh);
+}
 
 void
 operator_min::wi_fold (irange , tree type,
@@ -4534,7 +4531,6 @@ pointer_or_operator::wi_fold (irange , tree type,
 
 integral_table::integral_table ()
 {
-  set (MIN_EXPR, op_min);
   set (MAX_EXPR, op_max);
 }
 
-- 
2.40.1



[COMMITTED 13/17] - Remove type from range_op_handler table selection

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Lucky 13.  With the unified table complete, it is no longer necessary to 
specify a type when constructing a range_op_handler. This patch removes 
that requirement.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 8934830333933349d41e62f9fd6a3d21ab71150c Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:41:20 -0400
Subject: [PATCH 13/17] Remove type from range_op_handler table selection

With the unified table complete, we no longer need to specify a type
to choose a table when setting a range_op_handler.

	* gimple-range-gori.cc (gori_compute::condexpr_adjust): Do not
	pass type.
	* gimple-range-op.cc (get_code): Rename from get_code_and_type
	and simplify.
	(gimple_range_op_handler::supported_p): No need for type.
	(gimple_range_op_handler::gimple_range_op_handler): Ditto.
	(cfn_copysign::fold_range): Ditto.
	(cfn_ubsan::fold_range): Ditto.
	* ipa-cp.cc (ipa_vr_operation_and_type_effects): Ditto.
	* ipa-fnsummary.cc (evaluate_conditions_for_known_args): Ditto.
	* range-op-float.cc (operator_plus::op1_range): Ditto.
	(operator_mult::op1_range): Ditto.
	(range_op_float_tests): Ditto.
	* range-op.cc (get_op_handler): Remove.
	(range_op_handler::set_op_handler): Remove.
	(operator_plus::op1_range): No need for type.
	(operator_minus::op1_range): Ditto.
	(operator_mult::op1_range): Ditto.
	(operator_exact_divide::op1_range): Ditto.
	(operator_cast::op1_range): Ditto.
	(operator_bitwise_not::fold_range): Ditto.
	(operator_negate::fold_range): Ditto.
	* range-op.h (range_op_handler::range_op_handler): Remove type param.
	(range_cast): No need for type.
	(range_op_table::operator[]): Check for enum_code >= 0.
	* tree-data-ref.cc (compute_distributive_range): No need for type.
	* tree-ssa-loop-unswitch.cc (unswitch_predicate): Ditto.
	* value-query.cc (range_query::get_tree_range): Ditto.
	* value-relation.cc (relation_oracle::validate_relation): Ditto.
	* vr-values.cc (range_of_var_in_loop): Ditto.
	(simplify_using_ranges::fold_cond_with_ops): Ditto.
---
 gcc/gimple-range-gori.cc  |  2 +-
 gcc/gimple-range-op.cc| 42 ++-
 gcc/ipa-cp.cc |  6 ++---
 gcc/ipa-fnsummary.cc  |  6 ++---
 gcc/range-op-float.cc |  6 ++---
 gcc/range-op.cc   | 39 
 gcc/range-op.h| 10 +++--
 gcc/tree-data-ref.cc  |  4 ++--
 gcc/tree-ssa-loop-unswitch.cc |  2 +-
 gcc/value-query.cc|  5 ++---
 gcc/value-relation.cc |  2 +-
 gcc/vr-values.cc  |  6 ++---
 12 files changed, 43 insertions(+), 87 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index a1c8d51e484..abc70cd54ee 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -1478,7 +1478,7 @@ gori_compute::condexpr_adjust (vrange , vrange , gimple *, tree cond,
   tree type = TREE_TYPE (gimple_assign_rhs1 (cond_def));
   if (!range_compatible_p (type, TREE_TYPE (gimple_assign_rhs2 (cond_def
 return false;
-  range_op_handler hand (gimple_assign_rhs_code (cond_def), type);
+  range_op_handler hand (gimple_assign_rhs_code (cond_def));
   if (!hand)
 return false;
 
diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
index b6b10e47b78..4cbc981ee04 100644
--- a/gcc/gimple-range-op.cc
+++ b/gcc/gimple-range-op.cc
@@ -94,28 +94,14 @@ gimple_range_base_of_assignment (const gimple *stmt)
 
 // If statement is supported by range-ops, set the CODE and return the TYPE.
 
-static tree
-get_code_and_type (gimple *s, enum tree_code )
+static inline enum tree_code
+get_code (gimple *s)
 {
-  tree type = NULL_TREE;
-  code = NOP_EXPR;
-
   if (const gassign *ass = dyn_cast (s))
-{
-  code = gimple_assign_rhs_code (ass);
-  // The LHS of a comparison is always an int, so we must look at
-  // the operands.
-  if (TREE_CODE_CLASS (code) == tcc_comparison)
-	type = TREE_TYPE (gimple_assign_rhs1 (ass));
-  else
-	type = TREE_TYPE (gimple_assign_lhs (ass));
-}
-  else if (const gcond *cond = dyn_cast (s))
-{
-  code = gimple_cond_code (cond);
-  type = TREE_TYPE (gimple_cond_lhs (cond));
-}
-  return type;
+return gimple_assign_rhs_code (ass);
+  if (const gcond *cond = dyn_cast (s))
+return gimple_cond_code (cond);
+  return ERROR_MARK;
 }
 
 // If statement S has a supported range_op handler return TRUE.
@@ -123,9 +109,8 @@ get_code_and_type (gimple *s, enum tree_code )
 bool
 gimple_range_op_handler::supported_p (gimple *s)
 {
-  enum tree_code code;
-  tree type = get_code_and_type (s, code);
-  if (type && range_op_handler (code, type))
+  enum tree_code code = get_code (s);
+  if (range_op_handler (code))
 return true;
   if (is_a  (s) && gimple_range_op_handler (s))
 return true;
@@ -135,14 +120,11 @@ gimple_range_op_handler::supported_p (gimple *s)
 // Construct a handler object for statement S.
 
 gimple_range_op_handler::gimple_range_op_handler 

[COMMITTED 8/17] - Split pointer based range operators to range-op-ptr.cc

2023-06-12 Thread Andrew MacLeod via Gcc-patches
This patch moves all the pointer-specific code into a new file,
range-op-ptr.cc.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From cb511d2209fa3a05801983a6965656734c1592c6 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:17:51 -0400
Subject: [PATCH 08/17] Split pointer based range operators to range-op-ptr.cc

Move the pointer table and all pointer-specific operators into a
new file for pointers.

	* Makefile.in (OBJS): Add range-op-ptr.o.
	* range-op-mixed.h (update_known_bitmask): Move prototype here.
	(minus_op1_op2_relation_effect): Move prototype here.
	(wi_includes_zero_p): Move function to here.
	(wi_zero_p): Ditto.
	* range-op.cc (update_known_bitmask): Remove static.
	(wi_includes_zero_p): Move to header.
	(wi_zero_p): Move to header.
	(minus_op1_op2_relation_effect): Remove static.
	(operator_pointer_diff): Move class and routines to range-op-ptr.cc.
	(pointer_plus_operator): Ditto.
	(pointer_min_max_operator): Ditto.
	(pointer_and_operator): Ditto.
	(pointer_or_operator): Ditto.
	(pointer_table): Ditto.
	(range_op_table::initialize_pointer_ops): Ditto.
	* range-op-ptr.cc: New.
---
 gcc/Makefile.in  |   1 +
 gcc/range-op-mixed.h |  25 
 gcc/range-op-ptr.cc  | 286 +++
 gcc/range-op.cc  | 258 +-
 4 files changed, 314 insertions(+), 256 deletions(-)
 create mode 100644 gcc/range-op-ptr.cc

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 0c02f312985..4be82e83b9e 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1588,6 +1588,7 @@ OBJS = \
 	range.o \
 	range-op.o \
 	range-op-float.o \
+	range-op-ptr.o \
 	read-md.o \
 	read-rtl.o \
 	read-rtl-function.o \
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index cd137acd0e6..b188f5a516e 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -22,6 +22,31 @@ along with GCC; see the file COPYING3.  If not see
 #ifndef GCC_RANGE_OP_MIXED_H
 #define GCC_RANGE_OP_MIXED_H
 
+void update_known_bitmask (irange &, tree_code, const irange &, const irange &);
+bool minus_op1_op2_relation_effect (irange _range, tree type,
+const irange &, const irange &,
+relation_kind rel);
+
+
+// Return TRUE if 0 is within [WMIN, WMAX].
+
+inline bool
+wi_includes_zero_p (tree type, const wide_int , const wide_int )
+{
+  signop sign = TYPE_SIGN (type);
+  return wi::le_p (wmin, 0, sign) && wi::ge_p (wmax, 0, sign);
+}
+
+// Return TRUE if [WMIN, WMAX] is the singleton 0.
+
+inline bool
+wi_zero_p (tree type, const wide_int , const wide_int )
+{
+  unsigned prec = TYPE_PRECISION (type);
+  return wmin == wmax && wi::eq_p (wmin, wi::zero (prec));
+}
+
+
 enum bool_range_state { BRS_FALSE, BRS_TRUE, BRS_EMPTY, BRS_FULL };
 bool_range_state get_bool_state (vrange , const vrange , tree val_type);
 
diff --git a/gcc/range-op-ptr.cc b/gcc/range-op-ptr.cc
new file mode 100644
index 000..55c37cc8c86
--- /dev/null
+++ b/gcc/range-op-ptr.cc
@@ -0,0 +1,286 @@
+/* Code for range operators.
+   Copyright (C) 2017-2023 Free Software Foundation, Inc.
+   Contributed by Andrew MacLeod 
+   and Aldy Hernandez .
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "insn-codes.h"
+#include "rtl.h"
+#include "tree.h"
+#include "gimple.h"
+#include "cfghooks.h"
+#include "tree-pass.h"
+#include "ssa.h"
+#include "optabs-tree.h"
+#include "gimple-pretty-print.h"
+#include "diagnostic-core.h"
+#include "flags.h"
+#include "fold-const.h"
+#include "stor-layout.h"
+#include "calls.h"
+#include "cfganal.h"
+#include "gimple-iterator.h"
+#include "gimple-fold.h"
+#include "tree-eh.h"
+#include "gimple-walk.h"
+#include "tree-cfg.h"
+#include "wide-int.h"
+#include "value-relation.h"
+#include "range-op.h"
+#include "tree-ssa-ccp.h"
+#include "range-op-mixed.h"
+
+class pointer_plus_operator : public range_operator
+{
+  using range_operator::op2_range;
+public:
+  virtual void wi_fold (irange , tree type,
+			const wide_int _lb,
+			const wide_int _ub,
+			const wide_int _lb,
+			const wide_int _ub) const;
+  virtual bool op2_range (irange , tree type,
+			  const irange ,
+			  const irange ,
+			  relation_trio = TRIO_VARYING) const;
+  void update_bitmask (irange , const irange , const irange ) const
+{ update_known_bitmask (r, 

[COMMITTED 5/17] - Move operator_bitwise_or to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From a71ee5c2d48691280f76a90e2838d968f45de0c8 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:05:33 -0400
Subject: [PATCH 05/17] Move operator_bitwise_or to the unified range-op table.

	* range-op-mixed.h (class operator_bitwise_or): Move from...
	* range-op.cc (unified_table::unified_table): Add BIT_IOR_EXPR.
	(class operator_bitwise_or): Move from here.
	(integral_table::integral_table): Remove BIT_IOR_EXPR.
---
 gcc/range-op-mixed.h | 19 +++
 gcc/range-op.cc  | 28 +++-
 2 files changed, 26 insertions(+), 21 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index b3d51f8a54e..8a11d61220c 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -577,4 +577,23 @@ private:
 const irange ) const;
 };
 
+class operator_bitwise_or : public range_operator
+{
+public:
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  bool op2_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override;
+private:
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 57bd95a1151..07e0c88e209 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -79,6 +79,7 @@ operator_addr_expr op_addr;
 operator_bitwise_not op_bitwise_not;
 operator_bitwise_xor op_bitwise_xor;
 operator_bitwise_and op_bitwise_and;
+operator_bitwise_or op_bitwise_or;
 
 // Invoke the initialization routines for each class of range.
 
@@ -117,6 +118,7 @@ unified_table::unified_table ()
   // implementation.  These also remain in the pointer table until a pointer
  // specific version is provided.
   set (BIT_AND_EXPR, op_bitwise_and);
+  set (BIT_IOR_EXPR, op_bitwise_or);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -3608,27 +3610,12 @@ operator_logical_or::op2_range (irange , tree type,
 }
 
 
-class operator_bitwise_or : public range_operator
+void
+operator_bitwise_or::update_bitmask (irange , const irange ,
+ const irange ) const
 {
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-public:
-  virtual bool op1_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual bool op2_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual void wi_fold (irange , tree type,
-		const wide_int _lb,
-		const wide_int _ub,
-		const wide_int _lb,
-		const wide_int _ub) const;
-  void update_bitmask (irange , const irange , const irange ) const
-{ update_known_bitmask (r, BIT_IOR_EXPR, lh, rh); }
-} op_bitwise_or;
+  update_known_bitmask (r, BIT_IOR_EXPR, lh, rh);
+}
 
 void
 operator_bitwise_or::wi_fold (irange , tree type,
@@ -4549,7 +4536,6 @@ integral_table::integral_table ()
 {
   set (MIN_EXPR, op_min);
   set (MAX_EXPR, op_max);
-  set (BIT_IOR_EXPR, op_bitwise_or);
 }
 
 // Initialize any integral operators to the primary table
-- 
2.40.1



[COMMITTED 4/17] - Move operator_bitwise_and to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From f2166fc81194a3e4e9ef185a7404551b410bb752 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:02:09 -0400
Subject: [PATCH 04/17] Move operator_bitwise_and to the unified range-op
 table.

At this point, the remaining 4 integral operations have different
implementations than pointers, so we now check for a pointer table
entry first, then, if there is nothing, look at the unified table.

	* range-op-mixed.h (class operator_bitwise_and): Move from...
	* range-op.cc (unified_table::unified_table): Add BIT_AND_EXPR.
	(get_op_handler): Check for a pointer table entry first.
	(class operator_bitwise_and): Move from here.
	(integral_table::integral_table): Remove BIT_AND_EXPR.
---
 gcc/range-op-mixed.h | 27 
 gcc/range-op.cc  | 49 ++--
 2 files changed, 42 insertions(+), 34 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 644473053e0..b3d51f8a54e 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -550,4 +550,31 @@ private:
 		const wide_int _ub, const wide_int _lb,
 		const wide_int _ub) const final override;
 };
+
+class operator_bitwise_and : public range_operator
+{
+public:
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::lhs_op1_relation;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  bool op2_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  relation_kind lhs_op1_relation (const irange ,
+  const irange , const irange ,
+  relation_kind) const final override;
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override;
+private:
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override;
+  void simple_op1_range_solver (irange , tree type,
+const irange ,
+const irange ) const;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 11f576c55c5..57bd95a1151 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -78,6 +78,7 @@ operator_mult op_mult;
 operator_addr_expr op_addr;
 operator_bitwise_not op_bitwise_not;
 operator_bitwise_xor op_bitwise_xor;
+operator_bitwise_and op_bitwise_and;
 
 // Invoke the initialization routines for each class of range.
 
@@ -111,6 +112,11 @@ unified_table::unified_table ()
   set (ADDR_EXPR, op_addr);
   set (BIT_NOT_EXPR, op_bitwise_not);
   set (BIT_XOR_EXPR, op_bitwise_xor);
+
+  // These are in both integer and pointer tables, but pointer has a different
+  // implementation.  These also remain in the pointer table until a pointer
+  // specific version is provided.
+  set (BIT_AND_EXPR, op_bitwise_and);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -118,16 +124,17 @@ unified_table::unified_table ()
 range_operator *
 get_op_handler (enum tree_code code, tree type)
 {
+  // If this is a pointer type and there is a pointer-specific routine, use it.
+  if (POINTER_TYPE_P (type) && pointer_tree_table[code])
+return pointer_tree_table[code];
+
   if (unified_tree_table[code])
 {
   // Should not be in any other table if it is in the unified table.
-  gcc_checking_assert (!pointer_tree_table[code]);
   gcc_checking_assert (!integral_tree_table[code]);
   return unified_tree_table[code];
 }
 
-  if (POINTER_TYPE_P (type))
-return pointer_tree_table[code];
   if (INTEGRAL_TYPE_P (type))
 return integral_tree_table[code];
   return NULL;
@@ -3121,37 +3128,12 @@ operator_logical_and::op2_range (irange , tree type,
 }
 
 
-class operator_bitwise_and : public range_operator
+void
+operator_bitwise_and::update_bitmask (irange , const irange ,
+  const irange ) const
 {
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-  using range_operator::lhs_op1_relation;
-public:
-  virtual bool op1_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual bool op2_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual void wi_fold (irange , tree type,
-		const wide_int _lb,
-		const wide_int _ub,
-		const wide_int _lb,
-		const wide_int _ub) const;
-  virtual relation_kind lhs_op1_relation (const irange ,
-	  const irange ,
-	  const irange ,
-	  relation_kind) const;
-  void update_bitmask (irange , const irange , const irange ) const
-{ update_known_bitmask (r, BIT_AND_EXPR, lh, rh); }
-private:
-  void simple_op1_range_solver (irange , tree type,
-const irange ,
-const irange ) const;
-} op_bitwise_and;
-
+  

[COMMITTED 11/17] - Add a hybrid MIN_EXPR operator for integer and pointer.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Add a hybrid operator to choose between integer and pointer versions at 
runtime.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 08f2e419b1e29f114857b3d817904abf3b4891be Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:34:26 -0400
Subject: [PATCH 11/17] Add a hybrid MIN_EXPR operator for integer and pointer.

This adds an operator to the unified table for MIN_EXPR which will
select either the pointer or integer version based on the type passed
to the method.  This is for use until we have a separate PRANGE class.

	* range-op-mixed.h (operator_min): Remove final.
	* range-op-ptr.cc (pointer_table::pointer_table): Remove MIN_EXPR.
	(class hybrid_min_operator): New.
	(range_op_table::initialize_pointer_ops): Add hybrid_min_operator.
	* range-op.cc (unified_table::unified_table): Comment out MIN_EXPR.
---
 gcc/range-op-mixed.h |  6 +++---
 gcc/range-op-ptr.cc  | 28 +++-
 gcc/range-op.cc  |  2 +-
 3 files changed, 31 insertions(+), 5 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index e4852e974c4..a65935435c2 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -625,11 +625,11 @@ class operator_min : public range_operator
 {
 public:
   void update_bitmask (irange , const irange ,
-		   const irange ) const final override;
-private:
+		   const irange ) const override;
+protected:
   void wi_fold (irange , tree type, const wide_int _lb,
 		const wide_int _ub, const wide_int _lb,
-		const wide_int _ub) const final override;
+		const wide_int _ub) const override;
 };
 
 class operator_max : public range_operator
diff --git a/gcc/range-op-ptr.cc b/gcc/range-op-ptr.cc
index 7b22d0bf05b..483e43ca994 100644
--- a/gcc/range-op-ptr.cc
+++ b/gcc/range-op-ptr.cc
@@ -270,7 +270,6 @@ operator_pointer_diff::op1_op2_relation_effect (irange _range, tree type,
 
 pointer_table::pointer_table ()
 {
-  set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
 }
 
@@ -380,6 +379,32 @@ public:
 }
 } op_hybrid_or;
 
+// Temporary class which dispatches routines to either the INT version or
+// the pointer version depending on the type.  Once PRANGE is a range
+// class, we can remove the hybrid.
+
+class hybrid_min_operator : public operator_min
+{
+public:
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override
+{
+  if (!r.undefined_p () && INTEGRAL_TYPE_P (r.type ()))
+	operator_min::update_bitmask (r, lh, rh);
+}
+
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_min::wi_fold (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
+  else
+	return op_ptr_min_max.wi_fold (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
+}
+} op_hybrid_min;
+
+
 
 
 // Initialize any pointer operators to the primary table
@@ -391,4 +416,5 @@ range_op_table::initialize_pointer_ops ()
   set (POINTER_DIFF_EXPR, op_pointer_diff);
   set (BIT_AND_EXPR, op_hybrid_and);
   set (BIT_IOR_EXPR, op_hybrid_or);
+  set (MIN_EXPR, op_hybrid_min);
 }
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 0a9a3297de7..481f3b1324d 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -123,7 +123,7 @@ unified_table::unified_table ()
 
   // set (BIT_AND_EXPR, op_bitwise_and);
   // set (BIT_IOR_EXPR, op_bitwise_or);
-  set (MIN_EXPR, op_min);
+  // set (MIN_EXPR, op_min);
   set (MAX_EXPR, op_max);
 }
 
-- 
2.40.1



[COMMITTED 7/17] - Move operator_max to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
This is the last of the integral operators, so also remove the integral 
table.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 6585fa54e0f2a54f1a398b49b5b4b6a9cd6da4ea Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:10:54 -0400
Subject: [PATCH 07/17] Move operator_max to the unified range-op table.

Also remove the integral table.

	* range-op-mixed.h (class operator_max): Move from...
	* range-op.cc (unified_table::unified_table): Add MAX_EXPR.
	(get_op_handler): Remove the integral table.
	(class operator_max): Move from here.
	(integral_table::integral_table): Delete.
	* range-op.h (class integral_table): Delete.
---
 gcc/range-op-mixed.h | 10 ++
 gcc/range-op.cc  | 34 --
 gcc/range-op.h   |  9 -
 3 files changed, 18 insertions(+), 35 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 7bd9b5e1129..cd137acd0e6 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -607,4 +607,14 @@ private:
 		const wide_int _ub) const final override;
 };
 
+class operator_max : public range_operator
+{
+public:
+  void update_bitmask (irange , const irange ,
+  const irange ) const final override;
+private:
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override;
+};
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index a777fb0d8a3..e83f627a722 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -49,7 +49,6 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-ssa-ccp.h"
 #include "range-op-mixed.h"
 
-integral_table integral_tree_table;
 pointer_table pointer_tree_table;
 
 // Instantiate a range_op_table for unified operations.
@@ -81,6 +80,7 @@ operator_bitwise_xor op_bitwise_xor;
 operator_bitwise_and op_bitwise_and;
 operator_bitwise_or op_bitwise_or;
 operator_min op_min;
+operator_max op_max;
 
 // Invoke the initialization routines for each class of range.
 
@@ -121,6 +121,7 @@ unified_table::unified_table ()
   set (BIT_AND_EXPR, op_bitwise_and);
   set (BIT_IOR_EXPR, op_bitwise_or);
   set (MIN_EXPR, op_min);
+  set (MAX_EXPR, op_max);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -132,16 +133,7 @@ get_op_handler (enum tree_code code, tree type)
   if (POINTER_TYPE_P (type) && pointer_tree_table[code])
 return pointer_tree_table[code];
 
-  if (unified_tree_table[code])
-{
-  // Should not be in any other table if it is in the unified table.
-  gcc_checking_assert (!integral_tree_table[code]);
-  return unified_tree_table[code];
-}
-
-  if (INTEGRAL_TYPE_P (type))
-return integral_tree_table[code];
-  return NULL;
+  return unified_tree_table[code];
 }
 
 range_op_handler::range_op_handler ()
@@ -2001,17 +1993,12 @@ operator_min::wi_fold (irange , tree type,
 }
 
 
-class operator_max : public range_operator
+void
+operator_max::update_bitmask (irange , const irange ,
+			  const irange ) const
 {
-public:
-  virtual void wi_fold (irange , tree type,
-		const wide_int _lb,
-		const wide_int _ub,
-		const wide_int _lb,
-		const wide_int _ub) const;
-  void update_bitmask (irange , const irange , const irange ) const
-{ update_known_bitmask (r, MAX_EXPR, lh, rh); }
-} op_max;
+  update_known_bitmask (r, MAX_EXPR, lh, rh);
+}
 
 void
 operator_max::wi_fold (irange , tree type,
@@ -4529,11 +4516,6 @@ pointer_or_operator::wi_fold (irange , tree type,
 r.set_varying (type);
 }
 
-integral_table::integral_table ()
-{
-  set (MAX_EXPR, op_max);
-}
-
 // Initialize any integral operators to the primary table
 
 void
diff --git a/gcc/range-op.h b/gcc/range-op.h
index 0f5ee41f96c..08c51bace40 100644
--- a/gcc/range-op.h
+++ b/gcc/range-op.h
@@ -299,15 +299,6 @@ range_op_table::set (enum tree_code code, range_operator )
   m_range_tree[code] = 
 }
 
-// This holds the range op tables
-
-class integral_table : public range_op_table
-{
-public:
-  integral_table ();
-};
-extern integral_table integral_tree_table;
-
 // Instantiate a range op table for pointer operations.
 
 class pointer_table : public range_op_table
-- 
2.40.1



[COMMITTED 3/17] - Move operator_bitwise_xor to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From cc18db2826c5449e84366644fa461816fa5f3f99 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:01:05 -0400
Subject: [PATCH 03/17] Move operator_bitwise_xor to the unified range-op
 table.

	* range-op-mixed.h (class operator_bitwise_xor): Move from...
	* range-op.cc (unified_table::unified_table): Add BIT_XOR_EXPR.
	(class operator_bitwise_xor): Move from here.
	(integral_table::integral_table): Remove BIT_XOR_EXPR.
	(pointer_table::pointer_table): Remove BIT_XOR_EXPR.
---
 gcc/range-op-mixed.h | 23 +++
 gcc/range-op.cc  | 36 +++-
 2 files changed, 30 insertions(+), 29 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index ba04c51a2d8..644473053e0 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -527,4 +527,27 @@ public:
 		  relation_trio rel = TRIO_VARYING) const final override;
 };
 
+class operator_bitwise_xor : public range_operator
+{
+public:
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  bool op2_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  bool op1_op2_relation_effect (irange _range,
+	tree type,
+	const irange _range,
+	const irange _range,
+	relation_kind rel) const;
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override;
+private:
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override;
+};
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 107582a9571..11f576c55c5 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -77,6 +77,7 @@ operator_negate op_negate;
 operator_mult op_mult;
 operator_addr_expr op_addr;
 operator_bitwise_not op_bitwise_not;
+operator_bitwise_xor op_bitwise_xor;
 
 // Invoke the initialization routines for each class of range.
 
@@ -109,6 +110,7 @@ unified_table::unified_table ()
   // integral implementation.
   set (ADDR_EXPR, op_addr);
   set (BIT_NOT_EXPR, op_bitwise_not);
+  set (BIT_XOR_EXPR, op_bitwise_xor);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -3732,33 +3734,12 @@ operator_bitwise_or::op2_range (irange , tree type,
   return operator_bitwise_or::op1_range (r, type, lhs, op1);
 }
 
-
-class operator_bitwise_xor : public range_operator
+void
+operator_bitwise_xor::update_bitmask (irange , const irange ,
+  const irange ) const
 {
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-public:
-  virtual void wi_fold (irange , tree type,
-		const wide_int _lb,
-		const wide_int _ub,
-		const wide_int _lb,
-		const wide_int _ub) const;
-  virtual bool op1_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual bool op2_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual bool op1_op2_relation_effect (irange _range,
-	tree type,
-	const irange _range,
-	const irange _range,
-	relation_kind rel) const;
-  void update_bitmask (irange , const irange , const irange ) const
-{ update_known_bitmask (r, BIT_XOR_EXPR, lh, rh); }
-} op_bitwise_xor;
+  update_known_bitmask (r, BIT_XOR_EXPR, lh, rh);
+}
 
 void
 operator_bitwise_xor::wi_fold (irange , tree type,
@@ -4588,7 +4569,6 @@ integral_table::integral_table ()
   set (MAX_EXPR, op_max);
   set (BIT_AND_EXPR, op_bitwise_and);
   set (BIT_IOR_EXPR, op_bitwise_or);
-  set (BIT_XOR_EXPR, op_bitwise_xor);
 }
 
 // Initialize any integral operators to the primary table
@@ -4618,8 +4598,6 @@ pointer_table::pointer_table ()
   set (BIT_IOR_EXPR, op_pointer_or);
   set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
-
-  set (BIT_XOR_EXPR, op_bitwise_xor);
 }
 
 // Initialize any pointer operators to the primary table
-- 
2.40.1



[COMMITTED 2/17] - Move operator_bitwise_not to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 5bb4c53870db1331592a89119f41beee2b17d832 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 15:59:43 -0400
Subject: [PATCH 02/17] Move operator_bitwise_not to the unified range-op
 table.

	* range-op-mixed.h (class operator_bitwise_not): Move from...
	* range-op.cc (unified_table::unified_table): Add BIT_NOT_EXPR.
	(class operator_bitwise_not): Move from here.
	(integral_table::integral_table): Remove BIT_NOT_EXPR.
	(pointer_table::pointer_table): Remove BIT_NOT_EXPR.
---
 gcc/range-op-mixed.h | 13 +
 gcc/range-op.cc  | 21 +++--
 2 files changed, 16 insertions(+), 18 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index d31b144169d..ba04c51a2d8 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -514,4 +514,17 @@ public:
 		  relation_trio rel = TRIO_VARYING) const final override;
 };
 
+class operator_bitwise_not : public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  bool fold_range (irange , tree type,
+		   const irange , const irange ,
+		   relation_trio rel = TRIO_VARYING) const final override;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 20cc9b0dc9c..107582a9571 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -76,6 +76,7 @@ operator_minus op_minus;
 operator_negate op_negate;
 operator_mult op_mult;
 operator_addr_expr op_addr;
+operator_bitwise_not op_bitwise_not;
 
 // Invoke the initialization routines for each class of range.
 
@@ -105,8 +106,9 @@ unified_table::unified_table ()
   set (MULT_EXPR, op_mult);
 
   // Occur in both integer and pointer tables, but currently share
-  // integral implelmentation.
+  // integral implementation.
   set (ADDR_EXPR, op_addr);
+  set (BIT_NOT_EXPR, op_bitwise_not);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -4080,21 +4082,6 @@ operator_logical_not::op1_range (irange ,
 }
 
 
-class operator_bitwise_not : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-public:
-  virtual bool fold_range (irange , tree type,
-			   const irange ,
-			   const irange ,
-			   relation_trio rel = TRIO_VARYING) const;
-  virtual bool op1_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-} op_bitwise_not;
-
 bool
 operator_bitwise_not::fold_range (irange , tree type,
   const irange ,
@@ -4602,7 +4589,6 @@ integral_table::integral_table ()
   set (BIT_AND_EXPR, op_bitwise_and);
   set (BIT_IOR_EXPR, op_bitwise_or);
   set (BIT_XOR_EXPR, op_bitwise_xor);
-  set (BIT_NOT_EXPR, op_bitwise_not);
 }
 
 // Initialize any integral operators to the primary table
@@ -4633,7 +4619,6 @@ pointer_table::pointer_table ()
   set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
 
-  set (BIT_NOT_EXPR, op_bitwise_not);
   set (BIT_XOR_EXPR, op_bitwise_xor);
 }
 
-- 
2.40.1



[COMMITTED 1/17] Move operator_addr_expr to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew

From 438f8281ad2d821e09eaf5691d1b76b6f2f39b4c Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 15:56:15 -0400
Subject: [PATCH 01/17] Move operator_addr_expr to the unified range-op table.

	* range-op-mixed.h (class operator_addr_expr): Move from...
	* range-op.cc (unified_table::unified_table): Add ADDR_EXPR.
	(class operator_addr_expr): Move from here.
	(integral_table::integral_table): Remove ADDR_EXPR.
	(pointer_table::pointer_table): Remove ADDR_EXPR.
---
 gcc/range-op-mixed.h | 13 +
 gcc/range-op.cc  | 23 +--
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 52b8570cb2a..d31b144169d 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -501,4 +501,17 @@ public:
 		relation_kind kind) const final override;
 };
 
+class operator_addr_expr : public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  bool fold_range (irange , tree type,
+		   const irange , const irange ,
+		   relation_trio rel = TRIO_VARYING) const final override;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 028631c6851..20cc9b0dc9c 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -75,6 +75,7 @@ operator_abs op_abs;
 operator_minus op_minus;
 operator_negate op_negate;
 operator_mult op_mult;
+operator_addr_expr op_addr;
 
 // Invoke the initialization routines for each class of range.
 
@@ -102,6 +103,10 @@ unified_table::unified_table ()
   set (MINUS_EXPR, op_minus);
   set (NEGATE_EXPR, op_negate);
   set (MULT_EXPR, op_mult);
+
+  // Occur in both integer and pointer tables, but currently share
+  // integral implelmentation.
+  set (ADDR_EXPR, op_addr);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -4366,21 +4371,6 @@ operator_negate::op1_range (irange , tree type,
 }
 
 
-class operator_addr_expr : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-public:
-  virtual bool fold_range (irange , tree type,
-			   const irange ,
-			   const irange ,
-			   relation_trio rel = TRIO_VARYING) const;
-  virtual bool op1_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-} op_addr;
-
 bool
 operator_addr_expr::fold_range (irange , tree type,
 const irange ,
@@ -4613,7 +4603,6 @@ integral_table::integral_table ()
   set (BIT_IOR_EXPR, op_bitwise_or);
   set (BIT_XOR_EXPR, op_bitwise_xor);
   set (BIT_NOT_EXPR, op_bitwise_not);
-  set (ADDR_EXPR, op_addr);
 }
 
 // Initialize any integral operators to the primary table
@@ -4644,8 +4633,6 @@ pointer_table::pointer_table ()
   set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
 
-  set (ADDR_EXPR, op_addr);
-
   set (BIT_NOT_EXPR, op_bitwise_not);
   set (BIT_XOR_EXPR, op_bitwise_xor);
 }
-- 
2.40.1



[COMMITTED 0/17] - Range-op dispatch unification rework

2023-06-12 Thread Andrew MacLeod via Gcc-patches

This patch set completes the range-op dispatch and unification rework.

The first 7 patches move the remainder of the integral table to the 
unified table, and remove the integer table.


The 8th patch moves all the pointer-specific code into a new file, 
range-op-ptr.cc


Patches 9-12 introduce a "hybrid" operator class for the 4 operations 
where pointers and integers share a TREE_CODE but have different 
implementations.  An extra hybrid class is introduced in the pointer 
file which inherits from the integer version and adds new overloads for 
the used methods; these look at the type being passed in and dispatch 
either to the inherited integer version or to the pointer version of 
the opcode.


This allows us to have a unified entry for those 4 operators 
(BIT_AND_EXPR, BIT_IOR_EXPR, MIN_EXPR, and MAX_EXPR) and move on.   When 
we introduce a pointer range type (i.e. PRANGE), we can simply add the 
prange signature to the appropriate range_operator methods and remove 
the pointer and hybrid classes.


Patches 13 through 16 do some tweaking to range_op_handler and how it's 
used.  It now provides a default operator under the covers, so you no 
longer need to check if it's valid.   The valid check now simply 
indicates whether a custom operator is implemented or not.  This means 
you can simply write:


if (range_op_handler (CONVERT_EXPR).fold_range (...  ))

without worrying about whether there is an entry.  If there is no 
CONVERT_EXPR implemented, you'll simply get false back from all the calls.


Combined with the previous work, it is now always safe to call any 
range_operator routine via range_op_handler with any set of types for 
vrange parameters (including unsupported types)  on any tree code, and 
you will simply get false back if it isn't implemented.


Andrew



[COMMITTED 15/15] Unify MULT_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

This is the final shared integer/float opcode.

This patch also removes the floating point table and all references to it.

Bootstraps on x86_64-pc-linux-gnu and passes all regression tests. Pushed.

Andrew



[COMMITTED 11/15] Unify PLUS_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu and passes all regression tests. Pushed.

Andrew
From cc4eaf6f1e1958f920007d4cc7cafb635b5dda64 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:41:28 -0400
Subject: [PATCH 11/31] Unify PLUS_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_plus): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_plus::fold_range): Rename from foperator_plus.
	(operator_plus::op1_range): Ditto.
	(operator_plus::op2_range): Ditto.
	(operator_plus::rv_fold): Ditto.
	(float_table::float_table): Remove PLUS_EXPR.
	* range-op-mixed.h (class operator_plus): Combined from integer
	and float files.
	* range-op.cc (op_plus): New object.
	(unified_table::unified_table): Add PLUS_EXPR.
	(class operator_plus): Move to range-op-mixed.h.
	(integral_table::integral_table): Remove PLUS_EXPR.
	(pointer_table::pointer_table): Remove PLUS_EXPR.
---
 gcc/range-op-float.cc | 94 ---
 gcc/range-op-mixed.h  | 39 ++
 gcc/range-op.cc   | 37 -
 3 files changed, 90 insertions(+), 80 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 11d76f2ef25..bd1b79281d0 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -2254,54 +2254,49 @@ float_widen_lhs_range (tree type, const frange )
   return ret;
 }
 
-class foperator_plus : public range_operator
+bool
+operator_plus::op1_range (frange , tree type, const frange ,
+			  const frange , relation_trio) const
 {
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-public:
-  virtual bool op1_range (frange , tree type,
-			  const frange ,
-			  const frange ,
-			  relation_trio = TRIO_VARYING) const final override
-  {
-if (lhs.undefined_p ())
-  return false;
-range_op_handler minus (MINUS_EXPR, type);
-if (!minus)
-  return false;
-frange wlhs = float_widen_lhs_range (type, lhs);
-return float_binary_op_range_finish (minus.fold_range (r, type, wlhs, op2),
-	 r, type, wlhs);
-  }
-  virtual bool op2_range (frange , tree type,
-			  const frange ,
-			  const frange ,
-			  relation_trio = TRIO_VARYING) const final override
-  {
-return op1_range (r, type, lhs, op1);
-  }
-private:
-  void rv_fold (REAL_VALUE_TYPE , REAL_VALUE_TYPE , bool _nan,
-		tree type,
-		const REAL_VALUE_TYPE _lb,
-		const REAL_VALUE_TYPE _ub,
-		const REAL_VALUE_TYPE _lb,
-		const REAL_VALUE_TYPE _ub,
-		relation_kind) const final override
-  {
-frange_arithmetic (PLUS_EXPR, type, lb, lh_lb, rh_lb, dconstninf);
-frange_arithmetic (PLUS_EXPR, type, ub, lh_ub, rh_ub, dconstinf);
+  if (lhs.undefined_p ())
+return false;
+  range_op_handler minus (MINUS_EXPR, type);
+  if (!minus)
+return false;
+  frange wlhs = float_widen_lhs_range (type, lhs);
+  return float_binary_op_range_finish (minus.fold_range (r, type, wlhs, op2),
+   r, type, wlhs);
+}
 
-// [-INF] + [+INF] = NAN
-if (real_isinf (_lb, true) && real_isinf (_ub, false))
-  maybe_nan = true;
-// [+INF] + [-INF] = NAN
-else if (real_isinf (_ub, false) && real_isinf (_lb, true))
-  maybe_nan = true;
-else
-  maybe_nan = false;
-  }
-} fop_plus;
+bool
+operator_plus::op2_range (frange , tree type,
+			  const frange , const frange ,
+			  relation_trio) const
+{
+  return op1_range (r, type, lhs, op1);
+}
+
+void
+operator_plus::rv_fold (REAL_VALUE_TYPE , REAL_VALUE_TYPE ,
+			bool _nan, tree type,
+			const REAL_VALUE_TYPE _lb,
+			const REAL_VALUE_TYPE _ub,
+			const REAL_VALUE_TYPE _lb,
+			const REAL_VALUE_TYPE _ub,
+			relation_kind) const
+{
+  frange_arithmetic (PLUS_EXPR, type, lb, lh_lb, rh_lb, dconstninf);
+  frange_arithmetic (PLUS_EXPR, type, ub, lh_ub, rh_ub, dconstinf);
+
+  // [-INF] + [+INF] = NAN
+  if (real_isinf (_lb, true) && real_isinf (_ub, false))
+maybe_nan = true;
+  // [+INF] + [-INF] = NAN
+  else if (real_isinf (_ub, false) && real_isinf (_lb, true))
+maybe_nan = true;
+  else
+maybe_nan = false;
+}
 
 
 class foperator_minus : public range_operator
@@ -2317,9 +2312,9 @@ public:
 if (lhs.undefined_p ())
   return false;
 frange wlhs = float_widen_lhs_range (type, lhs);
-return float_binary_op_range_finish (fop_plus.fold_range (r, type, wlhs,
-			  op2),
-	 r, type, wlhs);
+return float_binary_op_range_finish (
+		range_op_handler (PLUS_EXPR).fold_range (r, type, wlhs, op2),
+		r, type, wlhs);
   }
   virtual bool op2_range (frange , tree type,
 			  const frange ,
@@ -2698,7 +2693,6 @@ float_table::float_table ()
 {
   set (ABS_EXPR, fop_abs);
   set (NEGATE_EXPR, fop_negate);
-  set (PLUS_EXPR, fop_plus);
   set (MINUS_EXPR, fop_minus);
   set (MULT_EXPR, fop_mult);
 }
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 9de8479cd24..fbfe3f825a3 100644
--- 

[COMMITTED 14/15] Unify NEGATE_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu and passes all regression tests. Pushed.

Andrew



[COMMITTED 6/15] Unify GT_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Unify the GT_EXPR range operator

Bootstraps on x86_64-pc-linux-gnu and passes all regression tests. Pushed.

Andrew
From e5a4bb7c12d00926e0c7bbf0c77dd1be8f23a39a Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:32:25 -0400
Subject: [PATCH 06/31] Unify GT_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_gt): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_gt::fold_range): Rename from foperator_gt.
	(operator_gt::op1_range): Ditto.
	(float_table::float_table): Remove GT_EXPR.
	* range-op-mixed.h (class operator_gt): Combined from integer
	and float files.
	* range-op.cc (op_gt): New object.
	(unified_table::unified_table): Add GT_EXPR.
	(class operator_gt): Move to range-op-mixed.h.
	(gt_op1_op2_relation): Fold into
	operator_gt::op1_op2_relation.
	(integral_table::integral_table): Remove GT_EXPR.
	(pointer_table::pointer_table): Remove GT_EXPR.
	* range-op.h (gt_op1_op2_relation): Delete.
---
 gcc/range-op-float.cc | 52 +--
 gcc/range-op-mixed.h  | 31 ++
 gcc/range-op.cc   | 40 +++--
 gcc/range-op.h|  1 -
 4 files changed, 54 insertions(+), 70 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index a480f1641d2..2f090e75245 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -961,32 +961,10 @@ operator_le::op2_range (frange ,
   return true;
 }
 
-class foperator_gt : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-  using range_operator::op1_op2_relation;
-public:
-  bool fold_range (irange , tree type,
-		   const frange , const frange ,
-		   relation_trio = TRIO_VARYING) const final override;
-  relation_kind op1_op2_relation (const irange ) const final override
-  {
-return gt_op1_op2_relation (lhs);
-  }
-  bool op1_range (frange , tree type,
-		  const irange , const frange ,
-		  relation_trio = TRIO_VARYING) const final override;
-  bool op2_range (frange , tree type,
-		  const irange , const frange ,
-		  relation_trio = TRIO_VARYING) const final override;
-} fop_gt;
-
 bool
-foperator_gt::fold_range (irange , tree type,
-			  const frange , const frange ,
-			  relation_trio rel) const
+operator_gt::fold_range (irange , tree type,
+			 const frange , const frange ,
+			 relation_trio rel) const
 {
   if (frelop_early_resolve (r, type, op1, op2, rel, VREL_GT))
 return true;
@@ -1004,11 +982,11 @@ foperator_gt::fold_range (irange , tree type,
 }
 
 bool
-foperator_gt::op1_range (frange ,
-			 tree type,
-			 const irange ,
-			 const frange ,
-			 relation_trio) const
+operator_gt::op1_range (frange ,
+			tree type,
+			const irange ,
+			const frange ,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -1043,11 +1021,11 @@ foperator_gt::op1_range (frange ,
 }
 
 bool
-foperator_gt::op2_range (frange ,
-			 tree type,
-			 const irange ,
-			 const frange ,
-			 relation_trio) const
+operator_gt::op2_range (frange ,
+			tree type,
+			const irange ,
+			const frange ,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -1723,7 +1701,8 @@ public:
   op1_no_nan.clear_nan ();
 if (op2.maybe_isnan ())
   op2_no_nan.clear_nan ();
-if (!fop_gt.fold_range (r, type, op1_no_nan, op2_no_nan, rel))
+if (!range_op_handler (GT_EXPR).fold_range (r, type, op1_no_nan,
+		op2_no_nan, rel))
   return false;
 // The result is the same as the ordered version when the
 // comparison is true or when the operands cannot be NANs.
@@ -2744,7 +2723,6 @@ float_table::float_table ()
   // All the relational operators are expected to work, because the
   // calculation of ranges on outgoing edges expect the handlers to be
   // present.
-  set (GT_EXPR, fop_gt);
   set (GE_EXPR, fop_ge);
 
   set (ABS_EXPR, fop_abs);
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index dd42d98ca49..1c68d54b085 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -204,4 +204,35 @@ public:
   void update_bitmask (irange , const irange ,
 		   const irange ) const final override;
 };
+
+class operator_gt :  public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::op1_op2_relation;
+  bool fold_range (irange , tree type,
+		   const irange , const irange ,
+		   relation_trio = TRIO_VARYING) const;
+  bool fold_range (irange , tree type,
+		   const frange , const frange ,
+		   relation_trio = TRIO_VARYING) const final override;
+
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio = TRIO_VARYING) const;
+  bool op1_range (frange , tree type,
+		  const irange , const frange ,

[COMMITTED 12/15] Unify ABS_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu and passes all regression tests. Pushed.

Andrew



[COMMITTED 13/15] Unify MINUS_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu and passes all regression tests. Pushed.

Andrew



[COMMITTED 10/15] Unify operator_cast range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu and passes all regression tests. Pushed.

Andrew
From ee46a15733524103a9eda433df5dc44cdc055d73 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:39:54 -0400
Subject: [PATCH 10/31] Unify operator_cast range operator

Move the declaration of the class to the range-op-mixed header, and use it
in the new unified table.

	* range-op-mixed.h (class operator_cast): Combined from integer
	and float files.
	* range-op.cc (op_cast): New object.
	(unified_table::unified_table): Add op_cast
	(class operator_cast): Move to range-op-mixed.h.
	(integral_table::integral_table): Remove op_cast
	(pointer_table::pointer_table): Remove op_cast.
---
 gcc/range-op-mixed.h | 24 
 gcc/range-op.cc  | 34 --
 2 files changed, 28 insertions(+), 30 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 5b7fbe89856..9de8479cd24 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -304,4 +304,28 @@ public:
 		   relation_trio = TRIO_VARYING) const final override;
 };
 
+
+class operator_cast: public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::lhs_op1_relation;
+  bool fold_range (irange , tree type,
+		   const irange , const irange ,
+		   relation_trio rel = TRIO_VARYING) const final override;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  relation_kind lhs_op1_relation (const irange ,
+  const irange , const irange ,
+  relation_kind) const final override;
+private:
+  bool truncating_cast_p (const irange , const irange ) const;
+  bool inside_domain_p (const wide_int , const wide_int ,
+			const irange ) const;
+  void fold_pair (irange , unsigned index, const irange ,
+			   const irange ) const;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 31d4e1a1739..7d89b633da3 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -70,6 +70,7 @@ operator_gt op_gt;
 operator_ge op_ge;
 operator_identity op_ident;
 operator_cst op_cst;
+operator_cast op_cast;
 
 // Invoke the initialization routines for each class of range.
 
@@ -90,6 +91,9 @@ unified_table::unified_table ()
   set (OBJ_TYPE_REF, op_ident);
   set (REAL_CST, op_cst);
   set (INTEGER_CST, op_cst);
+  set (NOP_EXPR, op_cast);
+  set (CONVERT_EXPR, op_cast);
+
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -2868,32 +2872,6 @@ operator_rshift::wi_fold (irange , tree type,
 }
 
 
-class operator_cast: public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::lhs_op1_relation;
-public:
-  virtual bool fold_range (irange , tree type,
-			   const irange ,
-			   const irange ,
-			   relation_trio rel = TRIO_VARYING) const;
-  virtual bool op1_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual relation_kind lhs_op1_relation (const irange ,
-	  const irange ,
-	  const irange ,
-	  relation_kind) const;
-private:
-  bool truncating_cast_p (const irange , const irange ) const;
-  bool inside_domain_p (const wide_int , const wide_int ,
-			const irange ) const;
-  void fold_pair (irange , unsigned index, const irange ,
-			   const irange ) const;
-} op_cast;
-
 // Add a partial equivalence between the LHS and op1 for casts.
 
 relation_kind
@@ -4744,8 +4722,6 @@ integral_table::integral_table ()
   set (MIN_EXPR, op_min);
   set (MAX_EXPR, op_max);
   set (MULT_EXPR, op_mult);
-  set (NOP_EXPR, op_cast);
-  set (CONVERT_EXPR, op_cast);
   set (BIT_AND_EXPR, op_bitwise_and);
   set (BIT_IOR_EXPR, op_bitwise_or);
   set (BIT_XOR_EXPR, op_bitwise_xor);
@@ -4784,8 +4760,6 @@ pointer_table::pointer_table ()
   set (MAX_EXPR, op_ptr_min_max);
 
   set (ADDR_EXPR, op_addr);
-  set (NOP_EXPR, op_cast);
-  set (CONVERT_EXPR, op_cast);
 
   set (BIT_NOT_EXPR, op_bitwise_not);
   set (BIT_XOR_EXPR, op_bitwise_xor);
-- 
2.40.1



[COMMITTED 5/15] Unify LE_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Unify the LE_EXPR opcode.

Bootstraps on x86_64-pc-linux-gnu and passes all regression tests. Pushed.

Andrew
From 9de70a61ca83d50c35f73eafaaa7276d8f0ad211 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:30:56 -0400
Subject: [PATCH 05/31] Unify LE_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_le): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_le::fold_range): Rename from foperator_le.
	(operator_le::op1_range): Ditto.
	(float_table::float_table): Remove LE_EXPR.
	* range-op-mixed.h (class operator_le): Combined from integer
	and float files.
	* range-op.cc (op_le): New object.
	(unified_table::unified_table): Add LE_EXPR.
	(class operator_le): Move to range-op-mixed.h.
	(le_op1_op2_relation): Fold into
	operator_le::op1_op2_relation.
	(integral_table::integral_table): Remove LE_EXPR.
	(pointer_table::pointer_table): Remove LE_EXPR.
	* range-op.h (le_op1_op2_relation): Delete.
---
 gcc/range-op-float.cc | 52 +--
 gcc/range-op-mixed.h  | 33 +++
 gcc/range-op.cc   | 39 +++-
 gcc/range-op.h|  1 -
 4 files changed, 56 insertions(+), 69 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 1b0ac9a7fc2..a480f1641d2 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -873,32 +873,10 @@ operator_lt::op2_range (frange ,
   return true;
 }
 
-class foperator_le : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-  using range_operator::op1_op2_relation;
-public:
-  bool fold_range (irange , tree type,
-		   const frange , const frange ,
-		   relation_trio rel = TRIO_VARYING) const final override;
-  relation_kind op1_op2_relation (const irange ) const final override
-  {
-return le_op1_op2_relation (lhs);
-  }
-  bool op1_range (frange , tree type,
-		  const irange , const frange ,
-		  relation_trio rel = TRIO_VARYING) const final override;
-  bool op2_range (frange , tree type,
-		  const irange , const frange ,
-		  relation_trio rel = TRIO_VARYING) const final override;
-} fop_le;
-
 bool
-foperator_le::fold_range (irange , tree type,
-			  const frange , const frange ,
-			  relation_trio rel) const
+operator_le::fold_range (irange , tree type,
+			 const frange , const frange ,
+			 relation_trio rel) const
 {
   if (frelop_early_resolve (r, type, op1, op2, rel, VREL_LE))
 return true;
@@ -916,11 +894,11 @@ foperator_le::fold_range (irange , tree type,
 }
 
 bool
-foperator_le::op1_range (frange ,
-			 tree type,
-			 const irange ,
-			 const frange ,
-			 relation_trio) const
+operator_le::op1_range (frange ,
+			tree type,
+			const irange ,
+			const frange ,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -949,11 +927,11 @@ foperator_le::op1_range (frange ,
 }
 
 bool
-foperator_le::op2_range (frange ,
-			 tree type,
-			 const irange ,
-			 const frange ,
-			 relation_trio) const
+operator_le::op2_range (frange ,
+			tree type,
+			const irange ,
+			const frange ,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -1637,7 +1615,8 @@ public:
   op1_no_nan.clear_nan ();
 if (op2.maybe_isnan ())
   op2_no_nan.clear_nan ();
-if (!fop_le.fold_range (r, type, op1_no_nan, op2_no_nan, rel))
+if (!range_op_handler (LE_EXPR).fold_range (r, type, op1_no_nan,
+		op2_no_nan, rel))
   return false;
 // The result is the same as the ordered version when the
 // comparison is true or when the operands cannot be NANs.
@@ -2765,7 +2744,6 @@ float_table::float_table ()
   // All the relational operators are expected to work, because the
   // calculation of ranges on outgoing edges expect the handlers to be
   // present.
-  set (LE_EXPR, fop_le);
   set (GT_EXPR, fop_gt);
   set (GE_EXPR, fop_ge);
 
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index bc93ab5be06..dd42d98ca49 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -171,4 +171,37 @@ public:
   void update_bitmask (irange , const irange ,
 		   const irange ) const final override;
 };
+
+class operator_le :  public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::op1_op2_relation;
+  bool fold_range (irange , tree type,
+		   const irange , const irange ,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool fold_range (irange , tree type,
+		   const frange , const frange ,
+		   relation_trio rel = TRIO_VARYING) const final override;
+
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio = TRIO_VARYING) const final override;
+  bool op1_range (frange , tree type,
+	

[COMMITTED 7/15] Unify GE_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Unify the GE_EXPR range operator

Bootstraps on x86_64-pc-linux-gnu and passes all regression tests. Pushed.

Andrew
From 364b936b8d82e86c73b2b964d4c8a2c16dcbedf8 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:33:33 -0400
Subject: [PATCH 07/31] Unify GE_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_ge): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_ge::fold_range): Rename from foperator_ge.
	(operator_ge::op1_range): Ditto.
	(float_table::float_table): Remove GE_EXPR.
	* range-op-mixed.h (class operator_ge): Combined from integer
	and float files.
	* range-op.cc (op_ge): New object.
	(unified_table::unified_table): Add GE_EXPR.
	(class operator_ge): Move to range-op-mixed.h.
	(ge_op1_op2_relation): Fold into
	operator_ge::op1_op2_relation.
	(integral_table::integral_table): Remove GE_EXPR.
	(pointer_table::pointer_table): Remove GE_EXPR.
	* range-op.h (ge_op1_op2_relation): Delete.
---
 gcc/range-op-float.cc | 54 +++
 gcc/range-op-mixed.h  | 33 ++
 gcc/range-op.cc   | 39 +++
 gcc/range-op.h|  3 ---
 4 files changed, 55 insertions(+), 74 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 2f090e75245..4faca62c48f 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -1059,32 +1059,10 @@ operator_gt::op2_range (frange ,
   return true;
 }
 
-class foperator_ge : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-  using range_operator::op1_op2_relation;
-public:
-  bool fold_range (irange , tree type,
-		   const frange , const frange ,
-		   relation_trio = TRIO_VARYING) const final override;
-  relation_kind op1_op2_relation (const irange ) const final override
-  {
-return ge_op1_op2_relation (lhs);
-  }
-  bool op1_range (frange , tree type,
-		  const irange , const frange ,
-		  relation_trio = TRIO_VARYING) const final override;
-  bool op2_range (frange , tree type,
-		  const irange , const frange ,
-		  relation_trio = TRIO_VARYING) const final override;
-} fop_ge;
-
 bool
-foperator_ge::fold_range (irange , tree type,
-			  const frange , const frange ,
-			  relation_trio rel) const
+operator_ge::fold_range (irange , tree type,
+			 const frange , const frange ,
+			 relation_trio rel) const
 {
   if (frelop_early_resolve (r, type, op1, op2, rel, VREL_GE))
 return true;
@@ -1102,11 +1080,11 @@ foperator_ge::fold_range (irange , tree type,
 }
 
 bool
-foperator_ge::op1_range (frange ,
-			 tree type,
-			 const irange ,
-			 const frange ,
-			 relation_trio) const
+operator_ge::op1_range (frange ,
+			tree type,
+			const irange ,
+			const frange ,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -1137,10 +1115,10 @@ foperator_ge::op1_range (frange ,
 }
 
 bool
-foperator_ge::op2_range (frange , tree type,
-			 const irange ,
-			 const frange ,
-			 relation_trio) const
+operator_ge::op2_range (frange , tree type,
+			const irange ,
+			const frange ,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -1813,7 +1791,8 @@ public:
   op1_no_nan.clear_nan ();
 if (op2.maybe_isnan ())
   op2_no_nan.clear_nan ();
-if (!fop_ge.fold_range (r, type, op1_no_nan, op2_no_nan, rel))
+if (!range_op_handler (GE_EXPR).fold_range (r, type, op1_no_nan,
+		op2_no_nan, rel))
   return false;
 // The result is the same as the ordered version when the
 // comparison is true or when the operands cannot be NANs.
@@ -2720,11 +2699,6 @@ float_table::float_table ()
   set (OBJ_TYPE_REF, fop_identity);
   set (REAL_CST, fop_identity);
 
-  // All the relational operators are expected to work, because the
-  // calculation of ranges on outgoing edges expect the handlers to be
-  // present.
-  set (GE_EXPR, fop_ge);
-
   set (ABS_EXPR, fop_abs);
   set (NEGATE_EXPR, fop_negate);
   set (PLUS_EXPR, fop_plus);
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 1c68d54b085..d6cd3683932 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -235,4 +235,37 @@ public:
   relation_kind op1_op2_relation (const irange ) const final override;
   void update_bitmask (irange , const irange , const irange ) const;
 };
+
+class operator_ge :  public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::op1_op2_relation;
+  bool fold_range (irange , tree type,
+		   const irange , const irange ,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool fold_range (irange , tree type,
+		   const frange , const frange ,
+		   relation_trio = TRIO_VARYING) const final override;
+
+  bool op1_range 

[COMMITTED 9/15] Unify operator_cst range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches
This patch moves the operator_cst class into the mixed header.  It also sets 
REAL_CST to use it instead, as it has no op1_range routines.



Bootstraps on x86_64-pc-linux-gnu and passes all regression tests. Pushed.

Andrew
From 35a580f09eaceda5b0dd370b1e39fe05ba0a154f Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:37:11 -0400
Subject: [PATCH 09/31] Unify operator_cst range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (operator_cst::fold_range): New.
	* range-op-mixed.h (class operator_cst): Move from integer file.
	* range-op.cc (op_cst): New object.
	(unified_table::unified_table): Add op_cst. Also use for REAL_CST.
	(class operator_cst): Move to range-op-mixed.h.
	(integral_table::integral_table): Remove op_cst.
	(pointer_table::pointer_table): Remove op_cst.
---
 gcc/range-op-float.cc |  7 +++
 gcc/range-op-mixed.h  | 12 
 gcc/range-op.cc   | 16 +++-
 3 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index bc8ecc61bce..11d76f2ef25 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -557,6 +557,13 @@ operator_identity::op1_range (frange , tree, const frange ,
   return true;
 }
 
+bool
+operator_cst::fold_range (frange , tree, const frange ,
+			  const frange &, relation_trio) const
+{
+  r = op1;
+  return true;
+}
 
 bool
 operator_equal::op2_range (frange , tree type,
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index f30f7d019ee..5b7fbe89856 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -292,4 +292,16 @@ public:
   relation_kind rel) const final override;
 };
 
+class operator_cst : public range_operator
+{
+public:
+  using range_operator::fold_range;
+  bool fold_range (irange , tree type,
+		   const irange , const irange ,
+		   relation_trio rel = TRIO_VARYING) const final override;
+  bool fold_range (frange , tree type,
+		   const frange , const frange ,
+		   relation_trio = TRIO_VARYING) const final override;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 70684b4c7f7..31d4e1a1739 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -69,6 +69,7 @@ operator_le op_le;
 operator_gt op_gt;
 operator_ge op_ge;
 operator_identity op_ident;
+operator_cst op_cst;
 
 // Invoke the initialization routines for each class of range.
 
@@ -87,7 +88,8 @@ unified_table::unified_table ()
   set (SSA_NAME, op_ident);
   set (PAREN_EXPR, op_ident);
   set (OBJ_TYPE_REF, op_ident);
-  set (REAL_CST, op_ident);
+  set (REAL_CST, op_cst);
+  set (INTEGER_CST, op_cst);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -4224,16 +4226,6 @@ operator_bitwise_not::op1_range (irange &r, tree type,
 }
 
 
-class operator_cst : public range_operator
-{
-  using range_operator::fold_range;
-public:
-  virtual bool fold_range (irange &r, tree type,
-			   const irange &op1,
-			   const irange &op2,
-			   relation_trio rel = TRIO_VARYING) const;
-} op_integer_cst;
-
 bool
 operator_cst::fold_range (irange &r, tree type ATTRIBUTE_UNUSED,
 			  const irange &op1,
@@ -4758,7 +4750,6 @@ integral_table::integral_table ()
   set (BIT_IOR_EXPR, op_bitwise_or);
   set (BIT_XOR_EXPR, op_bitwise_xor);
   set (BIT_NOT_EXPR, op_bitwise_not);
-  set (INTEGER_CST, op_integer_cst);
   set (ABS_EXPR, op_abs);
   set (NEGATE_EXPR, op_negate);
   set (ADDR_EXPR, op_addr);
@@ -4792,7 +4783,6 @@ pointer_table::pointer_table ()
   set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
 
-  set (INTEGER_CST, op_integer_cst);
   set (ADDR_EXPR, op_addr);
   set (NOP_EXPR, op_cast);
   set (CONVERT_EXPR, op_cast);
-- 
2.40.1



[COMMITTED 8/15] Unify Identity range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches
This unifies the identity operation, which is used by SSA_NAME,
PAREN_EXPR, OBJ_TYPE_REF and REAL_CST.


REAL_CST uses it incorrectly, but this preserves the current
functionality.  There will not be an SSA_NAME in the op1 position, so
there is no point in having an op1_range routine.  That will be
corrected in the next patch.


Bootstraps on x86_64-pc-linux-gnu and passes all regression tests.  Pushed.

Andrew
From 60b00c6f187450e1f3ffac1b64986ae74b8b948b Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:35:24 -0400
Subject: [PATCH 08/31] Unify Identity range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_identity): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_identity::fold_range): Rename from foperator_identity.
	(operator_identity::op1_range): Ditto.
	(float_table::float_table): Remove fop_identity.
	* range-op-mixed.h (class operator_identity): Combined from integer
	and float files.
	* range-op.cc (op_identity): New object.
	(unified_table::unified_table): Add op_identity.
	(class operator_identity): Move to range-op-mixed.h.
	(integral_table::integral_table): Remove identity.
	(pointer_table::pointer_table): Remove identity.
---
 gcc/range-op-float.cc | 40 +++-
 gcc/range-op-mixed.h  | 24 
 gcc/range-op.cc   | 29 +
 3 files changed, 44 insertions(+), 49 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 4faca62c48f..bc8ecc61bce 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -541,27 +541,22 @@ build_gt (frange &r, tree type, const frange &val)
 }
 
 
-class foperator_identity : public range_operator
+bool
+operator_identity::fold_range (frange &r, tree, const frange &op1,
+			       const frange &, relation_trio) const
 {
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-public:
-  bool fold_range (frange &r, tree type ATTRIBUTE_UNUSED,
-		   const frange &op1, const frange &op2 ATTRIBUTE_UNUSED,
-		   relation_trio = TRIO_VARYING) const final override
-  {
-    r = op1;
-    return true;
-  }
-  bool op1_range (frange &r, tree type ATTRIBUTE_UNUSED,
-		  const frange &lhs, const frange &op2 ATTRIBUTE_UNUSED,
-		  relation_trio = TRIO_VARYING) const final override
-  {
-    r = lhs;
-    return true;
-  }
-public:
-} fop_identity;
+  r = op1;
+  return true;
+}
+
+bool
+operator_identity::op1_range (frange &r, tree, const frange &lhs,
+			      const frange &, relation_trio) const
+{
+  r = lhs;
+  return true;
+}
+
 
 bool
 operator_equal::op2_range (frange &r, tree type,
@@ -2694,11 +2689,6 @@ private:
 
 float_table::float_table ()
 {
-  set (SSA_NAME, fop_identity);
-  set (PAREN_EXPR, fop_identity);
-  set (OBJ_TYPE_REF, fop_identity);
-  set (REAL_CST, fop_identity);
-
   set (ABS_EXPR, fop_abs);
   set (NEGATE_EXPR, fop_negate);
   set (PLUS_EXPR, fop_plus);
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index d6cd3683932..f30f7d019ee 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -268,4 +268,28 @@ public:
   void update_bitmask (irange &r, const irange &lh,
 		       const irange &rh) const final override;
 };
+
+class operator_identity : public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::lhs_op1_relation;
+  bool fold_range (irange &r, tree type,
+		   const irange &op1, const irange &op2,
+		   relation_trio rel = TRIO_VARYING) const final override;
+  bool fold_range (frange &r, tree type ATTRIBUTE_UNUSED,
+		   const frange &op1, const frange &op2 ATTRIBUTE_UNUSED,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool op1_range (irange &r, tree type,
+		  const irange &lhs, const irange &op2,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  bool op1_range (frange &r, tree type ATTRIBUTE_UNUSED,
+		  const frange &lhs, const frange &op2 ATTRIBUTE_UNUSED,
+		  relation_trio = TRIO_VARYING) const final override;
+  relation_kind lhs_op1_relation (const irange &lhs,
+				  const irange &op1, const irange &op2,
+				  relation_kind rel) const final override;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index a127da22006..70684b4c7f7 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -68,6 +68,7 @@ operator_lt op_lt;
 operator_le op_le;
 operator_gt op_gt;
 operator_ge op_ge;
+operator_identity op_ident;
 
 // Invoke the initialization routines for each class of range.
 
@@ -83,6 +84,10 @@ unified_table::unified_table ()
   set (LE_EXPR, op_le);
   set (GT_EXPR, op_gt);
   set (GE_EXPR, op_ge);
+  set (SSA_NAME, op_ident);
+  set (PAREN_EXPR, op_ident);
+  set (OBJ_TYPE_REF, op_ident);
+  set (REAL_CST, op_ident);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -4240,26 +4245,6 @@ operator_cst::fold_range (irange &r, tree type ATTRIBUTE_UNUSED,
 }
 
 
-class 

[PATCH 2/15] Unify EQ_EXPR range operator.

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Unify the EQ_EXPR opcode

Bootstraps on x86_64-pc-linux-gnu and passes all regression tests.  Pushed.

Andrew
From 684959c5c058c2368e65c4c308a2cb3e3912782e Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:18:39 -0400
Subject: [PATCH 02/31] Unify EQ_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_equal): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_equal::fold_range): Rename from foperator_equal.
	(operator_equal::op1_range): Ditto.
	(float_table::float_table): Remove EQ_EXPR.
	* range-op-mixed.h (class operator_equal): Combined from integer
	and float files.
	* range-op.cc (op_equal): New object.
	(unified_table::unified_table): Add EQ_EXPR.
	(class operator_equal): Move to range-op-mixed.h.
	(equal_op1_op2_relation): Fold into
	operator_equal::op1_op2_relation.
	(integral_table::integral_table): Remove EQ_EXPR.
	(pointer_table::pointer_table): Remove EQ_EXPR.
	* range-op.h (equal_op1_op2_relation): Delete.
---
 gcc/range-op-float.cc | 41 ---
 gcc/range-op-mixed.h  | 37 +++
 gcc/range-op.cc   | 45 +--
 gcc/range-op.h|  1 -
 4 files changed, 62 insertions(+), 62 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 8659217659c..98636cec8cf 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -563,35 +563,18 @@ public:
 public:
 } fop_identity;
 
-class foperator_equal : public range_operator
+bool
+operator_equal::op2_range (frange &r, tree type,
+			   const irange &lhs, const frange &op1,
+			   relation_trio rel) const
 {
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-  using range_operator::op1_op2_relation;
-public:
-  bool fold_range (irange &r, tree type,
-		   const frange &op1, const frange &op2,
-		   relation_trio = TRIO_VARYING) const final override;
-  relation_kind op1_op2_relation (const irange &lhs) const final override
-  {
-    return equal_op1_op2_relation (lhs);
-  }
-  bool op1_range (frange &r, tree type,
-		  const irange &lhs, const frange &op2,
-		  relation_trio = TRIO_VARYING) const final override;
-  bool op2_range (frange &r, tree type,
-		  const irange &lhs, const frange &op1,
-		  relation_trio rel = TRIO_VARYING) const final override
-  {
-    return op1_range (r, type, lhs, op1, rel.swap_op1_op2 ());
-  }
-} fop_equal;
+  return op1_range (r, type, lhs, op1, rel.swap_op1_op2 ());
+}
 
 bool
-foperator_equal::fold_range (irange &r, tree type,
-			     const frange &op1, const frange &op2,
-			     relation_trio rel) const
+operator_equal::fold_range (irange &r, tree type,
+			    const frange &op1, const frange &op2,
+			    relation_trio rel) const
 {
   if (frelop_early_resolve (r, type, op1, op2, rel, VREL_EQ))
 return true;
@@ -644,7 +627,7 @@ foperator_equal::fold_range (irange &r, tree type,
 }
 
 bool
-foperator_equal::op1_range (frange &r, tree type,
+operator_equal::op1_range (frange &r, tree type,
 			    const irange &lhs,
 			    const frange &op2,
 			    relation_trio trio) const
@@ -2021,7 +2004,8 @@ public:
       op1_no_nan.clear_nan ();
     if (op2.maybe_isnan ())
       op2_no_nan.clear_nan ();
-    if (!fop_equal.fold_range (r, type, op1_no_nan, op2_no_nan, rel))
+    if (!range_op_handler (EQ_EXPR).fold_range (r, type, op1_no_nan,
+						op2_no_nan, rel))
       return false;
 // The result is the same as the ordered version when the
 // comparison is true or when the operands cannot be NANs.
@@ -2819,7 +2803,6 @@ float_table::float_table ()
   // All the relational operators are expected to work, because the
   // calculation of ranges on outgoing edges expect the handlers to be
   // present.
-  set (EQ_EXPR, fop_equal);
   set (NE_EXPR, fop_not_equal);
   set (LT_EXPR, fop_lt);
   set (LE_EXPR, fop_le);
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index a78bc2ba59c..79e2cbd8532 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -75,4 +75,41 @@ relop_early_resolve (irange &r, tree type, const vrange &op1,
   return false;
 }
 
+// -------------------------------------------------------------------------
+//  Mixed Mode Operators.
+// -------------------------------------------------------------------------
+
+class operator_equal : public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::op1_op2_relation;
+  bool fold_range (irange &r, tree type,
+		   const irange &op1, const irange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool fold_range (irange &r, tree type,
+		   const frange &op1, const frange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+
+  bool op1_range (irange &r, tree type,
+		  const irange &lhs, const irange &op2,
+		  relation_trio = TRIO_VARYING) 

[COMMITTED 4/15] Unify LT_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Unify the LT_EXPR opcode

Bootstraps on x86_64-pc-linux-gnu and passes all regression tests.  Pushed.

Andrew
From f7c1366a89edf1ffdd9c495cff544358f2ff395e Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:29:15 -0400
Subject: [PATCH 04/31] Unify LT_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_lt): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_lt::fold_range): Rename from foperator_lt.
	(operator_lt::op1_range): Ditto.
	(float_table::float_table): Remove LT_EXPR.
	* range-op-mixed.h (class operator_lt): Combined from integer
	and float files.
	* range-op.cc (op_lt): New object.
	(unified_table::unified_table): Add LT_EXPR.
	(class operator_lt): Move to range-op-mixed.h.
	(lt_op1_op2_relation): Fold into
	operator_lt::op1_op2_relation.
	(integral_table::integral_table): Remove LT_EXPR.
	(pointer_table::pointer_table): Remove LT_EXPR.
	* range-op.h (lt_op1_op2_relation): Delete.
---
 gcc/range-op-float.cc | 52 +--
 gcc/range-op-mixed.h  | 30 +
 gcc/range-op.cc   | 39 +++-
 gcc/range-op.h|  1 -
 4 files changed, 53 insertions(+), 69 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index ec24167a8c5..1b0ac9a7fc2 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -779,32 +779,10 @@ operator_not_equal::op1_range (frange &r, tree type,
   return true;
 }
 
-class foperator_lt : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-  using range_operator::op1_op2_relation;
-public:
-  bool fold_range (irange &r, tree type,
-		   const frange &op1, const frange &op2,
-		   relation_trio = TRIO_VARYING) const final override;
-  relation_kind op1_op2_relation (const irange &lhs) const final override
-  {
-    return lt_op1_op2_relation (lhs);
-  }
-  bool op1_range (frange &r, tree type,
-		  const irange &lhs, const frange &op2,
-		  relation_trio = TRIO_VARYING) const final override;
-  bool op2_range (frange &r, tree type,
-		  const irange &lhs, const frange &op1,
-		  relation_trio = TRIO_VARYING) const final override;
-} fop_lt;
-
 bool
-foperator_lt::fold_range (irange &r, tree type,
-			  const frange &op1, const frange &op2,
-			  relation_trio rel) const
+operator_lt::fold_range (irange &r, tree type,
+			 const frange &op1, const frange &op2,
+			 relation_trio rel) const
 {
   if (frelop_early_resolve (r, type, op1, op2, rel, VREL_LT))
 return true;
@@ -822,11 +800,11 @@ foperator_lt::fold_range (irange &r, tree type,
 }
 
 bool
-foperator_lt::op1_range (frange &r,
-			 tree type,
-			 const irange &lhs,
-			 const frange &op2,
-			 relation_trio) const
+operator_lt::op1_range (frange &r,
+			tree type,
+			const irange &lhs,
+			const frange &op2,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -859,11 +837,11 @@ foperator_lt::op1_range (frange &r,
 }
 
 bool
-foperator_lt::op2_range (frange &r,
-			 tree type,
-			 const irange &lhs,
-			 const frange &op1,
-			 relation_trio) const
+operator_lt::op2_range (frange &r,
+			tree type,
+			const irange &lhs,
+			const frange &op1,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -1547,7 +1525,8 @@ public:
       op1_no_nan.clear_nan ();
     if (op2.maybe_isnan ())
       op2_no_nan.clear_nan ();
-    if (!fop_lt.fold_range (r, type, op1_no_nan, op2_no_nan, rel))
+    if (!range_op_handler (LT_EXPR).fold_range (r, type, op1_no_nan,
+						op2_no_nan, rel))
       return false;
 // The result is the same as the ordered version when the
 // comparison is true or when the operands cannot be NANs.
@@ -2786,7 +2765,6 @@ float_table::float_table ()
   // All the relational operators are expected to work, because the
   // calculation of ranges on outgoing edges expect the handlers to be
   // present.
-  set (LT_EXPR, fop_lt);
   set (LE_EXPR, fop_le);
   set (GT_EXPR, fop_gt);
   set (GE_EXPR, fop_ge);
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 03a988d9c8a..bc93ab5be06 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -141,4 +141,34 @@ public:
   void update_bitmask (irange &r, const irange &lh,
 		       const irange &rh) const final override;
 };
+
+class operator_lt :  public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::op1_op2_relation;
+  bool fold_range (irange &r, tree type,
+		   const irange &op1, const irange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool fold_range (irange &r, tree type,
+		   const frange &op1, const frange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool op1_range (irange &r, tree type,
+		  const irange &lhs, const irange &op2,
+		  relation_trio = TRIO_VARYING) const final override;
+  bool op1_range 

[COMMITTED 3/15] Unify NE_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Unify the NE_EXPR opcode

Bootstraps on x86_64-pc-linux-gnu and passes all regression tests.  Pushed.

Andrew
From cb409a3b3367109944ff332899ec401dc60f678c Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:25:49 -0400
Subject: [PATCH 03/31] Unify NE_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_not_equal): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_equal::fold_range): Rename from foperator_not_equal.
	(operator_equal::op1_range): Ditto.
	(float_table::float_table): Remove NE_EXPR.
	* range-op-mixed.h (class operator_not_equal): Combined from integer
	and float files.
	* range-op.cc (op_equal): New object.
	(unified_table::unified_table): Add NE_EXPR.
	(class operator_not_equal): Move to range-op-mixed.h.
	(not_equal_op1_op2_relation): Fold into
	operator_not_equal::op1_op2_relation.
	(integral_table::integral_table): Remove NE_EXPR.
	(pointer_table::pointer_table): Remove NE_EXPR.
	* range-op.h (not_equal_op1_op2_relation): Delete.
---
 gcc/range-op-float.cc | 36 +---
 gcc/range-op-mixed.h  | 29 +
 gcc/range-op.cc   | 41 ++---
 gcc/range-op.h|  1 -
 4 files changed, 48 insertions(+), 59 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 98636cec8cf..ec24167a8c5 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -675,28 +675,10 @@ operator_equal::op1_range (frange &r, tree type,
   return true;
 }
 
-class foperator_not_equal : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::op1_op2_relation;
-public:
-  bool fold_range (irange &r, tree type,
-		   const frange &op1, const frange &op2,
-		   relation_trio rel = TRIO_VARYING) const final override;
-  relation_kind op1_op2_relation (const irange &lhs) const final override
-  {
-    return not_equal_op1_op2_relation (lhs);
-  }
-  bool op1_range (frange &r, tree type,
-		  const irange &lhs, const frange &op2,
-		  relation_trio = TRIO_VARYING) const final override;
-} fop_not_equal;
-
 bool
-foperator_not_equal::fold_range (irange &r, tree type,
-				 const frange &op1, const frange &op2,
-				 relation_trio rel) const
+operator_not_equal::fold_range (irange &r, tree type,
+				const frange &op1, const frange &op2,
+				relation_trio rel) const
 {
   if (frelop_early_resolve (r, type, op1, op2, rel, VREL_NE))
 return true;
@@ -750,10 +732,10 @@ foperator_not_equal::fold_range (irange &r, tree type,
 }
 
 bool
-foperator_not_equal::op1_range (frange &r, tree type,
-				const irange &lhs,
-				const frange &op2,
-				relation_trio trio) const
+operator_not_equal::op1_range (frange &r, tree type,
+			       const irange &lhs,
+			       const frange &op2,
+			       relation_trio trio) const
 {
   relation_kind rel = trio.op1_op2 ();
   switch (get_bool_state (r, lhs, type))
@@ -2086,7 +2068,8 @@ public:
       op1_no_nan.clear_nan ();
     if (op2.maybe_isnan ())
       op2_no_nan.clear_nan ();
-    if (!fop_not_equal.fold_range (r, type, op1_no_nan, op2_no_nan, rel))
+    if (!range_op_handler (NE_EXPR).fold_range (r, type, op1_no_nan,
+						op2_no_nan, rel))
       return false;
 // The result is the same as the ordered version when the
 // comparison is true or when the operands cannot be NANs.
@@ -2803,7 +2786,6 @@ float_table::float_table ()
   // All the relational operators are expected to work, because the
   // calculation of ranges on outgoing edges expect the handlers to be
   // present.
-  set (NE_EXPR, fop_not_equal);
   set (LT_EXPR, fop_lt);
   set (LE_EXPR, fop_le);
   set (GT_EXPR, fop_gt);
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 79e2cbd8532..03a988d9c8a 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -112,4 +112,33 @@ public:
 		   const irange ) const final override;
 };
 
+class operator_not_equal : public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::op1_op2_relation;
+  bool fold_range (irange &r, tree type,
+		   const irange &op1, const irange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool fold_range (irange &r, tree type,
+		   const frange &op1, const frange &op2,
+		   relation_trio rel = TRIO_VARYING) const final override;
+
+  bool op1_range (irange &r, tree type,
+		  const irange &lhs, const irange &op2,
+		  relation_trio = TRIO_VARYING) const final override;
+  bool op1_range (frange &r, tree type,
+		  const irange &lhs, const frange &op2,
+		  relation_trio = TRIO_VARYING) const final override;
+
+  bool op2_range (irange &r, tree type,
+		  const irange &lhs, const irange &op1,
+		  relation_trio = TRIO_VARYING) const final override;
+
+  relation_kind op1_op2_relation (const irange &lhs) const final override;
+  void update_bitmask (irange &r, const 

[COMMITTED 1/15] - Provide a unified range-op table.

2023-06-09 Thread Andrew MacLeod via Gcc-patches
With all the operators unified under range_operator, I can now start 
moving them into a unified table rather than have them spread around in 
various type tables.


This patch creates a range_table for the unified operations, and has
checks to ensure that if an operator comes from the unified table, it
does not exist in any other table.  This is a sanity check for the
duration of the transition.  This patch also moves all the low-hanging
fruit: opcodes which exist in only one table are moved directly into the
unified table.


It also introduces range-op-mixed.h, which will contain the class
declarations for any "shared" operators so that each implementation file
can see the decl.  It also contains the common routines from range-op.h
that some of the operators need.


With the next patch, I will begin the transition to the unified table,
one opcode at a time.  By doing this in separate commits, we can easily
isolate any potential problems that show up... not that I expect any,
it's fairly mechanical... but just in case.


This first set will move all the integer/float shared opcodes into the 
header file and into the unified table.  The foperator_* classes will be 
removed and the methods themselves will simply be renamed to implement 
the unified range_operator method.  The code will remain in 
range-op-float.cc for those implementations, and the integral versions 
will remain in range-op.cc.


All the patches bootstrap on x86_64-pc-linux-gnu and pass all
regression tests.  I won't add much more commentary for most of them,
since they are equivalent in functionality.  Pushed.


Andrew
From df392fbd5d13c944479ed00fcb805e6f26d3fd4b Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 12:58:57 -0400
Subject: [PATCH 01/31] Provide a unified range-op table.

Create a table to prepare for unifying all operations into a single table.
Move any operators which only occur in one table to the appropriate
initialization routine.
Provide a mixed header file for range-ops with multiple categories.

	* range-op-float.cc (class float_table): Move to header.
	(float_table::float_table): Move float only operators to...
	(range_op_table::initialize_float_ops): Here.
	* range-op-mixed.h: New.
	* range-op.cc (integral_tree_table, pointer_tree_table): Moved
	to top of file.
	(float_tree_table): Moved from range-op-float.cc.
	(unified_tree_table): New.
	(unified_table::unified_table): New.  Call initialize routines.
	(get_op_handler): Check unified table first.
	(range_op_handler::range_op_handler): Handle no type constructor.
	(integral_table::integral_table): Move integral only operators to...
	(range_op_table::initialize_integral_ops): Here.
	(pointer_table::pointer_table): Move pointer only operators to...
	(range_op_table::initialize_pointer_ops): Here.
	* range-op.h (enum bool_range_state): Move to range-op-mixed.h.
	(get_bool_state): Ditto.
	(empty_range_varying): Ditto.
	(relop_early_resolve): Ditto.
	(class range_op_table): Add new init methods for range types.
	(class integral_table): Move declaration to here.
	(class pointer_table): Move declaration to here.
	(class float_table): Move declaration to here.
---
 gcc/range-op-float.cc | 29 +++---
 gcc/range-op-mixed.h  | 78 +
 gcc/range-op.cc   | 89 +--
 gcc/range-op.h| 89 ---
 4 files changed, 185 insertions(+), 100 deletions(-)
 create mode 100644 gcc/range-op-mixed.h

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index bb10accd78f..8659217659c 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -45,6 +45,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "wide-int.h"
 #include "value-relation.h"
 #include "range-op.h"
+#include "range-op-mixed.h"
 
 // Default definitions for floating point operators.
 
@@ -2807,15 +2808,6 @@ private:
   }
 } fop_div;
 
-// Instantiate a range_op_table for floating point operations.
-class float_table : public range_op_table
-{
-  public:
-    float_table ();
-} global_floating_table;
-
-// Pointer to the float table so the dispatch code can access it.
-range_op_table *floating_tree_table = &global_floating_table;
 
 float_table::float_table ()
 {
@@ -2833,6 +2825,19 @@ float_table::float_table ()
   set (LE_EXPR, fop_le);
   set (GT_EXPR, fop_gt);
   set (GE_EXPR, fop_ge);
+
+  set (ABS_EXPR, fop_abs);
+  set (NEGATE_EXPR, fop_negate);
+  set (PLUS_EXPR, fop_plus);
+  set (MINUS_EXPR, fop_minus);
+  set (MULT_EXPR, fop_mult);
+}
+
+// Initialize any pointer operators to the primary table
+
+void
+range_op_table::initialize_float_ops ()
+{
   set (UNLE_EXPR, fop_unordered_le);
   set (UNLT_EXPR, fop_unordered_lt);
   set (UNGE_EXPR, fop_unordered_ge);
@@ -2841,12 +2846,6 @@ float_table::float_table ()
   set (ORDERED_EXPR, fop_ordered);
   set (UNORDERED_EXPR, fop_unordered);
   set (LTGT_EXPR, fop_ltgt);
-
-  set (ABS_EXPR, fop_abs);
-  set (NEGATE_EXPR, 

[COMMITTED] PR ipa/109886 - Also check type being cast to

2023-06-09 Thread Andrew MacLeod via Gcc-patches
Before casting into an irange, make sure the type being cast into is
also supported by irange.


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 6314d76cf87df92a0f7d0fdd48240283e667998a Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 10:17:59 -0400
Subject: [PATCH 2/2] Also check type being cast to

before casting into an irange, make sure the type being cast into
is also supported.

	PR ipa/109886
	* ipa-prop.cc (ipa_compute_jump_functions_for_edge): Check param
	type as well.
---
 gcc/ipa-prop.cc | 1 +
 1 file changed, 1 insertion(+)

diff --git a/gcc/ipa-prop.cc b/gcc/ipa-prop.cc
index ab6de9f10da..4e9a307ad4d 100644
--- a/gcc/ipa-prop.cc
+++ b/gcc/ipa-prop.cc
@@ -2405,6 +2405,7 @@ ipa_compute_jump_functions_for_edge (struct ipa_func_body_info *fbi,
 		 of this file uses value_range's, which only hold
 		 integers and pointers.  */
 	  && irange::supports_p (TREE_TYPE (arg))
+	  && irange::supports_p (param_type)
 	  && get_range_query (cfun)->range_of_expr (vr, arg)
 	  && !vr.undefined_p ())
 	{
-- 
2.40.1



[COMMITTED] Relocate range_cast to header, and add a generic version.

2023-06-09 Thread Andrew MacLeod via Gcc-patches

This patch moves range_cast into the header file and makes it inlinable.

I also added an assert so that an attempt to cast into an unsupported
type traps.  The cast can't return a value of the correct type in that
case, so the caller needs to be doing something else...


Such as using the new variant of range_cast provided here, which takes a
Value_Range.  This is the malleable range type, and it first sets the
type appropriately.  This will also work for unsupported types, and
will assist with things like float to int casts and vice versa.


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From de03afe3168db7e2eb2a594293c846188a1b5be8 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 31 May 2023 17:02:00 -0400
Subject: [PATCH 1/2] Relocate range_cast to header, and add a generic version.

Make range_cast inlinable by moving it to the header file.
Also trap if the destination is not capable of representing the cast type.
Add a generic version which can change range classes.. ie float to int.

	* range-op.cc (range_cast): Move to...
	* range-op.h (range_cast): Here, and add a generic version.
---
 gcc/range-op.cc | 18 --
 gcc/range-op.h  | 44 +++-
 2 files changed, 43 insertions(+), 19 deletions(-)

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 4d122de3026..44a95b20ffa 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -4929,24 +4929,6 @@ pointer_table::pointer_table ()
   set (BIT_XOR_EXPR, op_bitwise_xor);
 }
 
-// Cast the range in R to TYPE.
-
-bool
-range_cast (vrange &r, tree type)
-{
-  Value_Range tmp (r);
-  Value_Range varying (type);
-  varying.set_varying (type);
-  range_op_handler op (CONVERT_EXPR, type);
-  // Call op_convert, if it fails, the result is varying.
-  if (!op || !op.fold_range (r, type, tmp, varying))
-    {
-      r.set_varying (type);
-      return false;
-    }
-  return true;
-}
-
 #if CHECKING_P
 #include "selftest.h"
 
diff --git a/gcc/range-op.h b/gcc/range-op.h
index 7af58736c3f..2abec3299ef 100644
--- a/gcc/range-op.h
+++ b/gcc/range-op.h
@@ -216,7 +216,49 @@ protected:
   range_operator *m_operator;
 };
 
-extern bool range_cast (vrange &, tree type);
+// Cast the range in R to TYPE if R supports TYPE.
+
+inline bool
+range_cast (vrange &r, tree type)
+{
+  gcc_checking_assert (r.supports_type_p (type));
+  Value_Range tmp (r);
+  Value_Range varying (type);
+  varying.set_varying (type);
+  range_op_handler op (CONVERT_EXPR, type);
+  // Call op_convert, if it fails, the result is varying.
+  if (!op || !op.fold_range (r, type, tmp, varying))
+    {
+      r.set_varying (type);
+      return false;
+    }
+  return true;
+}
+
+// Range cast which is capable of switching range kinds.
+// ie for float to int.
+
+inline bool
+range_cast (Value_Range &r, tree type)
+{
+  Value_Range tmp (r);
+  Value_Range varying (type);
+  varying.set_varying (type);
+
+  // Ensure we are in the correct mode for the call to fold.
+  r.set_type (type);
+
+  range_op_handler op (CONVERT_EXPR, type);
+  // Call op_convert, if it fails, the result is varying.
+  if (!op || !op.fold_range (r, type, tmp, varying))
+    {
+      r.set_varying (type);
+      return false;
+    }
+  return true;
+}
+
+
 extern void wi_set_zero_nonzero_bits (tree type,
 				      const wide_int &, const wide_int &,
 				      wide_int &maybe_nonzero,
-- 
2.40.1



[COMMITTED 3/4] Unify range_operators to one class.

2023-06-08 Thread Andrew MacLeod via Gcc-patches
Range_operator and range_operator_float are two different classes, which
was not the original intent.  This makes generalized dispatch to the
appropriate function more difficult.  The distinction between what is a
float operator and what is an integral operator also blurs when some
methods have multiple types, i.e. casts: INT = FLOAT and FLOAT = INT,
or other mixed operations like INT = FLOAT < FLOAT.


This patch unifies all possible invocation patterns in one
range_operator class.  All the float operators now inherit from
range_operator, and this allows the float table to use the general
range_op_table class instead of re-implementing another kind of table.
This paves the way for the next patch, which provides generalized
dispatch for the various routines from a VRANGE.


There is little functional difference after this patch. Bootstraps on 
x86_64-pc-linux-gnu with no regressions.  Pushed.


Andrew
From e925119d520ac10674ed42faf14955aaf130c03b Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 31 May 2023 12:31:53 -0400
Subject: [PATCH 3/4] Unify range_operators to one class.

Range_operator and range_operator_float are 2 different classes, making
generalized dispatch difficult.  The distinction between what is a float
operator and what is an integral operator also blurs when some methods
have multiple types.  ie, casts : INT = FLOAT and FLOAT = INT

This patch unifies all possible invocation patterns in one class, and
switches the float table to use the general range_op_table.

	* gimple-range-op.cc (cfn_constant_float_p): Change base class.
	(cfn_pass_through_arg1): Adjust using statement.
	(cfn_signbit): Change base class, adjust using statement.
	(cfn_copysign): Ditto.
	(cfn_sqrt): Ditto.
	(cfn_sincos): Ditto.
	* range-op-float.cc (fold_range): Change class to range_operator.
	(rv_fold): Ditto.
	(op1_range): Ditto.
	(op2_range): Ditto.
	(lhs_op1_relation): Ditto.
	(lhs_op2_relation): Ditto.
	(op1_op2_relation): Ditto.
	(foperator_*): Ditto.
	(class float_table): New.  Inherit from range_op_table.
	(floating_tree_table): Change to range_op_table pointer.
	(class floating_op_table): Delete.
	* range-op.cc (operator_equal): Adjust using statement.
	(operator_not_equal): Ditto.
	(operator_lt, operator_le, operator_gt, operator_ge): Ditto.
	(operator_minus, operator_cast): Ditto.
	(operator_bitwise_and, pointer_plus_operator): Ditto.
	(get_float_handle): Change return type.
	* range-op.h (range_operator_float): Delete.  Relocate all methods
	into class range_operator.
	(range_op_handler::m_float): Change type to range_operator.
	(floating_op_table): Delete.
	(floating_tree_table): Change type.
---
 gcc/gimple-range-op.cc |  27 ++---
 gcc/range-op-float.cc  | 222 +++--
 gcc/range-op.cc|  12 ++-
 gcc/range-op.h | 124 +++
 4 files changed, 183 insertions(+), 202 deletions(-)

diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
index 59c47e2074d..293d76402e1 100644
--- a/gcc/gimple-range-op.cc
+++ b/gcc/gimple-range-op.cc
@@ -268,10 +268,10 @@ gimple_range_op_handler::calc_op2 (vrange , const vrange _range,
 // 
 
 // Implement range operator for float CFN_BUILT_IN_CONSTANT_P.
-class cfn_constant_float_p : public range_operator_float
+class cfn_constant_float_p : public range_operator
 {
 public:
-  using range_operator_float::fold_range;
+  using range_operator::fold_range;
   virtual bool fold_range (irange , tree type, const frange ,
 			   const irange &, relation_trio) const
   {
@@ -319,6 +319,7 @@ class cfn_pass_through_arg1 : public range_operator
 {
 public:
   using range_operator::fold_range;
+  using range_operator::op1_range;
   virtual bool fold_range (irange , tree, const irange ,
 			   const irange &, relation_trio) const
   {
@@ -334,11 +335,11 @@ public:
 } op_cfn_pass_through_arg1;
 
 // Implement range operator for CFN_BUILT_IN_SIGNBIT.
-class cfn_signbit : public range_operator_float
+class cfn_signbit : public range_operator
 {
 public:
-  using range_operator_float::fold_range;
-  using range_operator_float::op1_range;
+  using range_operator::fold_range;
+  using range_operator::op1_range;
   virtual bool fold_range (irange , tree type, const frange ,
 			   const irange &, relation_trio) const override
   {
@@ -373,10 +374,10 @@ public:
 } op_cfn_signbit;
 
 // Implement range operator for CFN_BUILT_IN_COPYSIGN
-class cfn_copysign : public range_operator_float
+class cfn_copysign : public range_operator
 {
 public:
-  using range_operator_float::fold_range;
+  using range_operator::fold_range;
   virtual bool fold_range (frange , tree type, const frange ,
 			   const frange , relation_trio) const override
   {
@@ -464,11 +465,11 @@ frange_mpfr_arg1 (REAL_VALUE_TYPE *res_low, REAL_VALUE_TYPE *res_high,
   return true;
 }
 
-class cfn_sqrt : public range_operator_float
+class cfn_sqrt : public range_operator
 {
 public:
-  

[COMMITTED 4/4] Provide a new dispatch mechanism for range-ops.

2023-06-08 Thread Andrew MacLeod via Gcc-patches

This patch introduces a new dispatch mechanism for range_op_handler.

Instead of ad-hoc if-then-elses based on is_a <irange *> and 
is_a <frange *>, the discriminators in class vrange are used for each 
operand to create a triplet, e.g. III for "LHS = Irange, op1 = Irange, 
op2 = Irange", and IFI for "Irange Frange Irange".


These triplets are then used in a switch to dispatch the call to the 
appropriate routine in range_operator for those types.  An added bonus is 
that if a pattern is not recognized, we no longer trap; the dispatch 
routine simply returns the same thing as a default routine does in 
range_operator: either false or VREL_VARYING, depending on the 
signature.


This will make adding additional range types much simpler going 
forward, and alleviates the need to check for supported types before 
invoking routines like fold_range.


As part of the rework, this patch also simplifies range_op_handler to 
now only contain a single pointer to a range_operator instead of 2 
pointers and a flag.  This is enabled by the previous patch, which 
unifies all range operators into one class.


An added bonus is a bit of a compile time improvement for VRP and 
threading, as well as other clients, due to fewer conditional checks at 
dispatch time.  It only amounts to about a 0.75% improvement in those 
passes for the moment... but every little bit helps.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From f36f25792b3cb0b9067f318dd4d5c968f75a5c3d Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 31 May 2023 13:10:31 -0400
Subject: [PATCH 4/4] Provide a new dispatch mechanism for range-ops.

Simplify range_op_handler to have a single range_operator pointer and
provide a more flexible dispatch mechanism for calls via generic vrange
classes.   This is more extensible for adding new classes of range support.
Any unsupported dispatch patterns will simply return FALSE now rather
than generating compile time exceptions, alleviating the need to
constantly check for supported types.

	* gimple-range-op.cc
	(gimple_range_op_handler::gimple_range_op_handler): Adjust.
	(gimple_range_op_handler::maybe_builtin_call): Adjust.
	* gimple-range-op.h (operand1, operand2): Use m_operator.
	* range-op.cc (integral_table, pointer_table): Relocate.
	(get_op_handler): Rename from get_handler and handle all types.
	(range_op_handler::range_op_handler): Relocate.
	(range_op_handler::set_op_handler): Relocate and adjust.
	(range_op_handler::range_op_handler): Relocate.
	(dispatch_trio): New.
	(RO_III, RO_IFI, RO_IFF, RO_FFF, RO_FIF, RO_FII): New consts.
	(range_op_handler::dispatch_kind): New.
	(range_op_handler::fold_range): Relocate and Use new dispatch value.
	(range_op_handler::op1_range): Ditto.
	(range_op_handler::op2_range): Ditto.
	(range_op_handler::lhs_op1_relation): Ditto.
	(range_op_handler::lhs_op2_relation): Ditto.
	(range_op_handler::op1_op2_relation): Ditto.
	(range_op_handler::set_op_handler): Use m_operator member.
	* range-op.h (range_op_handler::operator bool): Use m_operator.
	(range_op_handler::dispatch_kind): New.
	(range_op_handler::m_valid): Delete.
	(range_op_handler::m_int): Delete
	(range_op_handler::m_float): Delete
	(range_op_handler::m_operator): New.
	(range_op_table::operator[]): Relocate from .cc file.
	(range_op_table::set): Ditto.
	* value-range.h (class vrange): Make range_op_handler a friend.
---
 gcc/gimple-range-op.cc |  84 +++-
 gcc/gimple-range-op.h  |   4 +-
 gcc/range-op.cc| 470 ++---
 gcc/range-op.h |  27 ++-
 gcc/value-range.h  |   1 +
 5 files changed, 306 insertions(+), 280 deletions(-)

diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
index 293d76402e1..b6b10e47b78 100644
--- a/gcc/gimple-range-op.cc
+++ b/gcc/gimple-range-op.cc
@@ -144,7 +144,7 @@ gimple_range_op_handler::gimple_range_op_handler (gimple *s)
   if (type)
 set_op_handler (code, type);
 
-  if (m_valid)
+  if (m_operator)
 switch (gimple_code (m_stmt))
   {
 	case GIMPLE_COND:
@@ -152,7 +152,7 @@ gimple_range_op_handler::gimple_range_op_handler (gimple *s)
 	  m_op2 = gimple_cond_rhs (m_stmt);
 	  // Check that operands are supported types.  One check is enough.
 	  if (!Value_Range::supports_type_p (TREE_TYPE (m_op1)))
-	m_valid = false;
+	m_operator = NULL;
 	  return;
 	case GIMPLE_ASSIGN:
 	  m_op1 = gimple_range_base_of_assignment (m_stmt);
@@ -171,7 +171,7 @@ gimple_range_op_handler::gimple_range_op_handler (gimple *s)
 	m_op2 = gimple_assign_rhs2 (m_stmt);
 	  // Check that operands are supported types.  One check is enough.
 	  if ((m_op1 && !Value_Range::supports_type_p (TREE_TYPE (m_op1
-	m_valid = false;
+	m_operator = NULL;
 	  return;
 	default:
 	  gcc_unreachable ();
@@ -1193,7 +1193,6 @@ gimple_range_op_handler::maybe_non_standard ()
   {
 	case WIDEN_MULT_EXPR:
 	{
-	  m_valid = false;
 	  m_op1 = gimple_assign_rhs1 (m_stmt);
 	  m_op2 = gimple_assign_rhs2 

[COMMITTED 2/4] - Remove tree_code from range-operator.

2023-06-08 Thread Andrew MacLeod via Gcc-patches
Range_operator had a tree code added last release to facilitate bitmask 
operations.  This was intended to be a temporary change until we could 
figure out something more strategic going forward.


This patch removes the tree_code and replaces it with a virtual routine 
to perform the masking. Each of the affected tree codes operators now 
call the bitmask routine via a virtual function.  At some point we may 
want to consolidate the code that CCP is using so that it resides in the 
range_operator, but the extensive parameter list used by that CCP 
routine makes that prohibitive to do at the moment.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew


From c5065669a36ba0c26841cb108d32f03058757e85 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 31 May 2023 10:55:28 -0400
Subject: [PATCH 2/4] Remove tree_code from range-operator.

Range_operator had a tree code added last release to facilitate
bitmask operations.  This removes the tree_code and replaces it with a
virtual routine to perform the masking.  Remove any duplicate instances
which are no longer needed.

	* range-op.cc (range_operator::fold_range): Call virtual routine.
	(range_operator::update_bitmask): New.
	(operator_equal::update_bitmask): New.
	(operator_not_equal::update_bitmask): New.
	(operator_lt::update_bitmask): New.
	(operator_le::update_bitmask): New.
	(operator_gt::update_bitmask): New.
	(operator_ge::update_bitmask): New.
	(operator_plus::update_bitmask): New.
	(operator_minus::update_bitmask): New.
	(operator_pointer_diff::update_bitmask): New.
	(operator_min::update_bitmask): New.
	(operator_max::update_bitmask): New.
	(operator_mult::update_bitmask): New.
	(operator_div::operator_div): New.
	(operator_div::update_bitmask): New.
	(operator_div::m_code): New member.
	(operator_exact_divide::operator_exact_divide): New constructor.
	(operator_lshift::update_bitmask): New.
	(operator_rshift::update_bitmask): New.
	(operator_bitwise_and::update_bitmask): New.
	(operator_bitwise_or::update_bitmask): New.
	(operator_bitwise_xor::update_bitmask): New.
	(operator_trunc_mod::update_bitmask): New.
	(op_ident, op_unknown, op_ptr_min_max): New.
	(op_nop, op_convert): Delete.
	(op_ssa, op_paren, op_obj_type): Delete.
	(op_realpart, op_imagpart): Delete.
	(op_ptr_min, op_ptr_max): Delete.
	(pointer_plus_operator::update_bitmask): New.
	(range_op_table::set): Do not use m_code.
	(integral_table::integral_table): Adjust to single instances.
	* range-op.h (range_operator::range_operator): Delete.
	(range_operator::m_code): Delete.
	(range_operator::update_bitmask): New.
---
 gcc/range-op.cc | 110 +---
 gcc/range-op.h  |   6 +--
 2 files changed, 79 insertions(+), 37 deletions(-)

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 3ab2c665901..2deca3bac93 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -286,7 +286,7 @@ range_operator::fold_range (irange , tree type,
 	break;
 	}
   op1_op2_relation_effect (r, type, lh, rh, rel);
-  update_known_bitmask (r, m_code, lh, rh);
+  update_bitmask (r, lh, rh);
   return true;
 }
 
@@ -298,7 +298,7 @@ range_operator::fold_range (irange , tree type,
   wi_fold_in_parts (r, type, lh.lower_bound (), lh.upper_bound (),
 			rh.lower_bound (), rh.upper_bound ());
   op1_op2_relation_effect (r, type, lh, rh, rel);
-  update_known_bitmask (r, m_code, lh, rh);
+  update_bitmask (r, lh, rh);
   return true;
 }
 
@@ -316,12 +316,12 @@ range_operator::fold_range (irange , tree type,
 	if (r.varying_p ())
 	  {
 	op1_op2_relation_effect (r, type, lh, rh, rel);
-	update_known_bitmask (r, m_code, lh, rh);
+	update_bitmask (r, lh, rh);
 	return true;
 	  }
   }
   op1_op2_relation_effect (r, type, lh, rh, rel);
-  update_known_bitmask (r, m_code, lh, rh);
+  update_bitmask (r, lh, rh);
   return true;
 }
 
@@ -387,6 +387,14 @@ range_operator::op1_op2_relation_effect (irange _range ATTRIBUTE_UNUSED,
   return false;
 }
 
+// Apply any known bitmask updates based on this operator.
+
+void
+range_operator::update_bitmask (irange &, const irange &,
+   const irange &) const
+{
+}
+
 // Create and return a range from a pair of wide-ints that are known
 // to have overflowed (or underflowed).
 
@@ -562,6 +570,8 @@ public:
 			  const irange ,
 			  relation_trio = TRIO_VARYING) const;
   virtual relation_kind op1_op2_relation (const irange ) const;
+  void update_bitmask (irange , const irange , const irange ) const
+{ update_known_bitmask (r, EQ_EXPR, lh, rh); }
 } op_equal;
 
 // Check if the LHS range indicates a relation between OP1 and OP2.
@@ -682,6 +692,8 @@ public:
 			  const irange ,
 			  relation_trio = TRIO_VARYING) const;
   virtual relation_kind op1_op2_relation (const irange ) const;
+  void update_bitmask (irange , const irange , const irange ) const
+{ update_known_bitmask (r, NE_EXPR, lh, rh); }
 } op_not_equal;
 

[COMMITTED 1/4] Fix floating point bug in fold_range.

2023-06-08 Thread Andrew MacLeod via Gcc-patches
We currently do not have any floating point operators where operand 1 is 
a different type than the LHS.  When we eventually do, there is a bug in 
fold_range: if either operand is a known NAN, it returns a NAN of the 
type of operand 1 instead of the result type.


This patch sets it to the correct type.

Bootstraps on build-x86_64-pc-linux-gnu with no regressions. Pushed.

Andrew

From ff0ef34aa04f7767933541f58f016600a3462c84 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 7 Jun 2023 14:03:35 -0400
Subject: [PATCH 1/4] Fix floating point bug in fold_range.

We currently do not have any floating point operators where operand 1 is
a different type than the LHS. When we eventually do there is a bug
in fold_range. If either operand is a known NAN, it returns a NAN
of the type of operand 1 instead of the result type.

	* range-op-float.cc (range_operator_float::fold_range): Return
	NAN of the result type.
---
 gcc/range-op-float.cc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index a99a6b01ed8..af598b60a79 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -57,7 +57,7 @@ range_operator_float::fold_range (frange , tree type,
 return true;
   if (op1.known_isnan () || op2.known_isnan ())
 {
-  r.set_nan (op1.type ());
+  r.set_nan (type);
   return true;
 }
 
-- 
2.40.1



[RFC] range-op restructuring

2023-06-01 Thread Andrew MacLeod via Gcc-patches
With the addition of floating point ranges, we did a lot of additional 
class abstraction, then added a bunch more routines for floating point. 
We didn't know how it would look in the end, so we just marched forward 
and got it working.


Now that has settled down a bit, and before we go and add more kinds of 
ranges, I want to visit restructuring the files and provide a better 
dispatch to the range operators from a vrange.


We currently dispatch based on the type of the statement: int or 
float.  The line blurs heavily when we have statements that involve 
more than one kind of range, e.g.


int_value = (int) float_value
  vs
float_value = (float) int_value

Under the current regime, both kinds of casts have to go into the float 
table, and this is going to get more complicated if we add more 
distinct kinds of ranges.  With the current implementation, the floating 
point range operators don't even inherit from range_operator; they are 
their own kind of operator.   The ideal situation is to have a single 
unified range-operator class which has all the combinations, and they 
rest in a single table.  This simplifies numerous things and avoids us 
having to classify anything in some arbitrary way.  It also moves us back 
in the direction of the original vision I had for range-ops.


I've done an initial rough conversion so you can see what it looks like. 
I've attached a new range-op.h which shows class range_operator with all 
the virtual function combinations.  The new dispatch mechanism buys us 
about 1% speedup in both VRP and jump threading.  The new mechanism 
also handles unsupported combinations of operands smoothly, simply 
returning false when an unsupported combination of parameters is 
invoked, which is what a default routine would do.


As for conversion, let's take operator_not_equal as an example.  The 
end result in range-operator.h is:


class operator_not_equal : public range_operator
{
public:
  bool fold_range (irange &r, tree type, const irange &op1,
		   const irange &op2, relation_trio = TRIO_VARYING)
    const final override;
  bool fold_range (irange &r, tree type, const frange &op1,
		   const frange &op2, relation_trio = TRIO_VARYING)
    const final override;

  bool op1_range (irange &r, tree type, const irange &lhs,
		  const irange &op2, relation_trio = TRIO_VARYING)
    const final override;
  bool op1_range (frange &r, tree type, const irange &lhs,
		  const frange &op2, relation_trio = TRIO_VARYING)
    const final override;

  bool op2_range (frange &r, tree type, const irange &lhs,
		  const frange &op1, relation_trio rel = TRIO_VARYING)
    const final override;
  bool op2_range (irange &r, tree type, const irange &lhs,
		  const irange &op1, relation_trio = TRIO_VARYING)
    const final override;

  relation_kind op1_op2_relation (const irange &lhs) const;
  void update_bitmask (irange &r, const irange &lh, const irange &rh)
    const;
};
extern operator_not_equal rop_NE_EXPR;

When we add a new range type, such as pointers, you simply add the 
required prototypes, add new dispatch codes, and implement them.


This is going to cause some churn, but I am trying to keep it to a 
minimum.  I've been mucking about with it for a couple of weeks, and I 
was thinking to structure it something like this:


range-op.h and range-op.cc will have the base range_operator class, 
along with the range_op_handler class.  We move all the class headers, 
like the above operator_not_equal class, into "range-operator.h".


Where the code goes is the biggest struggle. Initially I was going to 
put it all in one file. This would be best as it allows us to co-locate 
all the code for various classes of routines. But that would already be 
about 8000 lines for int and float combined, and will only get larger 
with new range types. I also considered just including all the 
range-op-int.cc and range-op-float.cc files into range-op.cc when it 
compiles, but you still end up with a big compilation unit.  So this is 
what I'm thinking now:


We leave all the existing floating point code in range-op-float.cc, and 
then move all the existing integer code into a range-op-int.cc file (or 
I suppose we could even leave it in range-op.cc to avoid extra churn).  
The classes must move to a header to be accessible from the various 
files which implement them.  This provides for minimal churn: a few 
deletes and renames, and that's it.  I've attached the diff which moves 
operator_not_equal to this form (once some other structuring is in place).


The plus is you can see in the header file range-operator.h exactly what 
is available for any opcode.  What's less than ideal is that some of the 
routines are in range-op.cc and some are in range-op-float.cc. 
Ultimately, I don't think that's such a big deal, as those floating 
point routines often require common infrastructure, such as NaN 
querying, that integer things don't need.  When we add, say, a pointer 
range class, we would create range-op-pointer.cc and all the new stuff 
required for prange would go in that file.


That is the direction I am headed to 

Re: [COMMITTED 4/4] - Gimple range PHI analyzer and testcases

2023-05-25 Thread Andrew MacLeod via Gcc-patches



On 5/25/23 03:03, Richard Biener wrote:

On Wed, May 24, 2023 at 11:21 PM Andrew MacLeod via Gcc-patches


   There is about a 1.5% slowdown to VRP to invoke and utilize the
analyzer in all 3 passes of VRP.  overall compile time is 0.06% slower.

Bootstraps on x86_64-pc-linux-gnu  with no regressions.  Pushed.
Hm.  What I've noticed the last time looking at how ranger deals
with PHIs is that it diverts to SCEV analysis for all of them but
it could restrict itself to analyze PHIs in loop headers
(bb->loop_father->header == bb).  That only handles natural
loops of course but that was good enough for the old VRP implementation.
That might also help to keep the PHI analyzer leaner with fewer entries.




I've only quickly looked at the PHI analyzer and I failed to understand
how you discover cycles.  I'm pointing you to the SCC value-numbering
cycle finding which you can find for example on the GCC 7 branch
(it's gone for quite some time) in tree-ssa-sccvn.c:DFS - that collects
strongly connected SSA components (it walks all uses, you probably
want to ignore virtuals).  SCEV also has its own cycle finding
(well, sort of) with the scev_dfs class and it restricts itself to
operations it handles (so it's more close to what you do).

I fear you're developing sth very ad-hoc here.


Not something Ad-hoc in this compiler!

This is primarily an initial value estimator.  There is no attempt to do 
any loop analysis or anything like that.


It doesn't look for cycles per se, merely PHI nodes which feed each 
other and are modified in a straightforward way, i.e. initialized on one 
edge and modified via one statement that we can then look at to decide 
how it affects the range of all the PHI nodes.  This can eventually be 
changed to a sequence of a few statements, but one gets us started with 
the simple cases.  All the rest of the PHI arguments come from PHI nodes 
and share the same value.  This can allow us to project a range which is 
better than VARYING.  SCEV doesn't seem to help much in these cases.


It's pretty straightforward, which is why it isn't much code; it's all 
handled in phi_analyzer::process_phi().  Add the PHI node to the 
worklist and examine each argument: if it is a PHI def, add it to the 
worklist if it hasn't been processed; otherwise, it's an external input 
to the group, and bail if we get more than 2 of these.


Andrew






[COMMITTED 2/4] - Make ssa_cache a range_query.

2023-05-24 Thread Andrew MacLeod via Gcc-patches
By having an ssa_cache inherit from a range_query, and then providing a 
range_of_expr routine which returns the current global value, we open up 
the possibility of folding statements and doing other interesting things 
with an ssa-cache.


In particular, you can now call fold_range()  with an ssa-range cache 
and fold a stmt by retrieving the values which are stored in the cache.


This patch also provides a ranger object with a const_query() method 
which allows access to the current global ranges ranger knows, for 
folding.  There are times where we use get_global_range_query(), but 
we'd actually get more accurate results if we have a ranger and use 
const_query().  const_query should be a superset of what 
get_global_range_query knows.


There is 0 performance impact.

Bootstraps on x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew


From be6e6b93cc5d42a09a1f2be26dfdf7e3f897d296 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 24 May 2023 09:06:26 -0400
Subject: [PATCH 2/4] Make ssa_cache a range_query.

By providing range_of_expr as a range_query, we can fold and do other
interesting things using values from the global table.  Make ranger's
known globals available via const_query.

	* gimple-range-cache.cc (ssa_cache::range_of_expr): New.
	* gimple-range-cache.h (class ssa_cache): Inherit from range_query.
	(ranger_cache::const_query): New.
	* gimple-range.cc (gimple_ranger::const_query): New.
	* gimple-range.h (gimple_ranger::const_query): New prototype.
---
 gcc/gimple-range-cache.cc | 14 ++
 gcc/gimple-range-cache.h  |  5 -
 gcc/gimple-range.cc   |  8 
 gcc/gimple-range.h|  1 +
 4 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index f25abaffd34..52165d2405b 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -545,6 +545,20 @@ ssa_cache::~ssa_cache ()
   delete m_range_allocator;
 }
 
+// Enable a query to evaluate statements/ranges based on picking up ranges
+// from just an ssa-cache.
+
+bool
+ssa_cache::range_of_expr (vrange &r, tree expr, gimple *stmt)
+{
+  if (!gimple_range_ssa_p (expr))
+return get_tree_range (r, expr, stmt);
+
+  if (!get_range (r, expr))
+gimple_range_global (r, expr, cfun);
+  return true;
+}
+
 // Return TRUE if the global range of NAME has a cache entry.
 
 bool
diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index 4fc98230430..afcf8d7de7b 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -52,7 +52,7 @@ private:
 // has been visited during this incarnation.  Once the ranger evaluates
 // a name, it is typically not re-evaluated again.
 
-class ssa_cache
+class ssa_cache : public range_query
 {
 public:
   ssa_cache ();
@@ -63,6 +63,8 @@ public:
   virtual void clear_range (tree name);
   virtual void clear ();
   void dump (FILE *f = stderr);
+  virtual bool range_of_expr (vrange &r, tree expr, gimple *stmt);
+
 protected:
   vec m_tab;
   vrange_allocator *m_range_allocator;
@@ -103,6 +105,7 @@ public:
   bool get_global_range (vrange , tree name) const;
   bool get_global_range (vrange , tree name, bool _p);
   void set_global_range (tree name, const vrange , bool changed = true);
+  range_query &const_query () { return m_globals; }
 
   void propagate_updated_value (tree name, basic_block bb);
 
diff --git a/gcc/gimple-range.cc b/gcc/gimple-range.cc
index 4fae3f95e6a..01e62d3ff39 100644
--- a/gcc/gimple-range.cc
+++ b/gcc/gimple-range.cc
@@ -70,6 +70,14 @@ gimple_ranger::~gimple_ranger ()
   m_stmt_list.release ();
 }
 
+// Return a range_query which accesses just the known global values.
+
+range_query &
+gimple_ranger::const_query ()
+{
+  return m_cache.const_query ();
+}
+
 bool
 gimple_ranger::range_of_expr (vrange , tree expr, gimple *stmt)
 {
diff --git a/gcc/gimple-range.h b/gcc/gimple-range.h
index e3aa9475f5e..6587e4923ff 100644
--- a/gcc/gimple-range.h
+++ b/gcc/gimple-range.h
@@ -64,6 +64,7 @@ public:
   bool fold_stmt (gimple_stmt_iterator *gsi, tree (*) (tree));
   void register_inferred_ranges (gimple *s);
   void register_transitive_inferred_ranges (basic_block bb);
+  range_query &const_query ();
 protected:
   bool fold_range_internal (vrange , gimple *s, tree name);
   void prefill_name (vrange , tree name);
-- 
2.40.1



[COMMITTED 4/4] - Gimple range PHI analyzer and testcases

2023-05-24 Thread Andrew MacLeod via Gcc-patches

This patch provides the framework for a gimple-range phi analyzer.

Currently, the primary purpose is to give better initial values for 
members of a "PHI group".


A PHI group is defined as a group of PHI nodes whose arguments are all 
either members of the same PHI group, or one of 2 other values:

 - An initializer (typically, but not necessarily, a constant),
 - A modifier, which is always of the form:   member_ssa = member_ssa 
OP op2


When the analyzer finds a group which matches this pattern, it tries to 
evaluate the modifier using the initial value and project a range for 
the entire group.


This initial version is fairly simplistic.  It looks for 2 things:

1) If there is a relation between the LHS and the other ssa_name in the 
modifier, then we can project a range, e.g. for

    a_3 = a_2 + 1
if there is a relation generated by the stmt which says a_3 > a_2, and 
the initial value is 0, we can project a range of [0, +INF], as the 
modifier will cause the value to always increase and not wrap.


Likewise, for a_3 = a_2 - 1,  we can project a range of [-INF, 0] based 
on the "<" relationship between a_3 and a_2.


2) If there is no relationship, then we use the initial range and 
"simulate" the modifier statement a set number of times looking to see 
if the value converges.
Currently I have arbitrarily hard-coded 10 attempts, but I intend to 
change this down the road with a --param, as well as to perhaps 
influence it with any known values from SCEV regarding known iterations 
of the loop, and possibly change it based on optimization levels.


I also suspect something like one more than the number of bits in the 
type might help with any bitmasking tricks.


There are a lot of additional things we can do to enhance this, but this 
framework provides a start.  These 2 initial evaluations fix PR 107822 
and part of PR 107986.


There is about a 1.5% slowdown in VRP to invoke and utilize the 
analyzer in all 3 passes of VRP.  Overall compile time is 0.06% slower.


Bootstraps on x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew




From 64e844c1182198e49d33f9fa138b9a782371225d Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 24 May 2023 09:52:26 -0400
Subject: [PATCH 4/4] Gimple range PHI analyzer and testcases

Provide a PHI analyzer framework to provide better initial values for
PHI nodes which form groups with initial values and single statements
which modify the PHI values in some predictable way.

	PR tree-optimization/107822
	PR tree-optimization/107986
	gcc/
	* Makefile.in (OBJS): Add gimple-range-phi.o.
	* gimple-range-cache.h (ranger_cache::m_estimate): New
	phi_analyzer pointer member.
	* gimple-range-fold.cc (fold_using_range::range_of_phi): Use
	phi_analyzer if no loop info is available.
	* gimple-range-phi.cc: New file.
	* gimple-range-phi.h: New file.
	* tree-vrp.cc (execute_ranger_vrp): Utilize a phi_analyzer.

	gcc/testsuite/
	* gcc.dg/pr107822.c: New.
	* gcc.dg/pr107986-1.c: New.
---
 gcc/Makefile.in   |   1 +
 gcc/gimple-range-cache.h  |   2 +
 gcc/gimple-range-fold.cc  |  27 ++
 gcc/gimple-range-phi.cc   | 518 ++
 gcc/gimple-range-phi.h| 109 +++
 gcc/testsuite/gcc.dg/pr107822.c   |  20 ++
 gcc/testsuite/gcc.dg/pr107986-1.c |  16 +
 gcc/tree-vrp.cc   |   7 +-
 8 files changed, 699 insertions(+), 1 deletion(-)
 create mode 100644 gcc/gimple-range-phi.cc
 create mode 100644 gcc/gimple-range-phi.h
 create mode 100644 gcc/testsuite/gcc.dg/pr107822.c
 create mode 100644 gcc/testsuite/gcc.dg/pr107986-1.c

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index bb63b5c501d..1d39e6dd3f8 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1454,6 +1454,7 @@ OBJS = \
 	gimple-range-gori.o \
 	gimple-range-infer.o \
 	gimple-range-op.o \
+	gimple-range-phi.o \
 	gimple-range-trace.o \
 	gimple-ssa-backprop.o \
 	gimple-ssa-isolate-paths.o \
diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index afcf8d7de7b..93d16294d2e 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -23,6 +23,7 @@ along with GCC; see the file COPYING3.  If not see
 
 #include "gimple-range-gori.h" 
 #include "gimple-range-infer.h"
+#include "gimple-range-phi.h"
 
 // This class manages a vector of pointers to ssa_block ranges.  It
 // provides the basis for the "range on entry" cache for all
@@ -136,6 +137,7 @@ private:
   void exit_range (vrange , tree expr, basic_block bb, enum rfd_mode);
   bool edge_range (vrange , edge e, tree name, enum rfd_mode);
 
+  phi_analyzer *m_estimate;
   vec m_workback;
   class update_list *m_update;
 };
diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 4df065c8a6e..173d9f386c5 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -934,6 +934,7 @@ fold_using_range::range_of_phi (vrange , gphi *phi, fur_source )
 	  }
   }
 
+  bool loop_info_p = false;
   // If SCEV is available, query if this PHI has any 

[COMMITTED 3/4] Provide relation queries for a stmt.

2023-05-24 Thread Andrew MacLeod via Gcc-patches
This tweaks some of the fold_stmt routines and helpers, in particular 
the ones to which you provide a vector of ranges to satisfy any ssa-names.


Previously, once the vector was depleted, any remaining values were 
picked up from the default get_global_range_query() query. It is useful 
to be able to specify your own range_query to these routines, as most 
of the other fold_stmt routines allow.


This patch changes it so the default doesn't change, but you can 
optionally specify your own range_query to the routines.


It also provides a new routine:

    relation_trio fold_relations (gimple *s, range_query *q)

which, instead of folding a stmt, returns a relation trio based on 
folding the stmt with the range_query.  The relation trio will let you 
know if the statement causes a relation between LHS-OP1, LHS-OP2, or 
OP1-OP2...  so for something like

   a_3 = b_4 + 6
based on known ranges and types, we might get back (LHS  > OP1)

It just provides  a generic interface into what relations a statement 
may provide based on what a range_query returns for values and the stmt 
itself.
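To illustrate the idea with a standalone sketch (invented names, not the GCC API): a trio entry like LHS > OP1 for `a_3 = b_4 + 6` can be derived purely from the queried range of the operand and the statement itself, provided the addition cannot wrap.

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Hypothetical sketch, not the GCC implementation: derive the LHS/OP1
// relation for "lhs = op1 + c" from a queried range [lo, hi] of op1,
// the way fold_relations () derives a relation trio from a range_query.
enum relation_kind { REL_UNKNOWN, REL_GT, REL_LT, REL_EQ };

relation_kind
lhs_op1_relation (int64_t lo, int64_t hi, int64_t c)
{
  // The relation only holds if op1 + c cannot wrap for any value in range.
  if (c > 0 && hi > std::numeric_limits<int64_t>::max () - c)
    return REL_UNKNOWN;
  if (c < 0 && lo < std::numeric_limits<int64_t>::min () - c)
    return REL_UNKNOWN;
  if (c > 0)
    return REL_GT;   // lhs = op1 + c > op1
  if (c < 0)
    return REL_LT;
  return REL_EQ;     // c == 0
}
```

The important point is that no IL beyond the statement and the range query is consulted; a narrower range for op1 simply rules out the overflow case and lets the relation be registered.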


There is no performance impact.

Bootstraps on x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew


From 933e14dc613269641ffe3613bf4792ac50590275 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 24 May 2023 09:17:32 -0400
Subject: [PATCH 3/4] Provide relation queries for a stmt.

Allow fur_list and fold_stmt to be provided a range_query rather than
always defaulting to NULL (which becomes a global query).
Also provide a fold_relations () routine which can provide a range_trio
for an arbitrary statement using any range_query

	* gimple-range-fold.cc (fur_list::fur_list): Add range_query param
	to constructors.
	(fold_range): Add range_query parameter.
	(fur_relation::fur_relation): New.
	(fur_relation::trio): New.
	(fur_relation::register_relation): New.
	(fold_relations): New.
	* gimple-range-fold.h (fold_range): Adjust prototypes.
	(fold_relations): New.
---
 gcc/gimple-range-fold.cc | 128 +++
 gcc/gimple-range-fold.h  |  11 +++-
 2 files changed, 124 insertions(+), 15 deletions(-)

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 96cbd799488..4df065c8a6e 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -214,9 +214,9 @@ fur_depend::register_relation (edge e, relation_kind k, tree op1, tree op2)
 class fur_list : public fur_source
 {
 public:
-  fur_list (vrange );
-  fur_list (vrange , vrange );
-  fur_list (unsigned num, vrange **list);
+  fur_list (vrange , range_query *q = NULL);
+  fur_list (vrange , vrange , range_query *q = NULL);
+  fur_list (unsigned num, vrange **list, range_query *q = NULL);
   virtual bool get_operand (vrange , tree expr) override;
   virtual bool get_phi_operand (vrange , tree expr, edge e) override;
 private:
@@ -228,7 +228,7 @@ private:
 
 // One range supplied for unary operations.
 
-fur_list::fur_list (vrange ) : fur_source (NULL)
+fur_list::fur_list (vrange , range_query *q) : fur_source (q)
 {
   m_list = m_local;
   m_index = 0;
@@ -238,7 +238,7 @@ fur_list::fur_list (vrange ) : fur_source (NULL)
 
 // Two ranges supplied for binary operations.
 
-fur_list::fur_list (vrange , vrange ) : fur_source (NULL)
+fur_list::fur_list (vrange , vrange , range_query *q) : fur_source (q)
 {
   m_list = m_local;
   m_index = 0;
@@ -249,7 +249,8 @@ fur_list::fur_list (vrange , vrange ) : fur_source (NULL)
 
 // Arbitrary number of ranges in a vector.
 
-fur_list::fur_list (unsigned num, vrange **list) : fur_source (NULL)
+fur_list::fur_list (unsigned num, vrange **list, range_query *q)
+  : fur_source (q)
 {
   m_list = list;
   m_index = 0;
@@ -278,20 +279,20 @@ fur_list::get_phi_operand (vrange , tree expr, edge e ATTRIBUTE_UNUSED)
 // Fold stmt S into range R using R1 as the first operand.
 
 bool
-fold_range (vrange , gimple *s, vrange )
+fold_range (vrange , gimple *s, vrange , range_query *q)
 {
   fold_using_range f;
-  fur_list src (r1);
+  fur_list src (r1, q);
   return f.fold_stmt (r, s, src);
 }
 
 // Fold stmt S into range R using R1  and R2 as the first two operands.
 
 bool
-fold_range (vrange , gimple *s, vrange , vrange )
+fold_range (vrange , gimple *s, vrange , vrange , range_query *q)
 {
   fold_using_range f;
-  fur_list src (r1, r2);
+  fur_list src (r1, r2, q);
   return f.fold_stmt (r, s, src);
 }
 
@@ -299,10 +300,11 @@ fold_range (vrange , gimple *s, vrange , vrange )
 // operands encountered.
 
 bool
-fold_range (vrange , gimple *s, unsigned num_elements, vrange **vector)
+fold_range (vrange , gimple *s, unsigned num_elements, vrange **vector,
+	range_query *q)
 {
   fold_using_range f;
-  fur_list src (num_elements, vector);
+  fur_list src (num_elements, vector, q);
   return f.fold_stmt (r, s, src);
 }
 
@@ -326,6 +328,108 @@ fold_range (vrange , gimple *s, edge on_edge, range_query *q)
   return f.fold_stmt (r, s, src);
 }
 
+// Provide a fur_source which can be used 

[COMMITTED 1/4] - Make ssa_cache and ssa_lazy_cache virtual.

2023-05-24 Thread Andrew MacLeod via Gcc-patches
I originally implemented the lazy ssa cache by inheriting from an 
ssa_cache in protected mode and providing the required routines. This 
makes it a little awkward to do various things, and they also become not 
quite as interchangeable as I'd like.   Making the routines virtual and 
using proper inheritance will avoid an inevitable issue down the road, 
and allows me to remove the printing hack which provided a protected 
output routine.


Overall performance impact is pretty negligible, so let's just clean it up.

Bootstraps on x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew

From 3079056d0b779b907f8adc01d48a8aa495b8a661 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 24 May 2023 08:49:30 -0400
Subject: [PATCH 1/4] Make ssa_cache and ssa_lazy_cache virtual.

Making them virtual allows us to interchangeably use the caches.

	* gimple-range-cache.cc (ssa_cache::dump): Use get_range.
	(ssa_cache::dump_range_query): Delete.
	(ssa_lazy_cache::dump_range_query): Delete.
	(ssa_lazy_cache::get_range): Move from header file.
	(ssa_lazy_cache::clear_range): Ditto.
	(ssa_lazy_cache::clear): Ditto.
	* gimple-range-cache.h (class ssa_cache): Virtualize.
	(class ssa_lazy_cache): Inherit and virtualize.
---
 gcc/gimple-range-cache.cc | 43 +++
 gcc/gimple-range-cache.h  | 37 ++---
 2 files changed, 41 insertions(+), 39 deletions(-)

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index e069241bc9d..f25abaffd34 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -626,7 +626,7 @@ ssa_cache::dump (FILE *f)
   // Invoke dump_range_query which is a private virtual version of
   // get_range.   This avoids performance impacts on general queries,
   // but allows sharing of the dump routine.
-  if (dump_range_query (r, ssa_name (x)) && !r.varying_p ())
+  if (get_range (r, ssa_name (x)) && !r.varying_p ())
 	{
 	  if (print_header)
 	{
@@ -648,23 +648,14 @@ ssa_cache::dump (FILE *f)
 fputc ('\n', f);
 }
 
-// Virtual private get_range query for dumping.
+// Return true if NAME has an active range in the cache.
 
 bool
-ssa_cache::dump_range_query (vrange , tree name) const
+ssa_lazy_cache::has_range (tree name) const
 {
-  return get_range (r, name);
+  return bitmap_bit_p (active_p, SSA_NAME_VERSION (name));
 }
 
-// Virtual private get_range query for dumping.
-
-bool
-ssa_lazy_cache::dump_range_query (vrange , tree name) const
-{
-  return get_range (r, name);
-}
-
-
 // Set range of NAME to R in a lazy cache.  Return FALSE if it did not already
 // have a range.
 
@@ -684,6 +675,32 @@ ssa_lazy_cache::set_range (tree name, const vrange )
   return false;
 }
 
+// Return TRUE if NAME has a range, and return it in R.
+
+bool
+ssa_lazy_cache::get_range (vrange , tree name) const
+{
+  if (!bitmap_bit_p (active_p, SSA_NAME_VERSION (name)))
+return false;
+  return ssa_cache::get_range (r, name);
+}
+
+// Remove NAME from the active range list.
+
+void
+ssa_lazy_cache::clear_range (tree name)
+{
+  bitmap_clear_bit (active_p, SSA_NAME_VERSION (name));
+}
+
+// Remove all ranges from the active range list.
+
+void
+ssa_lazy_cache::clear ()
+{
+  bitmap_clear (active_p);
+}
+
 // --
 
 
diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index 871255a8116..4fc98230430 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -57,14 +57,13 @@ class ssa_cache
 public:
   ssa_cache ();
   ~ssa_cache ();
-  bool has_range (tree name) const;
-  bool get_range (vrange , tree name) const;
-  bool set_range (tree name, const vrange );
-  void clear_range (tree name);
-  void clear ();
+  virtual bool has_range (tree name) const;
+  virtual bool get_range (vrange , tree name) const;
+  virtual bool set_range (tree name, const vrange );
+  virtual void clear_range (tree name);
+  virtual void clear ();
   void dump (FILE *f = stderr);
 protected:
-  virtual bool dump_range_query (vrange , tree name) const;
   vec m_tab;
   vrange_allocator *m_range_allocator;
 };
@@ -72,35 +71,21 @@ protected:
 // This is the same as global cache, except it maintains an active bitmap
 // rather than depending on a zero'd out vector of pointers.  This is better
 // for sparsely/lightly used caches.
-// It could be made a fully derived class, but at this point there doesnt seem
-// to be a need to take the performance hit for it.
 
-class ssa_lazy_cache : protected ssa_cache
+class ssa_lazy_cache : public ssa_cache
 {
 public:
   inline ssa_lazy_cache () { active_p = BITMAP_ALLOC (NULL); }
   inline ~ssa_lazy_cache () { BITMAP_FREE (active_p); }
-  bool set_range (tree name, const vrange );
-  inline bool get_range (vrange , tree name) const;
-  inline void clear_range (tree name)
-{ bitmap_clear_bit (active_p, SSA_NAME_VERSION (name)); } ;
-  inline void clear () { bitmap_clear (active_p); }
-  inline 

[COMMITTED 2/3] PR tree-optimization/109695 - Use negative values to reflect always_current in the, temporal cache.

2023-05-24 Thread Andrew MacLeod via Gcc-patches

This implements suggestion 3) from the PR:

   3) When we first set the initial value for _1947 and give it the
   ALWAYS_CURRENT timestamp, we lose the context of when the initial
   value was set.  So even with 1) & 2) implemented, we *still*
   need to set a timestamp for it when it's finally calculated, even
   though it is the same as the original.  This will cause any names
   already evaluated using its range to become stale because we can't
   leave it as ALWAYS_CURRENT.    (There are other places where we do
   need to be able to re-evaluate.. there are 2 testsuite failures
   caused by this if we just leave it as always_current)

   TODO: Alter the implementation of ALWAYS_CURRENT such that a name is
   also given a timestamp at the time of setting the initial value.  
   Then set_global_range() will clear the ALWAYS_CURRENT tag
   unconditionally, but leave the original timestamp if the value
   hasn't changed.  This will then provide an accurate timestamp for
   the initial_value.

Instead of using 0, I changed the timestamp from unsigned to a signed 
integer, and used a negative value to indicate it is always current.  This 
has very little performance impact.
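A minimal standalone sketch of that encoding (invented names, not the ranger classes): the sign carries the always-current flag while abs() recovers the stamp, so clearing the flag preserves the original timestamp instead of invalidating dependents.

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Hypothetical miniature of the temporal-cache trick described above:
// a negative timestamp means "always current"; the magnitude is the
// real stamp, so toggling the flag never loses the time value.
struct temporal_sketch
{
  std::vector<int> stamp;
  int current_time = 0;

  int value (unsigned v) const
  { return v < stamp.size () ? std::abs (stamp[v]) : 0; }

  bool always_current_p (unsigned v) const
  { return v < stamp.size () && stamp[v] < 0; }

  void set_always_current (unsigned v, bool on)
  {
    if (v >= stamp.size ())
      stamp.resize (v + 1, 0);
    int ts = std::abs (stamp[v]);
    if (ts == 0)
      ts = ++current_time;       // first touch: mint a real stamp
    stamp[v] = on ? -ts : ts;    // flip only the sign, keep the time
  }
};
```

The design choice is that the transition out of ALWAYS_CURRENT is free of side effects: the value keeps the timestamp it received when first set, so names evaluated against it do not spuriously become stale.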


 Bootstraps on x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew


From 5ed159fdda4d898bdda49469073b9202f5a349bf Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 23 May 2023 15:20:56 -0400
Subject: [PATCH 2/3] Use negative values to reflect always_current in the
 temporal cache.

Instead of using 0, use negative timestamps to reflect always_current state.
If the value doesn't change, keep the timestamp rather than creating a new
one and invalidating any dependencies.

	PR tree-optimization/109695
	* gimple-range-cache.cc (temporal_cache::temporal_value): Return
	a positive int.
	(temporal_cache::current_p): Check always_current method.
	(temporal_cache::set_always_current): Add param and set value
	appropriately.
	(temporal_cache::always_current_p): New.
	(ranger_cache::get_global_range): Adjust.
	(ranger_cache::set_global_range): set always current first.
---
 gcc/gimple-range-cache.cc | 43 +++
 1 file changed, 30 insertions(+), 13 deletions(-)

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 8ddfd9426c0..db7ee8eab4e 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -701,12 +701,12 @@ public:
   ~temporal_cache ();
   bool current_p (tree name, tree dep1, tree dep2) const;
   void set_timestamp (tree name);
-  void set_always_current (tree name);
+  void set_always_current (tree name, bool value);
+  bool always_current_p (tree name) const;
 private:
-  unsigned temporal_value (unsigned ssa) const;
-
-  unsigned m_current_time;
-  vec  m_timestamp;
+  int temporal_value (unsigned ssa) const;
+  int m_current_time;
+  vec  m_timestamp;
 };
 
 inline
@@ -725,12 +725,12 @@ temporal_cache::~temporal_cache ()
 
 // Return the timestamp value for SSA, or 0 if there isn't one.
 
-inline unsigned
+inline int
 temporal_cache::temporal_value (unsigned ssa) const
 {
   if (ssa >= m_timestamp.length ())
 return 0;
-  return m_timestamp[ssa];
+  return abs (m_timestamp[ssa]);
 }
 
 // Return TRUE if the timestamp for NAME is newer than any of its dependents.
@@ -739,13 +739,12 @@ temporal_cache::temporal_value (unsigned ssa) const
 bool
 temporal_cache::current_p (tree name, tree dep1, tree dep2) const
 {
-  unsigned ts = temporal_value (SSA_NAME_VERSION (name));
-  if (ts == 0)
+  if (always_current_p (name))
 return true;
 
   // Any non-registered dependencies will have a value of 0 and thus be older.
   // Return true if time is newer than either dependent.
-
+  int ts = temporal_value (SSA_NAME_VERSION (name));
   if (dep1 && ts < temporal_value (SSA_NAME_VERSION (dep1)))
 return false;
   if (dep2 && ts < temporal_value (SSA_NAME_VERSION (dep2)))
@@ -768,12 +767,28 @@ temporal_cache::set_timestamp (tree name)
 // Set the timestamp to 0, marking it as "always up to date".
 
 inline void
-temporal_cache::set_always_current (tree name)
+temporal_cache::set_always_current (tree name, bool value)
 {
   unsigned v = SSA_NAME_VERSION (name);
   if (v >= m_timestamp.length ())
 m_timestamp.safe_grow_cleared (num_ssa_names + 20);
-  m_timestamp[v] = 0;
+
+  int ts = abs (m_timestamp[v]);
+  // If this does not have a timestamp, create one.
+  if (ts == 0)
+ts = ++m_current_time;
+  m_timestamp[v] = value ? -ts : ts;
+}
+
+// Return true if NAME is always current.
+
+inline bool
+temporal_cache::always_current_p (tree name) const
+{
+  unsigned v = SSA_NAME_VERSION (name);
+  if (v >= m_timestamp.length ())
+return false;
+  return m_timestamp[v] <= 0;
 }
 
 // --
@@ -970,7 +985,7 @@ ranger_cache::get_global_range (vrange , tree name, bool _p)
 
   // If the existing value was not current, mark it as always current.
   if (!current_p)
-

[COMMITTED 3/3] PR tree-optimization/109695 - Only update global value if it changes.

2023-05-24 Thread Andrew MacLeod via Gcc-patches

This patch implements suggestion 1) from the PR:

   1) We unconditionally write the new value calculated to the global
   cache once the dependencies are resolved.  This gives it a new
   timestamp, and thus makes any other values which used it out of date
   when they really aren't.   This causes a lot of extra churn.

   TODO: This should be changed to only update it when it actually
   changes.  Client code shouldn't have to do this, it should be
   handled right int the cache's set_global_value ().

It turns out it is about a 3% compilation speed hit to compare the 
ranges every time we set them, which loses any gains we see.  As such, I 
changed it so that set_global_range takes an extra parameter which 
indicates whether the value has changed or not.  In all cases, we have 
the result of the intersection, which gives us the information for free, 
so we might as well take advantage of it.  Instead we get about a 2.7% 
improvement in speed in VRP and another 0.7% in threading.


set_global_range now checks the changed flag, and if it hasn't changed, 
checks to see if the value is current or not, and only gives the result 
a new timestamp if it's out of date.  I found many cases where we

  1) initially calculate the result, give it a timestamp,
  2) then evaluate the dependencies... which get fresher timestamps than 
the result
  3) the initial result turns out to still be right, so we don't have to 
propagate the value or change it.


However, if we do not give it a fresh timestamp in this case, it will be 
out of date if we ever check it, since the dependencies are fresher.  So 
in this case, we give it a new timestamp so we won't re-evaluate it.
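The "for free" part can be sketched like this (a minimal invented range type, not vrange): the intersection the caller already performs reports whether anything narrowed, and that bool is exactly the changed flag set_global_range wants.

```cpp
#include <cassert>
#include <algorithm>

// Hypothetical sketch: an integer interval whose intersect () returns
// whether the range actually narrowed, mirroring how callers of
// set_global_range obtain the "changed" flag at no extra cost.
struct range_sketch
{
  long lo, hi;

  bool intersect (const range_sketch &o)
  {
    long nlo = std::max (lo, o.lo);
    long nhi = std::min (hi, o.hi);
    bool changed = nlo != lo || nhi != hi;
    lo = nlo;
    hi = nhi;
    return changed;
  }
};
```

No separate equality comparison of old and new ranges is needed, which is what avoids the 3% hit mentioned above.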


The 3 patches together result in VRP being just 0.15% slower, threading 
being 0.6% faster, and overall compilation improves by 0.05%


It will also compile the testcase from the PR without issues after 
reverting Aldy's memory work and using int_range_max as int_range<255> 
again... so that is also an indication the results are worthwhile.


At this point, I don't think it's worth pursuing suggestion 4 from the 
PR... it is fraught with dependency issues that I don't think we need to 
deal with at this moment.  When I have more time I will give it more 
consideration.


Bootstraps on x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew
From c3c1499498ff8f465ec7eacce6681c5c2da03a92 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 23 May 2023 15:41:03 -0400
Subject: [PATCH 3/3] Only update global value if it changes.

Do not update and propagate a global value if it hasn't changed.

	PR tree-optimization/109695
	* gimple-range-cache.cc (ranger_cache::get_global_range): Add
	changed param.
	* gimple-range-cache.h (ranger_cache::get_global_range): Ditto.
	* gimple-range.cc (gimple_ranger::range_of_stmt): Pass changed
	flag to set_global_range.
	(gimple_ranger::prefill_stmt_dependencies): Ditto.
---
 gcc/gimple-range-cache.cc | 10 +-
 gcc/gimple-range-cache.h  |  2 +-
 gcc/gimple-range.cc   |  8 
 3 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index db7ee8eab4e..e069241bc9d 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -992,10 +992,18 @@ ranger_cache::get_global_range (vrange , tree name, bool _p)
 //  Set the global range of NAME to R and give it a timestamp.
 
 void
-ranger_cache::set_global_range (tree name, const vrange )
+ranger_cache::set_global_range (tree name, const vrange , bool changed)
 {
   // Setting a range always clears the always_current flag.
   m_temporal->set_always_current (name, false);
+  if (!changed)
+{
+  // If there are dependencies, make sure this is not out of date.
+  if (!m_temporal->current_p (name, m_gori.depend1 (name),
+ m_gori.depend2 (name)))
+	m_temporal->set_timestamp (name);
+  return;
+}
   if (m_globals.set_range (name, r))
 {
   // If there was already a range set, propagate the new value.
diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index 946fbc51465..871255a8116 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -117,7 +117,7 @@ public:
 
   bool get_global_range (vrange , tree name) const;
   bool get_global_range (vrange , tree name, bool _p);
-  void set_global_range (tree name, const vrange );
+  void set_global_range (tree name, const vrange , bool changed = true);
 
   void propagate_updated_value (tree name, basic_block bb);
 
diff --git a/gcc/gimple-range.cc b/gcc/gimple-range.cc
index a275c090e4b..4fae3f95e6a 100644
--- a/gcc/gimple-range.cc
+++ b/gcc/gimple-range.cc
@@ -320,8 +320,8 @@ gimple_ranger::range_of_stmt (vrange , gimple *s, tree name)
   // Combine the new value with the old value.  This is required because
   // the way value propagation works, when the IL changes on the fly we
   // can sometimes get different results.  See PR 97741.
-  r.intersect (tmp);
-  m_cache.set_global_range 

[COMMITTED 1/3] PR tree-optimization/109695 - Choose better initial values for ranger.

2023-05-24 Thread Andrew MacLeod via Gcc-patches
Instead of defaulting to an initial value of VARYING before resolving 
cycles, try folding the statement using available global values.  This 
can give us a much better initial approximation, especially in cases 
where there are no dependencies, i.e.

   f_45 = 77

This implements suggestion 2) in comment 22 of the PR:

   2) The initial value we choose is simply VARYING.  This is why 1)
   alone won't solve this problem.  When we push _1947 on the stack, we
   set it to VARYING... then proceed down along a chain of other
   dependencies driven by _1011 which are resolved first.  When we get
   back to _1947 finally, we see:
  _1947 = 77;
   which evaluates to [77, 77], and this is different from VARYING, and
   thus would cause a new timestamp to be created even if (1) were
   implemented.

   TODO: When setting the initial value in the cache, rather than being
   lazy and using varying, we should invoke fold_stmt using
   get_global_range_query ().   This will fold the stmt and produce a
   result which resolves any ssa-names just using known global values.
   This should not be expensive, and gives us a reasonable first
   approximation.  And for cases like _1947, the final result as well.

I stop doing this after inlining because there are some statements which 
change their evaluation (i.e., BUILT_IN_CONSTANT_P) which causes 
headaches... and then we just default to VARYING again, or anything 
which doesn't have a global SSA range set.


There is a 2.7% hit to VRP to evaluate each statement this additional 
time, but only 0.09% to overall compile time. Besides, we get it back 
later in the patch set.. :-)


Bootstraps on x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew




From 3a20e1a33277bcb16d681b4f3633fcf8cce5a852 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 23 May 2023 15:11:44 -0400
Subject: [PATCH 1/3] Choose better initial values for ranger.

Instead of defaulting to VARYING, fold the stmt using just global ranges.

	PR tree-optimization/109695
	* gimple-range-cache.cc (ranger_cache::get_global_range): Call
	fold_range with global query to choose an initial value.
---
 gcc/gimple-range-cache.cc | 17 -
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 07c69ef858a..8ddfd9426c0 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -951,7 +951,22 @@ ranger_cache::get_global_range (vrange , tree name, bool _p)
 		|| m_temporal->current_p (name, m_gori.depend1 (name),
 	  m_gori.depend2 (name));
   else
-m_globals.set_range (name, r);
+{
+  // If no global value has been set and value is VARYING, fold the stmt
+  // using just global ranges to get a better initial value.
+  // After inlining we tend to decide some things are constant, so
+  // so not do this evaluation after inlining.
+  if (r.varying_p () && !cfun->after_inlining)
+	{
+	  gimple *s = SSA_NAME_DEF_STMT (name);
+	  if (gimple_get_lhs (s) == name)
+	{
+	  if (!fold_range (r, s, get_global_range_query ()))
+		gimple_range_global (r, name);
+	}
+	}
+  m_globals.set_range (name, r);
+}
 
   // If the existing value was not current, mark it as always current.
   if (!current_p)
-- 
2.40.1



[COMMITTED 4/5] Rename ssa_global_cache to ssa_cache and add has_range

2023-04-26 Thread Andrew MacLeod via Gcc-patches
The original ssa_global_cache was intended to simply be the global cache 
for ranger, but uses of it have since percolated such that it is really 
just a range cache for a list of ssa-names.  This patch renames it from 
"ssa_global_cache" to "ssa_cache".


It also adds a method called "has_range", which didn't exist before, and 
simply indicates if a range is set or not.


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew

From bf07de561197559304c67bd46c7bea3da9eb63f9 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 28 Mar 2023 11:32:21 -0400
Subject: [PATCH 4/5] Rename ssa_global_cache to ssa_cache and add has_range

This renames the ssa_global_cache to be ssa_cache.  The original use was
to function as a global cache, but its uses have expanded.  Remove all mention
of "global" from the class and methods.  Also add a has_range method.

	* gimple-range-cache.cc (ssa_cache::ssa_cache): Rename.
	(ssa_cache::~ssa_cache): Rename.
	(ssa_cache::has_range): New.
	(ssa_cache::get_range): Rename.
	(ssa_cache::set_range): Rename.
	(ssa_cache::clear_range): Rename.
	(ssa_cache::clear): Rename.
	(ssa_cache::dump): Rename and use get_range.
	(ranger_cache::get_global_range): Use get_range and set_range.
	(ranger_cache::range_of_def): Use get_range.
	* gimple-range-cache.h (class ssa_cache): Rename class and methods.
	(class ranger_cache): Use ssa_cache.
	* gimple-range-path.cc (path_range_query::path_range_query): Use
	ssa_cache.
	(path_range_query::get_cache): Use get_range.
	(path_range_query::set_cache): Use set_range.
	* gimple-range-path.h (class path_range_query): Use ssa_cache.
	* gimple-range.cc (assume_query::assume_range_p): Use get_range.
	(assume_query::range_of_expr): Use get_range.
	(assume_query::assume_query): Use set_range.
	(assume_query::calculate_op): Use get_range and set_range.
	* gimple-range.h (class assume_query): Use ssa_cache.
---
 gcc/gimple-range-cache.cc | 45 ---
 gcc/gimple-range-cache.h  | 15 +++--
 gcc/gimple-range-path.cc  |  8 +++
 gcc/gimple-range-path.h   |  2 +-
 gcc/gimple-range.cc   | 14 ++--
 gcc/gimple-range.h|  2 +-
 6 files changed, 49 insertions(+), 37 deletions(-)

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 868d2dda424..6de96f6b8a9 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -530,27 +530,38 @@ block_range_cache::dump (FILE *f, basic_block bb, bool print_varying)
 
 // -
 
-// Initialize a global cache.
+// Initialize an ssa cache.
 
-ssa_global_cache::ssa_global_cache ()
+ssa_cache::ssa_cache ()
 {
   m_tab.create (0);
   m_range_allocator = new obstack_vrange_allocator;
 }
 
-// Deconstruct a global cache.
+// Deconstruct an ssa cache.
 
-ssa_global_cache::~ssa_global_cache ()
+ssa_cache::~ssa_cache ()
 {
   m_tab.release ();
   delete m_range_allocator;
 }
 
+// Return TRUE if the global range of NAME has a cache entry.
+
+bool
+ssa_cache::has_range (tree name) const
+{
+  unsigned v = SSA_NAME_VERSION (name);
+  if (v >= m_tab.length ())
+return false;
+  return m_tab[v] != NULL;
+}
+
 // Retrieve the global range of NAME from cache memory if it exists. 
 // Return the value in R.
 
 bool
-ssa_global_cache::get_global_range (vrange , tree name) const
+ssa_cache::get_range (vrange , tree name) const
 {
   unsigned v = SSA_NAME_VERSION (name);
   if (v >= m_tab.length ())
@@ -563,11 +574,11 @@ ssa_global_cache::get_global_range (vrange , tree name) const
   return true;
 }
 
-// Set the range for NAME to R in the global cache.
+// Set the range for NAME to R in the ssa cache.
 // Return TRUE if there was already a range set, otherwise false.
 
 bool
-ssa_global_cache::set_global_range (tree name, const vrange )
+ssa_cache::set_range (tree name, const vrange )
 {
   unsigned v = SSA_NAME_VERSION (name);
   if (v >= m_tab.length ())
@@ -584,7 +595,7 @@ ssa_global_cache::set_global_range (tree name, const vrange )
 // Set the range for NAME to R in the global cache.
 
 void
-ssa_global_cache::clear_global_range (tree name)
+ssa_cache::clear_range (tree name)
 {
   unsigned v = SSA_NAME_VERSION (name);
   if (v >= m_tab.length ())
@@ -592,19 +603,19 @@ ssa_global_cache::clear_global_range (tree name)
   m_tab[v] = NULL;
 }
 
-// Clear the global cache.
+// Clear the ssa cache.
 
 void
-ssa_global_cache::clear ()
+ssa_cache::clear ()
 {
   if (m_tab.address ())
 memset (m_tab.address(), 0, m_tab.length () * sizeof (vrange *));
 }
 
-// Dump the contents of the global cache to F.
+// Dump the contents of the ssa cache to F.
 
 void
-ssa_global_cache::dump (FILE *f)
+ssa_cache::dump (FILE *f)
 {
   /* Cleared after the table header has been printed.  */
   bool print_header = true;
@@ -613,7 +624,7 @@ ssa_global_cache::dump (FILE *f)
   if (!gimple_range_ssa_p (ssa_name (x)))
 	continue;
   Value_Range r (TREE_TYPE (ssa_name (x)));
-

[COMMITTED 5/5] PR tree-optimization/108697 - Create a lazy ssa_cache.

2023-04-26 Thread Andrew MacLeod via Gcc-patches
Sparsely used ssa caches can benefit from using a bitmap to determine if 
a name already has an entry.  The path_query class was already managing 
something like this internally, but there is benefit to making it 
generally available.


ssa_lazy_cache inherits from ssa_cache and adds management of the 
bitmap.   The self-managed version in path_query has been removed, 
cleaned up, and replaced with this lazy version.  It is also now used in 
"assume_query" processing.
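The core of the idea can be sketched in a few lines (invented names; the real class also manages vrange allocation): a presence bitmap in front of the backing vector makes membership tests cheap and lets clear () touch only the bitmap, leaving stale vector slots unobservable.

```cpp
#include <cassert>
#include <vector>

// Hypothetical miniature of ssa_lazy_cache: std::vector<bool> stands in
// for the sparse bitmap.  A value is valid only when its bit is set, so
// a full clear is just a bitmap wipe.
struct lazy_cache_sketch
{
  std::vector<bool> active;
  std::vector<int> values;

  bool has (unsigned v) const
  { return v < active.size () && active[v]; }

  void set (unsigned v, int val)
  {
    if (v >= active.size ())
      {
	active.resize (v + 1, false);
	values.resize (v + 1, 0);
      }
    active[v] = true;
    values[v] = val;
  }

  bool get (unsigned v, int &out) const
  {
    if (!has (v))
      return false;   // stale backing data is never observed
    out = values[v];
    return true;
  }

  void clear ()
  { active.assign (active.size (), false); }  // bitmap only, vector untouched
};
```

This is why sparsely used caches win: resetting between queries costs a bitmap clear rather than re-zeroing a vector sized by num_ssa_names.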


All 5 patches combined produce about
 - a 0.4% speedup in total compilation time,
 - about a 1% speedup in VRP
 - and threading picks up a more impressive 13% improvement.

This patch has the previous one (renaming ssa_global_cache to ssa_cache) 
as a prerequisite.


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From b3c81a4a6b7ff5adce6b5891729b79a0d6e4e54a Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 28 Mar 2023 11:35:26 -0400
Subject: [PATCH 5/5] Create a lazy ssa_cache.

Sparsely used ssa caches can benefit from using a bitmap to
determine if a name already has an entry.  Utilize it in the path query
and remove its private bitmap for tracking the same info.
Also use it in the "assume" query class.

	PR tree-optimization/108697
	* gimple-range-cache.cc (ssa_global_cache::clear_range): Do
	not clear the vector on an out of range query.
	(ssa_cache::dump): Use dump_range_query instead of get_range.
	(ssa_cache::dump_range_query): New.
	(ssa_lazy_cache::dump_range_query): New.
	(ssa_lazy_cache::set_range): New.
	* gimple-range-cache.h (ssa_cache::dump_range_query): New.
	(class ssa_lazy_cache): New.
	(ssa_lazy_cache::ssa_lazy_cache): New.
	(ssa_lazy_cache::~ssa_lazy_cache): New.
	(ssa_lazy_cache::get_range): New.
	(ssa_lazy_cache::clear_range): New.
	(ssa_lazy_cache::clear): New.
	(ssa_lazy_cache::dump): New.
	* gimple-range-path.cc (path_range_query::path_range_query): Do
	not allocate a ssa_cache object nor has_cache bitmap.
	(path_range_query::~path_range_query): Do not free objects.
	(path_range_query::clear_cache): Remove.
	(path_range_query::get_cache): Adjust.
	(path_range_query::set_cache): Remove.
	(path_range_query::dump): Don't call through a pointer.
	(path_range_query::internal_range_of_expr): Set cache directly.
	(path_range_query::reset_path): Clear cache directly.
	(path_range_query::ssa_range_in_phi): Fold with globals only.
	(path_range_query::compute_ranges_in_phis): Simply set range.
	(path_range_query::compute_ranges_in_block): Call cache directly.
	* gimple-range-path.h (class path_range_query): Replace bitmap
	and cache pointer with lazy cache object.
	* gimple-range.h (class assume_query): Use ssa_lazy_cache.
---
 gcc/gimple-range-cache.cc | 45 +--
 gcc/gimple-range-cache.h  | 35 -
 gcc/gimple-range-path.cc  | 65 +--
 gcc/gimple-range-path.h   |  7 +
 gcc/gimple-range.h|  2 +-
 5 files changed, 92 insertions(+), 62 deletions(-)

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 6de96f6b8a9..5510efba1ca 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -592,14 +592,14 @@ ssa_cache::set_range (tree name, const vrange )
   return m != NULL;
 }
 
-// Set the range for NAME to R in the global cache.
+// Set the range for NAME to R in the ssa cache.
 
 void
 ssa_cache::clear_range (tree name)
 {
   unsigned v = SSA_NAME_VERSION (name);
   if (v >= m_tab.length ())
-m_tab.safe_grow_cleared (num_ssa_names + 1);
+return;
   m_tab[v] = NULL;
 }
 
@@ -624,7 +624,10 @@ ssa_cache::dump (FILE *f)
   if (!gimple_range_ssa_p (ssa_name (x)))
 	continue;
   Value_Range r (TREE_TYPE (ssa_name (x)));
-  if (get_range (r, ssa_name (x)) && !r.varying_p ())
+  // Invoke dump_range_query which is a private virtual version of
+  // get_range.   This avoids performance impacts on general queries,
+  // but allows sharing of the dump routine.
+  if (dump_range_query (r, ssa_name (x)) && !r.varying_p ())
 	{
 	  if (print_header)
 	{
@@ -646,6 +649,42 @@ ssa_cache::dump (FILE *f)
 fputc ('\n', f);
 }
 
+// Virtual private get_range query for dumping.
+
+bool
+ssa_cache::dump_range_query (vrange , tree name) const
+{
+  return get_range (r, name);
+}
+
+// Virtual private get_range query for dumping.
+
+bool
+ssa_lazy_cache::dump_range_query (vrange , tree name) const
+{
+  return get_range (r, name);
+}
+
+
+// Set range of NAME to R in a lazy cache.  Return FALSE if it did not already
+// have a range.
+
+bool
+ssa_lazy_cache::set_range (tree name, const vrange )
+{
+  unsigned v = SSA_NAME_VERSION (name);
+  if (!bitmap_set_bit (active_p, v))
+{
+  // There is already an entry, simply set it.
+  gcc_checking_assert (v < m_tab.length ());
+  return ssa_cache::set_range (name, r);
+}
+  if (v >= m_tab.length ())
+m_tab.safe_grow (num_ssa_names + 1);
+  m_tab[v] = m_range_allocator->clone (r);
+  return 

[COMMITTED 3/5] Add sbr_lazy_vector and adjust (e)vrp sparse cache

2023-04-26 Thread Andrew MacLeod via Gcc-patches
This implements a sparse vector class for ranger's cache and uses it by 
default except when the CFG is very small, in which case the original 
full vectors are faster.  It works like a normal vector cache (in fact 
it inherits from it), but uses a sparse bitmap to determine whether a 
vector element is set or not.  This provides better performance for 
clearing the vector, as well as during initialization.


A new param, "vrp_vector_threshold", is added for this transition and 
defaults to 250.  Any function with fewer than 250 basic blocks 
will use the simple vectors.  Various timing runs have indicated this is 
about the sweet spot where using the sparse bitmap overtakes the time 
required to clear the vector initially.  Should we make ranger live 
across functions in the future, we'll probably want to lower this value 
again, as clearing is significantly cheaper.
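The mechanism can be sketched outside of ranger with a self-contained toy class (illustrative only; the names and types here are not GCC's sbr_lazy_vector): the backing array is left uninitialized, and a separate bit-per-slot structure records which entries are valid, so initialization and clearing cost is proportional to the bitmap rather than the full vector.

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Toy lazy vector: slots are "unset" unless the matching bit says
// otherwise, so the backing store is never zero-initialized.
class lazy_vector
{
public:
  explicit lazy_vector (size_t n)
    // Deliberately uninitialized storage; stale bytes are never read.
    : m_tab (static_cast<int *> (malloc (n * sizeof (int)))),
      m_has_value (n, false) {}
  ~lazy_vector () { free (m_tab); }

  void set (size_t i, int v) { m_tab[i] = v; m_has_value[i] = true; }

  // Returns false when the slot was never set.
  bool get (size_t i, int &v) const
  {
    if (!m_has_value[i])
      return false;
    v = m_tab[i];
    return true;
  }

  // Clearing only resets the bits, not the payload memory.
  void clear () { m_has_value.assign (m_has_value.size (), false); }

private:
  int *m_tab;
  std::vector<bool> m_has_value;
};
```

For small tables the plain memset of a dense vector is cheaper than paying a bitmap test on every access, which is the tradeoff the vrp_vector_threshold cutoff captures.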


This patch also renames the "evrp_*" params to "vrp_*", as there really is 
no separate EVRP pass any more; it's all one VRP pass.  Eventually 
we'll probably want to change the naming to vrp1, vrp2 and vrp3 rather than 
evrp, vrp1 and vrp2, but that's a task for later, perhaps when we 
reconsider pass orderings.


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 6a3babfbd9a2b18b9e86d3d3a91564fcb9b8f9d7 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Thu, 13 Apr 2023 14:47:47 -0400
Subject: [PATCH 3/5] Add sbr_lazy_vector and adjust (e)vrp sparse cache

Add a sparse vector class for the cache and use it by default.
Rename the evrp_* params to vrp_*, and add a param for small CFGs, which use
just the original basic vector.

	* gimple-range-cache.cc (sbr_vector::sbr_vector): Add parameter
	and local to optionally zero memory.
	(sbr_vector::grow): Only zero memory if flag is set.
	(class sbr_lazy_vector): New.
	(sbr_lazy_vector::sbr_lazy_vector): New.
	(sbr_lazy_vector::set_bb_range): New.
	(sbr_lazy_vector::get_bb_range): New.
	(sbr_lazy_vector::bb_range_p): New.
	(block_range_cache::set_bb_range): Check flags and use sbr_lazy_vector.
	* gimple-range-gori.cc (gori_map::calculate_gori): Use
	param_vrp_switch_limit.
	(gori_compute::gori_compute): Use param_vrp_switch_limit.
	* params.opt (vrp_sparse_threshold): Rename from evrp_sparse_threshold.
	(vrp_switch_limit): Rename from evrp_switch_limit.
	(vrp_vector_threshold): New.
---
 gcc/gimple-range-cache.cc | 72 ++-
 gcc/gimple-range-gori.cc  |  4 +--
 gcc/params.opt| 20 ++-
 3 files changed, 78 insertions(+), 18 deletions(-)

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 2314478d558..868d2dda424 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -79,7 +79,7 @@ ssa_block_ranges::dump (FILE *f)
 class sbr_vector : public ssa_block_ranges
 {
 public:
-  sbr_vector (tree t, vrange_allocator *allocator);
+  sbr_vector (tree t, vrange_allocator *allocator, bool zero_p = true);
 
   virtual bool set_bb_range (const_basic_block bb, const vrange ) override;
   virtual bool get_bb_range (vrange , const_basic_block bb) override;
@@ -91,22 +91,25 @@ protected:
   vrange *m_undefined;
   tree m_type;
   vrange_allocator *m_range_allocator;
+  bool m_zero_p;
   void grow ();
 };
 
 
 // Initialize a block cache for an ssa_name of type T.
 
-sbr_vector::sbr_vector (tree t, vrange_allocator *allocator)
+sbr_vector::sbr_vector (tree t, vrange_allocator *allocator, bool zero_p)
   : ssa_block_ranges (t)
 {
   gcc_checking_assert (TYPE_P (t));
   m_type = t;
+  m_zero_p = zero_p;
   m_range_allocator = allocator;
   m_tab_size = last_basic_block_for_fn (cfun) + 1;
   m_tab = static_cast 
 (allocator->alloc (m_tab_size * sizeof (vrange *)));
-  memset (m_tab, 0, m_tab_size * sizeof (vrange *));
+  if (zero_p)
+memset (m_tab, 0, m_tab_size * sizeof (vrange *));
 
   // Create the cached type range.
   m_varying = m_range_allocator->alloc_vrange (t);
@@ -132,7 +135,8 @@ sbr_vector::grow ()
   vrange **t = static_cast 
 (m_range_allocator->alloc (new_size * sizeof (vrange *)));
   memcpy (t, m_tab, m_tab_size * sizeof (vrange *));
-  memset (t + m_tab_size, 0, (new_size - m_tab_size) * sizeof (vrange *));
+  if (m_zero_p)
+memset (t + m_tab_size, 0, (new_size - m_tab_size) * sizeof (vrange *));
 
   m_tab = t;
   m_tab_size = new_size;
@@ -183,6 +187,50 @@ sbr_vector::bb_range_p (const_basic_block bb)
   return false;
 }
 
+// Like an sbr_vector, except it uses a bitmap to manage whether a value is
+// set or not, rather than relying on cleared memory.
+
+class sbr_lazy_vector : public sbr_vector
+{
+public:
+  sbr_lazy_vector (tree t, vrange_allocator *allocator, bitmap_obstack *bm);
+
+  virtual bool set_bb_range (const_basic_block bb, const vrange ) override;
+  virtual bool get_bb_range (vrange , const_basic_block bb) override;
+  virtual bool bb_range_p (const_basic_block bb) override;
+protected:
+  bitmap m_has_value;
+};
+
+sbr_lazy_vector::sbr_lazy_vector (tree t, 

[COMMITTED 2/5] Quicker relation check.

2023-04-26 Thread Andrew MacLeod via Gcc-patches
If either of the SSA names in a comparison does not have any equivalences 
or relations, we can short-circuit the check slightly and be a bit faster.
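The shortcut can be roughly illustrated with a standalone sketch (the bitsets and the expensive_query placeholder are illustrative, not the oracle's real API, and the real code still falls back to a partial-equivalence check rather than giving up entirely):

```cpp
#include <bitset>
#include <cassert>

constexpr size_t N = 64;

// Stand-ins for the oracle's bookkeeping: which SSA versions appear in
// any relation, and which appear in any equivalence set.
std::bitset<N> relation_set, equiv_set;

enum relation { VREL_VARYING, VREL_EQ };

// Placeholder for the full equivalence/relation lookup.
relation expensive_query (unsigned, unsigned) { return VREL_EQ; }

relation query_relation (unsigned v1, unsigned v2)
{
  if (v1 == v2)
    return VREL_EQ;
  // Short-circuit: a name with no relations and no equivalences cannot
  // be related to any other name, so skip the expensive set walks.
  if ((!relation_set[v1] && !equiv_set[v1])
      || (!relation_set[v2] && !equiv_set[v2]))
    return VREL_VARYING;
  return expensive_query (v1, v2);
}
```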


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From ee03aca78fb5739f4cd76cb30332f8aff2c5243a Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 8 Feb 2023 12:36:23 -0500
Subject: [PATCH 2/5] Quicker relation check.

If either of the SSA names in a comparison does not have any equivalences
or relations, we can short-circuit the check slightly.

	* value-relation.cc (dom_oracle::query_relation): Check early for lack
	of any relation.
	* value-relation.h (equiv_oracle::has_equiv_p): New.
---
 gcc/value-relation.cc | 6 ++
 gcc/value-relation.h  | 1 +
 2 files changed, 7 insertions(+)

diff --git a/gcc/value-relation.cc b/gcc/value-relation.cc
index 30a02d3c9d3..65cf7694d40 100644
--- a/gcc/value-relation.cc
+++ b/gcc/value-relation.cc
@@ -1374,6 +1374,12 @@ dom_oracle::query_relation (basic_block bb, tree ssa1, tree ssa2)
   if (v1 == v2)
 return VREL_EQ;
 
+  // If v1 or v2 do not have any relations or equivalences, a partial
+  // equivalence is the only possibility.
+  if ((!bitmap_bit_p (m_relation_set, v1) && !has_equiv_p (v1))
+  || (!bitmap_bit_p (m_relation_set, v2) && !has_equiv_p (v2)))
+return partial_equiv (ssa1, ssa2);
+
   // Check for equivalence first.  They must be in each equivalency set.
   const_bitmap equiv1 = equiv_set (ssa1, bb);
   const_bitmap equiv2 = equiv_set (ssa2, bb);
diff --git a/gcc/value-relation.h b/gcc/value-relation.h
index 3177ecb1ad0..be6e277421b 100644
--- a/gcc/value-relation.h
+++ b/gcc/value-relation.h
@@ -170,6 +170,7 @@ public:
   void dump (FILE *f) const override;
 
 protected:
+  inline bool has_equiv_p (unsigned v) { return bitmap_bit_p (m_equiv_set, v); }
   bitmap_obstack m_bitmaps;
   struct obstack m_chain_obstack;
 private:
-- 
2.39.2



[COMMITTED 1/5] PR tree-optimization/109417 - Don't save ssa-name pointer in dependency cache.

2023-04-26 Thread Andrew MacLeod via Gcc-patches


On 4/25/23 22:34, Jeff Law wrote:



On 4/24/23 07:51, Andrew MacLeod wrote:



It's not a real cache; it's merely a statement shortcut in dependency 
analysis to avoid re-parsing statements every time we look at them 
for dependency analysis.


It is not supposed to be used for anything other than dependency 
checking.  ie, if an SSA_NAME changes, we can check if it matches 
either of the 2 "cached" names on this DEF, and if so, we know this 
name is stale.  We are never actually supposed to use the dependency 
cached values to drive anything, merely respond to the question of 
whether either matches a given name.  So it doesn't matter if the name 
here has been freed.
OK.  I'll take your word for it.  Note that a free'd SSA_NAME may have 
an empty TREE_TYPE or an unexpected TREE_CHAIN field IIRC. So you have 
to be a bit careful if you're going to allow them.






We never re-use SSA names from within the pass releasing it.  But if
the ranger cache
persists across passes this could be a problem.  See



These particular values would never persist beyond the current pass; it's 
just the dependency chains, and they would get rebuilt every time 
because the IL has changed.
Good.  That would limit the concerns significantly.  I don't think we 
recycle names within a pass anymore (we used to within DOM due to the 
way threading worked eons ago, but we no longer take things out of SSA 
form to handle the CFG/SSA graph updates).  One could even argue we 
don't need to maintain the freelist and recycle names anymore.


Jeff

Well, no worries; taken care of thusly for the future.  It's a hair 
slower, but nothing outrageous.


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew




From a530eb642032da7ad4d30de51131421631055f72 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 25 Apr 2023 15:33:52 -0400
Subject: [PATCH 1/5] Don't save ssa-name pointer in dependency cache.

If the direct dependence fields point directly to an ssa-name,
it's possible that an optimization frees the ssa-name, and the value
pointed to may now be in the free list.  Simply maintain the ssa
version number instead.
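The hazard and the fix can be sketched generically (a toy registry standing in for the ssa_name () table; none of these names are GCC's):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy stand-in for the SSA name table: index -> object, null once freed.
std::vector<std::string *> registry;

unsigned make_name (const char *s)
{
  registry.push_back (new std::string (s));
  return registry.size () - 1;
}

void release_name (unsigned v)
{
  delete registry[v];
  registry[v] = nullptr;   // a raw pointer cached elsewhere now dangles
}

// Dependency record: store the version number, never the pointer.
struct rdc
{
  unsigned ssa1;
};

// Resolving through the registry yields nullptr for released names,
// instead of a dangling pointer into the free list.
std::string *lookup (unsigned v) { return registry[v]; }
```

This mirrors Richard's observation below that ssa_name (ver) already returns NULL for released names, so indexing by version makes the staleness check safe for free.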

	PR tree-optimization/109417
	* gimple-range-gori.cc (range_def_chain::register_dependency):
	Save the ssa version number, not the pointer.
	(gori_compute::may_recompute_p): No need to check if a dependency
	is in the free list.
	* gimple-range-gori.h (class range_def_chain): Change ssa1 and ssa2
	fields to be unsigned int instead of trees.
	(range_def_chain::depend1): Adjust.
	(range_def_chain::depend2): Adjust.
	* gimple-range.h: Include "ssa.h" to inline ssa_name().
---
 gcc/gimple-range-gori.cc |  8 
 gcc/gimple-range-gori.h  | 14 ++
 gcc/gimple-range.h   |  1 +
 3 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index d77e1f51ac2..5bba77c7b7b 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -182,9 +182,9 @@ range_def_chain::register_dependency (tree name, tree dep, basic_block bb)
 
   // Set the direct dependency cache entries.
   if (!src.ssa1)
-src.ssa1 = dep;
-  else if (!src.ssa2 && src.ssa1 != dep)
-src.ssa2 = dep;
+src.ssa1 = SSA_NAME_VERSION (dep);
+  else if (!src.ssa2 && src.ssa1 != SSA_NAME_VERSION (dep))
+src.ssa2 = SSA_NAME_VERSION (dep);
 
   // Don't calculate imports or export/dep chains if BB is not provided.
   // This is usually the case for when the temporal cache wants the direct
@@ -1316,7 +1316,7 @@ gori_compute::may_recompute_p (tree name, basic_block bb, int depth)
   // If the first dependency is not set, there is no recomputation.
   // Dependencies reflect original IL, not current state.   Check if the
   // SSA_NAME is still valid as well.
-  if (!dep1 || SSA_NAME_IN_FREE_LIST (dep1))
+  if (!dep1)
 return false;
 
   // Don't recalculate PHIs or statements with side_effects.
diff --git a/gcc/gimple-range-gori.h b/gcc/gimple-range-gori.h
index 3ea4b45595b..526edc24b53 100644
--- a/gcc/gimple-range-gori.h
+++ b/gcc/gimple-range-gori.h
@@ -46,8 +46,8 @@ protected:
   bitmap_obstack m_bitmaps;
 private:
   struct rdc {
-   tree ssa1;		// First direct dependency
-   tree ssa2;		// Second direct dependency
+   unsigned int ssa1;		// First direct dependency
+   unsigned int ssa2;		// Second direct dependency
bitmap bm;		// All dependencies
bitmap m_import;
   };
@@ -66,7 +66,10 @@ range_def_chain::depend1 (tree name) const
   unsigned v = SSA_NAME_VERSION (name);
   if (v >= m_def_chain.length ())
 return NULL_TREE;
-  return m_def_chain[v].ssa1;
+  unsigned v1 = m_def_chain[v].ssa1;
+  if (!v1)
+return NULL_TREE;
+  return ssa_name (v1);
 }
 
 // Return the second direct dependency for NAME, if there is one.
@@ -77,7 +80,10 @@ range_def_chain::depend2 (tree name) const
   unsigned v = SSA_NAME_VERSION (name);
   if (v >= m_def_chain.length ())
 return NULL_TREE;
-  return m_def_chain[v].ssa2;
+  unsigned v2 = m_def_chain[v].ssa2;
+  if 

Re: [PATCH] PR tree-optimization/109417 - Check if dependency is valid before using in may_recompute_p.

2023-04-24 Thread Andrew MacLeod via Gcc-patches



On 4/11/23 05:21, Richard Biener via Gcc-patches wrote:

On Wed, Apr 5, 2023 at 11:53 PM Jeff Law via Gcc-patches
 wrote:



On 4/5/23 14:10, Andrew MacLeod via Gcc-patches wrote:

When a statement is first processed, any SSA_NAMEs that are dependencies
are cached for quick future access.

If we later rewrite the statement (say, propagate a constant into it),
it's possible the ssa-name in this cache is no longer active.  Normally
this is not a problem, but the change to may_recompute_p forgot to take
that into account, and was checking a dependency from the cache that was
in the SSA_NAME_FREE_LIST.  It thus had no SSA_NAME_DEF_STMT when we were
expecting one.

This patch simply rejects dependencies from consideration if they are in
the free list.

Bootstrapping on x86_64-pc-linux-gnu and presuming no regressions, OK
for trunk?

eek.  So you've got a released name in the cache?  What happens if the
name gets released, then re-used?  Aren't you going to get bogus results
in that case?


It's not a real cache; it's merely a statement shortcut in dependency 
analysis to avoid re-parsing statements every time we look at them for 
dependency analysis.


It is not supposed to be used for anything other than dependency 
checking.  ie, if an SSA_NAME changes, we can check if it matches 
either of the 2 "cached" names on this DEF, and if so, we know this name 
is stale.  We are never actually supposed to use the dependency cached 
values to drive anything, merely respond to the question of whether 
either matches a given name.  So it doesn't matter if the name here has 
been freed.




We never re-use SSA names from within the pass releasing it.  But if
the ranger cache
persists across passes this could be a problem.  See



These particular values would never persist beyond the current pass; it's 
just the dependency chains, and they would get rebuilt every time because 
the IL has changed.




flush_ssaname_freelist which
for example resets the SCEV hash table which otherwise would have the
same issue.

IIRC ranger never outlives a pass so the patch should be OK.

_But_ I wonder how ranger gets at the tree SSA name in the first place - usually
caches are indexed by SSA_NAME_VERSION (because that's cheaper and



It's stored when we process a statement the first time when building 
dependency chains.  All comparisons down the road for 
staleness/dependency-chain existence are against a pointer, but we 
could simply change it to be an "unsigned int"; we'd then just have to 
compare against SSA_NAME_VERSION (name) instead.




better than a pointer to the tree) and ssa_name (ver) will return NULL
for released
SSA names.  So range_def_chain::rdc could be just

   struct rdc {
int ssa1;   // First direct dependency
int ssa2;   // Second direct dependency
bitmap bm;   // All dependencies
bitmap m_import;
   };

and ::depend1 return ssa_name (m_def_chain[v].ssa1) and everything would
work automatically (and save 8 bytes of storage).

Richard.

If the ssa-name is no longer in existence, does ssa_name (x) return NULL?



jeff




[COMMITTED] PR tree-optimization/109546 - Do not fold ADDR_EXPR conditions leading to builtin_unreachable early.

2023-04-21 Thread Andrew MacLeod via Gcc-patches
We can't represent ADDR_EXPR in ranges, so when we are processing 
builtin_unreachable() we should not be removing comparisons that utilize 
ADDR_EXPR during the early phases, or we lose some important information.


It was simply an oversight that we treated it as a comparison to a 
representable constant.


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  pushed.

This would also be suitable for the next GCC13 release when the branch 
is open.


Andrew
commit 0afefd11e25a05dd4f8a8624e8fb046d9c85686a
Author: Andrew MacLeod 
Date:   Fri Apr 21 15:03:43 2023 -0400

Do not fold ADDR_EXPR conditions leading to builtin_unreachable early.

Ranges can not represent  globally yet, so we cannot fold these
expressions early or we lose the __builtin_unreachable information.

PR tree-optimization/109546
gcc/
* tree-vrp.cc (remove_unreachable::remove_and_update_globals): Do
not fold conditions with ADDR_EXPR early.

gcc/testsuite/
* gcc.dg/pr109546.c: New.

diff --git a/gcc/testsuite/gcc.dg/pr109546.c b/gcc/testsuite/gcc.dg/pr109546.c
new file mode 100644
index 000..ba8af0f31fa
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr109546.c
@@ -0,0 +1,24 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -fdump-tree-optimized" } */
+
+void foo(void);
+static int a, c;
+static int *b = 
+static int **d = 
+void assert_fail() __attribute__((__noreturn__));
+int main() {
+  int *e = *d;
+  if (e ==  || e == );
+  else {
+__builtin_unreachable();
+  assert_fail();
+  }
+  if (e ==  || e == );
+  else
+foo();
+}
+
+/* { dg-final { scan-tree-dump-not "assert_fail" "optimized" } } */
+/* { dg-final { scan-tree-dump-not "foo" "optimized" } } */
+
+
diff --git a/gcc/tree-vrp.cc b/gcc/tree-vrp.cc
index f4d484526c7..9b870640e23 100644
--- a/gcc/tree-vrp.cc
+++ b/gcc/tree-vrp.cc
@@ -150,7 +150,9 @@ remove_unreachable::remove_and_update_globals (bool final_p)
   // If this is already a constant condition, don't look either
   if (!lhs_p && !rhs_p)
 	continue;
-
+  // Do not remove addresses early. ie if (x == )
+  if (!final_p && lhs_p && TREE_CODE (gimple_cond_rhs (s)) == ADDR_EXPR)
+	continue;
   bool dominate_exit_p = true;
   FOR_EACH_GORI_EXPORT_NAME (m_ranger.gori (), e->src, name)
 	{


[PATCH] PR tree-optimization/109564 - Do not ignore UNDEFINED ranges when determining PHI equivalences.

2023-04-20 Thread Andrew MacLeod via Gcc-patches
This removes the special-casing of UNDEFINED ranges when we are checking 
whether all arguments are the same and registering an equivalence.


Previously, if there were 2 different names and one was undefined, we 
ignored it and created an equivalence with the other one.  As observed, 
this is not a two-way relationship, and as such we shouldn't do it this 
way.  This removes the bypass for undefined ranges when checking if 
arguments are the same symbol.


Bootstrapped/regtested successfully on x86_64-linux and i686-linux.  OK 
for trunk?


Andrew


commit 26f20f4446531225b362b9ec7b473ce4f0822a0a
Author: Andrew MacLeod 
Date:   Thu Apr 20 13:10:40 2023 -0400

Do not ignore UNDEFINED ranges when determining PHI equivalences.

Do not ignore UNDEFINED name arguments when registering two-way equivalences
from PHIs.

PR tree-optimization/109564
gcc/
* gimple-range-fold.cc (fold_using_range::range_of_phi):
Do not ignore UNDEFINED range names when deciding if all the names
on a PHI are the same.

gcc/testsuite/
* gcc.dg/torture/pr109564-1.c: New testcase.
* gcc.dg/torture/pr109564-2.c: Likewise.
* gcc.dg/tree-ssa/evrp-ignore.c: XFAIL.
* gcc.dg/tree-ssa/vrp06.c: Likewise.
---

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 429734f954a..180f349eda9 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -771,16 +771,16 @@ fold_using_range::range_of_phi (vrange , gphi *phi, fur_source )
 
 	  if (gimple_range_ssa_p (arg) && src.gori ())
 	src.gori ()->register_dependency (phi_def, arg);
+	}
 
-	  // Track if all arguments are the same.
-	  if (!seen_arg)
-	{
-	  seen_arg = true;
-	  single_arg = arg;
-	}
-	  else if (single_arg != arg)
-	single_arg = NULL_TREE;
+  // Track if all arguments are the same.
+  if (!seen_arg)
+	{
+	  seen_arg = true;
+	  single_arg = arg;
 	}
+  else if (single_arg != arg)
+	single_arg = NULL_TREE;
 
   // Once the value reaches varying, stop looking.
   if (r.varying_p () && single_arg == NULL_TREE)
diff --git a/gcc/testsuite/gcc.dg/torture/pr109564-1.c b/gcc/testsuite/gcc.dg/torture/pr109564-1.c
new file mode 100644
index 000..e7c855f1edf
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/pr109564-1.c
@@ -0,0 +1,74 @@
+/* { dg-do run } */
+
+struct libkeccak_spec {
+long int bitrate;
+};
+
+struct libkeccak_generalised_spec {
+long int bitrate;
+long int state_size;
+long int word_size;
+};
+
+int __attribute__((noipa))
+libkeccak_degeneralise_spec(struct libkeccak_generalised_spec *restrict spec,
+			struct libkeccak_spec *restrict output_spec)
+{
+  long int state_size, word_size, bitrate, output;
+  const int have_state_size = spec->state_size != (-65536L);
+  const int have_word_size = spec->word_size != (-65536L);
+  const int have_bitrate = spec->bitrate != (-65536L);
+
+  if (have_state_size)
+{
+  state_size = spec->state_size;
+  if (state_size <= 0)
+	return 1;
+  if (state_size > 1600)
+	return 2;
+}
+
+  if (have_word_size)
+{
+  word_size = spec->word_size;
+  if (word_size <= 0)
+	return 4;
+  if (word_size > 64)
+	return 5;
+  if (have_state_size && state_size != word_size * 25)
+	return 6;
+  else if (!have_state_size) {
+	  spec->state_size = 1;
+	  state_size = word_size * 25;
+  }
+}
+
+  if (have_bitrate)
+bitrate = spec->bitrate;
+
+  if (!have_bitrate)
+{
+  state_size = (have_state_size ? state_size : (1600L));
+  output = ((state_size << 5) / 100L + 7L) & ~0x07L;
+  bitrate = output << 1;
+}
+
+  output_spec->bitrate = bitrate;
+
+  return 0;
+}
+
+int main ()
+{
+  struct libkeccak_generalised_spec gspec;
+  struct libkeccak_spec spec;
+  spec.bitrate = -1;
+  gspec.bitrate = -65536;
+  gspec.state_size = -65536;
+  gspec.word_size = -65536;
+  if (libkeccak_degeneralise_spec(, ))
+__builtin_abort ();
+  if (spec.bitrate != 1024)
+__builtin_abort ();
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.dg/torture/pr109564-2.c b/gcc/testsuite/gcc.dg/torture/pr109564-2.c
new file mode 100644
index 000..eeab437c0b3
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/pr109564-2.c
@@ -0,0 +1,33 @@
+/* { dg-do run } */
+
+struct libkeccak_generalised_spec {
+  int state_size;
+  int word_size;
+} main_gspec;
+
+long gvar;
+
+int libkeccak_degeneralise_spec(struct libkeccak_generalised_spec *spec)
+{
+  int state_size;
+  int have_state_size = spec->state_size != -1;
+  int have_word_size = spec->word_size;
+
+  if (have_state_size)
+state_size = spec->state_size;
+  if (have_word_size)
+gvar = 12345;
+  if (have_state_size && state_size != spec->word_size)
+return 1;
+  if (spec)
+gvar++;
+  return 0;
+}
+
+int main()
+{
+  main_gspec.state_size = -1;
+  if (libkeccak_degeneralise_spec(_gspec))
+__builtin_abort();
+  return 

Re: [PATCH] Return true from operator== for two identical ranges containing NAN.

2023-04-18 Thread Andrew MacLeod via Gcc-patches



On 4/18/23 04:52, Aldy Hernandez wrote:

[Andrew, we talked about this a few months ago.  Just making sure we're
on the same page so I can push it.  Also, a heads-up for Jakub.]

The == operator for ranges signifies that two ranges contain the same
thing, not that they are ultimately equal.  So [2,4] == [2,4], even
though one may be a 2 and the other may be a 3.  Similarly with two
VARYING ranges.

There is an oversight in frange::operator== where we are returning
false for two identical NANs.  This is causing us to never cache NANs
in sbr_sparse_bitmap::set_bb_range.


yes, this is correct.

Andrew
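The semantics being agreed on can be sketched outside of GCC (a toy type; frange's real representation differs): operator== on ranges compares what the ranges contain, not whether the contained values would compare equal, so two {NAN} ranges are interchangeable for caching even though NAN != NAN as a value.

```cpp
#include <cassert>
#include <cmath>

// Toy float range: either exactly {NAN} or a closed interval [lb, ub].
struct frange_sketch
{
  bool known_nan;
  double lb, ub;   // meaningful only when !known_nan

  // "Contains the same thing", not "the values compare equal".
  bool operator== (const frange_sketch &o) const
  {
    if (known_nan || o.known_nan)
      return known_nan == o.known_nan;   // identical NAN ranges are equal
    return lb == o.lb && ub == o.ub;
  }
};
```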



Re: [PATCH] PR tree-optimization/109462 - Don't use ANY PHI equivalences in range-on-entry.

2023-04-13 Thread Andrew MacLeod via Gcc-patches


On 4/13/23 09:56, Richard Biener wrote:

On Wed, Apr 12, 2023 at 10:55 PM Andrew MacLeod  wrote:


On 4/12/23 07:01, Richard Biener wrote:

On Wed, Apr 12, 2023 at 12:59 PM Jakub Jelinek  wrote:

Would be nice.

Though, I'm afraid it still wouldn't fix the PR101912 testcase, because
it has exactly what happens in this PR, undefined phi arg from the
pre-header and uses of the previous iteration's value (i.e. across
backedge).

Well yes, that's what's not allowed.  So when the PHI dominates the
to-be-equivalenced argument edge src then the equivalence isn't
valid because there's a place (that very source block for example) a use of the
PHI lhs could appear and where we'd then mixup iterations.


If we want to implement this cleaner, then as you say, we don't create
the equivalence if the PHI node dominates the argument edge.  The
attached patch does just that, removing both the "fix" for 108139
and the just-committed one for 109462, replacing them with catching this
at the time of equivalence registration.

It bootstraps and passes all regressions tests.
Do you want me to check this into trunk?

Uh, it looks a bit convoluted.  Wouldn't the following be enough?  OK
if that works
(or fixed if I messed up trivially)

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index e81f6b3699e..9c29012e160 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -776,7 +776,11 @@ fold_using_range::range_of_phi (vrange , gphi
*phi, fur_source )
   if (!seen_arg)
 {
   seen_arg = true;
- single_arg = arg;
+ // Avoid registering an equivalence if the PHI dominates the
+ // argument edge.  See PR 108139/109462.
+ if (dom_info_available_p (CDI_DOMINATORS)
+ && !dominated_by_p (CDI_DOMINATORS, e->src, gimple_bb (phi)))
+   single_arg = arg;
 }
   else if (single_arg != arg)
 single_arg = NULL_TREE;


It would expose a slight hole in cases where there is more than one 
copy of the name, ie:


for a_2 = PHI   we currently will create an equivalence 
between a_2 and c_3 because it's considered a single argument.  Not a 
big deal for this case since all arguments are c_3, but the hole would 
be when we have something like:


a_2 = PHI    if d_4 is undefined, then with the above 
patch we would only check the dominance of the first edge with c_3; we'd 
need to check all of them.
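The all-edges requirement can be sketched with a toy dominator tree (everything here is illustrative; it is not ranger's range_of_phi): defer the decision to the end, and only register the equivalence when no argument edge originates in a block dominated by the PHI's block.

```cpp
#include <cassert>
#include <vector>

// Toy immediate-dominator tree over block ids; idom[0] == 0 is the root.
std::vector<int> idom = {0, 0, 1, 1, 2};

// True when block DOM dominates block BB.
bool dominated_by (int bb, int dom)
{
  for (int b = bb; ; b = idom[b])
    {
      if (b == dom)
        return true;
      if (b == idom[b])
        return false;   // reached the root without meeting DOM
    }
}

struct phi_arg { int value; int edge_src; };

// Deferred single-argument check: all arguments must carry the same
// value, and *every* incoming edge must come from a block the PHI does
// not dominate (a dominated source means a back edge, where the
// "equivalence" would mix loop iterations).
bool single_arg_equiv_ok (int phi_bb, const std::vector<phi_arg> &args)
{
  int single = -1;
  for (const phi_arg &a : args)
    {
      if (single == -1)
        single = a.value;
      else if (a.value != single)
        return false;
      if (dominated_by (a.edge_src, phi_bb))
        return false;
    }
  return single != -1;
}
```

Checking every edge, not just the first, is what closes the hole when an undefined argument hides behind the first comparison.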


The patch is slightly convoluted because we always defer checking the 
edge/processing single arguments until we think there is a reason to 
(for performance).  My patch simply does the deferred check on the 
previous edge and sets the new one, so that we check both edges are 
valid before setting the equivalence.  Even as it is, with this deferred 
check we're about 0.4% slower in VRP.  If we didn't do this deferring, 
then every PHI would incur a check.


And along the way, remove the boolean seen_arg because having 
single_arg_edge set produces the same information.


Perhaps it would be cleaner to simply defer the entire thing to the end, 
like so.

Performance is pretty much identical in the end.

Bootstraps on x86_64-pc-linux-gnu, regressions are running. Assuming no 
regressions pop up,   OK for trunk?


Andrew





commit 9e16ef8e4de26bdc6e570bd327bbe15845491169
Author: Andrew MacLeod 
Date:   Wed Apr 12 13:10:55 2023 -0400

Ensure PHI equivalencies do not dominate the argument edge.

When we create an equivalency between a PHI definition and an argument,
ensure the definition does not dominate the incoming argument edge.

PR tree-optimization/108139
PR tree-optimization/109462
* gimple-range-cache.cc (ranger_cache::fill_block_cache): Remove
equivalency check for PHI nodes.
* gimple-range-fold.cc (fold_using_range::range_of_phi): Ensure def
does not dominate single-arg equivalency edges.

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 3b52f1e734c..2314478d558 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -1220,7 +1220,7 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
   // See if any equivalences can refine it.
   // PR 109462, like 108139 below, a one way equivalence introduced
   // by a PHI node can also be through the definition side.  Disallow it.
-  if (m_oracle && !is_a (SSA_NAME_DEF_STMT (name)))
+  if (m_oracle)
 	{
 	  tree equiv_name;
 	  relation_kind rel;
@@ -1237,13 +1237,6 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
 	  if (!m_gori.has_edge_range_p (equiv_name))
 		continue;
 
-	  // PR 108139. It is hazardous to assume an equivalence with
-	  // a PHI is the same value.  The PHI may be an equivalence
-	  // via UNDEFINED arguments which is really a one way equivalence.
-	  // PHIDEF == name, but 

Re: [PATCH] PR tree-optimization/109462 - Don't use ANY PHI equivalences in range-on-entry.

2023-04-12 Thread Andrew MacLeod via Gcc-patches


On 4/12/23 07:01, Richard Biener wrote:

On Wed, Apr 12, 2023 at 12:59 PM Jakub Jelinek  wrote:


Would be nice.

Though, I'm afraid it still wouldn't fix the PR101912 testcase, because
it has exactly what happens in this PR, undefined phi arg from the
pre-header and uses of the previous iteration's value (i.e. across
backedge).

Well yes, that's what's not allowed.  So when the PHI dominates the
to-be-equivalenced argument edge src then the equivalence isn't
valid because there's a place (that very source block for example) a use of the
PHI lhs could appear and where we'd then mixup iterations.

If we want to implement this cleaner, then as you say, we don't create 
the equivalence if the PHI node dominates the argument edge.  The 
attached patch does just that, removing both the "fix" for 108139 
and the just-committed one for 109462, replacing them with catching this 
at the time of equivalence registration.


It bootstraps and passes all regressions tests.
Do you want me to check this into trunk?

Andrew

PS: Of course, we still fail 101912.  The only way I see us being 
able to do anything with that is to effectively peel the first iteration 
off, either physically, or logically with the path ranger, to determine 
if a given use is actually reachable by the undefined value.




   :
  # prevcorr_7 = PHI 
  # leapcnt_8 = PHI <0(2), leapcnt_26(8)>
  if (leapcnt_8 < n_16)   // 0 < n_16
    goto ; [INV]

   :
  corr_22 = getint ();
  if (corr_22 <= 0)
    goto ; [INV]
  else
    goto ; [INV]

   :
  _1 = corr_22 == 1;
  _2 = leapcnt_8 != 0;  // [0, 0] = 0 != 0
  _3 = _1 & _2; // [0, 0] = 0 & _2
  if (_3 != 0)    // 4->5 is not taken on the path starting 
2->9

    goto ; [INV]
  else
    goto ; [INV]

   : // We know this path is not taken when 
prevcorr_7  == prevcorr_19(D)(2)

  if (prevcorr_7 != 1)
    goto ; [INV]
  else
    goto ; [INV]

   :
  _5 = prevcorr_7 + -1;
  if (prevcorr_7 != 2)
    goto ; [INV]
  else
    goto ; [INV]

Using the path ranger (would it even need tweaks, Aldy?), before issuing 
the warning the uninit code could easily start at each use, construct 
the path(s) to that use from the uninitialized value, and determine that 
when prevcorr is uninitialized, 2->9->3->4->5 will not be executed, and 
of course neither will 2->9->3->4->5->6.


  I think threading already does something similar?



commit 79b13320cf739c965bc8c7ceb8b27903271a3f6e
Author: Andrew MacLeod 
Date:   Wed Apr 12 13:10:55 2023 -0400

Ensure PHI equivalencies do not dominate the argument edge.

When we create an equivalency between a PHI definition and an argument,
ensure the definition does not dominate the incoming argument edge.

PR tree-optimization/108139
PR tree-optimization/109462
* gimple-range-cache.cc (ranger_cache::fill_block_cache): Remove
equivalency check for PHI nodes.
* gimple-range-fold.cc (fold_using_range::range_of_phi): Ensure def
does not dominate single-arg equivalency edges.

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 3b52f1e734c..2314478d558 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -1220,7 +1220,7 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
   // See if any equivalences can refine it.
   // PR 109462, like 108139 below, a one way equivalence introduced
   // by a PHI node can also be through the definition side.  Disallow it.
-  if (m_oracle && !is_a <gphi *> (SSA_NAME_DEF_STMT (name)))
+  if (m_oracle)
 	{
 	  tree equiv_name;
 	  relation_kind rel;
@@ -1237,13 +1237,6 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
 	  if (!m_gori.has_edge_range_p (equiv_name))
 		continue;
 
-	  // PR 108139. It is hazardous to assume an equivalence with
-	  // a PHI is the same value.  The PHI may be an equivalence
-	  // via UNDEFINED arguments which is really a one way equivalence.
-	  // PHIDEF == name, but name may not be == PHIDEF.
-	  if (is_a <gphi *> (SSA_NAME_DEF_STMT (equiv_name)))
-		continue;
-
 	  // Check if the equiv definition dominates this block
 	  if (equiv_bb == bb ||
 		  (equiv_bb && !dominated_by_p (CDI_DOMINATORS, bb, equiv_bb)))
diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index e81f6b3699e..8860152d3a0 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -742,7 +742,8 @@ fold_using_range::range_of_phi (vrange &r, gphi *phi, fur_source &src)
 
   // Track if all executable arguments are the same.
   tree single_arg = NULL_TREE;
-  bool seen_arg = false;
+  edge single_arg_edge = NULL;
+  basic_block bb = gimple_bb (phi);
 
   // Start with an empty range, unioning in each argument's range.
   r.set_undefined ();
@@ -773,13 +774,23 @@ fold_using_range::range_of_phi (vrange , gphi *phi, fur_source )
 	src.gori ()->register_dependency 

Re: [PATCH] PR tree-optimization/109462 - Don't use ANY PHI equivalences in range-on-entry.

2023-04-12 Thread Andrew MacLeod via Gcc-patches



On 4/12/23 04:20, Jakub Jelinek wrote:

On Tue, Apr 11, 2023 at 07:52:29PM -0400, Andrew MacLeod wrote:

This bootstraps on x86_64-pc-linux-gnu  with that single regression, which I
have XFAILed for now.  OK for trunk?

Yes.


    Once Jakub verifies it actually fixes
the execution problem.  We have no executable test yet.

I have verified this fix both on the original clang testcase, and
on a self-contained testcase I've reduced overnight and this morning.

Ok to commit it to trunk incrementally after your commit?



Sure. I just pushed it.

Andrew




[PATCH] PR tree-optimization/109462 - Don't use ANY PHI equivalences in range-on-entry.

2023-04-11 Thread Andrew MacLeod via Gcc-patches

This is a carry over from PR 108139.

When we have a PHI node which has 2 arguments and one is undefined, we 
create an equivalence between the LHS and the non-undefined PHI 
argument.  This allows us to perform certain optimizations.


The problem is, when we are evaluating range-on-entry in the cache, the 
result depends on where that equivalence is made, and there we have no context.


a_3 = PHI <b_2, c_3>

If c_3 is undefined, then a_3 is equivalent to b_2... but b_2 is not 
equivalent to a_3 everywhere; it's a one-way thing.


108139 fixed this by not evaluating any equivalences if the equivalence 
was the LHS.


What it missed was that it is possible we are calculating the range of a_3.  
b_2 is not defined in a PHI node, so it happily used the equivalence.  
This PR demonstrates that we can't always use that equivalence either 
without more context.  There can be places in the IL where a_3 is used, 
but b_2 has moved to a new value within a loop.


So we can't do this if either NAME or the equivalence gets its value via 
a PHI node with an undefined argument.


Unfortunately, this unsafe assumption is why PR 101912 is fixed.  
Fixing this issue properly is going to cause that to reopen, as it is 
unsafe.  (That PR is a false uninitialized warning issue, rather than a 
wrong-code issue.)


This bootstraps on x86_64-pc-linux-gnu with that single regression, 
which I have XFAILed for now.  OK for trunk?  Once Jakub verifies it 
actually fixes the execution problem.  We have no executable test yet.


Andrew



commit 90848fb75cf91a45edd355d2b1485ef835099609
Author: Andrew MacLeod 
Date:   Tue Apr 11 17:29:03 2023 -0400

Don't use ANY PHI equivalences in range-on-entry.

PR 108139 disallows PHI equivalencies in the on-entry calculator, but
it was only checking if the equivalence was a PHI.  In this case, NAME
itself is a PHI with an equivalence caused by an undefined value, so we
also need to check that case.  Unfortunately this un-fixes 101912.

PR tree-optimization/109462
gcc/
* gimple-range-cache.cc (ranger_cache::fill_block_cache): Don't
check for equivalences if NAME is a phi node.

gcc/testsuite/
* gcc.dg/uninit-pr101912.c: XFAIL the warning.

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 6a098d8ec28..3b52f1e734c 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -1218,7 +1218,9 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
 	  fprintf (dump_file, "\n");
 	}
   // See if any equivalences can refine it.
-  if (m_oracle)
+  // PR 109462, like 108139 below, a one way equivalence introduced
+  // by a PHI node can also be through the definition side.  Disallow it.
+  if (m_oracle && !is_a <gphi *> (SSA_NAME_DEF_STMT (name)))
 	{
 	  tree equiv_name;
 	  relation_kind rel;
diff --git a/gcc/testsuite/gcc.dg/uninit-pr101912.c b/gcc/testsuite/gcc.dg/uninit-pr101912.c
index 1550c03436d..62cd2a0c73e 100644
--- a/gcc/testsuite/gcc.dg/uninit-pr101912.c
+++ b/gcc/testsuite/gcc.dg/uninit-pr101912.c
@@ -11,7 +11,7 @@ tzloadbody (void)
   for (int i = 0; i < n; i++)
 {
   int corr = getint ();
-  if (corr < 1 || (corr == 1 && !(leapcnt == 0 || (prevcorr < corr ? corr == prevcorr + 1 : (corr == prevcorr || corr == prevcorr - 1) /* { dg-bogus "uninitialized" } */
+  if (corr < 1 || (corr == 1 && !(leapcnt == 0 || (prevcorr < corr ? corr == prevcorr + 1 : (corr == prevcorr || corr == prevcorr - 1) /* { dg-bogus "uninitialized" "pr101912" { xfail *-*-* } } */
 	return -1;
 
   prevcorr = corr;


Re: [RFC PATCH] range-op-float: Fix up op1_op2_relation of comparisons

2023-04-11 Thread Andrew MacLeod via Gcc-patches


On 4/11/23 04:21, Jakub Jelinek wrote:

Hi!

This patch was what I've tried first before the currently committed
PR109386 fix.  Still, I think it is the right thing until we have proper
full set of VREL_* relations for NANs (though it would be really nice
if op1_op2_relation could be passed either type of the comparison
operands, or even better ranges of the two operands, such that
we could choose if inversion of say VREL_LT is VREL_GE (if !MODE_HONOR_NANS
(TYPE_MODE (type))) or rhs1/rhs2 ranges are guaranteed not to include
NANs (!known_isnan && !maybe_isnan for both), or VREL_UNGE, etc.
Anyway, the current state is that for the LE/LT/GE/GT comparisons
we pretend the inverse is like for integral comparisons, which is
true only if NANs can't appear in operands, while for UNLE/UNLT/UNGE/UNGT
we don't override op1_op2_relation (so it always yields VREL_VARYING).

Though, this patch regresses the
FAIL: gcc.dg/tree-ssa/vrp-float-6.c scan-tree-dump-times evrp "Folding predicate x_.* <= y_.* to 1" 1
test, so am not sure what to do with it.  The test has explicit
!isnan tests around it, so e.g. having the ranges passed to op1_op2_relation
would also fix it.



I see no reason op1_op2_relation can't have ranges provided to it for 
op1 and op2.  There was no need originally.  There are times when we 
don't have a range handy and we want the simple answer, but if the 
ranges are available, we could utilize them.


Attached is a patch which adds op1 and op2 ranges to the routine.  GORI 
will utilize and pass on real ranges (which I think is the core part you 
want), but the consumers in fold_using_range at this point will simply 
pass in varying.  There are 2 consumers in fold_using_range: one is a 
combiner for logicals, and the other exports outgoing relations 
that are not on the branch condition.  The combiner could use real 
ranges, but until I fix up dispatch it is very awkward to get them.  The 
export one simply doesn't have them without going and calculating 
them, which would probably be expensive.


Regardless, you can at least try your enhancement using real ranges and 
see if this works for you.


This bootstraps and has no regressions, and is fine by me if you want to 
use it.


Andrew

commit 3715234f2cba21f2b9ec6c609b6f058d1d8af500
Author: Andrew MacLeod 
Date:   Tue Apr 11 12:25:49 2023 -0400

Add op1 and op2 ranges to op1_op2_relation.

* gimple-range-fold.cc (fold_using_range::relation_fold_and_or):
Provide VARYING for op1 and op2 when calling op1_op2_relation.
(fur_source::register_outgoing_edges): Ditto.
* gimple-range-gori.cc (gori_compute::compute_operand1_range):
Pass op1 and op2 ranges to op1_op2_relation.
(gori_compute::compute_operand2_range): Ditto.
* range-op-float.cc (*::op1_op2_relation): Adjust params.
* range-op.cc (*::op1_op2_relation): Adjust params.
* range-op.h (*::op1_op2_relation): Adjust params.

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index e81f6b3699e..3170b1e71a1 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -1051,9 +1051,11 @@ fold_using_range::relation_fold_and_or (irange& lhs_range, gimple *s,
 return;
 
   int_range<2> bool_one (boolean_true_node, boolean_true_node);
+  Value_Range op (TREE_TYPE (ssa1));
+  op.set_varying (TREE_TYPE (ssa1));
 
-  relation_kind relation1 = handler1.op1_op2_relation (bool_one);
-  relation_kind relation2 = handler2.op1_op2_relation (bool_one);
+  relation_kind relation1 = handler1.op1_op2_relation (bool_one, op, op);
+  relation_kind relation2 = handler2.op1_op2_relation (bool_one, op, op);
   if (relation1 == VREL_VARYING || relation2 == VREL_VARYING)
 return;
 
@@ -1125,15 +1127,17 @@ fur_source::register_outgoing_edges (gcond *s, irange &lhs_range, edge e0, edge e1)
   tree ssa2 = gimple_range_ssa_p (handler.operand2 ());
   if (ssa1 && ssa2)
 {
+  Value_Range op (TREE_TYPE (ssa1));
+  op.set_varying (TREE_TYPE (ssa1));
   if (e0)
 	{
-	  relation_kind relation = handler.op1_op2_relation (e0_range);
+	  relation_kind relation = handler.op1_op2_relation (e0_range, op, op);
 	  if (relation != VREL_VARYING)
 	register_relation (e0, relation, ssa1, ssa2);
 	}
   if (e1)
 	{
-	  relation_kind relation = handler.op1_op2_relation (e1_range);
+	  relation_kind relation = handler.op1_op2_relation (e1_range, op, op);
 	  if (relation != VREL_VARYING)
 	register_relation (e1, relation, ssa1, ssa2);
 	}
@@ -1160,17 +1164,19 @@ fur_source::register_outgoing_edges (gcond *s, irange &lhs_range, edge e0, edge e1)
   Value_Range r (TREE_TYPE (name));
   if (ssa1 && ssa2)
 	{
+	  Value_Range op (TREE_TYPE (ssa1));
+	  op.set_varying (TREE_TYPE (ssa1));
 	  if (e0 && gori ()->outgoing_edge_range_p (r, e0, name, *m_query)
 	  && r.singleton_p ())
 	{
-	  relation_kind relation = handler.op1_op2_relation (r);
+	  relation_kind 

[PATCH] PR tree-optimization/109417 - Check if dependency is valid before using in may_recompute_p.

2023-04-05 Thread Andrew MacLeod via Gcc-patches
When a statement is first processed, any SSA_NAMEs that are dependencies 
are cached for quick future access.


If we later rewrite the statement (say, propagate a constant into it), 
it's possible the SSA name in this cache is no longer active.  Normally 
this is not a problem, but the change to may_recompute_p forgot to take 
that into account, and was checking a dependency from the cache that was 
in the SSA_NAME_FREE_LIST.  It thus had no SSA_NAME_DEF_STMT when we were 
expecting one.


This patch simply rejects dependencies from consideration if they are in 
the free list.


Bootstrapping on x86_64-pc-linux-gnu and presuming no regressions, OK 
for trunk?


Andrew
commit ecd86e159e8499feb387bc4d99bd37a5fd6a0d68
Author: Andrew MacLeod 
Date:   Wed Apr 5 15:59:38 2023 -0400

Check if dependency is valid before using in may_recompute_p.

When the IL is rewritten after a statement has been processed and
dependencies cached, it's possible that an SSA name in the dependency
cache is no longer in the IL.  Check this before trying to recompute.

PR tree-optimization/109417
gcc/
* gimple-range-gori.cc (gori_compute::may_recompute_p): Check if
dependency is in SSA_NAME_FREE_LIST.

gcc/testsuite/
* gcc.dg/pr109417.c: New.

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index 5f4313b27dd..6e2f9533038 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -1314,7 +1314,9 @@ gori_compute::may_recompute_p (tree name, basic_block bb, int depth)
   tree dep2 = depend2 (name);
 
   // If the first dependency is not set, there is no recomputation.
-  if (!dep1)
+  // Dependencies reflect original IL, not current state.   Check if the
+  // SSA_NAME is still valid as well.
+  if (!dep1 || SSA_NAME_IN_FREE_LIST (dep1))
 return false;
 
   // Don't recalculate PHIs or statements with side_effects.
diff --git a/gcc/testsuite/gcc.dg/pr109417.c b/gcc/testsuite/gcc.dg/pr109417.c
new file mode 100644
index 0000000..15711dbbafe
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr109417.c
@@ -0,0 +1,24 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+
+int printf(const char *, ...);
+int c, d, *e, f[1][2], g;
+int main() {
+  int h = 0, *a = &h, **b[1] = {&a};
+  while (e)
+while (g) {
+L:
+  for (h = 0; h < 2; h++) {
+while (d)
+  for (*e = 0; *e < 1;)
+printf("0");
+while (c)
+  ;
+f[g][h] = 0;
+  }
+}
+  if (h)
+goto L;
+  return 0;
+}
+


Re: [PATCH (pushed)] param: document ranger-recompute-depth

2023-04-03 Thread Andrew MacLeod via Gcc-patches

Bah.. forgot that.. thanks :-)

Andrew

On 4/3/23 04:04, Martin Liška wrote:

gcc/ChangeLog:

* doc/invoke.texi: Document new param.
---
 gcc/doc/invoke.texi | 4 
 1 file changed, 4 insertions(+)

diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index def2df4584b..c9482886c5a 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -16170,6 +16170,10 @@ per supernode, before terminating analysis.
 Maximum depth of logical expression evaluation ranger will look through
 when evaluating outgoing edge ranges.

+@item ranger-recompute-depth
+Maximum depth of instruction chains to consider for recomputation
+in the outgoing range calculator.
+
 @item relation-block-limit
 Maximum number of relations the oracle will register in a basic block.





Re: Regression with "recomputation and PR 109154"

2023-03-31 Thread Andrew MacLeod via Gcc-patches



On 3/31/23 19:31, Hans-Peter Nilsson wrote:

Date: Fri, 31 Mar 2023 15:48:22 -0400
From: Andrew MacLeod via Gcc-patches 
Reply-To: Andrew MacLeod 
commit 55bf4f0d443e5adbacfcdbbebf4b2e0c74d1dcc8
Author: Andrew MacLeod 
Date:   Fri Mar 31 15:42:43 2023 -0400

 Adjust testcases to not produce errors..
 
 tree-optimization/109363

 gcc/testsuite/
 * g++.dg/warn/Wstringop-overflow-4.C: Always check bogus message.
 * gcc.dg/tree-ssa/pr23109.c: Disable better recomputations.


(Needs to be spelled "PR tree-optimization/109363" for the
bugzilla-marker hook to react.  I'll mark manually though.)



Oops!  thanks :-)

Andrew



Re: Regression with "recomputation and PR 109154"

2023-03-31 Thread Andrew MacLeod via Gcc-patches



On 3/31/23 15:59, Jeff Law wrote:



On 3/31/23 13:48, Andrew MacLeod wrote:
Should we do something like this to tweak the testcases?  Or does 
someone have something else in mind?
Go ahead and tweak the testcase.  Unless you want to revamp it per 
Jakub's suggestions.


not particularly  :-)

pushed.

Hopefully this smooths things a little... probably causes them to fail 
elsewhere now. Ha!  I'm out for most of the rest of the weekend.. any 
other tweaks someone else should do...  I will check in once in a while.


Andrew




Re: Regression with "recomputation and PR 109154"

2023-03-31 Thread Andrew MacLeod via Gcc-patches
Should we do something like this to tweak the testcases?  Or does 
someone have something else in mind?


Richi opened a PR for the STL failure (109350)

Andrew





On 3/31/23 13:37, Jakub Jelinek wrote:

On Fri, Mar 31, 2023 at 01:02:18PM -0400, Andrew MacLeod wrote:

I guess it figures the recip is safe to put in, there will not be a divide
by zero.

I think the problem was that 1/d was hoisted before the loop; as long as
it is guarded with the d > 0.01 or e > 0.005 condition, it is fine.
The test probably should have been a runtime test, doing the main stuff
in some other noipa function and doing fetestexcept after it or something
similar.


I guess the test is no longer testing what it should be?

And yes, we could set the param back to 1 for the test...
Adding --param=ranger-recompute-depth=1 makes the "issue" go away :-) for
now.

That looks reasonable unless we rewrite the test into runtime one (but we'd
then need to double check that it was really miscompiled and would fail back
then in 4.0).

Jakub
commit 55bf4f0d443e5adbacfcdbbebf4b2e0c74d1dcc8
Author: Andrew MacLeod 
Date:   Fri Mar 31 15:42:43 2023 -0400

Adjust testcases to not produce errors..

tree-optimization/109363
gcc/testsuite/
 * g++.dg/warn/Wstringop-overflow-4.C: Always check bogus message.
* gcc.dg/tree-ssa/pr23109.c: Disable better recomputations.

diff --git a/gcc/testsuite/g++.dg/warn/Wstringop-overflow-4.C b/gcc/testsuite/g++.dg/warn/Wstringop-overflow-4.C
index 35fb59e0232..faad5bed074 100644
--- a/gcc/testsuite/g++.dg/warn/Wstringop-overflow-4.C
+++ b/gcc/testsuite/g++.dg/warn/Wstringop-overflow-4.C
@@ -141,7 +141,7 @@ void test_strcpy_new_int16_t (size_t n, const size_t vals[])
 
   int r_imin_imax = SR (INT_MIN, INT_MAX);
   T (S (1), new int16_t[r_imin_imax]);
-  T (S (2), new int16_t[r_imin_imax + 1]); // { dg-bogus "into a region of size" "pr106120" { xfail { ilp32 && c++98_only } } }
+  T (S (2), new int16_t[r_imin_imax + 1]); // { dg-bogus "into a region of size" "pr106120" { xfail { c++98_only } } }
   T (S (9), new int16_t[r_imin_imax * 2 + 1]);
 
   int r_0_imax = SR (0, INT_MAX);
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr23109.c b/gcc/testsuite/gcc.dg/tree-ssa/pr23109.c
index 7cdf1d05ee7..059f658ea20 100644
--- a/gcc/testsuite/gcc.dg/tree-ssa/pr23109.c
+++ b/gcc/testsuite/gcc.dg/tree-ssa/pr23109.c
@@ -1,6 +1,7 @@
 /* { dg-do compile } */
-/* { dg-options "-O2 -funsafe-math-optimizations -ftrapping-math -fdump-tree-recip -fdump-tree-lim2" } */
+/* { dg-options "-O2 -funsafe-math-optimizations -ftrapping-math -fdump-tree-recip -fdump-tree-lim2 --param=ranger-recompute-depth=1" } */
 /* { dg-warning "'-fassociative-math' disabled" "" { target *-*-* } 0 } */
+/* ranger-recompute-depth prevents the optimizers from being too smart.  */
 
 double F[2] = { 0., 0. }, e = 0.;
 

