Re: PING^1 [PATCH] range: Workaround different type precision issue between _Float128 and long double [PR112788]

2023-12-12 Thread Andrew MacLeod
I'll leave this for the release managers, but I am not opposed to it for 
this release...  It would be nice to remove it for the next release.


Andrew



On 12/12/23 01:07, Kewen.Lin wrote:

Hi,

Gentle ping this:

https://gcc.gnu.org/pipermail/gcc-patches/2023-December/639140.html

BR,
Kewen

on 2023/12/4 17:49, Kewen.Lin wrote:

Hi,

As PR112788 shows, on rs6000 with -mabi=ieeelongdouble, type _Float128
has a different type precision (128) from that (127) of type long
double, but they actually have the same underlying mode, so they have
the same precision, as the mode indicates the same real type format
ieee_quad_format.

It's not sensible to have two such types which have the same mode but
different type precisions; a fix attempt was posted at [1].
As discussed there, there are some historical reasons and
practical issues.  Considering we have passed stage 1 and it also broke
the build as reported, this patch temporarily works around the issue.
I thought about introducing a hookpod, but that seems a bit overkill;
assuming that scalar float types with the same mode have the same
precision looks sensible.

Bootstrapped and regtested on powerpc64-linux-gnu P7/P8/P9 and
powerpc64le-linux-gnu P9/P10.

Is it ok for trunk?

[1] 
https://inbox.sourceware.org/gcc-patches/718677e7-614d-7977-312d-05a75e1fd...@linux.ibm.com/

BR,
Kewen

PR tree-optimization/112788

gcc/ChangeLog:

* value-range.h (range_compatible_p): Work around the same type mode
but different type precision issue for rs6000 scalar float types
_Float128 and long double.
---
  gcc/value-range.h | 10 ++++++++--
  1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/gcc/value-range.h b/gcc/value-range.h
index 33f204a7171..d0a84754a10 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -1558,7 +1558,13 @@ range_compatible_p (tree type1, tree type2)
// types_compatible_p requires conversion in both directions to be useless.
// GIMPLE only requires a cast one way in order to be compatible.
// Ranges really only need the sign and precision to be the same.
-  return (TYPE_PRECISION (type1) == TYPE_PRECISION (type2)
-	  && TYPE_SIGN (type1) == TYPE_SIGN (type2));
+  return TYPE_SIGN (type1) == TYPE_SIGN (type2)
+	 && (TYPE_PRECISION (type1) == TYPE_PRECISION (type2)
+	     // FIXME: As PR112788 shows, for now on rs6000 _Float128 has
+	     // type precision 128 while long double has type precision 127
+	     // but both have the same mode so their precision is actually
+	     // the same, workaround it temporarily.
+	     || (SCALAR_FLOAT_TYPE_P (type1)
+		 && TYPE_MODE (type1) == TYPE_MODE (type2)));
  }
  #endif // GCC_VALUE_RANGE_H
--
2.42.0





Re: [Patch] OpenMP: Minor '!$omp allocators' cleanup - and still: Re: [patch] OpenMP/Fortran: Implement omp allocators/allocate for ptr/allocatables

2023-12-11 Thread Andrew MacLeod



On 12/11/23 17:12, Thomas Schwinge wrote:

Hi!

This issue would've been prevented if we'd actually use a distinct C++
data type for GCC types, checkable at compile time -- I'm thus CCing
Andrew MacLeod for amusement or crying, "one more for the list!".  ;-\


Perhaps the time has come...  It is definitely under reconsideration 
for next stage 1...


Andrew


(See
<https://inbox.sourceware.org/1acd7994-2440-4092-897f-97f14d3fb...@redhat.com>
"[TTYPE] Strongly typed tree project. Original document circa 2017".)

On 2023-12-11T12:45:27+0100, Tobias Burnus  wrote:

I included a minor cleanup patch [...]

I intend to commit that patch as obvious, unless there are further comments.
OpenMP: Minor '!$omp allocators' cleanup
--- a/gcc/fortran/trans-openmp.cc
+++ b/gcc/fortran/trans-openmp.cc
@@ -8361,8 +8361,10 @@ gfc_omp_call_add_alloc (tree ptr)
if (fn == NULL_TREE)
  {
 fn = build_function_type_list (void_type_node, ptr_type_node, NULL_TREE);
+  tree att = build_tree_list (NULL_TREE, build_string (4, ". R "));
+  att = tree_cons (get_identifier ("fn spec"), att, TYPE_ATTRIBUTES (fn));
+  fn = build_type_attribute_variant (fn, att);
fn = build_fn_decl ("GOMP_add_alloc", fn);
-/* FIXME: attributes.  */
  }
return build_call_expr_loc (input_location, fn, 1, ptr);
  }
@@ -8380,7 +8382,9 @@ gfc_omp_call_is_alloc (tree ptr)
 fn = build_function_type_list (boolean_type_node, ptr_type_node,
				NULL_TREE);
fn = build_fn_decl ("GOMP_is_alloc", fn);
-/* FIXME: attributes.  */
+  tree att = build_tree_list (NULL_TREE, build_string (4, ". R "));
+  att = tree_cons (get_identifier ("fn spec"), att, TYPE_ATTRIBUTES (fn));
+  fn = build_type_attribute_variant (fn, att);
  }
return build_call_expr_loc (input_location, fn, 1, ptr);
  }

Pushed to master branch commit 453e0f45a49f425992bc47ff8909ed8affc29d2e
"Resolve ICE in 'gcc/fortran/trans-openmp.cc:gfc_omp_call_is_alloc'", see
attached.


Grüße
  Thomas


-
Siemens Electronic Design Automation GmbH; Anschrift: Arnulfstraße 201, 80634 
München; Gesellschaft mit beschränkter Haftung; Geschäftsführer: Thomas 
Heurung, Frank Thürauf; Sitz der Gesellschaft: München; Registergericht 
München, HRB 106955




Re: [PATCH] tree-optimization/112843 - update_stmt doing wrong things

2023-12-05 Thread Andrew MacLeod

On 12/5/23 03:27, Richard Biener wrote:

The following removes range_query::update_stmt and its single
invocation from update_stmt_operands.  That function is not
supposed to look beyond the raw stmt contents of the passed
stmt since there's no guarantee about the rest of the IL.

I've successfully bootstrapped & tested the update_stmt_operands
hunk, now testing removal of the actual routine as well.  The
testcase that was added when introducing range_query::update_stmt
still passes.

OK to remove the implementation?  I don't see any way around
removing the call though.


I'm OK removing it.  Now that we are enabling ranger during a lot of IL 
updating, it probably doesn't make sense for the few cases it used to 
help with, and it may well be dangerous.


The testcase in question that was added appears to be threaded now, which 
it wasn't before.  If a similar situation occurs and we need some sort 
of updating, I'll just mark the ssa-name on the LHS as out-of-date, and 
then it'll get lazily updated if need be.


Thanks.

Andrew


Thanks,
Richard.

PR tree-optimization/112843
* tree-ssa-operands.cc (update_stmt_operands): Do not call
update_stmt from ranger.
* value-query.h (range_query::update_stmt): Remove.
* gimple-range.h (gimple_ranger::update_stmt): Likewise.
* gimple-range.cc (gimple_ranger::update_stmt): Likewise.
---
  gcc/gimple-range.cc      | 34 ----------------------------------
  gcc/gimple-range.h       |  1 -
  gcc/tree-ssa-operands.cc |  3 ---
  gcc/value-query.h        |  3 ---
  4 files changed, 41 deletions(-)

diff --git a/gcc/gimple-range.cc b/gcc/gimple-range.cc
index 5e9bb397a20..84d2c7516e6 100644
--- a/gcc/gimple-range.cc
+++ b/gcc/gimple-range.cc
@@ -544,40 +544,6 @@ gimple_ranger::register_transitive_inferred_ranges 
(basic_block bb)
  }
  }
  
-// When a statement S has changed since the result was cached, re-evaluate
-// and update the global cache.
-
-void
-gimple_ranger::update_stmt (gimple *s)
-{
-  tree lhs = gimple_get_lhs (s);
-  if (!lhs || !gimple_range_ssa_p (lhs))
-return;
-  Value_Range r (TREE_TYPE (lhs));
-  // Only update if it already had a value.
-  if (m_cache.get_global_range (r, lhs))
-{
-  // Re-calculate a new value using just cache values.
-  Value_Range tmp (TREE_TYPE (lhs));
-  fold_using_range f;
-  fur_stmt src (s, &m_cache);
-  f.fold_stmt (tmp, s, src, lhs);
-
-  // Combine the new value with the old value to check for a change.
-  if (r.intersect (tmp))
-   {
- if (dump_file && (dump_flags & TDF_DETAILS))
-   {
- print_generic_expr (dump_file, lhs, TDF_SLIM);
- fprintf (dump_file, " : global value re-evaluated to ");
- r.dump (dump_file);
- fputc ('\n', dump_file);
-   }
- m_cache.set_global_range (lhs, r);
-   }
-}
-}
-
  // This routine will export whatever global ranges are known to GCC
  // SSA_RANGE_NAME_INFO and SSA_NAME_PTR_INFO fields.
  
diff --git a/gcc/gimple-range.h b/gcc/gimple-range.h

index 5807a2b80e5..6b0835c4ca1 100644
--- a/gcc/gimple-range.h
+++ b/gcc/gimple-range.h
@@ -52,7 +52,6 @@ public:
   virtual bool range_of_stmt (vrange &r, gimple *, tree name = NULL) override;
   virtual bool range_of_expr (vrange &r, tree name, gimple * = NULL) override;
   virtual bool range_on_edge (vrange &r, edge e, tree name) override;
-  virtual void update_stmt (gimple *) override;
   void range_on_entry (vrange &r, basic_block bb, tree name);
   void range_on_exit (vrange &r, basic_block bb, tree name);
void export_global_ranges ();
diff --git a/gcc/tree-ssa-operands.cc b/gcc/tree-ssa-operands.cc
index 57e393ae164..b0516a00d64 100644
--- a/gcc/tree-ssa-operands.cc
+++ b/gcc/tree-ssa-operands.cc
@@ -30,7 +30,6 @@ along with GCC; see the file COPYING3.  If not see
  #include "stmt.h"
  #include "print-tree.h"
  #include "dumpfile.h"
-#include "value-query.h"
  
  
  /* This file contains the code required to manage the operands cache of the

@@ -1146,8 +1145,6 @@ update_stmt_operands (struct function *fn, gimple *stmt)
gcc_assert (gimple_modified_p (stmt));
operands_scanner (fn, stmt).build_ssa_operands ();
gimple_set_modified (stmt, false);
-  // Inform the active range query an update has happened.
-  get_range_query (fn)->update_stmt (stmt);
  
timevar_pop (TV_TREE_OPS);

  }
diff --git a/gcc/value-query.h b/gcc/value-query.h
index 429446b32eb..0a6f18b03f6 100644
--- a/gcc/value-query.h
+++ b/gcc/value-query.h
@@ -71,9 +71,6 @@ public:
   virtual bool range_on_edge (vrange &r, edge, tree expr);
   virtual bool range_of_stmt (vrange &r, gimple *, tree name = NULL);
 
-  // When the IL in a stmt is changed, call this for better results.
-  virtual void update_stmt (gimple *) { }
-
   // Query if there is any relation between SSA1 and SSA2.
   relation_kind query_relation (gimple *s, tree ssa1, tree ssa2,
				bool get_range

[COMMITTED] Use range_compatible_p in check_operands_p.

2023-12-01 Thread Andrew MacLeod
Comments in PR 112788 correctly brought up that the new 
check_operands_p() routine is directly checking precision rather than 
calling range_compatible_p().


Most earlier iterations of the original patch had ranges as arguments, 
and it wasn't primarily a CHECKING_P-only call then...  Regardless, it 
makes total sense to call range_compatible_p, so this patch does exactly 
that.  It required moving range_compatible_p() into value-range.h and 
then adjusting each check_operands_p() routine.


Now range type compatibility is centralized again :-P

Bootstraps on x86_64-pc-linux-gnu with no new regressions.

Andrew

From c6bb413eeb9d13412e8101e3029099d7fd746708 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 1 Dec 2023 11:15:33 -0500
Subject: [PATCH] Use range_compatible_p in check_operands_p.

Instead of directly checking type precision, check_operands_p should
invoke range_compatible_p to keep the range checking centralized.

	* gimple-range-fold.h (range_compatible_p): Relocate.
	* value-range.h (range_compatible_p): Here.
	* range-op-mixed.h (operand_equal::operand_check_p): Call
	range_compatible_p rather than comparing precision.
	(operand_not_equal::operand_check_p): Ditto.
	(operand_not_lt::operand_check_p): Ditto.
	(operand_not_le::operand_check_p): Ditto.
	(operand_not_gt::operand_check_p): Ditto.
	(operand_not_ge::operand_check_p): Ditto.
	(operand_plus::operand_check_p): Ditto.
	(operand_abs::operand_check_p): Ditto.
	(operand_minus::operand_check_p): Ditto.
	(operand_negate::operand_check_p): Ditto.
	(operand_mult::operand_check_p): Ditto.
	(operand_bitwise_not::operand_check_p): Ditto.
	(operand_bitwise_xor::operand_check_p): Ditto.
	(operand_bitwise_and::operand_check_p): Ditto.
	(operand_bitwise_or::operand_check_p): Ditto.
	(operand_min::operand_check_p): Ditto.
	(operand_max::operand_check_p): Ditto.
	* range-op.cc (operand_lshift::operand_check_p): Ditto.
	(operand_rshift::operand_check_p): Ditto.
	(operand_logical_and::operand_check_p): Ditto.
	(operand_logical_or::operand_check_p): Ditto.
	(operand_logical_not::operand_check_p): Ditto.
---
 gcc/gimple-range-fold.h | 12 
 gcc/range-op-mixed.h| 43 -
 gcc/range-op.cc | 12 +---
 gcc/value-range.h   | 11 +++
 4 files changed, 33 insertions(+), 45 deletions(-)

diff --git a/gcc/gimple-range-fold.h b/gcc/gimple-range-fold.h
index fcbe1626790..0094b4e3f35 100644
--- a/gcc/gimple-range-fold.h
+++ b/gcc/gimple-range-fold.h
@@ -89,18 +89,6 @@ gimple_range_ssa_p (tree exp)
   return NULL_TREE;
 }
 
-// Return true if TYPE1 and TYPE2 are compatible range types.
-
-inline bool
-range_compatible_p (tree type1, tree type2)
-{
-  // types_compatible_p requires conversion in both directions to be useless.
-  // GIMPLE only requires a cast one way in order to be compatible.
-  // Ranges really only need the sign and precision to be the same.
-  return (TYPE_PRECISION (type1) == TYPE_PRECISION (type2)
-	  && TYPE_SIGN (type1) == TYPE_SIGN (type2));
-}
-
 // Source of all operands for fold_using_range and gori_compute.
 // It abstracts out the source of an operand so it can come from a stmt or
 // and edge or anywhere a derived class of fur_source wants.
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 4386a68e946..7e3ee17ccbd 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -140,7 +140,7 @@ public:
 		   const irange &rh) const final override;
   // Check op1 and op2 for compatibility.
   bool operand_check_p (tree, tree t1, tree t2) const final override
-{ return TYPE_PRECISION (t1) == TYPE_PRECISION (t2); }
+{ return range_compatible_p (t1, t2); }
 };
 
 class operator_not_equal : public range_operator
@@ -179,7 +179,7 @@ public:
 		   const irange &rh) const final override;
   // Check op1 and op2 for compatibility.
   bool operand_check_p (tree, tree t1, tree t2) const final override
-{ return TYPE_PRECISION (t1) == TYPE_PRECISION (t2); }
+{ return range_compatible_p (t1, t2); }
 };
 
 class operator_lt :  public range_operator
@@ -215,7 +215,7 @@ public:
 		   const irange &rh) const final override;
   // Check op1 and op2 for compatibility.
   bool operand_check_p (tree, tree t1, tree t2) const final override
-{ return TYPE_PRECISION (t1) == TYPE_PRECISION (t2); }
+{ return range_compatible_p (t1, t2); }
 };
 
 class operator_le :  public range_operator
@@ -254,7 +254,7 @@ public:
 		   const irange &rh) const final override;
   // Check op1 and op2 for compatibility.
   bool operand_check_p (tree, tree t1, tree t2) const final override
-{ return TYPE_PRECISION (t1) == TYPE_PRECISION (t2); }
+{ return range_compatible_p (t1, t2); }
 };
 
 class operator_gt :  public range_operator
@@ -292,7 +292,7 @@ public:
 		   const irange &rh) const final override;
   // Check op1 and op2 for compatibility.
   bool operand_check_p (tree, tree t1, tree t2) const final override
-{ return TYPE_PRECISION (t1) == TYPE_PRECISION (t2); }

[COMMITTED 2/2] PR tree-optimization/111922 - Check operands before invoking fold_range.

2023-11-29 Thread Andrew MacLeod
This patch utilizes the new check_operands_p() routine in range-ops to 
verify the operands are compatible before IPA tries to call 
fold_range().  I do not know if there are other places in IPA that 
should be checking this, but we have a bug report for this place at least.


The other option would be to have fold_range simply return false when 
the operands don't match, but then we lose the compile-time checking that 
everything is as it should be, and bugs may sneak through.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Committed.

Andrew


From 5f0c0f02702eba568374a7d82ec9463edd1a905c Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 28 Nov 2023 13:02:35 -0500
Subject: [PATCH 2/2] Check operands before invoking fold_range.

Call check_operands_p before fold_range to make sure it is a valid operation.

	PR tree-optimization/111922
	gcc/
	* ipa-cp.cc (ipa_vr_operation_and_type_effects): Check the
	operands are valid before calling fold_range.

	gcc/testsuite/
	* gcc.dg/pr111922.c: New.
---
 gcc/ipa-cp.cc   |  3 ++-
 gcc/testsuite/gcc.dg/pr111922.c | 29 +
 2 files changed, 31 insertions(+), 1 deletion(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr111922.c

diff --git a/gcc/ipa-cp.cc b/gcc/ipa-cp.cc
index 34fae065454..649ad536161 100644
--- a/gcc/ipa-cp.cc
+++ b/gcc/ipa-cp.cc
@@ -1926,7 +1926,8 @@ ipa_vr_operation_and_type_effects (vrange &dst_vr,
   Value_Range varying (dst_type);
   varying.set_varying (dst_type);
 
-  return (handler.fold_range (dst_vr, dst_type, src_vr, varying)
+  return (handler.operand_check_p (dst_type, src_type, dst_type)
+	  && handler.fold_range (dst_vr, dst_type, src_vr, varying)
 	  && !dst_vr.varying_p ()
 	  && !dst_vr.undefined_p ());
 }
diff --git a/gcc/testsuite/gcc.dg/pr111922.c b/gcc/testsuite/gcc.dg/pr111922.c
new file mode 100644
index 000..4f429d741c7
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr111922.c
@@ -0,0 +1,29 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fno-tree-fre" } */
+
+void f2 (void);
+void f4 (int, int, int);
+struct A { int a; };
+struct B { struct A *b; int c; } v;
+
+static int
+f1 (x, y)
+  struct C *x;
+  struct A *y;
+{
+  (v.c = v.b->a) || (v.c = v.b->a);
+  f2 ();
+}
+
+static void
+f3 (int x, int y)
+{
+  int b = f1 (0, ~x);
+  f4 (0, 0, v.c);
+}
+
+void
+f5 (void)
+{
+  f3 (0, 0);
+}
-- 
2.41.0



[COMMITTED 1/2] Add operand_check_p to range-ops.

2023-11-29 Thread Andrew MacLeod
I've been going back and forth with this for the past week, and finally 
settled on a solution.


This patch adds an operand_check_p (lhs_type, op1_type, op2_type) 
method to range_ops which will confirm whether the types of the operands 
being passed to fold_range, op1_range, and op2_range are properly 
compatible.  For range-ops this basically means the precision matches.


It was a bit tricky to do it any other way because various operations 
allow different precisions or even different types in some operand positions.


This patch sets up operand_check_p to return true by default, which 
means there is no variation from what we do today.  However, I have gone 
into all the integral/mixed range operators and added checks for 
things like X = Y + Z requiring the precision to be the same for all 3 
operands.  However, x = y && z only requires OP1 and OP2 to be the same 
precision, and x = ~y only requires the LHS and OP1 to match.


This call is utilized in a gcc_assert when CHECKING_P is on for 
fold_range(), op1_range() and op2_range() to provide compile-time 
verification while not costing anything in a release build.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Committed.

Andrew

From 9f1149ef823b64ead6115f79f99ddf8eead1c2f4 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 28 Nov 2023 09:39:30 -0500
Subject: [PATCH 1/2] Add operand_check_p to range-ops.

Add an optional method to verify operands are compatible, and check
the operands before all range operations.

	* range-op-mixed.h (operator_equal::operand_check_p): New.
	(operator_not_equal::operand_check_p): New.
	(operator_lt::operand_check_p): New.
	(operator_le::operand_check_p): New.
	(operator_gt::operand_check_p): New.
	(operator_ge::operand_check_p): New.
	(operator_plus::operand_check_p): New.
	(operator_abs::operand_check_p): New.
	(operator_minus::operand_check_p): New.
	(operator_negate::operand_check_p): New.
	(operator_mult::operand_check_p): New.
	(operator_bitwise_not::operand_check_p): New.
	(operator_bitwise_xor::operand_check_p): New.
	(operator_bitwise_and::operand_check_p): New.
	(operator_bitwise_or::operand_check_p): New.
	(operator_min::operand_check_p): New.
	(operator_max::operand_check_p): New.
	* range-op.cc (range_op_handler::fold_range): Check operand
	parameter types.
	(range_op_handler::op1_range): Ditto.
	(range_op_handler::op2_range): Ditto.
	(range_op_handler::operand_check_p): New.
	(range_operator::operand_check_p): New.
	(operator_lshift::operand_check_p): New.
	(operator_rshift::operand_check_p): New.
	(operator_logical_and::operand_check_p): New.
	(operator_logical_or::operand_check_p): New.
	(operator_logical_not::operand_check_p): New.
	* range-op.h (range_operator::operand_check_p): New.
	(range_op_handler::operand_check_p): New.
---
 gcc/range-op-mixed.h | 63 +---
 gcc/range-op.cc  | 53 ++---
 gcc/range-op.h   |  5 
 3 files changed, 114 insertions(+), 7 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 45e11df57df..4386a68e946 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -138,6 +138,9 @@ public:
   const frange &) const final override;
   void update_bitmask (irange &r, const irange &lh,
 		   const irange &rh) const final override;
+  // Check op1 and op2 for compatibility.
+  bool operand_check_p (tree, tree t1, tree t2) const final override
+{ return TYPE_PRECISION (t1) == TYPE_PRECISION (t2); }
 };
 
 class operator_not_equal : public range_operator
@@ -174,6 +177,9 @@ public:
   const frange &) const final override;
   void update_bitmask (irange &r, const irange &lh,
 		   const irange &rh) const final override;
+  // Check op1 and op2 for compatibility.
+  bool operand_check_p (tree, tree t1, tree t2) const final override
+{ return TYPE_PRECISION (t1) == TYPE_PRECISION (t2); }
 };
 
 class operator_lt :  public range_operator
@@ -207,6 +213,9 @@ public:
   const frange &) const final override;
   void update_bitmask (irange &r, const irange &lh,
 		   const irange &rh) const final override;
+  // Check op1 and op2 for compatibility.
+  bool operand_check_p (tree, tree t1, tree t2) const final override
+{ return TYPE_PRECISION (t1) == TYPE_PRECISION (t2); }
 };
 
 class operator_le :  public range_operator
@@ -243,6 +252,9 @@ public:
   const frange &) const final override;
   void update_bitmask (irange &r, const irange &lh,
 		   const irange &rh) const final override;
+  // Check op1 and op2 for compatibility.
+  bool operand_check_p (tree, tree t1, tree t2) const final override
+{ return TYPE_PRECISION (t1) == TYPE_PRECISION (t2); }
 };
 
 class operator_gt :  public range_operator
@@ -278,6 +290,9 @@ public:
   const frange &) const final override;
   void update_bitmask (irange &r, const irange &lh,
 		   const irange &rh) const final override;
+  // Check op1 and op2 for compatibility.

[PATCH] PR tree-optimization/111922 - Ensure wi_fold arguments match precisions.

2023-11-24 Thread Andrew MacLeod
The problem here is that IPA is calling something like operator_minus 
with 2 operands, one with precision 32 (int) and one with precision 64 
(pointer).  There are various ways this can happen, as mentioned in the PR.


Regardless of whether IPA should be promoting types before calling 
into range-ops, range-ops does not support mismatched precisions in its 
arguments, and it does not have the context to know what should be 
promoted/changed.  It is expected that the caller will ensure the 
operands are compatible.


However, it is not really practical for the caller to know this without 
more context.  Some operations support different precisions or even 
types, e.g. shifts, casts, etc.  It seems silly to require IPA to 
have a big switch to see what the tree code is and match up/promote/or 
bail if operands don't match...


Range-ops routines probably shouldn't crash when this happens either, so 
this patch takes the conservative approach and returns VARYING if there 
is a mismatch in the argument precisions.


Fixes the problem and bootstraps on x86_64-pc-linux-gnu with no new 
regressions.


OK for trunk?

Andrew

PS: If you would rather we trap in these cases and fix the callers, then 
I'd suggest we change these to checking_asserts instead.  I have also 
prepared a version that does a gcc_checking_assert instead of returning 
varying and done a bootstrap/test run.  Of course, the callers would 
have to be changed..


It bootstraps fine in that variation too, and all the testcases (except 
this one of course) pass.  It's clearly not a common occurrence, and my 
inclination is to apply this patch so we silently move on and simply 
don't provide useful range info... that is all the callers in these cases 
are likely to do anyway...





From f9cddb4cf931826f09197ed0fc2d6d64e6ccc3c3 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 22 Nov 2023 17:24:42 -0500
Subject: [PATCH] Ensure wi_fold arguments match precisions.

Return VARYING if any of the required operands or types to wi_fold do
not match expected precisions.

	PR tree-optimization/111922
	gcc/
	* range-op.cc (operator_plus::wi_fold): Check that precisions of
	arguments and result type match.
	(operator_widen_plus_signed::wi_fold): Ditto.
	(operator_widen_plus_unsigned::wi_fold): Ditto.
	(operator_minus::wi_fold): Ditto.
	(operator_min::wi_fold): Ditto.
	(operator_max::wi_fold): Ditto.
	(operator_mult::wi_fold): Ditto.
	(operator_widen_mult_signed::wi_fold): Ditto.
	(operator_widen_mult_unsigned::wi_fold): Ditto.
	(operator_div::wi_fold): Ditto.
	(operator_lshift::wi_fold): Ditto.
	(operator_rshift::wi_fold): Ditto.
	(operator_bitwise_and::wi_fold): Ditto.
	(operator_bitwise_or::wi_fold): Ditto.
	(operator_bitwise_xor::wi_fold): Ditto.
	(operator_trunc_mod::wi_fold): Ditto.
	(operator_abs::wi_fold): Ditto.
	(operator_absu::wi_fold): Ditto.

	gcc/testsuite/
	* gcc.dg/pr111922.c: New.
---
 gcc/range-op.cc | 119 
 gcc/testsuite/gcc.dg/pr111922.c |  29 
 2 files changed, 148 insertions(+)
 create mode 100644 gcc/testsuite/gcc.dg/pr111922.c

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 6137f2aeed3..ddb7339c075 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -1651,6 +1651,13 @@ operator_plus::wi_fold (irange , tree type,
 			const wide_int _lb, const wide_int _ub,
 			const wide_int _lb, const wide_int _ub) const
 {
+  // This operation requires all types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   wi::overflow_type ov_lb, ov_ub;
   signop s = TYPE_SIGN (type);
   wide_int new_lb = wi::add (lh_lb, rh_lb, s, _lb);
@@ -1797,6 +1804,12 @@ operator_widen_plus_signed::wi_fold (irange , tree type,
  const wide_int _lb,
  const wide_int _ub) const
 {
+  // This operation requires both sides to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ())
+    {
+      r.set_varying (type);
+      return;
+    }
wi::overflow_type ov_lb, ov_ub;
signop s = TYPE_SIGN (type);
 
@@ -1830,6 +1843,12 @@ operator_widen_plus_unsigned::wi_fold (irange , tree type,
    const wide_int _lb,
    const wide_int _ub) const
 {
+  // This operation requires both sides to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ())
+    {
+      r.set_varying (type);
+      return;
+    }
wi::overflow_type ov_lb, ov_ub;
signop s = TYPE_SIGN (type);
 
@@ -1858,6 +1877,14 @@ operator_minus::wi_fold (irange , tree type,
 			 const wide_int _lb, const wide_int _ub,
 			 const wide_int _lb, const wide_int _ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return

Re: Propagate value ranges of return values

2023-11-20 Thread Andrew MacLeod



On 11/18/23 20:21, Jan Hubicka wrote:

Hi,
this patch implements very basic propagation of return value ranges from the VRP
pass.  This helps std::vector's push_back since we work out the value range of
the allocated block.  This propagates only within a single translation unit.  I
hoped we would also do the propagation at WPA stage, but that needs more work on
the ipa-cp side.

I also added code auto-detecting return_nonnull and a corresponding 
-Wsuggest-attribute

Variant of this patch bootstrapped/regtested x86_64-linux, testing with
this version is running.  I plan to commit the patch on Monday provided
there are no issues.


I see no obvious issues with the ranger/vrp changes...

My only comment is that execute_ranger_vrp is called 3 times: EVRP,  
VRP1 and VRP2... perhaps more someday.  As long as that is OK with the 
call to warn_function_returns_nonnull().


Andrew




[COMMITTED] PR tree-optimization/112509 - Use case label type to create case range.

2023-11-14 Thread Andrew MacLeod
We should create a range from the case labels directly, and then cast it 
to the type we care about, rather than trying to convert it to the switch 
index type and then to the type we care about.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 4553a0496458a712dfd2f04b9803b611fdc777cc Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Mon, 13 Nov 2023 09:58:10 -0500
Subject: [PATCH] Use case label type to create case range.

Create a range from the label type, and cast it to the required type.

	PR tree-optimization/112509
	gcc/
	* tree-vrp.cc (find_case_label_range): Create range from case labels.

	gcc/testsuite/
	* gcc.dg/pr112509.c: New.
---
 gcc/testsuite/gcc.dg/pr112509.c | 22 ++
 gcc/tree-vrp.cc |  6 +-
 2 files changed, 23 insertions(+), 5 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr112509.c

diff --git a/gcc/testsuite/gcc.dg/pr112509.c b/gcc/testsuite/gcc.dg/pr112509.c
new file mode 100644
index 000..b733780bdc7
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr112509.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fno-tree-vrp -fno-tree-fre -fno-tree-forwprop" } */
+
+struct S {
+  unsigned j : 3;
+};
+int k, l, m_1 = {0};
+void f(int l, struct S x) {
+  unsigned int k_1;
+  while (m_1 % 8) switch (x.j) {
+    case 1:
+    case 3:
+    case 4:
+    case 6:
+    case 2:
+    case 5: l = m_1;
+    case 7:
+    case 0: k_1 = 0;
+    default: break;
+    }
+}
+void foo(struct S x) { f(l, x); }
diff --git a/gcc/tree-vrp.cc b/gcc/tree-vrp.cc
index 19d8f995d70..917fa873714 100644
--- a/gcc/tree-vrp.cc
+++ b/gcc/tree-vrp.cc
@@ -886,8 +886,6 @@ find_case_label_range (gswitch *switch_stmt, const irange *range_of_op)
   size_t i, j;
   tree op = gimple_switch_index (switch_stmt);
   tree type = TREE_TYPE (op);
-  unsigned prec = TYPE_PRECISION (type);
-  signop sign = TYPE_SIGN (type);
   tree tmin = wide_int_to_tree (type, range_of_op->lower_bound ());
   tree tmax = wide_int_to_tree (type, range_of_op->upper_bound ());
   find_case_label_range (switch_stmt, tmin, tmax, &i, &j);
@@ -900,9 +898,7 @@ find_case_label_range (gswitch *switch_stmt, const irange *range_of_op)
 	= CASE_HIGH (label) ? CASE_HIGH (label) : CASE_LOW (label);
   wide_int wlow = wi::to_wide (CASE_LOW (label));
   wide_int whigh = wi::to_wide (case_high);
-  int_range_max label_range (type,
-			     wide_int::from (wlow, prec, sign),
-			     wide_int::from (whigh, prec, sign));
+  int_range_max label_range (TREE_TYPE (case_high), wlow, whigh);
   if (!types_compatible_p (label_range.type (), range_of_op->type ()))
 	range_cast (label_range, range_of_op->type ());
   label_range.intersect (*range_of_op);
-- 
2.41.0



[PATCH][GCC13] PR tree-optimization/105834 - Choose better initial values for ranger.

2023-11-06 Thread Andrew MacLeod

As requested, porting this patch from trunk resolves this PR in GCC 13.

Bootstraps on x86_64-pc-linux-gnu with no regressions.  OK for the GCC 
13 branch?


Andrew



From 0182a25607fa353274c27ec57ca497c00f1d1b76 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Mon, 6 Nov 2023 11:33:32 -0500
Subject: [PATCH] Choose better initial values for ranger.

Instead of defaulting to VARYING, fold the stmt using just global ranges.

	PR tree-optimization/105834
	gcc/
	* gimple-range-cache.cc (ranger_cache::get_global_range): Call
	fold_range with global query to choose an initial value.

	gcc/testsuite/
	* gcc.dg/pr105834.c: New.
---
 gcc/gimple-range-cache.cc   | 17 -
 gcc/testsuite/gcc.dg/pr105834.c | 17 +
 2 files changed, 33 insertions(+), 1 deletion(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr105834.c

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index e4e75943632..b09df6c81bf 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -846,7 +846,22 @@ ranger_cache::get_global_range (vrange &r, tree name, bool &current_p)
 		|| m_temporal->current_p (name, m_gori.depend1 (name),
 	  m_gori.depend2 (name));
   else
-m_globals.set_global_range (name, r);
+{
+  // If no global value has been set and value is VARYING, fold the stmt
+  // using just global ranges to get a better initial value.
+  // After inlining we tend to decide some things are constant, so
+  // do not do this evaluation after inlining.
+  if (r.varying_p () && !cfun->after_inlining)
+	{
+	  gimple *s = SSA_NAME_DEF_STMT (name);
+	  if (gimple_get_lhs (s) == name)
+	{
+	  if (!fold_range (r, s, get_global_range_query ()))
+		gimple_range_global (r, name);
+	}
+	}
+  m_globals.set_global_range (name, r);
+}
 
   // If the existing value was not current, mark it as always current.
   if (!current_p)
diff --git a/gcc/testsuite/gcc.dg/pr105834.c b/gcc/testsuite/gcc.dg/pr105834.c
new file mode 100644
index 000..d0eda03ef8b
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr105834.c
@@ -0,0 +1,17 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+static int a, b;
+
+void foo();
+
+int main() {
+  for (int c = 0; c < 2; c = c + (unsigned)3)
+    if (a)
+      for (;;)
+        if (c > 0)
+          b = 0;
+  if (b)
+    foo();
+}
+/* { dg-final { scan-tree-dump-not "foo" "optimized" } }  */
-- 
2.41.0



[COMMITTED 2/2] PR tree-optimization/111766 - Adjust operators equal and not_equal to check bitmasks against constants

2023-11-03 Thread Andrew MacLeod
When we compare a range against a constant for equality or inequality, 
there is currently no attempt made to utilize the known bits.


This patch adds a method to the irange_bitmask class to ask if a 
specific value satisfies the known bit pattern.  Operators equal and 
not_equal then utilize it when comparing to a constant, eliminating a 
class of cases we don't currently get, i.e.


if (x & 1) return;
if (x == 97657) foo()

will eliminate the call to foo, even though we do not remove all the odd 
numbers from the range.  The bit pattern comparison for
  [irange] unsigned int [0, 0] [2, +INF] MASK 0xfffe VALUE 0x1  
will indicate that any even constants will be false.
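The membership test itself reduces to a couple of bit operations.  Here is a 
standalone sketch of the idea (a simplified stand-in for the new member_p 
method, using plain integers rather than wide_int):

```cpp
#include <cstdint>

// Sketch of a known-bits membership test.  `mask` has 1 bits where the
// value is unknown and 0 bits where it is known; `value` holds the known
// bit values.  A candidate is a member only if every known bit matches.
bool
bitmask_member_p (uint64_t mask, uint64_t value, uint64_t cand)
{
  return (cand & ~mask) == (value & ~mask);
}
```

With mask ~1 and value 1 (only the low bit known, and known to be one), every 
even candidate fails the test, so an equality against an even constant can 
fold to false.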


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew

From eb899fee35b8326b2105c04f58fd58bbdeca9d3b Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 25 Oct 2023 09:46:50 -0400
Subject: [PATCH 2/2] Adjust operators equal and not_equal to check bitmasks
 against constants

Check to see if a comparison to a constant can be determined to always
be not-equal based on the bitmask.

	PR tree-optimization/111766
	gcc/
	* range-op.cc (operator_equal::fold_range): Check constants
	against the bitmask.
	(operator_not_equal::fold_range): Ditto.
	* value-range.h (irange_bitmask::member_p): New.

	gcc/testsuite/
	* gcc.dg/pr111766.c: New.
---
 gcc/range-op.cc | 20 
 gcc/testsuite/gcc.dg/pr111766.c | 13 +
 gcc/value-range.h   | 14 ++
 3 files changed, 43 insertions(+), 4 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr111766.c

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 33b193be7d0..6137f2aeed3 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -931,8 +931,9 @@ operator_equal::fold_range (irange &r, tree type,
 
   // We can be sure the values are always equal or not if both ranges
   // consist of a single value, and then compare them.
-  if (wi::eq_p (op1.lower_bound (), op1.upper_bound ())
-  && wi::eq_p (op2.lower_bound (), op2.upper_bound ()))
+  bool op1_const = wi::eq_p (op1.lower_bound (), op1.upper_bound ());
+  bool op2_const = wi::eq_p (op2.lower_bound (), op2.upper_bound ());
+  if (op1_const && op2_const)
 {
   if (wi::eq_p (op1.lower_bound (), op2.upper_bound()))
 	r = range_true (type);
@@ -947,6 +948,11 @@ operator_equal::fold_range (irange &r, tree type,
   tmp.intersect (op2);
   if (tmp.undefined_p ())
 	r = range_false (type);
+  // Check if a constant cannot satisfy the bitmask requirements.
+  else if (op2_const && !op1.get_bitmask ().member_p (op2.lower_bound ()))
+	 r = range_false (type);
+  else if (op1_const && !op2.get_bitmask ().member_p (op1.lower_bound ()))
+	 r = range_false (type);
   else
 	r = range_true_and_false (type);
 }
@@ -1033,8 +1039,9 @@ operator_not_equal::fold_range (irange &r, tree type,
 
   // We can be sure the values are always equal or not if both ranges
   // consist of a single value, and then compare them.
-  if (wi::eq_p (op1.lower_bound (), op1.upper_bound ())
-  && wi::eq_p (op2.lower_bound (), op2.upper_bound ()))
+  bool op1_const = wi::eq_p (op1.lower_bound (), op1.upper_bound ());
+  bool op2_const = wi::eq_p (op2.lower_bound (), op2.upper_bound ());
+  if (op1_const && op2_const)
 {
   if (wi::ne_p (op1.lower_bound (), op2.upper_bound()))
 	r = range_true (type);
@@ -1049,6 +1056,11 @@ operator_not_equal::fold_range (irange &r, tree type,
   tmp.intersect (op2);
   if (tmp.undefined_p ())
 	r = range_true (type);
+  // Check if a constant cannot satisfy the bitmask requirements.
+  else if (op2_const && !op1.get_bitmask ().member_p (op2.lower_bound ()))
+	 r = range_true (type);
+  else if (op1_const && !op2.get_bitmask ().member_p (op1.lower_bound ()))
+	 r = range_true (type);
   else
 	r = range_true_and_false (type);
 }
diff --git a/gcc/testsuite/gcc.dg/pr111766.c b/gcc/testsuite/gcc.dg/pr111766.c
new file mode 100644
index 000..c27a029c772
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr111766.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-evrp" } */
+
+int
+foo3n(int c, int bb)
+{
+  if ((bb & ~3)!=0) __builtin_unreachable(); // bb = [0,3]
+  if ((bb & 1)==0) __builtin_unreachable(); // bb&1 == 0 // [0],[3]
+  if(bb == 2) __builtin_trap();
+  return bb;
+}
+
+/* { dg-final { scan-tree-dump-not "trap" "evrp" } } */
diff --git a/gcc/value-range.h b/gcc/value-range.h
index 84f65ffb591..330e6f70c6b 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -139,6 +139,7 @@ public:
   void verify_mask () const;
   void dump (FILE *) const;
 
+  bool member_p (const wide_int &val) const;
   void adjust_range (irange &r) const;
 
   // Convenience functions for nonzero bitmask compatibility.
@@ -202,6 +203,19 @@ irange_bitmask::set_nonzero_bits (const wide_i

[COMMITTED 1/2] Remove simple ranges from trailing zero bitmasks.

2023-11-03 Thread Andrew MacLeod
When we set bitmasks indicating known zero or one bits, we see some 
"obvious" things once in a while that are easy to prevent, i.e.


unsigned int [2, +INF] MASK 0xfffe VALUE 0x1

the range [2, 2] is obviously impossible since the final bit must be a 
one.  This doesn't usually cause us too much trouble, but the 
subsequent patch triggers some more interesting situations in which it 
helps to remove the obvious ranges when we have a mask that has trailing 
zeros.


It's too much of a performance impact to constantly be checking the range 
every time we set the bitmask, but it turns out that if we simply take 
care of it during intersection operations (which happen at most key 
times, like changing an existing value), the impact is pretty minimal, 
around 0.6% of VRP.


This patch looks for trailing zeros in the mask, and replaces the low 
end range covered by those bits with those bits from the value field.
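As a standalone sketch of the pruning condition (a hypothetical helper, not 
the GCC code): if the mask has z trailing zeros, the low z bits of any member 
are fully determined by the value field, so any candidate whose low bits 
disagree can be removed from the low end of the range:

```cpp
#include <cstdint>

// Sketch: decide whether candidate v is ruled out purely by the trailing
// known bits of a (mask, value) pair.  mask = unknown bits, value = known
// bit values; trailing zeros in mask mean those low bits are fully known.
bool
excluded_by_trailing_bits (uint64_t mask, uint64_t value, uint64_t v)
{
  if (mask == 0)
    return v != value;                  // Every bit is known.
  int z = __builtin_ctzll (mask);       // Count of trailing known bits.
  if (z == 0)
    return false;                       // No trailing known bits to test.
  uint64_t low = (1ull << z) - 1;       // Mask covering those bits.
  return (v & low) != (value & low);    // Low bits must match the value.
}
```

For MASK 0xfffe VALUE 0x1 this rules out 2, so the impossible [2, 2] piece 
can be dropped from the low end of the range.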


Bootstraps on build-x86_64-pc-linux-gnu with no regressions. Pushed.

Andrew



From b20f1dce46fb8bb1b142e9087530e546a40edec8 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 31 Oct 2023 11:51:34 -0400
Subject: [PATCH 1/2] Remove simple ranges from trailing zero bitmasks.

During the intersection operation, it can be helpful to remove any
low-end ranges when the bitmask has trailing zeros.  This prevents
obviously incorrect ranges from appearing without requiring a bitmask
check.

	* value-range.cc (irange_bitmask::adjust_range): New.
	(irange::intersect_bitmask): Call adjust_range.
	* value-range.h (irange_bitmask::adjust_range): New prototype.
---
 gcc/value-range.cc | 30 ++
 gcc/value-range.h  |  2 ++
 2 files changed, 32 insertions(+)

diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index fcf53efa1dd..a1e72c78f8b 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -1857,6 +1857,35 @@ irange::get_bitmask_from_range () const
   return irange_bitmask (wi::zero (prec), min | xorv);
 }
 
+// Remove trailing ranges that this bitmask indicates can't exist.
+
+void
+irange_bitmask::adjust_range (irange &r) const
+{
+  if (unknown_p () || r.undefined_p ())
+return;
+
+  int_range_max range;
+  tree type = r.type ();
+  int prec = TYPE_PRECISION (type);
+  // If there are trailing zeros, create a range representing those bits.
+  gcc_checking_assert (m_mask != 0);
+  int z = wi::ctz (m_mask);
+  if (z)
+{
+  wide_int ub = (wi::one (prec) << z) - 1;
+  range = int_range<5> (type, wi::zero (prec), ub);
+  // Then remove the specific value these bits contain from the range.
+  wide_int value = m_value & ub;
+  range.intersect (int_range<2> (type, value, value, VR_ANTI_RANGE));
+  // Inverting produces a list of ranges which can be valid.
+  range.invert ();
+  // And finally select R from only those valid values.
+  r.intersect (range);
+  return;
+}
+}
+
 // If the the mask can be trivially converted to a range, do so and
 // return TRUE.
 
@@ -2002,6 +2031,7 @@ irange::intersect_bitmask (const irange &r)
 
   if (!set_range_from_bitmask ())
 normalize_kind ();
+  m_bitmask.adjust_range (*this);
   if (flag_checking)
 verify_range ();
   return true;
diff --git a/gcc/value-range.h b/gcc/value-range.h
index e9d81d22cd0..84f65ffb591 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -139,6 +139,8 @@ public:
   void verify_mask () const;
   void dump (FILE *) const;
 
+  void adjust_range (irange &r) const;
+
   // Convenience functions for nonzero bitmask compatibility.
   wide_int get_nonzero_bits () const;
   void set_nonzero_bits (const wide_int );
-- 
2.41.0



[COMMITTED] Faster irange union for appending ranges.

2023-10-25 Thread Andrew MacLeod
It's a common idiom to build a range by unioning other ranges into 
another one.  If this is done sequentially, those new ranges can simply 
be appended to the end of the existing range, avoiding some 
expensive processing for the general case.


This patch identifies and optimizes this situation.  The result is a 
2.1% speedup in VRP and a 0.8% speedup in threading, with an overall 
compile time improvement of 0.14% across the GCC build.
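The special case can be modeled with an ordinary sorted vector of disjoint 
subranges (a simplified illustration of the idea, not the irange 
representation itself):

```cpp
#include <vector>
#include <utility>

// Sketch: append subrange [lb, ub], known to start at or after the current
// upper bound, to a sorted list of disjoint subranges.  If it is an
// immediate successor of the last subrange, merge instead of appending.
typedef std::pair<long, long> subrange;

void
union_append (std::vector<subrange> &set, long lb, long ub)
{
  if (!set.empty () && set.back ().second + 1 == lb)
    set.back ().second = ub;       // Touching: extend the last subrange.
  else
    set.push_back ({lb, ub});      // Disjoint: plain O(1) append.
}
```

The general union walk compares every subrange pair; when the incoming range 
is known to start past the current upper bound, this O(1) append (with a 
one-off merge of touching neighbors) is all that is needed.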


Bootstrapped on  x86_64-pc-linux-gnu with no regressions. Pushed.

Andrew
commit f7dbf6230453c76a19921607601eff968bb70169
Author: Andrew MacLeod 
Date:   Mon Oct 23 14:52:45 2023 -0400

Faster irange union for appending ranges.

A common pattern is to append a range to an existing range via union.
This optimizes that process.

* value-range.cc (irange::union_append): New.
(irange::union_): Call union_append when appropriate.
* value-range.h (irange::union_append): New prototype.

diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index f507ec57536..fcf53efa1dd 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -1291,6 +1291,45 @@ irange::irange_single_pair_union (const irange &r)
   return true;
 }
 
+// Append R to this range, knowing that R occurs after all of these subranges.
+// Return TRUE as something must have changed.
+
+bool
+irange::union_append (const irange &r)
+{
+  // Check if the first range in R is an immediate successor to the last
+  // range, thus requiring a merge.
+  signop sign = TYPE_SIGN (m_type);
+  wide_int lb = r.lower_bound ();
+  wide_int ub = upper_bound ();
+  unsigned start = 0;
+  if (widest_int::from (ub, sign) + 1
+  == widest_int::from (lb, sign))
+{
+  m_base[m_num_ranges * 2 - 1] = r.m_base[1];
+  start = 1;
+}
+  maybe_resize (m_num_ranges + r.m_num_ranges - start);
+  for ( ; start < r.m_num_ranges; start++)
+{
+  // Merge the last ranges if it exceeds the maximum size.
+  if (m_num_ranges + 1 > m_max_ranges)
+   {
+ m_base[m_max_ranges * 2 - 1] = r.m_base[r.m_num_ranges * 2 - 1];
+ break;
+   }
+  m_base[m_num_ranges * 2] = r.m_base[start * 2];
+  m_base[m_num_ranges * 2 + 1] = r.m_base[start * 2 + 1];
+  m_num_ranges++;
+}
+
+  if (!union_bitmask (r))
+normalize_kind ();
+  if (flag_checking)
+verify_range ();
+  return true;
+}
+
 // Return TRUE if anything changes.
 
 bool
@@ -1322,6 +1361,11 @@ irange::union_ (const vrange &v)
   if (m_num_ranges == 1 && r.m_num_ranges == 1)
 return irange_single_pair_union (r);
 
+  signop sign = TYPE_SIGN (m_type);
+  // Check for an append to the end.
+  if (m_kind == VR_RANGE && wi::gt_p (r.lower_bound (), upper_bound (), sign))
+return union_append (r);
+
   // If this ranges fully contains R, then we need do nothing.
   if (irange_contains_p (r))
 return union_bitmask (r);
@@ -1340,7 +1384,6 @@ irange::union_ (const vrange )
   // [Xi,Yi]..[Xn,Yn]  U  [Xj,Yj]..[Xm,Ym]   -->  [Xk,Yk]..[Xp,Yp]
   auto_vec res (m_num_ranges * 2 + r.m_num_ranges * 2);
   unsigned i = 0, j = 0, k = 0;
-  signop sign = TYPE_SIGN (m_type);
 
   while (i < m_num_ranges * 2 && j < r.m_num_ranges * 2)
 {
diff --git a/gcc/value-range.h b/gcc/value-range.h
index c00b15194c4..e9d81d22cd0 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -339,6 +339,7 @@ private:
   bool set_range_from_bitmask ();
 
   bool intersect (const wide_int& lb, const wide_int& ub);
+  bool union_append (const irange &r);
   unsigned char m_num_ranges;
   bool m_resizable;
   unsigned char m_max_ranges;


Re: [COMMITTED] PR tree-optimization/111622 - Do not add partial equivalences with no uses.

2023-10-13 Thread Andrew MacLeod

of course the patch would be handy...


On 10/13/23 09:23, Andrew MacLeod wrote:
Technically PR 111622 exposes a bug in GCC 13, but it's been papered 
over on trunk by this:


commit 9ea74d235c7e7816b996a17c61288f02ef767985
Author: Richard Biener 
Date:   Thu Sep 14 09:31:23 2023 +0200
        tree-optimization/111294 - better DCE after forwprop

This removes a lot of dead statements, but those statements were being 
added to the list of partial equivalences and causing some serious 
compile time issues.


Ranger's cache loops through equivalences when it's propagating on-entry 
values, so if the partial equivalence list is very large, it can 
consume a lot of time.  Typically, partial equivalence lists are 
small.  In this case, a lot of dead stmts were not removed, so there 
was no redundancy elimination and it was causing an issue.  This 
patch actually speeds things up a hair in the normal case too.


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  pushed.

Andrew


From 4eea3c1872a941089cafa105a11d8e40b1a55929 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Thu, 12 Oct 2023 17:06:36 -0400
Subject: [PATCH] Do not add partial equivalences with no uses.

	PR tree-optimization/111622
	* value-relation.cc (equiv_oracle::add_partial_equiv): Do not
	register a partial equivalence if an operand has no uses.
---
 gcc/value-relation.cc | 9 +
 1 file changed, 9 insertions(+)

diff --git a/gcc/value-relation.cc b/gcc/value-relation.cc
index 0326fe7cde6..c0f513a0eb1 100644
--- a/gcc/value-relation.cc
+++ b/gcc/value-relation.cc
@@ -392,6 +392,9 @@ equiv_oracle::add_partial_equiv (relation_kind r, tree op1, tree op2)
   // In either case, if PE2 has an entry, we simply do nothing.
   if (pe2.members)
 	return;
+  // If there are no uses of op2, do not register.
+  if (has_zero_uses (op2))
+	return;
   // PE1 is the LHS and already has members, so everything in the set
   // should be a slice of PE2 rather than PE1.
   pe2.code = pe_min (r, pe1.code);
@@ -409,6 +412,9 @@ equiv_oracle::add_partial_equiv (relation_kind r, tree op1, tree op2)
 }
   if (pe2.members)
 {
+  // If there are no uses of op1, do not register.
+  if (has_zero_uses (op1))
+	return;
   pe1.ssa_base = pe2.ssa_base;
   // If pe2 is a 16 bit value, but only an 8 bit copy, we can't be any
   // more than an 8 bit equivalence here, so choose MIN value.
@@ -418,6 +424,9 @@ equiv_oracle::add_partial_equiv (relation_kind r, tree op1, tree op2)
 }
   else
 {
+  // If there are no uses of either operand, do not register.
+  if (has_zero_uses (op1) || has_zero_uses (op2))
+	return;
   // Neither name has an entry, simply create op1 as slice of op2.
   pe2.code = bits_to_pe (TYPE_PRECISION (TREE_TYPE (op2)));
   if (pe2.code == VREL_VARYING)
-- 
2.41.0



[COMMITTED] [GCC13] PR tree-optimization/111622 - Do not add partial equivalences with no uses.

2023-10-13 Thread Andrew MacLeod
There are a lot of dead statements in this testcase which are casts.  These 
were being added to the list of partial equivalences and causing some 
serious compile time issues.


Ranger's cache loops through equivalences when it's propagating on-entry 
values, so if the partial equivalence list is very large, it can consume 
a lot of time.  Typically, partial equivalence lists are small.  In 
this case, a lot of dead stmts were not removed, so there was no 
redundancy elimination and it was causing an issue.


Bootstrapped on x86_64-pc-linux-gnu with no regressions. Pushed.

Andrew
From 425964b77ab5b9631e914965a7397303215c77a1 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Thu, 12 Oct 2023 17:06:36 -0400
Subject: [PATCH] Do not add partial equivalences with no uses.

	PR tree-optimization/111622
	* value-relation.cc (equiv_oracle::add_partial_equiv): Do not
	register a partial equivalence if an operand has no uses.
---
 gcc/value-relation.cc | 9 +
 1 file changed, 9 insertions(+)

diff --git a/gcc/value-relation.cc b/gcc/value-relation.cc
index fc792a4d5bc..0ed5f93d184 100644
--- a/gcc/value-relation.cc
+++ b/gcc/value-relation.cc
@@ -389,6 +389,9 @@ equiv_oracle::add_partial_equiv (relation_kind r, tree op1, tree op2)
   // In either case, if PE2 has an entry, we simply do nothing.
   if (pe2.members)
 	return;
+  // If there are no uses of op2, do not register.
+  if (has_zero_uses (op2))
+	return;
   // PE1 is the LHS and already has members, so everything in the set
   // should be a slice of PE2 rather than PE1.
   pe2.code = pe_min (r, pe1.code);
@@ -406,6 +409,9 @@ equiv_oracle::add_partial_equiv (relation_kind r, tree op1, tree op2)
 }
   if (pe2.members)
 {
+  // If there are no uses of op1, do not register.
+  if (has_zero_uses (op1))
+	return;
   pe1.ssa_base = pe2.ssa_base;
   // If pe2 is a 16 bit value, but only an 8 bit copy, we can't be any
   // more than an 8 bit equivalence here, so choose MIN value.
@@ -415,6 +421,9 @@ equiv_oracle::add_partial_equiv (relation_kind r, tree op1, tree op2)
 }
   else
 {
+  // If there are no uses of either operand, do not register.
+  if (has_zero_uses (op1) || has_zero_uses (op2))
+	return;
   // Neither name has an entry, simply create op1 as slice of op2.
   pe2.code = bits_to_pe (TYPE_PRECISION (TREE_TYPE (op2)));
   if (pe2.code == VREL_VARYING)
-- 
2.41.0



[COMMITTED] PR tree-optimization/111622 - Do not add partial equivalences with no uses.

2023-10-13 Thread Andrew MacLeod
Technically PR 111622 exposes a bug in GCC 13, but it's been papered over 
on trunk by this:


commit 9ea74d235c7e7816b996a17c61288f02ef767985
Author: Richard Biener 
Date:   Thu Sep 14 09:31:23 2023 +0200

tree-optimization/111294 - better DCE after forwprop


This removes a lot of dead statements, but those statements were being 
added to the list of partial equivalences and causing some serious 
compile time issues.


Ranger's cache loops through equivalences when it's propagating on-entry 
values, so if the partial equivalence list is very large, it can consume 
a lot of time.  Typically, partial equivalence lists are small.  In 
this case, a lot of dead stmts were not removed, so there was no 
redundancy elimination and it was causing an issue.  This patch 
actually speeds things up a hair in the normal case too.


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  pushed.

Andrew





[COMMITTED][GCC13] PR tree-optimization/111694 - Ensure float equivalences include + and - zero.

2023-10-11 Thread Andrew MacLeod
This is similar to the patch checked into trunk last week.  A slight 
tweak was needed, as dconstm0 was not exported in GCC 13; otherwise it is 
functionally the same.


Bootstrapped on x86_64-pc-linux-gnu.  pushed.

Andrew
commit f0efc4b25cba1bd35b08b7dfbab0f8fc81b55c66
Author: Andrew MacLeod 
Date:   Mon Oct 9 13:40:15 2023 -0400

Ensure float equivalences include + and - zero.

A floating point equivalence may not properly reflect both signs of
zero, so be pessimistic and ensure both signs are included.

PR tree-optimization/111694
gcc/
* gimple-range-cache.cc (ranger_cache::fill_block_cache): Adjust
equivalence range.
* value-relation.cc (adjust_equivalence_range): New.
* value-relation.h (adjust_equivalence_range): New prototype.

gcc/testsuite/
* gcc.dg/pr111694.c: New.

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 2314478d558..e4e75943632 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -1258,6 +1258,9 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
 		{
 		  if (rel != VREL_EQ)
 		range_cast (equiv_range, type);
+		  else
+		adjust_equivalence_range (equiv_range);
+
 		  if (block_result.intersect (equiv_range))
 		{
 		  if (DEBUG_RANGE_CACHE)
diff --git a/gcc/testsuite/gcc.dg/pr111694.c b/gcc/testsuite/gcc.dg/pr111694.c
new file mode 100644
index 000..a70b03069dc
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr111694.c
@@ -0,0 +1,19 @@
+/* PR tree-optimization/111009 */
+/* { dg-do run } */
+/* { dg-options "-O2" } */
+
+#define signbit(x) __builtin_signbit(x)
+
+static void test(double l, double r)
+{
+  if (l == r && (signbit(l) || signbit(r)))
+;
+  else
+__builtin_abort();
+}
+
+int main()
+{
+  test(0.0, -0.0);
+}
+
diff --git a/gcc/value-relation.cc b/gcc/value-relation.cc
index 30a02d3c9d3..fc792a4d5bc 100644
--- a/gcc/value-relation.cc
+++ b/gcc/value-relation.cc
@@ -183,6 +183,25 @@ relation_transitive (relation_kind r1, relation_kind r2)
   return relation_kind (rr_transitive_table[r1][r2]);
 }
 
+// When one name is an equivalence of another, ensure the equivalence
+// range is correct.  Specifically for floating point, a +0 is also
+// equivalent to a -0 which may not be reflected.  See PR 111694.
+
+void
+adjust_equivalence_range (vrange &range)
+{
+  if (range.undefined_p () || !is_a <frange> (range))
+return;
+
+  frange fr = as_a <frange> (range);
+  REAL_VALUE_TYPE dconstm0 = dconst0;
+  dconstm0.sign = 1;
+  frange zeros (range.type (), dconstm0, dconst0);
+  // If range includes a 0 make sure both signs of zero are included.
+  if (fr.intersect (zeros) && !fr.undefined_p ())
+range.union_ (zeros);
+ }
+
 // This vector maps a relation to the equivalent tree code.
 
 static const tree_code relation_to_code [VREL_LAST] = {
diff --git a/gcc/value-relation.h b/gcc/value-relation.h
index 3177ecb1ad0..6412cbbe98b 100644
--- a/gcc/value-relation.h
+++ b/gcc/value-relation.h
@@ -91,6 +91,9 @@ inline bool relation_equiv_p (relation_kind r)
 
 void print_relation (FILE *f, relation_kind rel);
 
+// Adjust range as an equivalence.
+void adjust_equivalence_range (vrange &range);
+
 class relation_oracle
 {
 public:


[COMMITTED] PR tree-optimization/111694 - Ensure float equivalences include + and - zero.

2023-10-09 Thread Andrew MacLeod
When ranger propagates ranges in the on-entry cache, it also checks for 
equivalences and incorporates the equivalence into the range for a name 
if it is known.


With floating point values, the equivalence that is generated by 
comparison must also take into account that if the equivalence contains 
zero, both positive and negative zeros could be in the range.


This PR demonstrates that once we establish an equivalence, even though 
we know one value may only have a positive zero, the equivalence may 
have been formed earlier and included a negative zero.  This patch 
pessimistically assumes that if the equivalence contains zero, we should 
include both + and - 0 in the equivalence that we utilize.
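The IEEE behavior that forces this pessimism can be demonstrated with a few 
lines of standalone C++, independent of ranger:

```cpp
#include <cmath>

// Standalone illustration: +0.0 and -0.0 compare equal under IEEE rules,
// so an equivalence can be seeded from "l == r" between names holding
// different signs of zero...
bool
zeros_compare_equal ()
{
  return 0.0 == -0.0;
}

// ...yet the two values remain distinguishable via the sign bit, so the
// equivalence range must admit both signs of zero.
bool
zero_signs_differ ()
{
  return !std::signbit (0.0) && std::signbit (-0.0);
}
```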


I audited the other places, and found no other place where this issue 
might arise.  Cache propagation is the only place where we augment the 
range with random equivalences.


Bootstrapped on x86_64-pc-linux-gnu with no regressions. Pushed.

Andrew
From b0892b1fc637fadf14d7016858983bc5776a1e69 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Mon, 9 Oct 2023 10:15:07 -0400
Subject: [PATCH 2/2] Ensure float equivalences include + and - zero.

A floating point equivalence may not properly reflect both signs of
zero, so be pessimistic and ensure both signs are included.

	PR tree-optimization/111694
	gcc/
	* gimple-range-cache.cc (ranger_cache::fill_block_cache): Adjust
	equivalence range.
	* value-relation.cc (adjust_equivalence_range): New.
	* value-relation.h (adjust_equivalence_range): New prototype.

	gcc/testsuite/
	* gcc.dg/pr111694.c: New.
---
 gcc/gimple-range-cache.cc   |  3 +++
 gcc/testsuite/gcc.dg/pr111694.c | 19 +++
 gcc/value-relation.cc   | 19 +++
 gcc/value-relation.h|  3 +++
 4 files changed, 44 insertions(+)
 create mode 100644 gcc/testsuite/gcc.dg/pr111694.c

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 3c819933c4e..89c0845457d 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -1470,6 +1470,9 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
 		{
 		  if (rel != VREL_EQ)
 		range_cast (equiv_range, type);
+		  else
+		adjust_equivalence_range (equiv_range);
+
 		  if (block_result.intersect (equiv_range))
 		{
 		  if (DEBUG_RANGE_CACHE)
diff --git a/gcc/testsuite/gcc.dg/pr111694.c b/gcc/testsuite/gcc.dg/pr111694.c
new file mode 100644
index 000..a70b03069dc
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr111694.c
@@ -0,0 +1,19 @@
+/* PR tree-optimization/111009 */
+/* { dg-do run } */
+/* { dg-options "-O2" } */
+
+#define signbit(x) __builtin_signbit(x)
+
+static void test(double l, double r)
+{
+  if (l == r && (signbit(l) || signbit(r)))
+;
+  else
+__builtin_abort();
+}
+
+int main()
+{
+  test(0.0, -0.0);
+}
+
diff --git a/gcc/value-relation.cc b/gcc/value-relation.cc
index a2ae39692a6..0326fe7cde6 100644
--- a/gcc/value-relation.cc
+++ b/gcc/value-relation.cc
@@ -183,6 +183,25 @@ relation_transitive (relation_kind r1, relation_kind r2)
   return relation_kind (rr_transitive_table[r1][r2]);
 }
 
+// When one name is an equivalence of another, ensure the equivalence
+// range is correct.  Specifically for floating point, a +0 is also
+// equivalent to a -0 which may not be reflected.  See PR 111694.
+
+void
+adjust_equivalence_range (vrange &range)
+{
+  if (range.undefined_p () || !is_a <frange> (range))
+return;
+
+  frange fr = as_a <frange> (range);
+  // If range includes 0 make sure both signs of zero are included.
+  if (fr.contains_p (dconst0) || fr.contains_p (dconstm0))
+{
+  frange zeros (range.type (), dconstm0, dconst0);
+  range.union_ (zeros);
+}
+ }
+
 // This vector maps a relation to the equivalent tree code.
 
 static const tree_code relation_to_code [VREL_LAST] = {
diff --git a/gcc/value-relation.h b/gcc/value-relation.h
index be6e277421b..31d48908678 100644
--- a/gcc/value-relation.h
+++ b/gcc/value-relation.h
@@ -91,6 +91,9 @@ inline bool relation_equiv_p (relation_kind r)
 
 void print_relation (FILE *f, relation_kind rel);
 
+// Adjust range as an equivalence.
+void adjust_equivalence_range (vrange &range);
+
 class relation_oracle
 {
 public:
-- 
2.41.0



[COMMITTED] Remove unused get_identity_relation.

2023-10-09 Thread Andrew MacLeod
I added this routine for Aldy when he thought we were going to have to 
add explicit versions for unordered relations.


It seems that with accurate tracking of NANs, we do not need the 
explicit versions in the oracle, so we will not need this identity 
routine to pick the appropriate version of VREL_EQ... as there is only 
one.  As it stands, it always returns VREL_EQ, so simply use VREL_EQ in 
the two calling locations.


Bootstrapped on x86_64-pc-linux-gnu with no regressions. Pushed.

Andrew
From 5ee51119d1345f3f13af784455a4ae466766912b Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Mon, 9 Oct 2023 10:01:11 -0400
Subject: [PATCH 1/2] Remove unused get_identity_relation.

Turns out we didn't need this, as there are no unordered relations
managed by the oracle.

	* gimple-range-gori.cc (gori_compute::compute_operand1_range): Do
	not call get_identity_relation.
	(gori_compute::compute_operand2_range): Ditto.
	* value-relation.cc (get_identity_relation): Remove.
	* value-relation.h (get_identity_relation): Remove prototype.
---
 gcc/gimple-range-gori.cc | 10 ++
 gcc/value-relation.cc| 14 --
 gcc/value-relation.h |  3 ---
 3 files changed, 2 insertions(+), 25 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index 1b5eda43390..887da0ff094 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -1146,10 +1146,7 @@ gori_compute::compute_operand1_range (vrange &r,
 
   // If op1 == op2, create a new trio for just this call.
   if (op1 == op2 && gimple_range_ssa_p (op1))
-	{
-	  relation_kind k = get_identity_relation (op1, op1_range);
-	  trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), k);
-	}
+	trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), VREL_EQ);
   if (!handler.calc_op1 (r, lhs, op2_range, trio))
 	return false;
 }
@@ -1225,10 +1222,7 @@ gori_compute::compute_operand2_range (vrange &r,
 
   // If op1 == op2, create a new trio for this stmt.
   if (op1 == op2 && gimple_range_ssa_p (op1))
-{
-  relation_kind k = get_identity_relation (op1, op1_range);
-  trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), k);
-}
+trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), VREL_EQ);
   // Intersect with range for op2 based on lhs and op1.
   if (!handler.calc_op2 (r, lhs, op1_range, trio))
 return false;
diff --git a/gcc/value-relation.cc b/gcc/value-relation.cc
index 8fea4aad345..a2ae39692a6 100644
--- a/gcc/value-relation.cc
+++ b/gcc/value-relation.cc
@@ -183,20 +183,6 @@ relation_transitive (relation_kind r1, relation_kind r2)
   return relation_kind (rr_transitive_table[r1][r2]);
 }
 
-// When operands of a statement are identical ssa_names, return the
-// approriate relation between operands for NAME == NAME, given RANGE.
-//
-relation_kind
-get_identity_relation (tree name, vrange &range ATTRIBUTE_UNUSED)
-{
-  // Return VREL_UNEQ when it is supported for floats as appropriate.
-  if (frange::supports_p (TREE_TYPE (name)))
-return VREL_EQ;
-
-  // Otherwise return VREL_EQ.
-  return VREL_EQ;
-}
-
 // This vector maps a relation to the equivalent tree code.
 
 static const tree_code relation_to_code [VREL_LAST] = {
diff --git a/gcc/value-relation.h b/gcc/value-relation.h
index f00f84f93b6..be6e277421b 100644
--- a/gcc/value-relation.h
+++ b/gcc/value-relation.h
@@ -91,9 +91,6 @@ inline bool relation_equiv_p (relation_kind r)
 
 void print_relation (FILE *f, relation_kind rel);
 
-// Return relation for NAME == NAME with RANGE.
-relation_kind get_identity_relation (tree name, vrange &range);
-
 class relation_oracle
 {
 public:
-- 
2.41.0



[COMMITTED 0/3] Add a FAST VRP pass.

2023-10-05 Thread Andrew MacLeod
The following set of 3 patches provides the infrastructure for a fast VRP 
pass.


The pass is currently not invoked anywhere, but I wanted to get the 
infrastructure bits in place now... just in case we want to use it 
somewhere.


It clearly bootstraps with no regressions since it isn't being invoked 
:-)   I have, however, bootstrapped it with calls to the new fast-vrp pass 
immediately following EVRP, and as an EVRP replacement.  This is 
primarily to ensure it isn't doing anything harmful.  That is a test of 
sorts :-).


I also ran it instead of EVRP, and it bootstraps, but does trigger a few 
regressions, all related to relation processing, which it doesn't do.


Patch one provides a new API for GORI which simply provides a list of 
all the ranges that it can generate on an outgoing edge.  It utilizes the 
sparse ssa-cache, and simply sets the outgoing range as determined by 
the edge.  It's very efficient, only walking up the chain once and not 
generating any other utility structures.  This provides fast and easy 
access to any info an edge may provide.  There is a second API for 
querying a specific name instead of asking for all the ranges.  It 
should be pretty solid, as it simply invokes range-ops and other 
components the same way the larger GORI engine does; it just puts them 
together in a different way.


Patch 2 is the new DOM ranger.  It assumes it will be called in DOM 
order; it evaluates the statements and tracks any ranges on outgoing 
edges.  Queries for ranges walk the dom tree looking for a range until 
one is found on an edge or the definition block is hit.   There are 
additional efficiencies that can be employed, and I'll eventually get 
back to them.


Patch 3 is the FAST VRP pass and folder.  It's pretty straightforward: 
it invokes the new DOM ranger, and enables you to add NEXT_PASS 
(pass_fast_vrp) in passes.def.


Timewise, it is currently about twice as fast as EVRP.  It does basic 
range evaluation and folds PHIs, etc. It does *not* do relation 
processing or any of the fancier things we do (like statement side 
effects).   A little additional work can reduce the memory footprint 
further too.  I have done no experiments as yet as to the cost of adding 
relations, but it would be pretty straightforward as it is just reusing 
all the same components the main ranger does.


Andrew






[COMMITTED 3/3] Create a fast VRP pass

2023-10-05 Thread Andrew MacLeod
This patch adds a fast VRP pass.  It is not invoked from anywhere, so 
should cause no issues.


If you want to utilize it, simply add a new pass, ie:

--- a/gcc/passes.def
+++ b/gcc/passes.def
@@ -92,6 +92,7 @@ along with GCC; see the file COPYING3.  If not see
  NEXT_PASS (pass_phiprop);
  NEXT_PASS (pass_fre, true /* may_iterate */);
  NEXT_PASS (pass_early_vrp);
+ NEXT_PASS (pass_fast_vrp);
  NEXT_PASS (pass_merge_phi);
   NEXT_PASS (pass_dse);
  NEXT_PASS (pass_cd_dce, false /* update_address_taken_p */);

it will generate a dump file with the extension .fvrp.


pushed.

From f4e2dac53fd62fbf2af95e0bf26d24e929fa1f66 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Mon, 2 Oct 2023 18:32:49 -0400
Subject: [PATCH 3/3] Create a fast VRP pass

	* timevar.def (TV_TREE_FAST_VRP): New.
	* tree-pass.h (make_pass_fast_vrp): New prototype.
	* tree-vrp.cc (class fvrp_folder): New.
	(fvrp_folder::fvrp_folder): New.
	(fvrp_folder::~fvrp_folder): New.
	(fvrp_folder::value_of_expr): New.
	(fvrp_folder::value_on_edge): New.
	(fvrp_folder::value_of_stmt): New.
	(fvrp_folder::pre_fold_bb): New.
	(fvrp_folder::post_fold_bb): New.
	(fvrp_folder::pre_fold_stmt): New.
	(fvrp_folder::fold_stmt): New.
	(execute_fast_vrp): New.
	(pass_data_fast_vrp): New.
	(pass_vrp::execute): Check for fast VRP pass.
	(make_pass_fast_vrp): New.
---
 gcc/timevar.def |   1 +
 gcc/tree-pass.h |   1 +
 gcc/tree-vrp.cc | 124 
 3 files changed, 126 insertions(+)

diff --git a/gcc/timevar.def b/gcc/timevar.def
index 9523598f60e..d21b08c030d 100644
--- a/gcc/timevar.def
+++ b/gcc/timevar.def
@@ -160,6 +160,7 @@ DEFTIMEVAR (TV_TREE_TAIL_MERGE   , "tree tail merge")
 DEFTIMEVAR (TV_TREE_VRP  , "tree VRP")
 DEFTIMEVAR (TV_TREE_VRP_THREADER , "tree VRP threader")
 DEFTIMEVAR (TV_TREE_EARLY_VRP, "tree Early VRP")
+DEFTIMEVAR (TV_TREE_FAST_VRP , "tree Fast VRP")
 DEFTIMEVAR (TV_TREE_COPY_PROP, "tree copy propagation")
 DEFTIMEVAR (TV_FIND_REFERENCED_VARS  , "tree find ref. vars")
 DEFTIMEVAR (TV_TREE_PTA		 , "tree PTA")
diff --git a/gcc/tree-pass.h b/gcc/tree-pass.h
index eba2d54ac76..9c4b1e4185c 100644
--- a/gcc/tree-pass.h
+++ b/gcc/tree-pass.h
@@ -470,6 +470,7 @@ extern gimple_opt_pass *make_pass_check_data_deps (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_copy_prop (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_isolate_erroneous_paths (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_early_vrp (gcc::context *ctxt);
+extern gimple_opt_pass *make_pass_fast_vrp (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_vrp (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_assumptions (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_uncprop (gcc::context *ctxt);
diff --git a/gcc/tree-vrp.cc b/gcc/tree-vrp.cc
index 4f8c7745461..19d8f995d70 100644
--- a/gcc/tree-vrp.cc
+++ b/gcc/tree-vrp.cc
@@ -1092,6 +1092,106 @@ execute_ranger_vrp (struct function *fun, bool warn_array_bounds_p,
   return 0;
 }
 
+// Implement a Fast VRP folder.  Not quite as effective but faster.
+
+class fvrp_folder : public substitute_and_fold_engine
+{
+public:
+  fvrp_folder (dom_ranger *dr) : substitute_and_fold_engine (),
+ m_simplifier (dr)
+  { m_dom_ranger = dr; }
+
+  ~fvrp_folder () { }
+
+  tree value_of_expr (tree name, gimple *s = NULL) override
+  {
+// Shortcircuit subst_and_fold callbacks for abnormal ssa_names.
+if (TREE_CODE (name) == SSA_NAME && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (name))
+  return NULL;
+return m_dom_ranger->value_of_expr (name, s);
+  }
+
+  tree value_on_edge (edge e, tree name) override
+  {
+// Shortcircuit subst_and_fold callbacks for abnormal ssa_names.
+if (TREE_CODE (name) == SSA_NAME && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (name))
+  return NULL;
+return m_dom_ranger->value_on_edge (e, name);
+  }
+
+  tree value_of_stmt (gimple *s, tree name = NULL) override
+  {
+// Shortcircuit subst_and_fold callbacks for abnormal ssa_names.
+if (TREE_CODE (name) == SSA_NAME && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (name))
+  return NULL;
+return m_dom_ranger->value_of_stmt (s, name);
+  }
+
+  void pre_fold_bb (basic_block bb) override
+  {
+m_dom_ranger->pre_bb (bb);
+// Now process the PHIs in advance.
+gphi_iterator psi = gsi_start_phis (bb);
+for ( ; !gsi_end_p (psi); gsi_next (&psi))
+  {
+	tree name = gimple_range_ssa_p (PHI_RESULT (psi.phi ()));
+	if (name)
+	  {
+	Value_Range vr(TREE_TYPE (name));
+	m_dom_ranger->range_of_stmt (vr, psi.phi (), name);
+	  }
+  }
+  }
+
+  void post_fold_bb (basic_block bb) override
+  {
+m_dom_ranger->post_bb (bb);
+  }
+
+  void pre_fold_stmt (gimple *s) override
+  {
+// Ensure range_of_stmt has been called.
+ 

[COMMITTED 1/3] Add outgoing range vector calculation API.

2023-10-05 Thread Andrew MacLeod

This patch adds two routines that can be called to generate GORI information.

The primary API is:
bool gori_on_edge (class ssa_cache &r, edge e, range_query *query = 
NULL, gimple_outgoing_range *ogr = NULL);


This will populate an ssa-cache R with any ranges that are generated by 
edge E.   It will use QUERY, if provided, to satisfy any incoming 
values.  If OGR is provided, it is used to pick up hard edge values, 
like TRUE, FALSE, or switch edges.


It currently only works for TRUE/FALSE conditionals, and doesn't try to 
solve complex logical combinations, e.g. (a < 6 && b > 6) || (a > 10 || 
b < 3), as those can get exponential and require multiple evaluations of 
the IL to satisfy.  It will fully utilize range-ops however, and so 
comes up with many of the ranges ranger does.


It also provides the "raw" ranges on the edge, i.e. it doesn't try to 
figure out anything outside the current basic block, but rather reflects 
exactly what the edge indicates.


ie:

   <bb 2> :
  x.0_1 = (unsigned int) x_20(D);
  _2 = x.0_1 + 4294967292;
  if (_2 > 4)
    goto <bb 3>; [INV]
  else
    goto <bb 4>; [INV]

produces

Edge ranges BB 2->3
x.0_1  : [irange] unsigned int [0, 3][9, +INF]
_2  : [irange] unsigned int [5, +INF]
x_20(D)  : [irange] int [-INF, 3][9, +INF]

Edge ranges BB 2->4
x.0_1  : [irange] unsigned int [4, 8] MASK 0xf VALUE 0x0
_2  : [irange] unsigned int [0, 4]
x_20(D)  : [irange] int [4, 8] MASK 0xf VALUE 0x0

It performs a linear walk through just the required statements, so each 
of the above vectors is generated by visiting each of the 3 statements 
exactly once, so it's pretty quick.



The other entry point is:
bool gori_name_on_edge (vrange &r, tree name, edge e, range_query *q);

This does basically the same thing, except it only looks at whether NAME 
has a range, and returns it if it does, with no other overhead.


Pushed.
From 52c1e2c805bc2fd7a30583dce3608b738f3a5ce4 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 15 Aug 2023 17:29:58 -0400
Subject: [PATCH 1/3] Add outgoing range vector calculation API

Provide a GORI API which can produce a range vector for all outgoing
ranges on an edge without any of the other infrastructure.

	* gimple-range-gori.cc (gori_stmt_info::gori_stmt_info): New.
	(gori_calc_operands): New.
	(gori_on_edge): New.
	(gori_name_helper): New.
	(gori_name_on_edge): New.
	* gimple-range-gori.h (gori_on_edge): New prototype.
	(gori_name_on_edge): New prototype.
---
 gcc/gimple-range-gori.cc | 213 +++
 gcc/gimple-range-gori.h  |  15 +++
 2 files changed, 228 insertions(+)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index 2694e551d73..1b5eda43390 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -1605,3 +1605,216 @@ gori_export_iterator::get_name ()
 }
   return NULL_TREE;
 }
+
+// This is a helper class to set up STMT with a known LHS for further GORI
+// processing.
+
+class gori_stmt_info : public gimple_range_op_handler
+{
+public:
+  gori_stmt_info (vrange &lhs, gimple *stmt, range_query *q);
+  Value_Range op1_range;
+  Value_Range op2_range;
+  tree ssa1;
+  tree ssa2;
+};
+
+
+// Uses query Q to get the known ranges on STMT with a LHS range
+// for op1_range and op2_range and set ssa1 and ssa2 if either or both of
+// those operands are SSA_NAMES.
+
+gori_stmt_info::gori_stmt_info (vrange &lhs, gimple *stmt, range_query *q)
+  : gimple_range_op_handler (stmt)
+{
+  ssa1 = NULL;
+  ssa2 = NULL;
+  // Don't handle switches as yet for vector processing.
+  if (is_a <gswitch *> (stmt))
+return;
+
+  // No further processing for VARYING or undefined.
+  if (lhs.undefined_p () || lhs.varying_p ())
+return;
+
+  // If there is no range-op handler, we are also done.
+  if (!*this)
+return;
+
+  // Only evaluate logical cases if both operands must be the same as the LHS.
+  // Otherwise it becomes exponential in time, as well as more complicated.
+  if (is_gimple_logical_p (stmt))
+{
+  gcc_checking_assert (range_compatible_p (lhs.type (), boolean_type_node));
+  enum tree_code code = gimple_expr_code (stmt);
+  if (code == TRUTH_OR_EXPR ||  code == BIT_IOR_EXPR)
+	{
+	  // [0, 0] = x || y  means both x and y must be zero.
+	  if (!lhs.singleton_p () || !lhs.zero_p ())
+	return;
+	}
+  else if (code == TRUTH_AND_EXPR ||  code == BIT_AND_EXPR)
+	{
+	  // [1, 1] = x && y  means both x and y must be one.
+	  if (!lhs.singleton_p () || lhs.zero_p ())
+	return;
+	}
+}
+
+  tree op1 = operand1 ();
+  tree op2 = operand2 ();
+  ssa1 = gimple_range_ssa_p (op1);
+  ssa2 = gimple_range_ssa_p (op2);
+  // If both operands are the same, only process one of them.
+  if (ssa1 && ssa1 == ssa2)
+ssa2 = NULL_TREE;
+
+  // Extract current ranges for the operands.
+  fur_stmt src (stmt, q);
+  if (op1)
+{
+  op1_range.set_type (TREE_TYPE (op1));
+  src.get_operand (op1_range, op1);
+}
+
+  // And satisfy

[COMMITTED 2/3] Add a dom based ranger for fast VRP.

2023-10-05 Thread Andrew MacLeod
This patch adds a DOM based ranger that is intended to be used by a dom 
walk pass and provides basic ranges.


It utilizes the new GORI edge API to find outgoing ranges on edges, and 
combines these with any ranges calculated during the walk up to this 
point.  When a query is made for a range not defined in the current 
block, a quick dom walk is performed looking for a range either on a 
single-pred incoming edge or defined in the block.


It's about twice the speed of current EVRP, and although there is a bit 
of room to improve both memory usage and speed, I'll leave that until I 
either get around to it or we elect to use it and it becomes more 
important.   It also serves as a POC for anyone wanting to use the new 
GORI API to use edge ranges, as well as a potentially different fast VRP 
more similar to the old EVRP.  This version performs more folding of PHI 
nodes as it has all the info on incoming edges, but at a slight cost, 
mostly memory.  It does no relation processing as yet.


It has been bootstrapped running right after EVRP, and as a replacement 
for EVRP, and since it uses existing machinery, should be reasonably 
solid.   It is currently not invoked from anywhere.


Pushed.

Andrew



From ad8cd713b4e489826e289551b8b8f8f708293a5b Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 28 Jul 2023 13:18:15 -0400
Subject: [PATCH 2/3] Add a dom based ranger for fast VRP.

Provide a dominator based implementation of a range query.

	* gimple-range.cc (dom_ranger::dom_ranger): New.
	(dom_ranger::~dom_ranger): New.
	(dom_ranger::range_of_expr): New.
	(dom_ranger::edge_range): New.
	(dom_ranger::range_on_edge): New.
	(dom_ranger::range_in_bb): New.
	(dom_ranger::range_of_stmt): New.
	(dom_ranger::maybe_push_edge): New.
	(dom_ranger::pre_bb): New.
	(dom_ranger::post_bb): New.
	* gimple-range.h (class dom_ranger): New.
---
 gcc/gimple-range.cc | 300 
 gcc/gimple-range.h  |  28 +
 2 files changed, 328 insertions(+)

diff --git a/gcc/gimple-range.cc b/gcc/gimple-range.cc
index 13c3308d537..5e9bb397a20 100644
--- a/gcc/gimple-range.cc
+++ b/gcc/gimple-range.cc
@@ -928,3 +928,303 @@ assume_query::dump (FILE *f)
 }
   fprintf (f, "--\n");
 }
+
+// ---
+
+
+// Create a DOM based ranger for use by a DOM walk pass.
+
+dom_ranger::dom_ranger () : m_global (), m_out ()
+{
+  m_freelist.create (0);
+  m_freelist.truncate (0);
+  m_e0.create (0);
+  m_e0.safe_grow_cleared (last_basic_block_for_fn (cfun));
+  m_e1.create (0);
+  m_e1.safe_grow_cleared (last_basic_block_for_fn (cfun));
+  m_pop_list = BITMAP_ALLOC (NULL);
+  if (dump_file && (param_ranger_debug & RANGER_DEBUG_TRACE))
+tracer.enable_trace ();
+}
+
+// Dispose of a DOM ranger.
+
+dom_ranger::~dom_ranger ()
+{
+  if (dump_file && (dump_flags & TDF_DETAILS))
+{
+  fprintf (dump_file, "Non-varying global ranges:\n");
+  fprintf (dump_file, "=:\n");
+  m_global.dump (dump_file);
+}
+  BITMAP_FREE (m_pop_list);
+  m_e1.release ();
+  m_e0.release ();
+  m_freelist.release ();
+}
+
+// Implement range of EXPR on stmt S, and return it in R.
+// Return false if no range can be calculated.
+
+bool
+dom_ranger::range_of_expr (vrange &r, tree expr, gimple *s)
+{
+  unsigned idx;
+  if (!gimple_range_ssa_p (expr))
+return get_tree_range (r, expr, s);
+
+  if ((idx = tracer.header ("range_of_expr ")))
+{
+  print_generic_expr (dump_file, expr, TDF_SLIM);
+  if (s)
+	{
+	  fprintf (dump_file, " at ");
+	  print_gimple_stmt (dump_file, s, 0, TDF_SLIM);
+	}
+  else
+	  fprintf (dump_file, "\n");
+}
+
+  if (s)
+range_in_bb (r, gimple_bb (s), expr);
+  else
+m_global.range_of_expr (r, expr, s);
+
+  if (idx)
+tracer.trailer (idx, " ", true, expr, r);
+  return true;
+}
+
+
+// Return TRUE and the range if edge E has a range set for NAME in
+// block E->src.
+
+bool
+dom_ranger::edge_range (vrange &r, edge e, tree name)
+{
+  bool ret = false;
+  basic_block bb = e->src;
+
+  // Check if BB has any outgoing ranges on edge E.
+  ssa_lazy_cache *out = NULL;
+  if (EDGE_SUCC (bb, 0) == e)
+out = m_e0[bb->index];
+  else if (EDGE_SUCC (bb, 1) == e)
+out = m_e1[bb->index];
+
+  // If there is an edge vector and it has a range, pick it up.
+  if (out && out->has_range (name))
+ret = out->get_range (r, name);
+
+  return ret;
+}
+
+
+// Return the range of EXPR on edge E in R.
+// Return false if no range can be calculated.
+
+bool
+dom_ranger::range_on_edge (vrange &r, edge e, tree expr)
+{
+  basic_block bb = e->src;
+  unsigned idx;
+  if ((idx = tracer.header ("range_on_edge ")))
+{
+  fprintf (dump_file, "%d->%d for ",e->src->index, e->dest->index);
+  

[COMMITTED] Don't use range_info_get_range for pointers.

2023-10-03 Thread Andrew MacLeod

Properly check for pointers instead of just using range_info_get_range.

Bootstrapped on x86_64-pc-linux-gnu (and presumably AIX too :-) with no 
regressions.


On 10/3/23 12:53, David Edelsohn wrote:

AIX bootstrap is happier with the patch.

Thanks, David

commit d8808c37d29110872fa51b98e71aef9e160b4692
Author: Andrew MacLeod 
Date:   Tue Oct 3 12:32:10 2023 -0400

Don't use range_info_get_range for pointers.

Pointers only track null and nonnull, so we need to handle them
specially.

* tree-ssanames.cc (set_range_info): Use get_ptr_info for
pointers rather than range_info_get_range.

diff --git a/gcc/tree-ssanames.cc b/gcc/tree-ssanames.cc
index 1eae411ac1c..0a32444fbdf 100644
--- a/gcc/tree-ssanames.cc
+++ b/gcc/tree-ssanames.cc
@@ -420,15 +420,11 @@ set_range_info (tree name, const vrange &r)
 
   // Pick up the current range, or VARYING if none.
   tree type = TREE_TYPE (name);
-  Value_Range tmp (type);
-  if (range_info_p (name))
-range_info_get_range (name, tmp);
-  else
-tmp.set_varying (type);
-
   if (POINTER_TYPE_P (type))
 {
-  if (r.nonzero_p () && !tmp.nonzero_p ())
+  struct ptr_info_def *pi = get_ptr_info (name);
+  // If R is nonnull and pi is not, set nonnull.
+  if (r.nonzero_p () && (!pi || pi->pt.null))
{
  set_ptr_nonnull (name);
  return true;
@@ -436,6 +432,11 @@ set_range_info (tree name, const vrange &r)
   return false;
 }
 
+  Value_Range tmp (type);
+  if (range_info_p (name))
+range_info_get_range (name, tmp);
+  else
+tmp.set_varying (type);
   // If the result doesn't change, or is undefined, return false.
   if (!tmp.intersect (r) || tmp.undefined_p ())
 return false;


Re: [COMMITTED] Remove pass counting in VRP.

2023-10-03 Thread Andrew MacLeod


On 10/3/23 13:02, David Malcolm wrote:

On Tue, 2023-10-03 at 10:32 -0400, Andrew MacLeod wrote:

Pass counting in VRP is used to decide when to call early VRP, pass
the
flag to enable warnings, and when the final pass is.

If you try to add additional passes, this becomes quite fragile. This
patch simply chooses the pass based on the data pointer passed in,
and
remove the pass counter.   The first FULL VRP pass invokes the
warning
code, and the flag passed in now represents the FINAL pass of VRP.
There is no longer a global flag which, as it turns out, wasn't
working
well with the JIT compiler, but when undetected.  (Thanks to dmalcolm
for helping me sort out what was going on there)


Bootstraps  on x86_64-pc-linux-gnu with no regressions.   Pushed.

[CCing jit mailing list]

I'm worried that this patch may have "papered over" an issue with
libgccjit.  Specifically:


well, that isn't the patch that was checked in :-P

I'm not sure how the old version got into the commit note.

Attached is the version checked in.

commit 7eb5ce7f58ed4a48641e1786e4fdeb2f7fb8c5ff
Author: Andrew MacLeod 
Date:   Thu Sep 28 09:19:32 2023 -0400

Remove pass counting in VRP.

Rather than using a pass count to decide which parameters are passed to
VRP, make it explicit.

* passes.def (pass_vrp): Pass "final pass" flag as parameter.
* tree-vrp.cc (vrp_pass_num): Remove.
(pass_vrp::my_pass): Remove.
(pass_vrp::pass_vrp): Add warn_p as a parameter.
(pass_vrp::final_p): New.
(pass_vrp::set_pass_param): Set final_p param.
(pass_vrp::execute): Call execute_range_vrp with no conditions.
(make_pass_vrp): Pass additional parameter.
(make_pass_early_vrp): Ditto.

diff --git a/gcc/passes.def b/gcc/passes.def
index 4110a472914..2bafd60bbfb 100644
--- a/gcc/passes.def
+++ b/gcc/passes.def
@@ -221,7 +221,7 @@ along with GCC; see the file COPYING3.  If not see
   NEXT_PASS (pass_fre, true /* may_iterate */);
   NEXT_PASS (pass_merge_phi);
   NEXT_PASS (pass_thread_jumps_full, /*first=*/true);
-  NEXT_PASS (pass_vrp, true /* warn_array_bounds_p */);
+  NEXT_PASS (pass_vrp, false /* final_p*/);
   NEXT_PASS (pass_dse);
   NEXT_PASS (pass_dce);
   /* pass_stdarg is always run and at this point we execute
@@ -348,7 +348,7 @@ along with GCC; see the file COPYING3.  If not see
   NEXT_PASS (pass_dominator, false /* may_peel_loop_headers_p */);
   NEXT_PASS (pass_strlen);
   NEXT_PASS (pass_thread_jumps_full, /*first=*/false);
-  NEXT_PASS (pass_vrp, false /* warn_array_bounds_p */);
+  NEXT_PASS (pass_vrp, true /* final_p */);
   /* Run CCP to compute alignment and nonzero bits.  */
   NEXT_PASS (pass_ccp, true /* nonzero_p */);
   NEXT_PASS (pass_warn_restrict);
diff --git a/gcc/tree-vrp.cc b/gcc/tree-vrp.cc
index d7b194f5904..4f8c7745461 100644
--- a/gcc/tree-vrp.cc
+++ b/gcc/tree-vrp.cc
@@ -1120,36 +1120,32 @@ const pass_data pass_data_early_vrp =
   ( TODO_cleanup_cfg | TODO_update_ssa | TODO_verify_all ),
 };
 
-static int vrp_pass_num = 0;
 class pass_vrp : public gimple_opt_pass
 {
 public:
-  pass_vrp (gcc::context *ctxt, const pass_data &data_)
-: gimple_opt_pass (data_, ctxt), data (data_), warn_array_bounds_p (false),
-  my_pass (vrp_pass_num++)
-  {}
+  pass_vrp (gcc::context *ctxt, const pass_data &data_, bool warn_p)
+: gimple_opt_pass (data_, ctxt), data (data_),
+  warn_array_bounds_p (warn_p), final_p (false)
+{ }
 
   /* opt_pass methods: */
-  opt_pass * clone () final override { return new pass_vrp (m_ctxt, data); }
+  opt_pass * clone () final override
+{ return new pass_vrp (m_ctxt, data, false); }
   void set_pass_param (unsigned int n, bool param) final override
 {
   gcc_assert (n == 0);
-  warn_array_bounds_p = param;
+  final_p = param;
 }
   bool gate (function *) final override { return flag_tree_vrp != 0; }
   unsigned int execute (function *fun) final override
 {
-  // Early VRP pass.
-  if (my_pass == 0)
-	return execute_ranger_vrp (fun, /*warn_array_bounds_p=*/false, false);
-
-  return execute_ranger_vrp (fun, warn_array_bounds_p, my_pass == 2);
+  return execute_ranger_vrp (fun, warn_array_bounds_p, final_p);
 }
 
  private:
   const pass_data &data;
   bool warn_array_bounds_p;
-  int my_pass;
+  bool final_p;
 }; // class pass_vrp
 
 const pass_data pass_data_assumptions =
@@ -1219,13 +1215,13 @@ public:
 gimple_opt_pass *
 make_pass_vrp (gcc::context *ctxt)
 {
-  return new pass_vrp (ctxt, pass_data_vrp);
+  return new pass_vrp (ctxt, pass_data_vrp, true);
 }
 
 gimple_opt_pass *
 make_pass_early_vrp (gcc::context *ctxt)
 {
-  return new pass_vrp (ctxt, pass_data_early_vrp);
+  return new pass_vrp (ctxt, pass_data_early_vrp, false);
 }
 
 gimple_opt_pass *


Re: [COMMITTED] Return TRUE only when a global value is updated.

2023-10-03 Thread Andrew MacLeod

perfect.  I'll check it in when my testrun is done.

Thanks ... and sorry :-)

Andrew

On 10/3/23 12:53, David Edelsohn wrote:

AIX bootstrap is happier with the patch.

Thanks, David

On Tue, Oct 3, 2023 at 12:30 PM Andrew MacLeod  
wrote:


Give this a try..  I'm testing it here, but x86 doesn't seem to
show it
anyway for some reason :-P

I think i needed to handle pointers special since SSA_NAMES handle
pointer ranges different.

Andrew

On 10/3/23 11:47, David Edelsohn wrote:
> This patch caused a bootstrap failure on AIX.
>
> during GIMPLE pass: evrp
>
> /nasfarm/edelsohn/src/src/libgcc/libgcc2.c: In function
'__gcc_bcmp':
>
> /nasfarm/edelsohn/src/src/libgcc/libgcc2.c:2910:1: internal
compiler
> error: in get_irange, at value-range-storage.cc:343
>
> 2910 | }
>
> | ^
>
>
> 0x11b7f4b7 irange_storage::get_irange(irange&, tree_node*) const
>
> /nasfarm/edelsohn/src/src/gcc/value-range-storage.cc:343
>
> 0x11b7e7af vrange_storage::get_vrange(vrange&, tree_node*) const
>
> /nasfarm/edelsohn/src/src/gcc/value-range-storage.cc:178
>
> 0x139f3d77 range_info_get_range(tree_node const*, vrange&)
>
> /nasfarm/edelsohn/src/src/gcc/tree-ssanames.cc:118
>
> 0x1134b463 set_range_info(tree_node*, vrange const&)
>
> /nasfarm/edelsohn/src/src/gcc/tree-ssanames.cc:425
>
> 0x116a7333 gimple_ranger::register_inferred_ranges(gimple*)
>
> /nasfarm/edelsohn/src/src/gcc/gimple-range.cc:487
>
> 0x125cef27 rvrp_folder::fold_stmt(gimple_stmt_iterator*)
>
> /nasfarm/edelsohn/src/src/gcc/tree-vrp.cc:1033
>
> 0x123dd063
>
substitute_and_fold_dom_walker::before_dom_children(basic_block_def*)
>
> /nasfarm/edelsohn/src/src/gcc/tree-ssa-propagate.cc:876
>
> 0x1176cc43 dom_walker::walk(basic_block_def*)
>
> /nasfarm/edelsohn/src/src/gcc/domwalk.cc:311
>
> 0x123dd733
> substitute_and_fold_engine::substitute_and_fold(basic_block_def*)
>
> /nasfarm/edelsohn/src/src/gcc/tree-ssa-propagate.cc:999
>
> 0x123d0f5f execute_ranger_vrp(function*, bool, bool)
>
> /nasfarm/edelsohn/src/src/gcc/tree-vrp.cc:1062
>
> 0x123d14ef execute
>
> /nasfarm/edelsohn/src/src/gcc/tree-vrp.cc:1142
>





Re: [COMMITTED] Return TRUE only when a global value is updated.

2023-10-03 Thread Andrew MacLeod
Give this a try.  I'm testing it here, but x86 doesn't seem to show it 
anyway for some reason :-P


I think I needed to handle pointers specially since SSA_NAMEs handle 
pointer ranges differently.


Andrew

On 10/3/23 11:47, David Edelsohn wrote:

This patch caused a bootstrap failure on AIX.

during GIMPLE pass: evrp

/nasfarm/edelsohn/src/src/libgcc/libgcc2.c: In function '__gcc_bcmp':

/nasfarm/edelsohn/src/src/libgcc/libgcc2.c:2910:1: internal compiler 
error: in get_irange, at value-range-storage.cc:343


2910 | }

| ^


0x11b7f4b7 irange_storage::get_irange(irange&, tree_node*) const

/nasfarm/edelsohn/src/src/gcc/value-range-storage.cc:343

0x11b7e7af vrange_storage::get_vrange(vrange&, tree_node*) const

/nasfarm/edelsohn/src/src/gcc/value-range-storage.cc:178

0x139f3d77 range_info_get_range(tree_node const*, vrange&)

/nasfarm/edelsohn/src/src/gcc/tree-ssanames.cc:118

0x1134b463 set_range_info(tree_node*, vrange const&)

/nasfarm/edelsohn/src/src/gcc/tree-ssanames.cc:425

0x116a7333 gimple_ranger::register_inferred_ranges(gimple*)

/nasfarm/edelsohn/src/src/gcc/gimple-range.cc:487

0x125cef27 rvrp_folder::fold_stmt(gimple_stmt_iterator*)

/nasfarm/edelsohn/src/src/gcc/tree-vrp.cc:1033

0x123dd063 
substitute_and_fold_dom_walker::before_dom_children(basic_block_def*)


/nasfarm/edelsohn/src/src/gcc/tree-ssa-propagate.cc:876

0x1176cc43 dom_walker::walk(basic_block_def*)

/nasfarm/edelsohn/src/src/gcc/domwalk.cc:311

0x123dd733 
substitute_and_fold_engine::substitute_and_fold(basic_block_def*)


/nasfarm/edelsohn/src/src/gcc/tree-ssa-propagate.cc:999

0x123d0f5f execute_ranger_vrp(function*, bool, bool)

/nasfarm/edelsohn/src/src/gcc/tree-vrp.cc:1062

0x123d14ef execute

/nasfarm/edelsohn/src/src/gcc/tree-vrp.cc:1142
diff --git a/gcc/tree-ssanames.cc b/gcc/tree-ssanames.cc
index 1eae411ac1c..1401f67c781 100644
--- a/gcc/tree-ssanames.cc
+++ b/gcc/tree-ssanames.cc
@@ -420,15 +420,11 @@ set_range_info (tree name, const vrange &r)
 
   // Pick up the current range, or VARYING if none.
   tree type = TREE_TYPE (name);
-  Value_Range tmp (type);
-  if (range_info_p (name))
-range_info_get_range (name, tmp);
-  else
-tmp.set_varying (type);
-
   if (POINTER_TYPE_P (type))
 {
-  if (r.nonzero_p () && !tmp.nonzero_p ())
+  struct ptr_info_def *pi = get_ptr_info (name);
+  // If R is nonnull and pi is not, set nonnull.
+  if (r.nonzero_p () && (!pi || !pi->pt.null))
 	{
 	  set_ptr_nonnull (name);
 	  return true;
@@ -436,6 +432,11 @@ set_range_info (tree name, const vrange &r)
   return false;
 }
 
+  Value_Range tmp (type);
+  if (range_info_p (name))
+range_info_get_range (name, tmp);
+  else
+tmp.set_varying (type);
   // If the result doesn't change, or is undefined, return false.
   if (!tmp.intersect (r) || tmp.undefined_p ())
 return false;


Re: [COMMITTED] Return TRUE only when a global value is updated.

2023-10-03 Thread Andrew MacLeod

huh.  thanks,  I'll have a look.


Andrew

On 10/3/23 11:47, David Edelsohn wrote:

This patch caused a bootstrap failure on AIX.

during GIMPLE pass: evrp

/nasfarm/edelsohn/src/src/libgcc/libgcc2.c: In function '__gcc_bcmp':

/nasfarm/edelsohn/src/src/libgcc/libgcc2.c:2910:1: internal compiler 
error: in get_irange, at value-range-storage.cc:343


2910 | }

| ^


0x11b7f4b7 irange_storage::get_irange(irange&, tree_node*) const

/nasfarm/edelsohn/src/src/gcc/value-range-storage.cc:343

0x11b7e7af vrange_storage::get_vrange(vrange&, tree_node*) const

/nasfarm/edelsohn/src/src/gcc/value-range-storage.cc:178

0x139f3d77 range_info_get_range(tree_node const*, vrange&)

/nasfarm/edelsohn/src/src/gcc/tree-ssanames.cc:118

0x1134b463 set_range_info(tree_node*, vrange const&)

/nasfarm/edelsohn/src/src/gcc/tree-ssanames.cc:425

0x116a7333 gimple_ranger::register_inferred_ranges(gimple*)

/nasfarm/edelsohn/src/src/gcc/gimple-range.cc:487

0x125cef27 rvrp_folder::fold_stmt(gimple_stmt_iterator*)

/nasfarm/edelsohn/src/src/gcc/tree-vrp.cc:1033

0x123dd063 
substitute_and_fold_dom_walker::before_dom_children(basic_block_def*)


/nasfarm/edelsohn/src/src/gcc/tree-ssa-propagate.cc:876

0x1176cc43 dom_walker::walk(basic_block_def*)

/nasfarm/edelsohn/src/src/gcc/domwalk.cc:311

0x123dd733 
substitute_and_fold_engine::substitute_and_fold(basic_block_def*)


/nasfarm/edelsohn/src/src/gcc/tree-ssa-propagate.cc:999

0x123d0f5f execute_ranger_vrp(function*, bool, bool)

/nasfarm/edelsohn/src/src/gcc/tree-vrp.cc:1062

0x123d14ef execute

/nasfarm/edelsohn/src/src/gcc/tree-vrp.cc:1142





[COMMITTED] Remove pass counting in VRP.

2023-10-03 Thread Andrew MacLeod
Pass counting in VRP is used to decide when to call early VRP, when to 
pass the flag that enables warnings, and which invocation is the final 
pass.


If you try to add additional passes, this becomes quite fragile.  This 
patch simply chooses the pass based on the data pointer passed in, and 
removes the pass counter.   The first FULL VRP pass invokes the warning 
code, and the flag passed in now represents the FINAL pass of VRP.  
There is no longer a global flag which, as it turns out, wasn't working 
well with the JIT compiler, but went undetected.  (Thanks to dmalcolm 
for helping me sort out what was going on there.)



Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 29abc475a360ad14d5f692945f2805fba1fdc679 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Thu, 28 Sep 2023 09:19:32 -0400
Subject: [PATCH 2/5] Remove pass counting in VRP.

Rather than using a pass count to decide which parameters are passed to
VRP, make it explicit.

	* passes.def (pass_vrp): Use parameter for final pass flag.
	* tree-vrp.cc (vrp_pass_num): Remove.
	(run_warning_pass): New.
	(pass_vrp::my_pass): Remove.
	(pass_vrp::final_p): New.
	(pass_vrp::set_pass_param): Set final_p param.
	(pass_vrp::execute): Choose specific pass based on data pointer.
---
 gcc/passes.def  |  4 ++--
 gcc/tree-vrp.cc | 26 +-
 2 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/gcc/passes.def b/gcc/passes.def
index 4110a472914..2bafd60bbfb 100644
--- a/gcc/passes.def
+++ b/gcc/passes.def
@@ -221,7 +221,7 @@ along with GCC; see the file COPYING3.  If not see
   NEXT_PASS (pass_fre, true /* may_iterate */);
   NEXT_PASS (pass_merge_phi);
   NEXT_PASS (pass_thread_jumps_full, /*first=*/true);
-  NEXT_PASS (pass_vrp, true /* warn_array_bounds_p */);
+  NEXT_PASS (pass_vrp, false /* final_p*/);
   NEXT_PASS (pass_dse);
   NEXT_PASS (pass_dce);
   /* pass_stdarg is always run and at this point we execute
@@ -348,7 +348,7 @@ along with GCC; see the file COPYING3.  If not see
   NEXT_PASS (pass_dominator, false /* may_peel_loop_headers_p */);
   NEXT_PASS (pass_strlen);
   NEXT_PASS (pass_thread_jumps_full, /*first=*/false);
-  NEXT_PASS (pass_vrp, false /* warn_array_bounds_p */);
+  NEXT_PASS (pass_vrp, true /* final_p */);
   /* Run CCP to compute alignment and nonzero bits.  */
   NEXT_PASS (pass_ccp, true /* nonzero_p */);
   NEXT_PASS (pass_warn_restrict);
diff --git a/gcc/tree-vrp.cc b/gcc/tree-vrp.cc
index d7b194f5904..05266dfe34a 100644
--- a/gcc/tree-vrp.cc
+++ b/gcc/tree-vrp.cc
@@ -1120,36 +1120,44 @@ const pass_data pass_data_early_vrp =
   ( TODO_cleanup_cfg | TODO_update_ssa | TODO_verify_all ),
 };
 
-static int vrp_pass_num = 0;
+static bool run_warning_pass = true;
 class pass_vrp : public gimple_opt_pass
 {
 public:
  pass_vrp (gcc::context *ctxt, const pass_data &data_)
-: gimple_opt_pass (data_, ctxt), data (data_), warn_array_bounds_p (false),
-  my_pass (vrp_pass_num++)
-  {}
+: gimple_opt_pass (data_, ctxt), data (data_),
+  warn_array_bounds_p (false), final_p (false)
+  {
+// Only the first VRP pass should run warnings.
+if (&data == &pass_data_vrp)
+  {
+	warn_array_bounds_p = run_warning_pass;
+	run_warning_pass = false;
+  }
+  }
 
   /* opt_pass methods: */
   opt_pass * clone () final override { return new pass_vrp (m_ctxt, data); }
   void set_pass_param (unsigned int n, bool param) final override
 {
   gcc_assert (n == 0);
-  warn_array_bounds_p = param;
+  final_p = param;
 }
   bool gate (function *) final override { return flag_tree_vrp != 0; }
   unsigned int execute (function *fun) final override
 {
   // Early VRP pass.
-  if (my_pass == 0)
-	return execute_ranger_vrp (fun, /*warn_array_bounds_p=*/false, false);
+  if (&data == &pass_data_early_vrp)
+	return execute_ranger_vrp (fun, /*warn_array_bounds_p=*/false,
+   /*final_p=*/false);
 
-  return execute_ranger_vrp (fun, warn_array_bounds_p, my_pass == 2);
+  return execute_ranger_vrp (fun, warn_array_bounds_p, final_p);
 }
 
  private:
  const pass_data &data;
   bool warn_array_bounds_p;
-  int my_pass;
+  bool final_p;
 }; // class pass_vrp
 
 const pass_data pass_data_assumptions =
-- 
2.41.0



[COMMITTED] Return TRUE only when a global value is updated.

2023-10-03 Thread Andrew MacLeod
set_range_info should return TRUE only when it sets a new value.  It was 
returning TRUE whenever it set a value, whether that value 
differed from the existing one or not.


With this change, VRP no longer overwrites global ranges DOM has set.  
Two testcases that expected VRP2 to set a range needed adjusting; it 
turns out the range was really being set in DOM2.  They now check for 
the range in the final listing instead...
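The intersect-then-compare contract behind this change can be sketched in isolation. This is a hypothetical model, not GCC's actual vrange/irange classes: `interval` stands in for a value range, and `set_range` mirrors the new "return TRUE only when the stored value changes" semantics.

```cpp
#include <algorithm>

// Hypothetical stand-in for a value range: a simple [lo, hi] interval.
struct interval
{
  int lo, hi;
  bool operator== (const interval &o) const { return lo == o.lo && hi == o.hi; }
};

// Mimics the revised set_range_info contract: intersect the incoming range
// with the stored one and return true only when the stored value changes.
static bool set_range (interval &stored, const interval &incoming)
{
  interval merged { std::max (stored.lo, incoming.lo),
                    std::min (stored.hi, incoming.hi) };
  if (merged == stored)
    return false;   // no change: caller must not treat this as an update
  stored = merged;
  return true;
}
```

With this contract, a later pass re-submitting a range DOM already recorded gets FALSE back and leaves the global untouched.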


Bootstrapped on  x86_64-pc-linux-gnu with no regressions. Pushed.

Andrew
From dae5de2a2353b928cc7099a78d88a40473abefd2 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 27 Sep 2023 12:34:16 -0400
Subject: [PATCH 1/5] Return TRUE only when a global value is updated.

set_range_info should return TRUE only when it sets a new value.  VRP no
longer overwrites global ranges DOM has set.  Check for ranges in the
final listing.

	gcc/
	* tree-ssanames.cc (set_range_info): Return true only if the
	current value changes.

	gcc/testsuite/
	* gcc.dg/pr93917.c: Check for ranges in final optimized listing.
	* gcc.dg/tree-ssa/vrp-unreachable.c: Ditto.
---
 gcc/testsuite/gcc.dg/pr93917.c|  4 ++--
 .../gcc.dg/tree-ssa/vrp-unreachable.c |  4 ++--
 gcc/tree-ssanames.cc  | 24 +--
 3 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/gcc/testsuite/gcc.dg/pr93917.c b/gcc/testsuite/gcc.dg/pr93917.c
index f09e1c41ae8..f636b77f45d 100644
--- a/gcc/testsuite/gcc.dg/pr93917.c
+++ b/gcc/testsuite/gcc.dg/pr93917.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O2 -fdump-tree-vrp1 -fdump-tree-vrp2" } */
+/* { dg-options "-O2 -fdump-tree-vrp1 -fdump-tree-vrp2 -fdump-tree-optimized-alias" } */
 
 void f3(int n);
 
@@ -19,5 +19,5 @@ void f2(int*n)
 
 /* { dg-final { scan-tree-dump-times "Global Export.*0, \\+INF" 1 "vrp1" } } */
 /* { dg-final { scan-tree-dump-times "__builtin_unreachable" 1 "vrp1" } } */
-/* { dg-final { scan-tree-dump-times "Global Export.*0, \\+INF" 1 "vrp2" } } */
 /* { dg-final { scan-tree-dump-times "__builtin_unreachable" 0 "vrp2" } } */
+/* { dg-final { scan-tree-dump-times "0, \\+INF" 2 "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/vrp-unreachable.c b/gcc/testsuite/gcc.dg/tree-ssa/vrp-unreachable.c
index 5835dfc8dbc..4aad7f1be5d 100644
--- a/gcc/testsuite/gcc.dg/tree-ssa/vrp-unreachable.c
+++ b/gcc/testsuite/gcc.dg/tree-ssa/vrp-unreachable.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O2 -fdump-tree-vrp1-alias -fdump-tree-vrp2-alias" } */
+/* { dg-options "-O2 -fdump-tree-vrp1 -fdump-tree-vrp2 -fdump-tree-optimized-alias" } */
 
 void dead (unsigned n);
 void alive (unsigned n);
@@ -39,4 +39,4 @@ void func (unsigned n, unsigned m)
 /* { dg-final { scan-tree-dump-not "dead" "vrp1" } } */
 /* { dg-final { scan-tree-dump-times "builtin_unreachable" 1 "vrp1" } } */
 /* { dg-final { scan-tree-dump-not "builtin_unreachable" "vrp2" } } */
-/* { dg-final { scan-tree-dump-times "fff8 VALUE 0x0" 4 "vrp2" } } */
+/* { dg-final { scan-tree-dump-times "fff8 VALUE 0x0" 2 "optimized" } } */
diff --git a/gcc/tree-ssanames.cc b/gcc/tree-ssanames.cc
index 23387b90fe3..1eae411ac1c 100644
--- a/gcc/tree-ssanames.cc
+++ b/gcc/tree-ssanames.cc
@@ -418,10 +418,17 @@ set_range_info (tree name, const vrange &r)
   if (r.undefined_p () || r.varying_p ())
 return false;
 
+  // Pick up the current range, or VARYING if none.
   tree type = TREE_TYPE (name);
+  Value_Range tmp (type);
+  if (range_info_p (name))
+range_info_get_range (name, tmp);
+  else
+tmp.set_varying (type);
+
   if (POINTER_TYPE_P (type))
 {
-  if (r.nonzero_p ())
+  if (r.nonzero_p () && !tmp.nonzero_p ())
 	{
 	  set_ptr_nonnull (name);
 	  return true;
@@ -429,18 +436,11 @@ set_range_info (tree name, const vrange &r)
   return false;
 }
 
-  /* If a global range already exists, incorporate it.  */
-  if (range_info_p (name))
-{
-  Value_Range tmp (type);
-  range_info_get_range (name, tmp);
-  tmp.intersect (r);
-  if (tmp.undefined_p ())
-	return false;
+  // If the result doesn't change, or is undefined, return false.
+  if (!tmp.intersect (r) || tmp.undefined_p ())
+return false;
 
-  return range_info_set_range (name, tmp);
-}
-  return range_info_set_range (name, r);
+  return range_info_set_range (name, tmp);
 }
 
 /* Set nonnull attribute to pointer NAME.  */
-- 
2.41.0



[COMMITTED] PR tree-optimization/111599 - Ensure ssa_name is still valid.

2023-09-26 Thread Andrew MacLeod
When processing an equivalence list, I neglected to make sure the 
ssa-name is still valid.  This patch simply checks to make sure it is 
non-null and not in the free list.


Bootstraps on x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew
From 9df0f6bd582ceee53bfed8769cf156329ae33bd0 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 26 Sep 2023 09:27:52 -0400
Subject: [PATCH] Ensure ssa_name is still valid.

When the IL changes, an equivalence set may contain ssa_names that no
longer exist.  Ensure names are still valid and not in the free list.

	PR tree-optimization/111599
	gcc/
	* value-relation.cc (relation_oracle::valid_equivs): Ensure
	ssa_name is valid.

	gcc/testsuite/
	* gcc.dg/pr111599.c: New.
---
 gcc/testsuite/gcc.dg/pr111599.c | 16 
 gcc/value-relation.cc   |  9 ++---
 2 files changed, 22 insertions(+), 3 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr111599.c

diff --git a/gcc/testsuite/gcc.dg/pr111599.c b/gcc/testsuite/gcc.dg/pr111599.c
new file mode 100644
index 000..25880b759f7
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr111599.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -fno-inline-functions-called-once -fno-inline-small-functions -fno-tree-dce -fno-tree-forwprop -fno-tree-fre" } */
+
+int h(void);
+void l(int);
+void func_56(int p_57, unsigned p_58) {
+ // p_57 = 0x101BC642L;
+  if (p_57 || h()) {
+int *l_105[2];
+l_105[0] = &p_57;
+l(p_57);
+  }
+}
+void func_31(int p_33) {
+  func_56(0x101BC642L, (p_33));
+}
diff --git a/gcc/value-relation.cc b/gcc/value-relation.cc
index f2c668a0193..8fea4aad345 100644
--- a/gcc/value-relation.cc
+++ b/gcc/value-relation.cc
@@ -274,9 +274,12 @@ relation_oracle::valid_equivs (bitmap b, const_bitmap equivs, basic_block bb)
   EXECUTE_IF_SET_IN_BITMAP (equivs, 0, i, bi)
 {
   tree ssa = ssa_name (i);
-  const_bitmap ssa_equiv = equiv_set (ssa, bb);
-  if (ssa_equiv == equivs)
-	bitmap_set_bit (b, i);
+  if (ssa && !SSA_NAME_IN_FREE_LIST (ssa))
+	{
+	  const_bitmap ssa_equiv = equiv_set (ssa, bb);
+	  if (ssa_equiv == equivs)
+	bitmap_set_bit (b, i);
+	}
 }
 }
 
-- 
2.41.0



[COMMITTED][GCC13] PR tree-optimization/110315 - Reduce the initial size of int_range_max.

2023-09-26 Thread Andrew MacLeod
This patch adds the ability to resize ranges as needed, defaulting to no 
resizing.  int_range_max now defaults to 3 sub-ranges (instead of 255) 
and grows to 255 when the range being calculated does not fit.
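The grow-on-demand scheme can be sketched with a hypothetical container (the names `small_buf` and `HARD_MAX` are illustrative; the real implementation lives in irange/int_range with `HARD_MAX_RANGES = 255`): start with a small embedded buffer and move to heap storage only when a computation needs more sub-range pairs.

```cpp
#include <cstring>

// Hypothetical sketch of grow-on-demand storage: 3 pairs inline, resized
// once to a hard maximum on the heap, freed on destruction.
class small_buf
{
  static const int HARD_MAX = 255;
  int m_embedded[3 * 2] = {};   // room for 3 pairs inline
  int *m_base;
  int m_max_pairs;
public:
  small_buf () : m_base (m_embedded), m_max_pairs (3) {}
  ~small_buf () { if (m_base != m_embedded) delete[] m_base; }
  void maybe_resize (int needed)
  {
    if (needed <= m_max_pairs || m_max_pairs == HARD_MAX)
      return;
    int *mem = new int[HARD_MAX * 2];
    std::memcpy (mem, m_base, m_max_pairs * 2 * sizeof (int));
    if (m_base != m_embedded)
      delete[] m_base;
    m_base = mem;
    m_max_pairs = HARD_MAX;
  }
  int capacity () const { return m_max_pairs; }
  bool on_heap () const { return m_base != m_embedded; }
};
```

The payoff is the common case: most ranges never need more than a few pairs, so the 255-pair allocation is paid only when a calculation actually overflows the inline storage.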


Bootstraps on x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew
From 70639014a69cf50fe11dc1adbfe1db4c7760ce69 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 26 Sep 2023 09:44:39 -0400
Subject: [PATCH] Reduce the initial size of int_range_max.

This patch adds the ability to resize ranges as needed, defaulting to no
resizing.  int_range_max now defaults to 3 sub-ranges (instead of 255)
and grows to 255 when the range being calculated does not fit.

	PR tree-optimization/110315
	* value-range-storage.h (vrange_allocator::alloc_irange): Adjust
	new params.
	* value-range.cc (irange::operator=): Resize range.
	(irange::irange_union): Same.
	(irange::irange_intersect): Same.
	(irange::invert): Same.
	* value-range.h (irange::maybe_resize): New.
	(~int_range): New.
	(int_range_max): Default to 3 sub-ranges and resize as needed.
	(int_range::int_range): Adjust for resizing.
	(int_range::operator=): Same.
---
 gcc/value-range-storage.h |  2 +-
 gcc/value-range.cc| 15 ++
 gcc/value-range.h | 96 +++
 3 files changed, 83 insertions(+), 30 deletions(-)

diff --git a/gcc/value-range-storage.h b/gcc/value-range-storage.h
index 6da377ebd2e..1ed6f1ccd61 100644
--- a/gcc/value-range-storage.h
+++ b/gcc/value-range-storage.h
@@ -184,7 +184,7 @@ vrange_allocator::alloc_irange (unsigned num_pairs)
   // Allocate the irange and required memory for the vector.
   void *r = alloc (sizeof (irange));
   tree *mem = static_cast  (alloc (nbytes));
-  return new (r) irange (mem, num_pairs);
+  return new (r) irange (mem, num_pairs, /*resizable=*/false);
 }
 
 inline frange *
diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index ec826c2fe1b..753f5e8cc76 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -831,6 +831,10 @@ irange::operator= (const irange &src)
   copy_to_legacy (src);
   return *this;
 }
+
+  int needed = src.num_pairs ();
+  maybe_resize (needed);
+
   if (src.legacy_mode_p ())
 {
   copy_legacy_to_multi_range (src);
@@ -2506,6 +2510,7 @@ irange::irange_union (const irange &r)
   // Now it simply needs to be copied, and if there are too many
   // ranges, merge some.  We wont do any analysis as to what the
   // "best" merges are, simply combine the final ranges into one.
+  maybe_resize (i / 2);
   if (i > m_max_ranges * 2)
 {
   res[m_max_ranges * 2 - 1] = res[i - 1];
@@ -2605,6 +2610,11 @@ irange::irange_intersect (const irange &r)
   if (r.irange_contains_p (*this))
 return intersect_nonzero_bits (r);
 
+  // ?? We could probably come up with something smarter than the
+  // worst case scenario here.
+  int needed = num_pairs () + r.num_pairs ();
+  maybe_resize (needed);
+
   signop sign = TYPE_SIGN (TREE_TYPE(m_base[0]));
   unsigned bld_pair = 0;
   unsigned bld_lim = m_max_ranges;
@@ -2831,6 +2841,11 @@ irange::invert ()
   m_num_ranges = 1;
   return;
 }
+
+  // At this point, we need one extra sub-range to represent the
+  // inverse.
+  maybe_resize (m_num_ranges + 1);
+
   // The algorithm is as follows.  To calculate INVERT ([a,b][c,d]), we
   // generate [-MIN, a-1][b+1, c-1][d+1, MAX].
   //
diff --git a/gcc/value-range.h b/gcc/value-range.h
index 969b2b68418..96e59ecfa72 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -172,7 +172,8 @@ public:
   bool legacy_verbose_intersect (const irange *);	// DEPRECATED
 
 protected:
-  irange (tree *, unsigned);
+  void maybe_resize (int needed);
+  irange (tree *, unsigned nranges, bool resizable);
   // potential promotion to public?
   tree tree_lower_bound (unsigned = 0) const;
   tree tree_upper_bound (unsigned) const;
@@ -200,6 +201,8 @@ protected:
   void copy_to_legacy (const irange &);
   void copy_legacy_to_multi_range (const irange &);
 
+  // Hard limit on max ranges allowed.
+  static const int HARD_MAX_RANGES = 255;
 private:
   friend void gt_ggc_mx (irange *);
   friend void gt_pch_nx (irange *);
@@ -214,15 +217,21 @@ private:
 
   bool intersect (const wide_int& lb, const wide_int& ub);
   unsigned char m_num_ranges;
+  bool m_resizable;
   unsigned char m_max_ranges;
   tree m_nonzero_mask;
+protected:
   tree *m_base;
 };
 
 // Here we describe an irange with N pairs of ranges.  The storage for
 // the pairs is embedded in the class as an array.
+//
+// If RESIZABLE is true, the storage will be resized on the heap when
+// the number of ranges needed goes past N up to a max of
+// HARD_MAX_RANGES.  This new storage is freed upon destruction.
 
-template
+template
 class GTY((user)) int_range : public irange
 {
 public:
@@ -233,7 +242,7 @@ public:
   int_range (tree type);
   int_range (const int_range &);
   int_range (const irange &);
-  virtual ~int_rang

Re: [PATCH] Add missing return in gori_compute::logical_combine

2023-09-25 Thread Andrew MacLeod
OK for trunk at least.   Thanks.  I presume it'll be fine for the other 
releases.


Andrew

On 9/25/23 11:51, Eric Botcazou wrote:

Hi,

the varying case currently falls through to the 1/true case.

Tested on x86_64-suse-linux, OK for mainline, 13 and 12 branches?


2023-09-25  Eric Botcazou  

* gimple-range-gori.cc (gori_compute::logical_combine): Add missing
return statement in the varying case.


2023-09-25  Eric Botcazou  

* gnat.dg/opt102.adb: New test.
* gnat.dg/opt102_pkg.adb, gnat.dg/opt102_pkg.ads: New helper.





[COMMITTED] Tweak ssa_cache::merge_range API.

2023-09-20 Thread Andrew MacLeod
merge_range used to return TRUE if there was already a range in the 
cache.  This patch changes the meaning of the return value such that 
it returns TRUE if the range in the cache changes, i.e. it either sets a 
range where there wasn't one before, or updates an existing range when 
intersecting the old one with the new one results in a different range.
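That contract can be sketched with `std::optional` as a hypothetical stand-in for a cache slot (the real code uses `vrange_storage` and an allocator, not optionals): report TRUE for a new entry OR a narrowed one, FALSE when the stored range already matches.

```cpp
#include <algorithm>
#include <optional>

// Hypothetical stand-in for a cached value range.
struct range
{
  int lo, hi;
  bool operator== (const range &o) const { return lo == o.lo && hi == o.hi; }
};

// Sketch of the revised merge_range contract.
static bool merge_range (std::optional<range> &slot, const range &r)
{
  if (!slot)
    {
      slot = r;        // new entry: this counts as a change
      return true;
    }
  range merged { std::max (slot->lo, r.lo), std::min (slot->hi, r.hi) };
  if (merged == *slot)
    return false;      // intersection left the cached value untouched
  slot = merged;
  return true;
}
```

Callers can then use the return value to decide whether anything downstream needs recomputing.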


It also tweaks the debug output for the cache to no longer print the 
header text "Non-varying global ranges" in the class, as the class is 
now used for other purposes as well.  The text is moved to where the 
dump is actually from a global table.


Bootstraps on x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew
commit 0885e96272f1335c324f99fd2d1e9b0b3da9090c
Author: Andrew MacLeod 
Date:   Wed Sep 20 12:53:04 2023 -0400

Tweak merge_range API.

merge_range used to return TRUE if there was already a range.  Now it
returns TRUE if it adds a new range, OR updates an existing range
with a new value.  FALSE is returned when the range already matches.

* gimple-range-cache.cc (ssa_cache::merge_range): Change meaning
of the return value.
(ssa_cache::dump): Don't print GLOBAL RANGE header.
(ssa_lazy_cache::merge_range): Adjust return value meaning.
(ranger_cache::dump): Print GLOBAL RANGE header.

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 5b74681b61a..3c819933c4e 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -606,7 +606,7 @@ ssa_cache::set_range (tree name, const vrange &r)
 }
 
 // If NAME has a range, intersect it with R, otherwise set it to R.
-// Return TRUE if there was already a range set, otherwise false.
+// Return TRUE if the range is new or changes.
 
 bool
 ssa_cache::merge_range (tree name, const vrange &r)
@@ -616,19 +616,23 @@ ssa_cache::merge_range (tree name, const vrange &r)
 m_tab.safe_grow_cleared (num_ssa_names + 1);
 
   vrange_storage *m = m_tab[v];
-  if (m)
+  // Check if this is a new value.
+  if (!m)
+m_tab[v] = m_range_allocator->clone (r);
+  else
 {
   Value_Range curr (TREE_TYPE (name));
   m->get_vrange (curr, TREE_TYPE (name));
-  curr.intersect (r);
+  // If there is no change, return false.
+  if (!curr.intersect (r))
+   return false;
+
   if (m->fits_p (curr))
m->set_vrange (curr);
   else
m_tab[v] = m_range_allocator->clone (curr);
 }
-  else
-m_tab[v] = m_range_allocator->clone (r);
-  return m != NULL;
+  return true;
 }
 
 // Set the range for NAME to R in the ssa cache.
@@ -656,27 +660,14 @@ ssa_cache::clear ()
 void
 ssa_cache::dump (FILE *f)
 {
-  /* Cleared after the table header has been printed.  */
-  bool print_header = true;
   for (unsigned x = 1; x < num_ssa_names; x++)
 {
   if (!gimple_range_ssa_p (ssa_name (x)))
continue;
   Value_Range r (TREE_TYPE (ssa_name (x)));
-  // Invoke dump_range_query which is a private virtual version of
-  // get_range.   This avoids performance impacts on general queries,
-  // but allows sharing of the dump routine.
+  // Dump all non-varying ranges.
   if (get_range (r, ssa_name (x)) && !r.varying_p ())
{
- if (print_header)
-   {
- /* Print the header only when there's something else
-to print below.  */
- fprintf (f, "Non-varying global ranges:\n");
- fprintf (f, "=:\n");
- print_header = false;
-   }
-
  print_generic_expr (f, ssa_name (x), TDF_NONE);
  fprintf (f, "  : ");
  r.dump (f);
@@ -684,8 +675,6 @@ ssa_cache::dump (FILE *f)
}
 }
 
-  if (!print_header)
-fputc ('\n', f);
 }
 
 // Return true if NAME has an active range in the cache.
@@ -716,7 +705,7 @@ ssa_lazy_cache::set_range (tree name, const vrange &r)
 }
 
 // If NAME has a range, intersect it with R, otherwise set it to R.
-// Return TRUE if there was already a range set, otherwise false.
+// Return TRUE if the range is new or changes.
 
 bool
 ssa_lazy_cache::merge_range (tree name, const vrange &r)
@@ -731,7 +720,7 @@ ssa_lazy_cache::merge_range (tree name, const vrange &r)
   if (v >= m_tab.length ())
 m_tab.safe_grow (num_ssa_names + 1);
   m_tab[v] = m_range_allocator->clone (r);
-  return false;
+  return true;
 }
 
 // Return TRUE if NAME has a range, and return it in R.
@@ -996,6 +985,8 @@ ranger_cache::~ranger_cache ()
 void
 ranger_cache::dump (FILE *f)
 {
+  fprintf (f, "Non-varying global ranges:\n");
+  fprintf (f, "=:\n");
   m_globals.dump (f);
   fprintf (f, "\n");
 }


Re: [PATCH] [RFC] New early __builtin_unreachable processing.

2023-09-19 Thread Andrew MacLeod



On 9/19/23 08:56, Richard Biener wrote:

On Mon, Sep 18, 2023 at 3:48 PM Andrew MacLeod  wrote:


OK.

I don't see anything in the early VRP processing now that would allow a 
later pass to remove the unreachable unless it does its own analysis
like DOM might do.

Isn't it as simple as

   if (i_2 > 5) __builtin_unreachable ();

registering a global range of [6, INF] for i_2 and then the next time
we fold if (i_2 > 5) using range info will eliminate it?  Yes, historically
that required VRP or DOM since nothing else looked at ranges, not
sure how it behaves now given more match.pd patterns do look
at (global) ranges.


If we set the range, yes.  What I meant was that in the cases where we decide 
it can't be removed, we do NOT set the range globally in VRP1 now.  This 
means that unless some other pass determines the range is [6, +INF], the 
unreachable call will remain in the IL and any ranger-aware pass will 
still get the contextual range info resulting from the unreachable.  We 
were sometimes removing the unreachable without being able to update 
every affected global/future optimization opportunity, which this fixes. 
Hopefully :-)   It's certainly much better at least.


In theory, if inlining were aware of global ranges and propagated them, 
we could also now remove some of these unreachables in EVRP rather 
than VRP1, as I think we're now sure there is no benefit to keeping 
the unreachable call when we remove it.





In any case, thanks for the explanation and OK for the patch.


Will check it in shortly.

Andrew



Re: [PATCH] [RFC] New early __builtin_unreachable processing.

2023-09-18 Thread Andrew MacLeod via Gcc-patches



On 9/18/23 02:53, Richard Biener wrote:

On Fri, Sep 15, 2023 at 4:45 PM Andrew MacLeod  wrote:

Ive been looking at __builtin_unreachable () regressions.  The
fundamental problem seems to be  a lack of consistent expectation for
when we remove it earlier than the final pass of VRP.After looking
through them, I think this provides a sensible approach.

Ranger is pretty good at providing ranges in blocks dominated by the
__builtin_unreachable  branch, so removing it isn't quite as critical as
it once was.  It's also pretty good at identifying what in the block can
be affected by the branch.

This patch provides an alternate removal algorithm for earlier passes.
It looks at *all* the exports from the block, and if the branch
dominates every use of all the exports, AND none of those values access
memory, VRP will remove the unreachable call, rewrite the branch, update
all the values globally, and finally perform the simple DCE on the
branch's ssa-name.   This is kind of what it did before, but it wasn't
as stringent on the requirements.

The memory access check is required because there are a couple of test
cases for PRE in which there is a series of instructions leading to an
unreachable call, and none of those ssa names are ever used in the IL
again. The whole chunk is dead, and we update globals, however
pointlessly.  However, one of the ssa_names loads from memory, and a later
pass commons this value with a later load, and then the unreachable
call provides additional information about the later load.  This is
evident in tree-ssa/ssa-pre-34.c.   The only way I see to avoid this
situation is to not remove the unreachable if there is a load feeding it.

What this does is a more sophisticated version of what DOM does in
all_uses_feed_or_dominated_by_stmt.  The feeding instructions don't have
to be single use, but they do have to be dominated by the branch or be
single use within the branch's block.

If there are multiple uses in the same block as the branch, this does
not remove the unreachable call.  If we could be sure there are no
intervening calls or side effects, it would be allowable, but this is a
more expensive checking operation.  Ranger gets the ranges right anyway,
so with more passes using ranger, I'm not sure we'd see much benefit from
the additional analysis.   It could always be added later.

This fixes at least 110249 and 110080 (and probably others).  The only
regression is 93917 for which I changed the testcase to adjust
expectations:

// PR 93917
void f1(int n)
{
if(n<0)
  __builtin_unreachable();
f3(n);
}

void f2(int*n)
{
if(*n<0)
  __builtin_unreachable();
f3 (*n);
}

We were removing both unreachable calls in VRP1, but only updating the
global values in the first case, meaning we lose information.   With the
change in semantics, we only update the global in the first case, but we
leave the unreachable call in the second case now (due to the load from
memory).  Ranger still calculates the contextual range correctly as [0,
+INF] in the second case, it just doesn't set the global value until
VRP2 when it is removed.

Does this seem reasonable?

I wonder how this addresses the fundamental issue we always faced
in that when we apply the range this range info in itself allows the
branch to the __builtin_unreachable () to be statically determined,
so when the first VRP pass sets the range the next pass evaluating
the condition will remove it (and the guarded __builtin_unreachable ()).

In principle there's nothing wrong with that if we don't lose the range
info during optimizations, but that unfortunately happens more often
than wanted and with the __builtin_unreachable () gone we've lost
the ability to re-compute them.

I think it's good to explicitly remove the branch at the point we want
rather than relying on the "next" visitor to pick up the global range.

As I read the patch we now remove __builtin_unreachable () explicitly
as soon as possible but don't really address the fundamental issue
in any way?



I think it pretty much addresses the issue completely.  No globals are 
updated by the unreachable branch unless it is removed.  We remove the 
unreachable early ONLY if every use of all the exports is dominated by 
the branch, with the exception of a single use in the block used to 
define a different export; those in turn must have no other uses 
which are not dominated.  For example:


  <bb 2> [local count: 1073741824]:
  y_2 = x_1(D) >> 1;
  t_3 = y_2 + 1;
  if (t_3 > 100)
    goto <bb 3>; [0.00%]
  else
    goto <bb 4>; [100.00%]

  <bb 3> [count: 0]:
  __builtin_unreachable ();

  <bb 4> [local count: 1073741824]:
  func (x_1(D), y_2, t_3);


In this case we will remove the unreachable call because we can provide 
an accurate global range for all values used in the definition chain for 
the program.


Global Exported (via early unreachable): x_1(D) = [irange] unsigned int 
[0, 199] MASK 0xff VALUE 0x0
Global Exported (via early unreachable): y_2 = [irange] u

[PATCH] [RFC] New early __builtin_unreachable processing.

2023-09-15 Thread Andrew MacLeod via Gcc-patches
Ive been looking at __builtin_unreachable () regressions.  The 
fundamental problem seems to be  a lack of consistent expectation for 
when we remove it earlier than the final pass of VRP.    After looking 
through them, I think this provides a sensible approach.


Ranger is pretty good at providing ranges in blocks dominated by the 
__builtin_unreachable  branch, so removing it isn't quite as critical as 
it once was.  It's also pretty good at identifying what in the block can 
be affected by the branch.


This patch provides an alternate removal algorithm for earlier passes.  
It looks at *all* the exports from the block, and if the branch 
dominates every use of all the exports, AND none of those values access 
memory, VRP will remove the unreachable call, rewrite the branch, update 
all the values globally, and finally perform the simple DCE on the 
branch's ssa-name.   This is kind of what it did before, but it wasn't 
as stringent on the requirements.


The memory access check is required because there are a couple of test 
cases for PRE in which there is a series of instructions leading to an 
unreachable call, and none of those ssa names are ever used in the IL 
again. The whole chunk is dead, and we update globals, however 
pointlessly.  However, one of the ssa_names loads from memory, and a later 
pass commons this value with a later load, and then the unreachable 
call provides additional information about the later load.    This is 
evident in tree-ssa/ssa-pre-34.c.   The only way I see to avoid this 
situation is to not remove the unreachable if there is a load feeding it.


What this does is a more sophisticated version of what DOM does in 
all_uses_feed_or_dominated_by_stmt.  The feeding instructions don't have 
to be single use, but they do have to be dominated by the branch or be 
single use within the branch's block.


If there are multiple uses in the same block as the branch, this does 
not remove the unreachable call.  If we could be sure there are no 
intervening calls or side effects, it would be allowable, but this is a 
more expensive checking operation.  Ranger gets the ranges right anyway, 
so with more passes using ranger, I'm not sure we'd see much benefit from 
the additional analysis.   It could always be added later.


This fixes at least 110249 and 110080 (and probably others).  The only 
regression is 93917 for which I changed the testcase to adjust 
expectations:


// PR 93917
void f1(int n)
{
  if(n<0)
    __builtin_unreachable();
  f3(n);
}

void f2(int*n)
{
  if(*n<0)
    __builtin_unreachable();
  f3 (*n);
}

We were removing both unreachable calls in VRP1, but only updating the 
global values in the first case, meaning we lose information.   With the 
change in semantics, we only update the global in the first case, but we 
leave the unreachable call in the second case now (due to the load from 
memory).  Ranger still calculates the contextual range correctly as [0, 
+INF] in the second case, it just doesn't set the global value until 
VRP2 when it is removed.


Does this seem reasonable?

Bootstraps on x86_64-pc-linux-gnu with no regressions.  OK?

Andrew


From 87072ebfcd4f51276fc6ed1fb0557257d51ec446 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 13 Sep 2023 11:52:15 -0400
Subject: [PATCH 3/3] New early __builtin_unreachable processing.

In VRP passes before the final one, where __builtin_unreachable MUST be
removed, only remove it if all exports affected by the unreachable can have
their global values updated and do not involve loads from memory.

	PR tree-optimization/110080
	PR tree-optimization/110249
	gcc/
	* tree-vrp.cc (remove_unreachable::final_p): New.
	(remove_unreachable::maybe_register): Rename from
	maybe_register_block and call early or final routine.
	(fully_replaceable): New.
	(remove_unreachable::handle_early): New.
	(remove_unreachable::remove_and_update_globals): Remove
	non-final processing.
	(rvrp_folder::rvrp_folder): Add final flag to constructor.
	(rvrp_folder::post_fold_bb): Remove unreachable registration.
	(rvrp_folder::pre_fold_stmt): Move unreachable processing to here.
	(execute_ranger_vrp): Adjust some call parameters.

	gcc/testsuite/
	* g++.dg/pr110249.C: New.
	* gcc.dg/pr110080.c: New.
	* gcc.dg/pr93917.c: Adjust.

Tweak vuse case

Adjusted testcase 93917
---
 gcc/testsuite/g++.dg/pr110249.C |  16 +++
 gcc/testsuite/gcc.dg/pr110080.c |  27 +
 gcc/testsuite/gcc.dg/pr93917.c  |   7 +-
 gcc/tree-vrp.cc | 203 ++--
 4 files changed, 214 insertions(+), 39 deletions(-)
 create mode 100644 gcc/testsuite/g++.dg/pr110249.C
 create mode 100644 gcc/testsuite/gcc.dg/pr110080.c

diff --git a/gcc/testsuite/g++.dg/pr110249.C b/gcc/testsuite/g++.dg/pr110249.C
new file mode 100644
index 000..2b737618bdb
--- /dev/null
+++ b/gcc/testsuite/g++.dg/pr110249.C
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-vrp1-alias" } */
+
+#include 
+#include 
+
+uint64_t read64

[COMMITTED 2/2] Always do PHI analysis before loop analysis.

2023-09-15 Thread Andrew MacLeod via Gcc-patches
Originally, phi analysis was only invoked if there was no loop 
information available.  I have found situations where phi analysis 
enhances existing loop information, so this patch moves the phi 
analysis block to before loop analysis is invoked (in case a query is 
made from within that area), and does it unconditionally.  There is 
minimal impact on compilation time.


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 5d5f90ec3b4a939cae5ce4f33b76849f6b08e3a9 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 13 Sep 2023 10:09:16 -0400
Subject: [PATCH 2/3] Always do PHI analysis before loop analysis.

PHI analysis wasn't being done if loop analysis found a value.  Always
do the PHI analysis, and run it for an initial value before invoking
loop analysis.

	* gimple-range-fold.cc (fold_using_range::range_of_phi): Always
	run phi analysis, and do it before loop analysis.
---
 gcc/gimple-range-fold.cc | 53 
 1 file changed, 26 insertions(+), 27 deletions(-)

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 03805d88d9b..d1945ccb554 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -939,7 +939,32 @@ fold_using_range::range_of_phi (vrange &r, gphi *phi, fur_source &src)
 	}
 }
 
-  bool loop_info_p = false;
+  // If PHI analysis is available, see if there is an initial range.
+  if (phi_analysis_available_p ()
+  && irange::supports_p (TREE_TYPE (phi_def)))
+{
+  phi_group *g = (phi_analysis())[phi_def];
+  if (g && !(g->range ().varying_p ()))
+	{
+	  if (dump_file && (dump_flags & TDF_DETAILS))
+	{
+	  fprintf (dump_file, "PHI GROUP query for ");
+	  print_generic_expr (dump_file, phi_def, TDF_SLIM);
+	  fprintf (dump_file, " found : ");
+	  g->range ().dump (dump_file);
+	  fprintf (dump_file, " and adjusted original range from :");
+	  r.dump (dump_file);
+	}
+	  r.intersect (g->range ());
+	  if (dump_file && (dump_flags & TDF_DETAILS))
+	{
+	  fprintf (dump_file, " to :");
+	  r.dump (dump_file);
+	  fprintf (dump_file, "\n");
+	}
+	}
+}
+
   // If SCEV is available, query if this PHI has any known values.
   if (scev_initialized_p ()
   && !POINTER_TYPE_P (TREE_TYPE (phi_def)))
@@ -962,32 +987,6 @@ fold_using_range::range_of_phi (vrange &r, gphi *phi, fur_source &src)
 		  fprintf (dump_file, "\n");
 		}
 	  r.intersect (loop_range);
-	  loop_info_p = true;
-	}
-	}
-}
-
-  if (!loop_info_p && phi_analysis_available_p ()
-  && irange::supports_p (TREE_TYPE (phi_def)))
-{
-  phi_group *g = (phi_analysis())[phi_def];
-  if (g && !(g->range ().varying_p ()))
-	{
-	  if (dump_file && (dump_flags & TDF_DETAILS))
-	{
-	  fprintf (dump_file, "PHI GROUP query for ");
-	  print_generic_expr (dump_file, phi_def, TDF_SLIM);
-	  fprintf (dump_file, " found : ");
-	  g->range ().dump (dump_file);
-	  fprintf (dump_file, " and adjusted original range from :");
-	  r.dump (dump_file);
-	}
-	  r.intersect (g->range ());
-	  if (dump_file && (dump_flags & TDF_DETAILS))
-	{
-	  fprintf (dump_file, " to :");
-	  r.dump (dump_file);
-	  fprintf (dump_file, "\n");
 	}
 	}
 }
-- 
2.41.0



[COMMITTED 1/2] Fix indentation in range_of_phi.

2023-09-15 Thread Andrew MacLeod via Gcc-patches
Somewhere along the way a large sequence of code in range_of_phi() ended 
up with the same indentation as the preceding loop. This simply fixes it.


committed as obvious.

Andrew
From e35c3b5335879afb616c6ead0f41bf6c275ee941 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 13 Sep 2023 09:58:39 -0400
Subject: [PATCH 1/3] Fix indentation.

No functional change, indentation was incorrect.

	* gimple-range-fold.cc (fold_using_range::range_of_phi): Fix
	indentation.
---
 gcc/gimple-range-fold.cc | 80 
 1 file changed, 40 insertions(+), 40 deletions(-)

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 8ebff7f5980..03805d88d9b 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -898,46 +898,46 @@ fold_using_range::range_of_phi (vrange &r, gphi *phi, fur_source &src)
 	break;
 }
 
-// If all arguments were equivalences, use the equivalence ranges as no
-// arguments were processed.
-if (r.undefined_p () && !equiv_range.undefined_p ())
-  r = equiv_range;
-
-// If the PHI boils down to a single effective argument, look at it.
-if (single_arg)
-  {
-	// Symbolic arguments can be equivalences.
-	if (gimple_range_ssa_p (single_arg))
-	  {
-	// Only allow the equivalence if the PHI definition does not
-	// dominate any incoming edge for SINGLE_ARG.
-	// See PR 108139 and 109462.
-	basic_block bb = gimple_bb (phi);
-	if (!dom_info_available_p (CDI_DOMINATORS))
-	  single_arg = NULL;
-	else
-	  for (x = 0; x < gimple_phi_num_args (phi); x++)
-		if (gimple_phi_arg_def (phi, x) == single_arg
-		&& dominated_by_p (CDI_DOMINATORS,
-	gimple_phi_arg_edge (phi, x)->src,
-	bb))
-		  {
-		single_arg = NULL;
-		break;
-		  }
-	if (single_arg)
-	  src.register_relation (phi, VREL_EQ, phi_def, single_arg);
-	  }
-	else if (src.get_operand (arg_range, single_arg)
-		 && arg_range.singleton_p ())
-	  {
-	// Numerical arguments that are a constant can be returned as
-	// the constant. This can help fold later cases where even this
-	// constant might have been UNDEFINED via an unreachable edge.
-	r = arg_range;
-	return true;
-	  }
-  }
+  // If all arguments were equivalences, use the equivalence ranges as no
+  // arguments were processed.
+  if (r.undefined_p () && !equiv_range.undefined_p ())
+r = equiv_range;
+
+  // If the PHI boils down to a single effective argument, look at it.
+  if (single_arg)
+{
+  // Symbolic arguments can be equivalences.
+  if (gimple_range_ssa_p (single_arg))
+	{
+	  // Only allow the equivalence if the PHI definition does not
+	  // dominate any incoming edge for SINGLE_ARG.
+	  // See PR 108139 and 109462.
+	  basic_block bb = gimple_bb (phi);
+	  if (!dom_info_available_p (CDI_DOMINATORS))
+	single_arg = NULL;
+	  else
+	for (x = 0; x < gimple_phi_num_args (phi); x++)
+	  if (gimple_phi_arg_def (phi, x) == single_arg
+		  && dominated_by_p (CDI_DOMINATORS,
+  gimple_phi_arg_edge (phi, x)->src,
+  bb))
+		{
+		  single_arg = NULL;
+		  break;
+		}
+	  if (single_arg)
+	src.register_relation (phi, VREL_EQ, phi_def, single_arg);
+	}
+  else if (src.get_operand (arg_range, single_arg)
+	   && arg_range.singleton_p ())
+	{
+	  // Numerical arguments that are a constant can be returned as
+	  // the constant. This can help fold later cases where even this
+	  // constant might have been UNDEFINED via an unreachable edge.
+	  r = arg_range;
+	  return true;
+	}
+}
 
   bool loop_info_p = false;
   // If SCEV is available, query if this PHI has any known values.
-- 
2.41.0



Re: [PATCH] Checking undefined_p before using the vr

2023-09-15 Thread Andrew MacLeod via Gcc-patches



On 9/14/23 22:07, Jiufu Guo wrote:


undefined is a perfectly acceptable range.  It can be used to
represent either values which have not been initialized, or more
frequently it identifies values that cannot occur due to
conflicting/unreachable code.  VARYING means it can be any range;
UNDEFINED means this is unusable, so treat it accordingly.  It's
propagated like any other range.

"undefined" means the ranger is unusable. So, for this ranger, it
seems only "undefined_p ()" can be checked, and it seems no other
functions of this ranger can be called.


Not at all. It means ranger has determined that there is no valid range 
for the item you are asking about, probably due to conflicting 
conditions, which imparts important information about the range... or 
lack of range :-)


Quite frequently it means you are looking at a block of code that ranger 
knows is unreachable, but a pass of the compiler which removes such 
blocks has not been called yet.. so the awareness imparted is that there 
isn't much point in doing optimizations on it because it's probably going 
to get thrown away by a following pass.
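The lattice behaviour described above can be modelled with a toy interval class (an illustrative sketch only; `toy_range` is invented and is not GCC's irange): UNDEFINED is the empty set, VARYING is the full set, and UNDEFINED absorbs everything under intersection, which is why a range from unreachable code stays UNDEFINED however it is combined.

```cpp
#include <algorithm>
#include <cassert>
#include <climits>

// Toy model of the UNDEFINED/VARYING lattice (hypothetical, not GCC code).
struct toy_range
{
  bool undefined;   // true = empty set (unreachable/uninitialized)
  int lo, hi;       // valid only when !undefined

  static toy_range make_undefined () { return {true, 0, 0}; }
  static toy_range make_varying () { return {false, INT_MIN, INT_MAX}; }
  static toy_range make (int l, int h) { return {false, l, h}; }

  bool undefined_p () const { return undefined; }

  // Intersection: UNDEFINED absorbs everything; an empty result of two
  // disjoint ranges also collapses to UNDEFINED.
  void intersect (const toy_range &o)
  {
    if (undefined || o.undefined)
      {
	*this = make_undefined ();
	return;
      }
    lo = std::max (lo, o.lo);
    hi = std::min (hi, o.hi);
    if (lo > hi)
      *this = make_undefined ();
  }
};
```

Intersecting any range with UNDEFINED yields UNDEFINED, while intersecting with VARYING is a no-op, mirroring the "unusable vs. could be anything" distinction above.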




I'm thinking that it may be ok to let "range_of_expr" return false
if the "vr" is "undefined_p".  I know this may change the meaning
of "range_of_expr" slightly :)


No.  That would be like saying NULL is not a valid value for a pointer.  
undefined_p has a very specific meaning that we use... it just has no type.


Andrew



Re: [PATCH] Checking undefined_p before using the vr

2023-09-13 Thread Andrew MacLeod via Gcc-patches



On 9/12/23 21:42, Jiufu Guo wrote:

Hi,

Richard Biener  writes:


On Thu, 7 Sep 2023, Jiufu Guo wrote:


Hi,

As discussed in PR111303:

For pattern "(X + C) / N": "div (plus@3 @0 INTEGER_CST@1) INTEGER_CST@2)",
Even if "X" has value-range and "X + C" does not overflow, "@3" may still
be undefined. Like below example:

_3 = _2 + -5;
if (0 != 0)
   goto ; [34.00%]
else
   goto ; [66.00%]
;;  succ:   3
;;  4

;; basic block 3, loop depth 0
;;  pred:   2
_5 = _3 / 5;
;;  succ:   4

The whole pattern "(_2 + -5 ) / 5" is in "bb 3", but "bb 3" would be
unreachable (because "if (0 != 0)" is always false).
And "get_range_query (cfun)->range_of_expr (vr3, @3)" is checked in
"bb 3", "range_of_expr" gets an "undefined vr3". Where "@3" is "_5".

So, before using "vr3", it would be safe to check "!vr3.undefined_p ()".

Bootstrap & regtest pass on ppc64{,le} and x86_64.
Is this ok for trunk?

OK, but I wonder why ->range_of_expr () doesn't return false for
undefined_p ()?  While "undefined" technically means we can treat
it as nonnegative_p (or not, maybe but maybe not both), we seem to
not want to do that.  So why expose it at all to ranger users
(yes, internally we in some places want to handle undefined).

I guess, currently, it returns true and then lets the user check
undefined_p, maybe because it tries to only return false if the
type of EXPR is unsupported.


false is returned if no range can be calculated for any reason. The most 
common ones are unsupported types or in some cases, statements that are 
not understood.  FALSE means you cannot use the range being passed in.




Let "range_of_expr" return false for undefined_p would save checking
undefined_p again when using the APIs.

undefined is a perfectly acceptable range.  It can be used to represent 
either values which have not been initialized, or more frequently it 
identifies values that cannot occur due to conflicting/unreachable 
code.  VARYING means it can be any range; UNDEFINED means this is 
unusable, so treat it accordingly.  It's propagated like any other range.


The only reason you are having issues is you are then asking for the 
type of the range, and an undefined range currently has no type, for 
historical reasons.


Andrew

Andrew




[COMMITTED] PR tree-optimization/110875 - Some ssa-names get incorrectly marked as always_current.

2023-09-07 Thread Andrew MacLeod via Gcc-patches
When range_of_stmt invokes prefill_name to evaluate unvisited 
dependencies, it should not mark already visited names as always_current.


When ranger_cache::get_global_range() is invoked with the optional 
"current_p" flag, it triggers additional functionality. This call is 
meant to be from within ranger, and it is understood that if the current 
value is not current, set_global_range will always be called later with 
a value.  Thus it sets the always_current flag in the temporal cache to 
avoid computation cycles.


The prefill_stmt_dependencies () mechanism within ranger is intended to 
emulate the behaviour of range_of_stmt on an arbitrarily long series of 
unresolved dependencies without triggering the overhead of huge call 
chains from the range_of_expr/range_on_entry/range_on_exit routines.  
Rather, it creates a stack of unvisited names, and invokes range_of_stmt 
on them directly in order to get initial cache values for each ssa-name.


The issue in this PR was that the routine was incorrectly invoking the 
flag-setting get_global_range variant to determine whether there was a 
global value.  If there was, it would move on to the next dependency 
without invoking set_global_range to clear the always_current flag.


What it should have been doing was simply checking if there was a global 
value, and if there was not, adding the name for processing and THEN 
invoking get_global_range to do all the special processing.
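The flag protocol described above can be sketched with a toy cache (all names here are invented for illustration; the real ranger_cache API differs). The side-effecting overload marks a name always_current, so it must only be used once we know the name still needs processing:

```cpp
#include <cassert>
#include <map>

// Hypothetical model of the two get_global_range overloads (not GCC code).
struct toy_cache
{
  std::map<int, int> global;           // name -> cached value
  std::map<int, bool> always_current;  // temporal "always current" marks

  // Plain query: no side effects.
  bool get_global_range (int name, int &val) const
  {
    auto it = global.find (name);
    if (it == global.end ())
      return false;
    val = it->second;
    return true;
  }

  // Query with current_p: promises a later set_global_range, so it marks
  // the name always_current even when a value already exists.  This side
  // effect is what the buggy call order tripped over.
  bool get_global_range (int name, int &val, bool &current_p)
  {
    always_current[name] = true;
    current_p = get_global_range (name, val);
    return current_p;
  }
};
```

The fix amounts to calling the plain overload first and only falling back to the flag-setting overload for names that truly have no global value yet.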


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew




From e9be59f7d2dc6b302cf85ad69b0a77dee89ec809 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Thu, 7 Sep 2023 11:15:50 -0400
Subject: [PATCH] Some ssa-names get incorrectly marked as always_current.

When range_of_stmt invokes prefill_name to evaluate unvisited dependencies
it should not mark already visited names as always_current.

	PR tree-optimization/110875
	gcc/
	* gimple-range.cc (gimple_ranger::prefill_name): Only invoke
	cache-prefilling routine when the ssa-name has no global value.

	gcc/testsuite/
	* gcc.dg/pr110875.c: New.
---
 gcc/gimple-range.cc | 10 +++---
 gcc/testsuite/gcc.dg/pr110875.c | 34 +
 2 files changed, 41 insertions(+), 3 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr110875.c

diff --git a/gcc/gimple-range.cc b/gcc/gimple-range.cc
index 01173c58f02..13c3308d537 100644
--- a/gcc/gimple-range.cc
+++ b/gcc/gimple-range.cc
@@ -351,10 +351,14 @@ gimple_ranger::prefill_name (vrange &r, tree name)
  if (!gimple_range_op_handler::supported_p (stmt) && !is_a<gphi *> (stmt))
 return;
 
-  bool current;
   // If this op has not been processed yet, then push it on the stack
-  if (!m_cache.get_global_range (r, name, current))
-m_stmt_list.safe_push (name);
+  if (!m_cache.get_global_range (r, name))
+{
+  bool current;
+  // Set the global cache value and mark as always_current.
+  m_cache.get_global_range (r, name, current);
+  m_stmt_list.safe_push (name);
+}
 }
 
 // This routine will seed the global cache with most of the dependencies of
diff --git a/gcc/testsuite/gcc.dg/pr110875.c b/gcc/testsuite/gcc.dg/pr110875.c
new file mode 100644
index 000..4d6ecbca0c8
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr110875.c
@@ -0,0 +1,34 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-vrp2" } */
+
+void foo(void);
+static int a, b;
+static int *c = &a, *d;
+static unsigned e;
+static short f;
+static unsigned g(unsigned char h, char i) { return h + i; }
+int main() {
+d = &b;
+int *j = d;
+e = -27;
+for (; e > 18; e = g(e, 6)) {
+a = 0;
+for (; a != -3; a--) {
+if (0 != a ^ *j)
+for (; b; b++) f = -f;
+else if (*c) {
+foo();
+break;
+}
+if (!(((e) >= 235) && ((e) <= 4294967269))) {
+__builtin_unreachable();
+}
+b = 0;
+}
+}
+}
+
+
+/* { dg-final { scan-tree-dump-not "foo" "vrp2" } } */
+
+
-- 
2.41.0



Re: [PATCH 2/2] VR-VALUES: Rewrite test_for_singularity using range_op_handler

2023-09-07 Thread Andrew MacLeod via Gcc-patches



On 9/1/23 02:40, Andrew Pinski wrote:

On Fri, Aug 11, 2023 at 8:08 AM Andrew MacLeod via Gcc-patches
 wrote:


If this is only going to work with integers, you might want to check
that somewhere or switch to irange and int_range_max..

You can make it work with any kind (if you know op1 is a constant) by
simply doing

Value_Range op1_range (TREE_TYPE (op1));
get_global_range_query ()->range_of_expr (op1_range, op1);

That will convert trees to the appropriate range...  This is also true
for integer constants... but you can also just do the WI conversion like
you do.

The routine also gets confusing to read because it passes in op0 and
op1, but of course ranger uses op1 and op2 nomenclature, and it looks a
bit confusing :-P   I'd change the operands passed in to op1 and op2 if
we are rewriting the routine.

Ranger using the nomenclature of op1/op2 and gimple is inconsistent
with trees and other parts of GCC.
It seems like we have to live with this inconsistency now too.
Renaming things in this one function to op1/op2 might be ok but the
rest of the file uses op0/op1 too; most likely because it was
originally written before gimple.

I think it would be good to have this written in the coding style,
which way should we have it for new code; if we start at 0 or 1 for
operands. It might reduce differences based on who wrote which part
(and even to some extent when). I don't really care which one is
picked as long as we pick one.

Thanks,
Andrew Pinski

I certainly won't argue it would be good to be consistent, but of course 
it's quite prevalent. Perhaps we should rewrite vr-values.cc to change 
the terminology in one patch?


Long term some of it is likely to get absorbed into rangeops, and what 
isn't could/should be made vrange/irange aware...  no one has gotten to 
it yet. We could change the terminology as the routines are reworked too...


Andrew




[COMMITTED 2/2] tree-optimization/110918 - Phi analyzer - Initialize with a range instead of a tree.

2023-08-23 Thread Andrew MacLeod via Gcc-patches
Ranger's PHI analyzer currently only allows a single initializing value 
for a group. This patch changes that to use an initialization range, which is
cumulative of all integer constants, plus a single symbolic value.  
There were many times when there were multiple constants feeding into 
PHIs, and there is no reason to disqualify those from determining if 
there is a better starting range for a PHI.
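The cumulative-constants idea can be sketched like this (a hypothetical accumulator named `init_acc`, not the actual phi_analyzer code): every constant initializer widens one range, while at most one symbolic initializer is tolerated.

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical sketch of folding all constant PHI initializers into one
// cumulative range while allowing a single symbolic initial value.
struct init_acc
{
  bool have_range = false;
  int lo = 0, hi = 0;
  int symbolics = 0;  // number of non-constant initializers seen

  void add_constant (int c)
  {
    if (!have_range)
      {
	lo = hi = c;
	have_range = true;
      }
    else
      {
	lo = std::min (lo, c);
	hi = std::max (hi, c);
      }
  }

  void add_symbolic () { ++symbolics; }

  // The group only qualifies if at most one symbolic value feeds it.
  bool valid_p () const { return symbolics <= 1; }
};
```

With this shape, three constant initializers 5, -3 and 10 produce the starting range [-3, 10] rather than disqualifying the group.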


This patch also changes the way PHI groups are printed so they show up 
in the listing as they are encountered, rather than as a list at the 
end.  It was quite difficult to see what was going on when it simply 
dumped the groups at the end of processing.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From bd50bbfa95e51edf51392f147e9a860adb5f495e Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Thu, 17 Aug 2023 12:34:59 -0400
Subject: [PATCH 2/4] Phi analyzer - Initialize with range instead of a tree.

Ranger's PHI analyzer currently only allows a single initializer to a group.
This patch changes that to use an initialization range, which is
cumulative of all integer constants, plus a single symbolic value.  There is no other change to group functionality.

This patch also changes the way PHI groups are printed so they show up in the
listing as they are encountered, rather than as a list at the end.  It
was more difficult to see what was going on previously.

	PR tree-optimization/110918 - Initialize with range instead of a tree.
	gcc/
	* gimple-range-fold.cc (fold_using_range::range_of_phi): Tweak output.
	* gimple-range-phi.cc (phi_group::phi_group): Remove unused members.
	Initialize using a range instead of value and edge.
	(phi_group::calculate_using_modifier): Use initializer value and
	process for relations after trying for iteration convergence.
	(phi_group::refine_using_relation): Use initializer range.
	(phi_group::dump): Rework the dump output.
	(phi_analyzer::process_phi): Allow multiple constant initializers.
	Dump groups immediately as created.
	(phi_analyzer::dump): Tweak output.
	* gimple-range-phi.h (phi_group::phi_group): Adjust prototype.
	(phi_group::initial_value): Delete.
	(phi_group::refine_using_relation): Adjust prototype.
	(phi_group::m_initial_value): Delete.
	(phi_group::m_initial_edge): Delete.
	(phi_group::m_vr): Use int_range_max.
	* tree-vrp.cc (execute_ranger_vrp): Don't dump phi groups.

	gcc/testsuite/
	* gcc.dg/pr102983.c: Adjust output expectations.
	* gcc.dg/pr110918.c: New.
---
 gcc/gimple-range-fold.cc|   6 +-
 gcc/gimple-range-phi.cc | 186 
 gcc/gimple-range-phi.h  |   9 +-
 gcc/testsuite/gcc.dg/pr102983.c |   2 +-
 gcc/testsuite/gcc.dg/pr110918.c |  26 +
 gcc/tree-vrp.cc |   5 +-
 6 files changed, 129 insertions(+), 105 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr110918.c

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 7fa5a27cb12..8ebff7f5980 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -953,7 +953,7 @@ fold_using_range::range_of_phi (vrange &r, gphi *phi, fur_source &src)
 	{
 	  if (dump_file && (dump_flags & TDF_DETAILS))
 		{
-		  fprintf (dump_file, "   Loops range found for ");
+		  fprintf (dump_file, "Loops range found for ");
 		  print_generic_expr (dump_file, phi_def, TDF_SLIM);
 		  fprintf (dump_file, ": ");
 		  loop_range.dump (dump_file);
@@ -975,9 +975,9 @@ fold_using_range::range_of_phi (vrange &r, gphi *phi, fur_source &src)
 	{
 	  if (dump_file && (dump_flags & TDF_DETAILS))
 	{
-	  fprintf (dump_file, "   PHI group range found for ");
+	  fprintf (dump_file, "PHI GROUP query for ");
 	  print_generic_expr (dump_file, phi_def, TDF_SLIM);
-	  fprintf (dump_file, ": ");
+	  fprintf (dump_file, " found : ");
 	  g->range ().dump (dump_file);
 	  fprintf (dump_file, " and adjusted original range from :");
 	  r.dump (dump_file);
diff --git a/gcc/gimple-range-phi.cc b/gcc/gimple-range-phi.cc
index a94b90a4660..9884a0ebbb0 100644
--- a/gcc/gimple-range-phi.cc
+++ b/gcc/gimple-range-phi.cc
@@ -79,39 +79,33 @@ phi_analyzer &phi_analysis ()
 phi_group::phi_group (const phi_group &g)
 {
   m_group = g.m_group;
-  m_initial_value = g.m_initial_value;
-  m_initial_edge = g.m_initial_edge;
   m_modifier = g.m_modifier;
   m_modifier_op = g.m_modifier_op;
   m_vr = g.m_vr;
 }
 
-// Create a new phi_group with members BM, initialvalue INIT_VAL, modifier
-// statement MOD, and resolve values using query Q.
-// Calculate the range for the gropup if possible, otherwise set it to
-// VARYING.
+// Create a new phi_group with members BM, initial range INIT_RANGE, modifier
+// statement MOD on edge MOD_EDGE, and resolve values using query Q.  Calculate
+// the range for the group if possible, otherwise set it to VARYING.
 
-phi_group::phi_group (bitmap bm, tree init_val,

[COMMITTED 1/2] Phi analyzer - Do not create phi groups with a single phi.

2023-08-23 Thread Andrew MacLeod via Gcc-patches
Ranger's Phi Analyzer was creating a group consisting of a single PHI, 
which was problematic.  It didn't really help anything, and it prevented 
larger groups from including those PHIs and stopped some useful things 
from happening.


Bootstrapped on x86_64-pc-linux-gnu  with no regressions. Pushed.

Andrew
From 9855b3f0a2869d456f0ee34a94a1231eb6d44c4a Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 16 Aug 2023 13:23:06 -0400
Subject: [PATCH 1/4] Don't process phi groups with one phi.

The phi analyzer should not create a phi group containing a single phi.

	* gimple-range-phi.cc (phi_analyzer::operator[]): Return NULL if
	no group was created.
	(phi_analyzer::process_phi): Do not create groups of one phi node.
---
 gcc/gimple-range-phi.cc | 16 +++-
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/gcc/gimple-range-phi.cc b/gcc/gimple-range-phi.cc
index ffb4691d06b..a94b90a4660 100644
--- a/gcc/gimple-range-phi.cc
+++ b/gcc/gimple-range-phi.cc
@@ -344,9 +344,10 @@ phi_analyzer::operator[] (tree name)
  process_phi (as_a<gphi *> (SSA_NAME_DEF_STMT (name)));
   if (bitmap_bit_p (m_simple, v))
 	return  NULL;
-  // If m_simple bit isn't set, then process_phi allocated the table
-  // and should have a group.
-  gcc_checking_assert (v < m_tab.length ());
+ // If the m_simple bit isn't set and process_phi didn't allocate the table,
+ // no group was created, so return NULL.
+ if (v >= m_tab.length ())
+  return NULL;
 }
   return m_tab[v];
 }
@@ -363,6 +364,7 @@ phi_analyzer::process_phi (gphi *phi)
   unsigned x;
   m_work.truncate (0);
   m_work.safe_push (gimple_phi_result (phi));
+  unsigned phi_count = 1;
   bitmap_clear (m_current);
 
   // We can only have 2 externals: an initial value and a modifier.
@@ -407,6 +409,7 @@ phi_analyzer::process_phi (gphi *phi)
 	  gimple *arg_stmt = SSA_NAME_DEF_STMT (arg);
 	  if (arg_stmt && is_a (arg_stmt))
 		{
+		  phi_count++;
 		  m_work.safe_push (arg);
 		  continue;
 		}
@@ -430,9 +433,12 @@ phi_analyzer::process_phi (gphi *phi)
 	}
 }
 
-  // If there are no names in the group, we're done.
-  if (bitmap_empty_p (m_current))
+  // If there are less than 2 names, just return.  This PHI may be included
+  // by another PHI, making it simple or a group of one will prevent a larger
+  // group from being formed.
+  if (phi_count < 2)
 return;
+  gcc_checking_assert (!bitmap_empty_p (m_current));
 
   phi_group *g = NULL;
   if (cycle_p)
-- 
2.41.0



[COMMITTED] PR tree-optimization/111009 - Fix range-ops operator_addr.

2023-08-17 Thread Andrew MacLeod via Gcc-patches
operator_addr was simply calling fold_range() to implement op1_range, 
but it turns out op1_range needs to be more restrictive.


Take for example, from the PR:

   _13 = &dso->maj

when folding,  getting a value of 0 for op1 means dso->maj resolved to a 
value of [0,0].  fold_using_range::range_of_address will have processed 
the symbolics, or at least we know that op1 is 0.  Likewise if it is 
non-zero, we can also conclude the LHS is non-zero.


however, when working from the LHS, we cannot make the same 
conclusions.  GORI has no concept of symbolics, so knowing the expression is


[0,0] = &op1

we cannot conclude that op1 is also 0... in particular &dso->maj wouldn't 
be [0,0] unless dso was zero and maj was also at a zero offset.
Likewise if the LHS is [1,1] we can't be sure op1 is nonzero unless we 
know the type cannot wrap.


This patch simply implements op1_range with these rules instead of 
calling fold_range.
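The wrapping concern can be demonstrated outside the compiler with plain unsigned address arithmetic (a standalone illustration; `field_address` is a made-up helper, not GCC code): with wrapping arithmetic a non-zero base plus an offset can still produce address 0, and a zero base plus an offset produces a non-zero address, so neither direction follows from the LHS alone unless pointer overflow is undefined.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: model &base->field as base + offset in an unsigned
// address space, where arithmetic wraps by definition.
static std::uintptr_t field_address (std::uintptr_t base, std::uintptr_t offset)
{
  return base + offset;  // unsigned wraparound is well defined here
}
```

This is exactly why op1_range only narrows op1 to non-zero when the LHS is non-zero AND TYPE_OVERFLOW_UNDEFINED holds for the type.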


Bootstrapped on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From dc48d1d1d4458773f89f21b2f019f66ddf88f2e5 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Thu, 17 Aug 2023 11:13:14 -0400
Subject: [PATCH] Fix range-ops operator_addr.

Lack of symbolic information prevents op1_range from being able to draw
the same conclusions as fold_range can.

PR tree-optimization/111009
gcc/
* range-op.cc (operator_addr_expr::op1_range): Be more restrictive.

gcc/testsuite/
* gcc.dg/pr111009.c: New.
---
 gcc/range-op.cc | 12 ++-
 gcc/testsuite/gcc.dg/pr111009.c | 38 +
 2 files changed, 49 insertions(+), 1 deletion(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr111009.c

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 086c6c19735..268f6b6f025 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -4325,7 +4325,17 @@ operator_addr_expr::op1_range (irange &r, tree type,
   const irange &lhs,
   const irange &op2,
   relation_trio) const
 {
-  return operator_addr_expr::fold_range (r, type, lhs, op2);
+   if (empty_range_varying (r, type, lhs, op2))
+return true;
+
+  // Return a non-null pointer of the LHS type (passed in op2), but only
+  // if we can't overflow, otherwise a non-zero offset could wrap to zero.
+  // See PR 111009.
+  if (!contains_zero_p (lhs) && TYPE_OVERFLOW_UNDEFINED (type))
+r = range_nonzero (type);
+  else
+r.set_varying (type);
+  return true;
 }
 
 // Initialize any integral operators to the primary table
diff --git a/gcc/testsuite/gcc.dg/pr111009.c b/gcc/testsuite/gcc.dg/pr111009.c
new file mode 100644
index 000..3accd9ac063
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr111009.c
@@ -0,0 +1,38 @@
+/* PR tree-optimization/111009 */
+/* { dg-do run } */
+/* { dg-options "-O3 -fno-strict-overflow" } */
+
+struct dso {
+ struct dso * next;
+ int maj;
+};
+
+__attribute__((noipa)) static void __dso_id__cmp_(void) {}
+
+__attribute__((noipa))
+static int bug(struct dso * d, struct dso *dso)
+{
+ struct dso **p = &d;
+ struct dso *curr = 0;
+
+ while (*p) {
+  curr = *p;
+  // prevent null deref below
+  if (!dso) return 1;
+  if (dso == curr) return 1;
+
+  int *a = &curr->maj;
+  // null deref
+  if (!(a && *a)) __dso_id__cmp_();
+
+  p = &curr->next;
+ }
+ return 0;
+}
+
+__attribute__((noipa))
+int main(void) {
+struct dso d = { 0, 0, };
+bug(&d, 0);
+}
+
-- 
2.41.0



Re: [PATCH 2/2] VR-VALUES: Rewrite test_for_singularity using range_op_handler

2023-08-11 Thread Andrew MacLeod via Gcc-patches



On 8/11/23 05:51, Richard Biener wrote:

On Fri, Aug 11, 2023 at 11:17 AM Andrew Pinski via Gcc-patches
 wrote:

So it turns out there was a simplier way of starting to
improve VRP to start to fix PR 110131, PR 108360, and PR 108397.
That was rewrite test_for_singularity to use range_op_handler
and Value_Range.

This patch implements that and

OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.

I'm hoping Andrew/Aldy can have a look here.

Richard.


gcc/ChangeLog:

 * vr-values.cc (test_for_singularity): Add edge argument
 and rewrite using range_op_handler.
 (simplify_compare_using_range_pairs): Use Value_Range
 instead of value_range and update test_for_singularity call.

gcc/testsuite/ChangeLog:

 * gcc.dg/tree-ssa/vrp124.c: New test.
 * gcc.dg/tree-ssa/vrp125.c: New test.
---
  gcc/testsuite/gcc.dg/tree-ssa/vrp124.c | 44 +
  gcc/testsuite/gcc.dg/tree-ssa/vrp125.c | 44 +
  gcc/vr-values.cc   | 91 --
  3 files changed, 114 insertions(+), 65 deletions(-)
  create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/vrp124.c
  create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/vrp125.c

diff --git a/gcc/testsuite/gcc.dg/tree-ssa/vrp124.c 
b/gcc/testsuite/gcc.dg/tree-ssa/vrp124.c
new file mode 100644
index 000..6ccbda35d1b
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/vrp124.c
@@ -0,0 +1,44 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+/* Should be optimized to a == -100 */
+int g(int a)
+{
+  if (a == -100 || a >= 0)
+;
+  else
+return 0;
+  return a < 0;
+}
+
+/* Should optimize to a == 0 */
+int f(int a)
+{
+  if (a == 0 || a > 100)
+;
+  else
+return 0;
+  return a < 50;
+}
+
+/* Should be optimized to a == 0. */
+int f2(int a)
+{
+  if (a == 0 || a > 100)
+;
+  else
+return 0;
+  return a < 100;
+}
+
+/* Should optimize to a == 100 */
+int f1(int a)
+{
+  if (a < 0 || a == 100)
+;
+  else
+return 0;
+  return a > 50;
+}
+
+/* { dg-final { scan-tree-dump-not "goto " "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/vrp125.c 
b/gcc/testsuite/gcc.dg/tree-ssa/vrp125.c
new file mode 100644
index 000..f6c2f8e35f1
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/vrp125.c
@@ -0,0 +1,44 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+/* Should be optimized to a == -100 */
+int g(int a)
+{
+  if (a == -100 || a == -50 || a >= 0)
+;
+  else
+return 0;
+  return a < -50;
+}
+
+/* Should optimize to a == 0 */
+int f(int a)
+{
+  if (a == 0 || a == 50 || a > 100)
+;
+  else
+return 0;
+  return a < 50;
+}
+
+/* Should be optimized to a == 0. */
+int f2(int a)
+{
+  if (a == 0 || a == 50 || a > 100)
+;
+  else
+return 0;
+  return a < 25;
+}
+
+/* Should optimize to a == 100 */
+int f1(int a)
+{
+  if (a < 0 || a == 50 || a == 100)
+;
+  else
+return 0;
+  return a > 50;
+}
+
+/* { dg-final { scan-tree-dump-not "goto " "optimized" } } */
diff --git a/gcc/vr-values.cc b/gcc/vr-values.cc
index a4fddd62841..7004b0224bd 100644
--- a/gcc/vr-values.cc
+++ b/gcc/vr-values.cc
@@ -907,66 +907,30 @@ simplify_using_ranges::simplify_bit_ops_using_ranges
 a known value range VR.

 If there is one and only one value which will satisfy the
-   conditional, then return that value.  Else return NULL.
-
-   If signed overflow must be undefined for the value to satisfy
-   the conditional, then set *STRICT_OVERFLOW_P to true.  */
+   conditional on the EDGE, then return that value.
+   Else return NULL.  */

  static tree
  test_for_singularity (enum tree_code cond_code, tree op0,
- tree op1, const value_range *vr)
+ tree op1, Value_Range vr, bool edge)


VR should be a "vrange &".   This is the top level base class for all 
ranges of all types/kinds, and what we usually pass values around as if 
we want them to be any kind.   If this is integer only, we'd pass an 
'irange &'.


Value_Range is the opposite. It's the sink that contains one of each kind 
of range and can switch around between them as needed. You do not want 
to pass that by value!   The generic engine uses these so it can support 
floats, ints, pointers, whatever...
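The by-value warning above boils down to ordinary C++ object slicing, which a minimal standalone example shows (toy classes invented for illustration, not the real vrange hierarchy):

```cpp
#include <cassert>
#include <string>

// Toy stand-ins for a polymorphic range hierarchy.
struct base_range
{
  virtual std::string kind () const { return "base"; }
  virtual ~base_range () = default;
};

struct int_range_sink : base_range
{
  std::string kind () const override { return "irange"; }
};

// Passing by reference preserves the dynamic type...
static std::string kind_by_ref (const base_range &r) { return r.kind (); }

// ...while passing by value copy-constructs only the base subobject,
// slicing away the derived part.
static std::string kind_by_value (base_range r) { return r.kind (); }
```

A by-value parameter silently discards everything the derived "sink" carries, which is why the generic engine takes `vrange &` everywhere.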



  {
-  tree min = NULL;
-  tree max = NULL;
-
-  /* Extract minimum/maximum values which satisfy the conditional as it was
- written.  */
-  if (cond_code == LE_EXPR || cond_code == LT_EXPR)
+  /* This is already a singularity.  */
+  if (cond_code == NE_EXPR || cond_code == EQ_EXPR)
+return NULL;
+  auto range_op = range_op_handler (cond_code);
+  int_range<2> op1_range (TREE_TYPE (op0));
+  wide_int w = wi::to_wide (op1);
+  op1_range.set (TREE_TYPE (op1), w, w);


If this is only going to work with integers, you might want to check 
that somewhere or switch to irange and int_range_max..


You can make it work with any kind (if you know op1 is a constant) by 

[COMMITTED] Add operand ranges to op1_op2_relation API.

2023-08-03 Thread Andrew MacLeod via Gcc-patches
We're looking to add the unordered relations for floating point, and as 
a result, we can no longer determine the relation between op1 and op2 in 
a statement based purely on the LHS... we also need to know the type of 
the operands on the RHS.


This patch adjusts op1_op2_relation to fit the same mold as 
fold_range... ie, takes 3 vrange instead of just a LHS.


It also copies the functionality of the integral relations to the 
floating point counterparts, and when the unordered relations are added, 
those floating point routines can be adjusted to do the right thing.


This results in no current functional changes.

Bootstraps on  x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew
From de7ae277f497ed5b533af877fe26d8f133760f8b Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 1 Aug 2023 14:33:09 -0400
Subject: [PATCH 3/3] Add operand ranges to op1_op2_relation API.

With additional floating point relations in the pipeline, we can no
longer tell based on the LHS what the relation of X < Y is without knowing
the type of X and Y.

	* gimple-range-fold.cc (fold_using_range::range_of_range_op): Add
	ranges to the call to relation_fold_and_or.
	(fold_using_range::relation_fold_and_or): Add op1 and op2 ranges.
	(fur_source::register_outgoing_edges): Add op1 and op2 ranges.
	* gimple-range-fold.h (relation_fold_and_or): Adjust params.
	* gimple-range-gori.cc (gori_compute::compute_operand_range): Add
	a varying op1 and op2 to call.
	* range-op-float.cc (range_operator::op1_op2_relation): New defaults.
	(operator_equal::op1_op2_relation): New float version.
	(operator_not_equal::op1_op2_relation): Ditto.
	(operator_lt::op1_op2_relation): Ditto.
	(operator_le::op1_op2_relation): Ditto.
	(operator_gt::op1_op2_relation): Ditto.
	(operator_ge::op1_op2_relation) Ditto.
	* range-op-mixed.h (operator_equal::op1_op2_relation): New float
	prototype.
	(operator_not_equal::op1_op2_relation): Ditto.
	(operator_lt::op1_op2_relation): Ditto.
	(operator_le::op1_op2_relation): Ditto.
	(operator_gt::op1_op2_relation): Ditto.
	(operator_ge::op1_op2_relation): Ditto.
	* range-op.cc (range_op_handler::op1_op2_relation): Dispatch new
	variations.
	(range_operator::op1_op2_relation): Add extra params.
	(operator_equal::op1_op2_relation): Ditto.
	(operator_not_equal::op1_op2_relation): Ditto.
	(operator_lt::op1_op2_relation): Ditto.
	(operator_le::op1_op2_relation): Ditto.
	(operator_gt::op1_op2_relation): Ditto.
	(operator_ge::op1_op2_relation): Ditto.
	* range-op.h (range_operator): New prototypes.
	(range_op_handler): Ditto.
---
 gcc/gimple-range-fold.cc |  26 +---
 gcc/gimple-range-fold.h  |   3 +-
 gcc/gimple-range-gori.cc |   5 +-
 gcc/range-op-float.cc| 129 ++-
 gcc/range-op-mixed.h |  30 +++--
 gcc/range-op.cc  |  41 +
 gcc/range-op.h   |  15 -
 7 files changed, 216 insertions(+), 33 deletions(-)

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index ab2d996c4eb..7fa5a27cb12 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -700,7 +700,7 @@ fold_using_range::range_of_range_op (vrange &r,
    relation_trio::op1_op2 (rel)))
 	r.set_varying (type);
 	  if (irange::supports_p (type))
-	relation_fold_and_or (as_a <irange> (r), s, src);
+	relation_fold_and_or (as_a <irange> (r), s, src, range1, range2);
 	  if (lhs)
 	{
 	  if (src.gori ())
@@ -1103,7 +1103,8 @@ fold_using_range::range_of_ssa_name_with_loop_info (vrange &r, tree name,
 
 void
 fold_using_range::relation_fold_and_or (irange& lhs_range, gimple *s,
-	fur_source &src)
+	fur_source &src, vrange &op1,
+	vrange &op2)
 {
   // No queries or already folded.
   if (!src.gori () || !src.query ()->oracle () || lhs_range.singleton_p ())
@@ -1164,9 +1165,8 @@ fold_using_range::relation_fold_and_or (irange& lhs_range, gimple *s,
 return;
 
   int_range<2> bool_one = range_true ();
-
-  relation_kind relation1 = handler1.op1_op2_relation (bool_one);
-  relation_kind relation2 = handler2.op1_op2_relation (bool_one);
+  relation_kind relation1 = handler1.op1_op2_relation (bool_one, op1, op2);
+  relation_kind relation2 = handler2.op1_op2_relation (bool_one, op1, op2);
   if (relation1 == VREL_VARYING || relation2 == VREL_VARYING)
 return;
 
@@ -1201,7 +1201,8 @@ fold_using_range::relation_fold_and_or (irange& lhs_range, gimple *s,
 // Register any outgoing edge relations from a conditional branch.
 
 void
-fur_source::register_outgoing_edges (gcond *s, irange &lhs_range, edge e0, edge e1)
+fur_source::register_outgoing_edges (gcond *s, irange &lhs_range,
+				     edge e0, edge e1)
 {
   int_range<2> e0_range, e1_range;
   tree name;
@@ -1236,17 +1237,20 @@ fur_source::register_outgoing_edges (gcond *s, irange &lhs_range, edge e0, edge
   // if (a_2 < b_5)
   tree ssa1 = gimple_range_ssa_p (handler.operand1 ());
   tree ssa2 = gimple_range_ssa_p (handler.operand2 ());
+  Value_Range r1,r2;
   if (ssa1 && ssa2

[COMMITTED] Provide a routine for NAME == NAME relation.

2023-08-03 Thread Andrew MacLeod via Gcc-patches
We've been assuming x == x is always VREL_EQ in GORI, but this is not 
always going to be true with floating point.  Provide an API to return 
the relation.


Bootstraps on  x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew
From 430ff4f3e670e02185991190a5e2d90e61b39e07 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 2 Aug 2023 10:58:37 -0400
Subject: [PATCH 2/3] Provide a routine for NAME == NAME relation.

We've been assuming x == x is VREL_EQ in GORI, but this is not always going to
be true with floating point.  Provide an API to return the relation.

	* gimple-range-gori.cc (gori_compute::compute_operand1_range):
	Use identity relation.
	(gori_compute::compute_operand2_range): Ditto.
	* value-relation.cc (get_identity_relation): New.
	* value-relation.h (get_identity_relation): New prototype.
---
 gcc/gimple-range-gori.cc | 10 --
 gcc/value-relation.cc| 14 ++
 gcc/value-relation.h |  3 +++
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index 6dc15a0ce3f..c37e54bcf84 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -1142,7 +1142,10 @@ gori_compute::compute_operand1_range (vrange &r,
 
   // If op1 == op2, create a new trio for just this call.
   if (op1 == op2 && gimple_range_ssa_p (op1))
-	trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), VREL_EQ);
+	{
+	  relation_kind k = get_identity_relation (op1, op1_range);
+	  trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), k);
+	}
   if (!handler.calc_op1 (r, lhs, op2_range, trio))
 	return false;
 }
@@ -1218,7 +1221,10 @@ gori_compute::compute_operand2_range (vrange &r,
 
   // If op1 == op2, create a new trio for this stmt.
   if (op1 == op2 && gimple_range_ssa_p (op1))
-trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), VREL_EQ);
+{
+  relation_kind k = get_identity_relation (op1, op1_range);
+  trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), k);
+}
   // Intersect with range for op2 based on lhs and op1.
   if (!handler.calc_op2 (r, lhs, op1_range, trio))
 return false;
diff --git a/gcc/value-relation.cc b/gcc/value-relation.cc
index 7df2cd6e961..f2c668a0193 100644
--- a/gcc/value-relation.cc
+++ b/gcc/value-relation.cc
@@ -183,6 +183,20 @@ relation_transitive (relation_kind r1, relation_kind r2)
   return relation_kind (rr_transitive_table[r1][r2]);
 }
 
+// When operands of a statement are identical ssa_names, return the
+// appropriate relation between operands for NAME == NAME, given RANGE.
+//
+relation_kind
+get_identity_relation (tree name, vrange &range ATTRIBUTE_UNUSED)
+{
+  // Return VREL_UNEQ when it is supported for floats as appropriate.
+  if (frange::supports_p (TREE_TYPE (name)))
+return VREL_EQ;
+
+  // Otherwise return VREL_EQ.
+  return VREL_EQ;
+}
+
 // This vector maps a relation to the equivalent tree code.
 
 static const tree_code relation_to_code [VREL_LAST] = {
diff --git a/gcc/value-relation.h b/gcc/value-relation.h
index be6e277421b..f00f84f93b6 100644
--- a/gcc/value-relation.h
+++ b/gcc/value-relation.h
@@ -91,6 +91,9 @@ inline bool relation_equiv_p (relation_kind r)
 
 void print_relation (FILE *f, relation_kind rel);
 
+// Return relation for NAME == NAME with RANGE.
+relation_kind get_identity_relation (tree name, vrange &range);
+
 class relation_oracle
 {
 public:
-- 
2.40.1



[COMMITTED] Automatically set type in certain Value_Range routines.

2023-08-03 Thread Andrew MacLeod via Gcc-patches
When you use a Value_Range, you need to set its type first so it knows 
whether it will be an irange or an frange or whatever.


There are a few set routines which take a type, and you shouldn't need 
to set the type first in those cases.  For instance, set_varying() takes 
a type, so it seems pointless to specify the type twice.  i.e.


Value_Range r1 (TREE_TYPE (name));
r1.set_varying (TREE_TYPE (name));

This patch automatically sets the kind based on the type in the routines 
set_varying(), set_zero(), and set_nonzero(), all of which take a type 
parameter.  Now it is simply:


Value_Range r1;
r1.set_varying (TREE_TYPE (name));

Bootstraps on  x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew
From 1fbde4cc5fb7ad4b08f0f7ae1f247f9b35124f99 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 2 Aug 2023 17:46:58 -0400
Subject: [PATCH 1/3] Automatically set type in certain Value_Range routines.

Set routines which take a type shouldn't have to pre-set the type of the
underlying range as it is specified as a parameter already.

	* value-range.h (Value_Range::set_varying): Set the type.
	(Value_Range::set_zero): Ditto.
	(Value_Range::set_nonzero): Ditto.
---
 gcc/value-range.h | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/gcc/value-range.h b/gcc/value-range.h
index d8af6fca7d7..622b68863d2 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -679,15 +679,16 @@ public:
   tree type () { return m_vrange->type (); }
   bool varying_p () const { return m_vrange->varying_p (); }
   bool undefined_p () const { return m_vrange->undefined_p (); }
-  void set_varying (tree type) { m_vrange->set_varying (type); }
+  void set_varying (tree type) { init (type); m_vrange->set_varying (type); }
   void set_undefined () { m_vrange->set_undefined (); }
   bool union_ (const vrange &r) { return m_vrange->union_ (r); }
   bool intersect (const vrange &r) { return m_vrange->intersect (r); }
   bool contains_p (tree cst) const { return m_vrange->contains_p (cst); }
   bool singleton_p (tree *result = NULL) const
 { return m_vrange->singleton_p (result); }
-  void set_zero (tree type) { return m_vrange->set_zero (type); }
-  void set_nonzero (tree type) { return m_vrange->set_nonzero (type); }
+  void set_zero (tree type) { init (type); return m_vrange->set_zero (type); }
+  void set_nonzero (tree type)
+{ init (type); return m_vrange->set_nonzero (type); }
   bool nonzero_p () const { return m_vrange->nonzero_p (); }
   bool zero_p () const { return m_vrange->zero_p (); }
   wide_int lower_bound () const; // For irange/prange comparability.
-- 
2.40.1



Re: [PATCH V5 1/2] Add overflow API for plus minus mult on range

2023-08-03 Thread Andrew MacLeod via Gcc-patches

This is OK.


On 8/2/23 22:18, Jiufu Guo wrote:

Hi,

I would like to have a ping on this patch.

BR,
Jeff (Jiufu Guo)


Jiufu Guo  writes:


Hi,

As discussed in previous reviews, adding overflow APIs to range-op
would be useful. Those APIs could help to check if overflow happens
when operating between two 'range's, like: plus, minus, and mult.

Previous discussions are here:
https://gcc.gnu.org/pipermail/gcc-patches/2023-July/624067.html
https://gcc.gnu.org/pipermail/gcc-patches/2023-July/624701.html

Bootstrap & regtest pass on ppc64{,le} and x86_64.
Is this patch ok for trunk?

BR,
Jeff (Jiufu Guo)

gcc/ChangeLog:

* range-op-mixed.h (operator_plus::overflow_free_p): New declare.
(operator_minus::overflow_free_p): New declare.
(operator_mult::overflow_free_p): New declare.
* range-op.cc (range_op_handler::overflow_free_p): New function.
(range_operator::overflow_free_p): New default function.
(operator_plus::overflow_free_p): New function.
(operator_minus::overflow_free_p): New function.
(operator_mult::overflow_free_p): New function.
* range-op.h (range_op_handler::overflow_free_p): New declare.
(range_operator::overflow_free_p): New declare.
* value-range.cc (irange::nonnegative_p): New function.
(irange::nonpositive_p): New function.
* value-range.h (irange::nonnegative_p): New declare.
(irange::nonpositive_p): New declare.

---
  gcc/range-op-mixed.h |  11 
  gcc/range-op.cc  | 124 +++
  gcc/range-op.h   |   5 ++
  gcc/value-range.cc   |  12 +
  gcc/value-range.h|   2 +
  5 files changed, 154 insertions(+)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 6944742ecbc..42157ed9061 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -383,6 +383,10 @@ public:
  relation_kind rel) const final override;
void update_bitmask (irange , const irange ,
   const irange ) const final override;
+
+  virtual bool overflow_free_p (const irange &lh, const irange &rh,
+				relation_trio = TRIO_VARYING) const;
+
  private:
void wi_fold (irange , tree type, const wide_int _lb,
const wide_int _ub, const wide_int _lb,
@@ -446,6 +450,10 @@ public:
relation_kind rel) const final override;
void update_bitmask (irange , const irange ,
   const irange ) const final override;
+
+  virtual bool overflow_free_p (const irange &lh, const irange &rh,
+				relation_trio = TRIO_VARYING) const;
+
  private:
void wi_fold (irange , tree type, const wide_int _lb,
const wide_int _ub, const wide_int _lb,
@@ -525,6 +533,9 @@ public:
const REAL_VALUE_TYPE _lb, const REAL_VALUE_TYPE _ub,
const REAL_VALUE_TYPE _lb, const REAL_VALUE_TYPE _ub,
relation_kind kind) const final override;
+  virtual bool overflow_free_p (const irange &lh, const irange &rh,
+				relation_trio = TRIO_VARYING) const;
+
  };
  
  class operator_addr_expr : public range_operator

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index cb584314f4c..632b044331b 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -366,6 +366,22 @@ range_op_handler::op1_op2_relation (const vrange &lhs) const
  }
  }
  
+bool
+range_op_handler::overflow_free_p (const vrange &lh,
+				   const vrange &rh,
+				   relation_trio rel) const
+{
+  gcc_checking_assert (m_operator);
+  switch (dispatch_kind (lh, lh, rh))
+{
+  case RO_III:
+	return m_operator->overflow_free_p (as_a <irange> (lh),
+					    as_a <irange> (rh),
+					    rel);
+  default:
+   return false;
+}
+}
  
  // Convert irange bitmasks into a VALUE MASK pair suitable for calling CCP.
  
@@ -688,6 +704,13 @@ range_operator::op1_op2_relation_effect (irange &lhs_range ATTRIBUTE_UNUSED,

return false;
  }
  
+bool
+range_operator::overflow_free_p (const irange &, const irange &,
+				 relation_trio) const
+{
+  return false;
+}
+
  // Apply any known bitmask updates based on this operator.
  
  void

@@ -4311,6 +4334,107 @@ range_op_table::initialize_integral_ops ()
  
  }
  
+bool
+operator_plus::overflow_free_p (const irange &lh, const irange &rh,
+				relation_trio) const
+{
+  if (lh.undefined_p () || rh.undefined_p ())
+return false;
+
+  tree type = lh.type ();
+  if (TYPE_OVERFLOW_UNDEFINED (type))
+return true;
+
+  wi::overflow_type ovf;
+  signop sgn = TYPE_SIGN (type);
+  wide_int wmax0 = lh.upper_bound ();
+  wide_int wmax1 = rh.upper_bound ();
+  wi::add (wmax0, wmax1, sgn, );
+  if (ovf != wi::OVF_NONE)
+return false;
+
+  if (TYPE_UNSIGNED (type))
+return true;
+
+  

[COMMITTED] PR tree-optimization/110582 - fur_list should not use the range vector for non-ssa, operands.

2023-07-31 Thread Andrew MacLeod via Gcc-patches
The fold_using_range operand fetching mechanism has a variety of modes.  
The "normal" mechanism simply invokes the current or supplied 
range_query to satisfy fetching current range info for any ssa-names 
used during the evaluation of the statement.


I also added support for fur_list which allows a list of ranges to be 
supplied which is used to satisfy ssa-names as they appear in the stmt.  
Once the list is exhausted, then it reverts to using the range query.


This allows us to fold a stmt using whatever values we want.  i.e., for

a_2 = b_3 + c_4


I can call fold_stmt (r, stmt, [1,2],  [4,5])

and a_2 would be calculated using [1,2] for the first ssa_name, and 
[4,5] for the second encountered name.  This allows us to manually fold 
stmts when we desire.


There was a bug in the implementation of fur_list where it was using the 
supplied values for *any* encountered operand, not just ssa_names.


The PHI analyzer is the first consumer of the fur_list API, and was 
tripping over this.



     [local count: 1052266993]:
  # a_lsm.12_29 = PHI 
  iftmp.1_15 = 3 / a_lsm.12_29;

   [local count: 1063004408]:
  # iftmp.1_11 = PHI 
  # ivtmp_2 = PHI 
  ivtmp_36 = ivtmp_2 - 1;
  if (ivtmp_36 != 0)
    goto ; [98.99%]
  else
    goto ; [1.01%]

It determined that the initial value of iftmp.1_11 was [2, 2] (from the 
edge 2->4), and that the only modifying statement is

iftmp.1_15 = 3 / a_lsm.12_29;

One of the things it tries to do is determine if a few iterations 
feeding the initial value and combining it with the result of the 
statement converge, thus providing a complete initial range.  It uses 
fold_range, supplying the value for the ssa-operand directly, but 
tripped over the bug.


So for the first iteration, instead of calculating _15 = 3 / [2,2] 
and coming up with [1,1], it calculated [2,2] / VARYING and came up 
with [-2, 2].  The next pass of the iteration checker then erroneously 
calculated [-2,2] / VARYING, the result was [-2,2], convergence was 
achieved, and the initial value of the PHI was incorrectly set to 
[-2, 2].  And of course bad things happened.


This patch fixes fur_list::get_operand to check for an ssa-name before 
pulling a value from the supplied list.  With this, no particularly 
good starting value for the PHI node can be determined.


Andrew

From 914fa35a7f7db76211ca259606578193773a254e Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Mon, 31 Jul 2023 10:08:51 -0400
Subject: [PATCH] fur_list should not use the range vector for non-ssa
 operands.

	gcc/
	PR tree-optimization/110582
	* gimple-range-fold.cc (fur_list::get_operand): Do not use the
	range vector for non-ssa names.

	gcc/testsuite/
	* gcc.dg/pr110582.c: New.
---
 gcc/gimple-range-fold.cc|  3 ++-
 gcc/testsuite/gcc.dg/pr110582.c | 18 ++
 2 files changed, 20 insertions(+), 1 deletion(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr110582.c

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index d07246008f0..ab2d996c4eb 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -262,7 +262,8 @@ fur_list::fur_list (unsigned num, vrange **list, range_query *q)
 bool
 fur_list::get_operand (vrange , tree expr)
 {
-  if (m_index >= m_limit)
+  // Do not use the vector for non-ssa-names, or if it has been emptied.
+  if (TREE_CODE (expr) != SSA_NAME || m_index >= m_limit)
 return m_query->range_of_expr (r, expr);
   r = *m_list[m_index++];
   gcc_checking_assert (range_compatible_p (TREE_TYPE (expr), r.type ()));
diff --git a/gcc/testsuite/gcc.dg/pr110582.c b/gcc/testsuite/gcc.dg/pr110582.c
new file mode 100644
index 000..ae0650d3ae7
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr110582.c
@@ -0,0 +1,18 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-vrp2" } */
+
+int a, b;
+int main() {
+  char c = a = 0;
+  for (; c != -3; c++) {
+int d = 2;
+d ^= 2 && a;
+b = a == 0 ? d : d / a;
+a = b;
+  }
+  for (; (1 + 95 << 24) + b + 1 + 686658714L + b - 2297271457;)
+;
+}
+
+/* { dg-final { scan-tree-dump-not "Folding predicate" "vrp2" } } */
+
-- 
2.40.1



[COMMITTED] Remove value_query, push into sub class.

2023-07-28 Thread Andrew MacLeod via Gcc-patches
When we first introduced range_query, we provided a base class for 
constants rather than range queries.  We then inherited from that and 
modified the value queries for a range-specific engine.  At the time, 
we figured there would be other consumers of the value_query class.


When all the dust settled, it turned out that substitute_and_fold is the 
only consumer, and all the other places we perceived there to be value 
clients actually use substitute_and_fold.


This patch simplifies everything by providing only a range-query class, 
and moving the old value_query functionality into substitute_and_fold, 
the only place that uses it.


Bootstrapped on x86_64-pc-linux-gnu with no regressions. Pushed.

Andrew

From 619641397a558bf65c24b99a4c52878bd940fcbe Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sun, 16 Jul 2023 12:46:00 -0400
Subject: [PATCH 2/3] Remove value_query, push into sub class

	* tree-ssa-propagate.cc (substitute_and_fold_engine::value_on_edge):
	Move from value-query.cc.
	(substitute_and_fold_engine::value_of_stmt): Ditto.
	(substitute_and_fold_engine::range_of_expr): New.
	* tree-ssa-propagate.h (substitute_and_fold_engine): Inherit from
	range_query.  New prototypes.
	* value-query.cc (value_query::value_on_edge): Relocate.
	(value_query::value_of_stmt): Ditto.
	* value-query.h (class value_query): Remove.
	(class range_query): Remove base class.  Adjust prototypes.
---
 gcc/tree-ssa-propagate.cc | 28 
 gcc/tree-ssa-propagate.h  |  8 +++-
 gcc/value-query.cc| 21 -
 gcc/value-query.h | 30 --
 4 files changed, 39 insertions(+), 48 deletions(-)

diff --git a/gcc/tree-ssa-propagate.cc b/gcc/tree-ssa-propagate.cc
index 174d19890f9..cb68b419b8c 100644
--- a/gcc/tree-ssa-propagate.cc
+++ b/gcc/tree-ssa-propagate.cc
@@ -532,6 +532,34 @@ struct prop_stats_d
 
 static struct prop_stats_d prop_stats;
 
+// range_query default methods to drive from a value_of_expr () rather than
+// range_of_expr.
+
+tree
+substitute_and_fold_engine::value_on_edge (edge, tree expr)
+{
+  return value_of_expr (expr);
+}
+
+tree
+substitute_and_fold_engine::value_of_stmt (gimple *stmt, tree name)
+{
+  if (!name)
+name = gimple_get_lhs (stmt);
+
+  gcc_checking_assert (!name || name == gimple_get_lhs (stmt));
+
+  if (name)
+return value_of_expr (name);
+  return NULL_TREE;
+}
+
+bool
+substitute_and_fold_engine::range_of_expr (vrange &, tree, gimple *)
+{
+  return false;
+}
+
 /* Replace USE references in statement STMT with the values stored in
PROP_VALUE. Return true if at least one reference was replaced.  */
 
diff --git a/gcc/tree-ssa-propagate.h b/gcc/tree-ssa-propagate.h
index be4cb457873..29bde37add9 100644
--- a/gcc/tree-ssa-propagate.h
+++ b/gcc/tree-ssa-propagate.h
@@ -96,11 +96,17 @@ class ssa_propagation_engine
   void simulate_block (basic_block);
 };
 
-class substitute_and_fold_engine : public value_query
+class substitute_and_fold_engine : public range_query
 {
  public:
   substitute_and_fold_engine (bool fold_all_stmts = false)
 : fold_all_stmts (fold_all_stmts) { }
+
+  virtual tree value_of_expr (tree expr, gimple * = NULL) = 0;
+  virtual tree value_on_edge (edge, tree expr) override;
+  virtual tree value_of_stmt (gimple *, tree name = NULL) override;
+  virtual bool range_of_expr (vrange , tree expr, gimple * = NULL);
+
   virtual ~substitute_and_fold_engine (void) { }
   virtual bool fold_stmt (gimple_stmt_iterator *) { return false; }
 
diff --git a/gcc/value-query.cc b/gcc/value-query.cc
index adef93415b7..0870d6c60a6 100644
--- a/gcc/value-query.cc
+++ b/gcc/value-query.cc
@@ -33,27 +33,6 @@ along with GCC; see the file COPYING3.  If not see
 #include "gimple-range.h"
 #include "value-range-storage.h"
 
-// value_query default methods.
-
-tree
-value_query::value_on_edge (edge, tree expr)
-{
-  return value_of_expr (expr);
-}
-
-tree
-value_query::value_of_stmt (gimple *stmt, tree name)
-{
-  if (!name)
-name = gimple_get_lhs (stmt);
-
-  gcc_checking_assert (!name || name == gimple_get_lhs (stmt));
-
-  if (name)
-return value_of_expr (name);
-  return NULL_TREE;
-}
-
 // range_query default methods.
 
 bool
diff --git a/gcc/value-query.h b/gcc/value-query.h
index d10c3eac1e2..429446b32eb 100644
--- a/gcc/value-query.h
+++ b/gcc/value-query.h
@@ -37,28 +37,6 @@ along with GCC; see the file COPYING3.  If not see
 // Proper usage of the correct query in passes will enable other
 // valuation mechanisms to produce more precise results.
 
-class value_query
-{
-public:
-  value_query () { }
-  // Return the singleton expression for EXPR at a gimple statement,
-  // or NULL if none found.
-  virtual tree value_of_expr (tree expr, gimple * = NULL) = 0;
-  // Return the singleton expression for EXPR at an edge, or NULL if
-  // none found.
-  virtual tree value_on_edge (edge, tree expr);
-  // Return the singleton expression for the LHS of 

[COMMITTED] Add a merge_range to ssa_cache and use it.

2023-07-28 Thread Andrew MacLeod via Gcc-patches

This adds some tweaks to the ssa-range cache.

1)  Adds a new merge_range which works like set_range, except if there 
is already a value, the two values are merged via intersection and 
stored.  This avoids having to check if there is a value, load it, 
intersect it, then store that in the client.  There is one usage pattern 
(but more to come) in the code base; change it to use the new routine.


2)  The range_of_expr() method in ssa_cache does not set the stmt to a 
default of NULL.  Correct that oversight.


3)  The method empty_p() is added to the ssa_lazy_cache class so we can 
detect if the lazy cache has any active elements in it or not.


Bootstrapped on x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew

From 72fb44ca53fda15024e0c272052b74b1f32735b1 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 28 Jul 2023 11:00:57 -0400
Subject: [PATCH 3/3] Add a merge_range to ssa_cache and use it.  add empty_p
 and param tweaks.

	* gimple-range-cache.cc (ssa_cache::merge_range): New.
	(ssa_lazy_cache::merge_range): New.
	* gimple-range-cache.h (class ssa_cache): Adjust protoypes.
	(class ssa_lazy_cache): Ditto.
	* gimple-range.cc (assume_query::calculate_op): Use merge_range.
---
 gcc/gimple-range-cache.cc | 45 +++
 gcc/gimple-range-cache.h  |  6 --
 gcc/gimple-range.cc   |  6 ++
 3 files changed, 51 insertions(+), 6 deletions(-)

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 52165d2405b..5b74681b61a 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -605,6 +605,32 @@ ssa_cache::set_range (tree name, const vrange &r)
   return m != NULL;
 }
 
+// If NAME has a range, intersect it with R, otherwise set it to R.
+// Return TRUE if there was already a range set, otherwise false.
+
+bool
+ssa_cache::merge_range (tree name, const vrange &r)
+{
+  unsigned v = SSA_NAME_VERSION (name);
+  if (v >= m_tab.length ())
+m_tab.safe_grow_cleared (num_ssa_names + 1);
+
+  vrange_storage *m = m_tab[v];
+  if (m)
+{
+  Value_Range curr (TREE_TYPE (name));
+  m->get_vrange (curr, TREE_TYPE (name));
+  curr.intersect (r);
+  if (m->fits_p (curr))
+	m->set_vrange (curr);
+  else
+	m_tab[v] = m_range_allocator->clone (curr);
+}
+  else
+m_tab[v] = m_range_allocator->clone (r);
+  return m != NULL;
+}
+
 // Set the range for NAME to R in the ssa cache.
 
 void
@@ -689,6 +715,25 @@ ssa_lazy_cache::set_range (tree name, const vrange &r)
   return false;
 }
 
+// If NAME has a range, intersect it with R, otherwise set it to R.
+// Return TRUE if there was already a range set, otherwise false.
+
+bool
+ssa_lazy_cache::merge_range (tree name, const vrange &r)
+{
+  unsigned v = SSA_NAME_VERSION (name);
+  if (!bitmap_set_bit (active_p, v))
+{
+  // There is already an entry, simply merge it.
+  gcc_checking_assert (v < m_tab.length ());
+  return ssa_cache::merge_range (name, r);
+}
+  if (v >= m_tab.length ())
+m_tab.safe_grow (num_ssa_names + 1);
+  m_tab[v] = m_range_allocator->clone (r);
+  return false;
+}
+
 // Return TRUE if NAME has a range, and return it in R.
 
 bool
diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index a0f436b5723..bbb9b18a10c 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -61,11 +61,11 @@ public:
   virtual bool has_range (tree name) const;
   virtual bool get_range (vrange &r, tree name) const;
   virtual bool set_range (tree name, const vrange &r);
+  virtual bool merge_range (tree name, const vrange &r);
   virtual void clear_range (tree name);
   virtual void clear ();
   void dump (FILE *f = stderr);
-  virtual bool range_of_expr (vrange &r, tree expr, gimple *stmt);
-
+  virtual bool range_of_expr (vrange &r, tree expr, gimple *stmt = NULL);
 protected:
   vec m_tab;
   vrange_allocator *m_range_allocator;
@@ -80,8 +80,10 @@ class ssa_lazy_cache : public ssa_cache
 public:
   inline ssa_lazy_cache () { active_p = BITMAP_ALLOC (NULL); }
   inline ~ssa_lazy_cache () { BITMAP_FREE (active_p); }
+  inline bool empty_p () const { return bitmap_empty_p (active_p); }
   virtual bool has_range (tree name) const;
   virtual bool set_range (tree name, const vrange &r);
+  virtual bool merge_range (tree name, const vrange &r);
   virtual bool get_range (vrange &r, tree name) const;
   virtual void clear_range (tree name);
   virtual void clear ();
diff --git a/gcc/gimple-range.cc b/gcc/gimple-range.cc
index 01e62d3ff39..01173c58f02 100644
--- a/gcc/gimple-range.cc
+++ b/gcc/gimple-range.cc
@@ -809,10 +809,8 @@ assume_query::calculate_op (tree op, gimple *s, vrange &lhs, fur_source &src)
   if (m_gori.compute_operand_range (op_range, s, lhs, op, src)
   && !op_range.varying_p ())
 {
-  Value_Range range (TREE_TYPE (op));
-  if (global.get_range (range, op))
-	op_range.intersect (range);
-  global.set_range (op, op_range);
+  // Set the global range, merging if there is already a r

[COMMITTED] PR tree-optimization/110205 - Fix some warnings

2023-07-28 Thread Andrew MacLeod via Gcc-patches

This patch simply fixes the code up a little to remove potential warnings.

Bootstrapped on x86_64-pc-linux-gnu with no regressions. Pushed.

Andrew

From 7905c071c35070fff3397b1e24f140c128c08e64 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Mon, 10 Jul 2023 13:58:22 -0400
Subject: [PATCH 1/3] Fix some warnings

	PR tree-optimization/110205
	* gimple-range-cache.h (ranger_cache::m_estimate): Delete.
	* range-op-mixed.h (operator_bitwise_xor::op1_op2_relation_effect):
	Add final override.
	* range-op.cc (operator_lshift): Add missing final overrides.
	(operator_rshift): Ditto.
---
 gcc/gimple-range-cache.h |  1 -
 gcc/range-op-mixed.h |  2 +-
 gcc/range-op.cc  | 44 ++--
 3 files changed, 21 insertions(+), 26 deletions(-)

diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index 93d16294d2e..a0f436b5723 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -137,7 +137,6 @@ private:
   void exit_range (vrange &r, tree expr, basic_block bb, enum rfd_mode);
   bool edge_range (vrange &r, edge e, tree name, enum rfd_mode);
 
-  phi_analyzer *m_estimate;
   vec<basic_block> m_workback;
   class update_list *m_update;
 };
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 3cb904f9d80..b623a88cc71 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -574,7 +574,7 @@ public:
 	tree type,
 	const irange _range,
 	const irange _range,
-	relation_kind rel) const;
+	relation_kind rel) const final override;
   void update_bitmask (irange , const irange ,
 		   const irange ) const final override;
 private:
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 615e5fe0036..19fdff0eb64 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -2394,22 +2394,21 @@ class operator_lshift : public cross_product_operator
   using range_operator::fold_range;
   using range_operator::op1_range;
 public:
-  virtual bool op1_range (irange &r, tree type,
-			  const irange &lhs,
-			  const irange &op2,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual bool fold_range (irange &r, tree type,
-			   const irange &op1,
-			   const irange &op2,
-			   relation_trio rel = TRIO_VARYING) const;
+  virtual bool op1_range (irange &r, tree type, const irange &lhs,
+			  const irange &op2, relation_trio rel = TRIO_VARYING)
+    const final override;
+  virtual bool fold_range (irange &r, tree type, const irange &op1,
+			   const irange &op2, relation_trio rel = TRIO_VARYING)
+    const final override;
 
   virtual void wi_fold (irange &r, tree type,
 			const wide_int &lh_lb, const wide_int &lh_ub,
-			const wide_int &rh_lb, const wide_int &rh_ub) const;
+			const wide_int &rh_lb,
+			const wide_int &rh_ub) const final override;
   virtual bool wi_op_overflows (wide_int &res,
 				tree type,
 				const wide_int &,
-				const wide_int &) const;
+				const wide_int &) const final override;
   void update_bitmask (irange &r, const irange &lh,
 		       const irange &rh) const final override
     { update_known_bitmask (r, LSHIFT_EXPR, lh, rh); }
@@ -2421,27 +2420,24 @@ class operator_rshift : public cross_product_operator
   using range_operator::op1_range;
   using range_operator::lhs_op1_relation;
 public:
-  virtual bool fold_range (irange &r, tree type,
-			   const irange &op1,
-			   const irange &op2,
-			   relation_trio rel = TRIO_VARYING) const;
+  virtual bool fold_range (irange &r, tree type, const irange &op1,
+			   const irange &op2, relation_trio rel = TRIO_VARYING)
+   const final override;
   virtual void wi_fold (irange &r, tree type,
 			const wide_int &lh_lb,
 			const wide_int &lh_ub,
 			const wide_int &rh_lb,
-			const wide_int &rh_ub) const;
+			const wide_int &rh_ub) const final override;
   virtual bool wi_op_overflows (wide_int &res,
 				tree type,
 				const wide_int &op1,
-				const wide_int &op2) const;
-  virtual bool op1_range (irange &, tree type,
-			  const irange &lhs,
-			  const irange &op2,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual relation_kind lhs_op1_relation (const irange &lhs,
-					  const irange &op1,
-					  const irange &op2,
-					  relation_kind rel) const;
+				const wide_int &op2) const final override;
+  virtual bool op1_range (irange &, tree type, const irange &lhs,
+			  const irange &op2, relation_trio rel = TRIO_VARYING)
+    const final override;
+  virtual relation_kind lhs_op1_relation (const irange &lhs, const irange &op1,
+					  const irange &op2, relation_kind rel)
+    const final override;
   void update_bitmask (irange &r, const irange &lh,
 		       const irange &rh) const final override
     { update_known_bitmask (r, RSHIFT_EXPR, lh, rh); }
-- 
2.40.1



Re: [PATCH] [GCC13] PR tree-optimization/110315 - Add auto-resizing capability to irange's

2023-07-24 Thread Andrew MacLeod via Gcc-patches



On 7/24/23 12:49, Richard Biener wrote:



Am 24.07.2023 um 16:40 schrieb Andrew MacLeod via Gcc-patches 
:

Aldy has ported his irange reduction patch to GCC 13.  It resolves this PR.

I have bootstrapped it and it passes regression tests.

Do we want to check it into the GCC 13 branch?  The patch has all his comments 
in it.

Please wait until the branch is open again, then yes, I think we want this 
there.  Was there any work reducing the recursion depth that’s worth 
backporting as well?


I think most of the recursion depth work is in GCC 13.  I'm looking at a 
few more tweaks for GCC 14, but they are fairly minor at the moment.  
The reduction of the stack size was the huge win.




[PATCH] [GCC13] PR tree-optimization/110315 - Add auto-resizing capability to irange's

2023-07-24 Thread Andrew MacLeod via Gcc-patches

Aldy has ported his irange reduction patch to GCC 13.  It resolves this PR.

I have bootstrapped it and it passes regression tests.

Do we want to check it into the GCC 13 branch?  The patch has all his 
comments in it.


Andrew
From 777aa930b106fea2dd6ed9fe22b42a2717f1472d Mon Sep 17 00:00:00 2001
From: Aldy Hernandez 
Date: Mon, 15 May 2023 12:25:58 +0200
Subject: [PATCH] [GCC13] Add auto-resizing capability to irange's [PR109695]

Backport the following from trunk.

	Note that the patch has been adapted to trees.

	The numbers for various sub-ranges on GCC13 are:
		< 2> =  64 bytes, -3.02% for VRP.
		< 3> =  80 bytes, -2.67% for VRP.
		< 8> = 160 bytes, -2.46% for VRP.
		<16> = 288 bytes, -2.40% for VRP.


We can now have int_range<N, RESIZABLE=false> for automatically
resizable ranges.  int_range_max is now int_range<3, true>
for a 69X reduction in size from current trunk, and 6.9X reduction from
GCC12.  This incurs a 5% performance penalty for VRP that is more than
covered by our > 13% improvements recently.


int_range_max is the temporary range object we use in the ranger for
integers.  With the conversion to wide_int, this structure bloated up
significantly because wide_ints are huge (80 bytes a piece) and are
about 10 times as big as a plain tree.  Since the temporary object
requires 255 sub-ranges, that's 255 * 80 * 2, plus the control word.
This means the structure grew from 4112 bytes to 40912 bytes.

This patch adds the ability to resize ranges as needed, defaulting to
no resizing, while int_range_max now defaults to 3 sub-ranges (instead
of 255) and grows to 255 when the range being calculated does not fit.

For example:

int_range<1> foo;	// 1 sub-range with no resizing.
int_range<5> foo;	// 5 sub-ranges with no resizing.
int_range<5, true> foo;	// 5 sub-ranges with resizing.

I ran some tests and found that 3 sub-ranges cover 99% of cases, so
I've set the int_range_max default to that:

	typedef int_range<3, /*RESIZABLE=*/true> int_range_max;

We don't bother growing incrementally, since the default covers most
cases and we have a 255 hard-limit.  This hard limit could be reduced
to 128, since my tests never saw a range needing more than 124, but we
could do that as a follow-up if needed.
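To make the mechanism concrete, here is a minimal self-contained sketch of the resize-on-demand idea.  Everything in it is hypothetical (names, layout, the `int` pairs) and is not the actual irange implementation; it only mirrors the shape described above: N inline slots, switching once to a heap buffer capped at the 255-pair hard limit.

```cpp
#include <cassert>
#include <cstring>

// Hypothetical sketch, not GCC code: a holder that starts with N inline
// sub-range pairs and grows to a heap buffer only when a computation
// needs more, mirroring the int_range<3, true> idea.
template <unsigned N, bool RESIZABLE = false>
class sketch_range
{
public:
  sketch_range () : m_store (m_inline), m_capacity (N), m_used (0) {}
  ~sketch_range () { if (m_store != m_inline) delete[] m_store; }

  // Grow straight to the hard limit on demand; a no-op unless RESIZABLE.
  void maybe_resize (unsigned needed)
  {
    if (!RESIZABLE || needed <= m_capacity)
      return;
    const unsigned hard_limit = 255;	// mirrors the 255-pair cap
    int *bigger = new int[hard_limit * 2];
    std::memcpy (bigger, m_store, m_used * 2 * sizeof (int));
    if (m_store != m_inline)
      delete[] m_store;
    m_store = bigger;
    m_capacity = hard_limit;
  }

  void push_pair (int lb, int ub)
  {
    maybe_resize (m_used + 1);
    assert (m_used < m_capacity);	// non-resizable ranges must fit
    m_store[m_used * 2] = lb;
    m_store[m_used * 2 + 1] = ub;
    ++m_used;
  }

  unsigned num_pairs () const { return m_used; }
  unsigned capacity () const { return m_capacity; }

private:
  int m_inline[N * 2];	// the cheap common case lives inline
  int *m_store;
  unsigned m_capacity;
  unsigned m_used;
};
```

The point of growing straight to the hard limit, rather than incrementally, is the one made above: the inline default already covers almost all cases, so a range that outgrows it once is assumed to be one of the rare large ones.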

With 3-subranges, int_range_max is now 592 bytes versus 40912 for
trunk, and versus 4112 bytes for GCC12!  The penalty is 5.04% for VRP
and 3.02% for threading, with no noticeable change in overall
compilation (0.27%).  This is more than covered by our 13.26%
improvements for the legacy removal + wide_int conversion.

I think this approach is a good alternative, while providing us with
flexibility going forward.  For example, we could try defaulting to
8 sub-ranges for a noticeable improvement in VRP.  We could also use
large sub-ranges for switch analysis to avoid resizing.

Another approach I tried was always resizing.  With this, we could
drop the whole int_range nonsense, and have irange just hold a
resizable range.  This simplified things, but incurred a 7% penalty on
ipa_cp.  This was hard to pinpoint, and I'm not entirely convinced
this wasn't some artifact of valgrind.  However, until we're sure,
let's avoid massive changes, especially since IPA changes are coming
up.

For the curious, a particular hot spot for IPA in this area was:

ipcp_vr_lattice::meet_with_1 (const value_range *other_vr)
{
...
...
  value_range save (m_vr);
  m_vr.union_ (*other_vr);
  return m_vr != save;
}

The problem isn't the resizing (since we do that at most once) but the
fact that for some functions with lots of callers we end up with a huge
range that gets copied and compared for every meet operation.  Maybe
the IPA algorithm could be adjusted somehow?
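Purely for illustration (nothing below is GCC code): if the union operation itself reported whether it changed anything, the meet would not need the save/copy/compare round trip at all.  A toy sketch, with a set of ints standing in for a range:

```cpp
#include <cassert>
#include <set>

// Toy stand-in for a range: a set of values.  The interesting part is
// that union_ itself reports whether it changed anything, so a caller
// never needs `save = *this; union_ (...); return *this != save;`.
struct toy_range
{
  std::set<int> vals;

  bool union_ (const toy_range &other)
  {
    bool changed = false;
    for (int v : other.vals)
      changed |= vals.insert (v).second;	// .second: newly inserted?
    return changed;
  }
};
```

Whether something like this is feasible for the real lattice is exactly the open question above; the sketch only shows why the copy-and-compare is avoidable in principle.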

Anywhooo... for now there is nothing to worry about, since value_range
still has 2 subranges and is not resizable.  But we should probably
think about what, if anything, we want to do here, as I envision IPA using
infinite ranges here (well, int_range_max) and handling frange's, etc.

gcc/ChangeLog:

	PR tree-optimization/109695
	* value-range.cc (irange::operator=): Resize range.
	(irange::union_): Same.
	(irange::intersect): Same.
	(irange::invert): Same.
	(int_range_max): Default to 3 sub-ranges and resize as needed.
	* value-range.h (irange::maybe_resize): New.
	(~int_range): New.
	(int_range::int_range): Adjust for resizing.
	(int_range::operator=): Same.
---
 gcc/value-range-storage.h |  2 +-
 gcc/value-range.cc        | 15 ++
 gcc/value-range.h         | 96 +++
 3 files changed, 83 insertions(+), 30 deletions(-)

diff --git a/gcc/value-range-storage.h b/gcc/value-range-storage.h
index 6da377ebd2e..1ed6f1ccd61 100644
--- a/gcc/value-range-storage.h
+++ b/gcc/value-range-storage.h
@@ -184,7 +184,7 @@ vrange_allocator::alloc_irange (unsigned num_pairs)
   // Allocate the irange and required memory for the vector.
   void *r = alloc (sizeof (irange));
  tree *mem = static_cast <tree *> (alloc (nbytes));
-  return new (r) irange (mem, num_pairs);
+  return new 

Re: [PATCH V4] Optimize '(X - N * M) / N' to 'X / N - M' if valid

2023-07-17 Thread Andrew MacLeod via Gcc-patches


On 7/17/23 09:45, Jiufu Guo wrote:



Should we decide we would like it in general, it wouldn't be hard to add to
irange.  wi_fold() currently returns null; it could easily return a bool
indicating if an overflow happened, and wi_fold_in_parts and fold_range would
simply OR together the results of the component wi_fold() calls.  It would
require updating/auditing a number of range-op entries and adding an
overflowed_p() query to irange.

Ah, yeah - the folding APIs would be a good fit I guess.  I was
also looking to have the "new" helpers to be somewhat consistent
with the ranger API.

So if we had a fold_range overload with either an output argument
or a flag that makes it return false on possible overflow that
would work I guess?  Since we have a virtual class setup we
might be able to provide a default failing method and implement
workers for plus and mult (as needed for this patch) as the need
arises?

Thanks for your comments!
Here is a concern.  The patterns in match.pd may be supported by
'vrp' passes. At that time, the range info would be computed (via
the value-range machinery) and cached for each SSA_NAME. In the
patterns, when range_of_expr is called for a capture, the range
info is retrieved from the cache, and no need to fold_range again.
This means the overflow info may also need to be cached together
with other range info.  There may be additional memory and time
cost.



I've been thinking about this a little bit, and how to make the info 
available in a useful way.


I wonder if maybe we just add another entry point to range-ops that 
looks a bit like fold_range...


Attached is an (untested) patch which adds overflow_free_p (op1, op2, 
relation) to range-ops.  It defaults to returning false.  If you want 
to implement it for, say, plus, you'd add to operator_plus in 
range-ops.cc something like


operator_plus::overflow_free_p (irange& op1, irange& op2, relation_kind)
{
   // stuff you do in plus_without_overflow
}

I added relation_kind as a param, but you can ignore it.  Maybe it won't 
ever help, but it seems like if we know there is a relation between op1 
and op2 we might be able to someday determine something else?  If 
not, remove it.


Then all you need to do to access it is go through range_op_handler, 
so for instance:


range_op_handler (PLUS_EXPR).overflow_free_p (op1, op2)

It'll work for all types and all tree codes.  The dispatch machinery will 
return false unless both op1 and op2 are integral ranges, and then it 
will invoke the appropriate handler, defaulting to returning FALSE.


I am also not a fan of the get_range routine.  It would be better to 
generally just call range_of_expr, get the results, then handle 
undefined in the new overflow_free_p() routine and return false.  
Varying should not need anything special since it will trigger the 
overflow when you do the calculation.


The auxiliary routines could go in vr-values.h/cc.  They seem like 
things that simplify_using_ranges could utilize, and when we get to 
integrating simplify_using_ranges better, what you are doing may end up 
there anyway.


Does that work?

Andrew
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index d1c735ee6aa..f2a863db286 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -366,6 +366,24 @@ range_op_handler::op1_op2_relation (const vrange &lhs) const
 }
 }
 
+bool
+range_op_handler::overflow_free_p (const vrange &lh,
+				   const vrange &rh,
+				   relation_trio rel) const
+{
+  gcc_checking_assert (m_operator);
+  switch (dispatch_kind (lh, lh, rh))
+    {
+      case RO_III:
+	return m_operator->overflow_free_p (as_a <irange> (lh),
+					    as_a <irange> (rh),
+					    rel);
+      default:
+	return false;
+    }
+}
+
+
 
 // Convert irange bitmasks into a VALUE MASK pair suitable for calling CCP.
 
@@ -688,6 +706,13 @@ range_operator::op1_op2_relation_effect (irange &lhs_range ATTRIBUTE_UNUSED,
   return false;
 }
 
+bool
+range_operator::overflow_free_p (const irange &, const irange &,
+ relation_trio) const
+{
+  return false;
+}
+
 // Apply any known bitmask updates based on this operator.
 
 void
diff --git a/gcc/range-op.h b/gcc/range-op.h
index af94c2756a7..db3b03f28a5 100644
--- a/gcc/range-op.h
+++ b/gcc/range-op.h
@@ -147,6 +147,9 @@ public:
 
   virtual relation_kind op1_op2_relation (const irange &lhs) const;
   virtual relation_kind op1_op2_relation (const frange &lhs) const;
+
+  virtual bool overflow_free_p (const irange &lh, const irange &rh,
+				relation_trio = TRIO_VARYING) const;
 protected:
   // Perform an integral operation between 2 sub-ranges and return it.
   virtual void wi_fold (irange , tree type,
@@ -214,6 +217,8 @@ public:
				const vrange &op2_range,
				relation_kind = VREL_VARYING) const;
   relation_kind op1_op2_relation (const vrange &lhs) const;
+  bool overflow_free_p (const vrange &lh, const vrange &rh,
+			relation_trio = TRIO_VARYING) const;
 protected:
   unsigned dispatch_kind (const vrange &lhs, const vrange &op1,
			   const vrange& op2) const;


Re: [PATCH V4] Optimize '(X - N * M) / N' to 'X / N - M' if valid

2023-07-14 Thread Andrew MacLeod via Gcc-patches



On 7/14/23 09:37, Richard Biener wrote:

On Fri, 14 Jul 2023, Aldy Hernandez wrote:


I don't know what you're trying to accomplish here, as I haven't been
following the PR, but adding all these helper functions to the ranger header
file seems wrong, especially since there's only one use of them. I see you're
tweaking the irange API, adding helper functions to range-op (which is only
for code dealing with implementing range operators for tree codes), etc etc.

If you need these helper functions, I suggest you put them closer to their
uses (i.e. wherever the match.pd support machinery goes).

Note I suggested the opposite because I thought these kinds of helpers
are closer to value-range support than to match.pd.



Probably vr-values.{cc,h} and the simplify_using_ranges paradigm would be 
the most sensible place to put these kinds of auxiliary routines?





But I take away from your answer that there's nothing close in the
value-range machinery that answers the question whether A op B may
overflow?


We don't track it in ranges themselves.  During calculation of a range 
we obviously know, but propagating that generally when we rarely care 
doesn't seem worthwhile.  The very first generation of irange 6 years 
ago had an overflow_p() flag, but it was removed as not being worth 
keeping.  It's easier to simply ask the question when it matters.


As the routines show, it's pretty easy to figure out when the need arises, 
so I think that should suffice.  At least for now.


Should we decide we would like it in general, it wouldn't be hard to add 
to irange.  wi_fold() currently returns null; it could easily return a 
bool indicating if an overflow happened, and wi_fold_in_parts and 
fold_range would simply OR together the results of the component 
wi_fold() calls.  It would require updating/auditing a number of 
range-op entries and adding an overflowed_p() query to irange.
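A minimal sketch of that OR-the-flags idea, with hypothetical stand-ins for wi_fold and fold_range (these are not the real GCC signatures, just the shape being described): each per-sub-range fold reports whether it wrapped, and the outer fold ORs all the component flags together.

```cpp
#include <cassert>
#include <climits>
#include <vector>

// Toy sub-range: a closed interval of ints.
struct sub_r { int lb, ub; };

// Stand-in for wi_fold returning a bool: fold one sub-range pair for
// addition and report whether the computation overflowed int.
static bool
wi_fold_add (sub_r &res, const sub_r &a, const sub_r &b)
{
  long long lo = (long long) a.lb + b.lb;
  long long hi = (long long) a.ub + b.ub;
  res.lb = (int) lo;
  res.ub = (int) hi;
  return lo < INT_MIN || hi > INT_MAX;	// did this component overflow?
}

// Stand-in for fold_range: fold every component and OR the flags,
// exactly the aggregation suggested in the message above.
static bool
fold_range_add (std::vector<sub_r> &out,
		const std::vector<sub_r> &lh, const sub_r &rh)
{
  bool overflowed = false;
  for (const sub_r &p : lh)
    {
      sub_r r;
      overflowed |= wi_fold_add (r, p, rh);
      out.push_back (r);
    }
  return overflowed;
}
```

The caller then has a single overflowed-anywhere answer for the whole multi-sub-range fold, which is what an overflowed_p() query on the result would expose.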


Andrew



[COMMITTED 5/5] Make compute_operand_range a tail call.

2023-07-05 Thread Andrew MacLeod via Gcc-patches
This simply tweaks compute_operand_range a little so the recursion is a 
tail call.


With this, the patchset produces a modest speedup of 0.2% in VRP and 
0.4% in threading.  It will also have a much smaller stack profile.


Bootstraps on  x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew

From 51ed3a6ce432e7e6226bb62125ef8a09b2ebf60c Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 5 Jul 2023 14:26:00 -0400
Subject: [PATCH 5/6] Make compute_operand_range a tail call.

Tweak the routine so it is making a tail call.

	* gimple-range-gori.cc (compute_operand_range): Convert to a tail
	call.
---
 gcc/gimple-range-gori.cc | 34 --
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index b036ed56f02..6dc15a0ce3f 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -725,36 +725,34 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
 			 op1_trange, op1_frange, op2_trange, op2_frange);
   if (idx)
 	tracer.trailer (idx, "compute_operand", res, name, r);
+  return res;
 }
   // Follow the appropriate operands now.
-  else if (op1_in_chain && op2_in_chain)
-res = compute_operand1_and_operand2_range (r, handler, lhs, name, src,
-	   vrel_ptr);
-  else if (op1_in_chain)
+  if (op1_in_chain && op2_in_chain)
+return compute_operand1_and_operand2_range (r, handler, lhs, name, src,
+		vrel_ptr);
+  Value_Range vr;
+  gimple *src_stmt;
+  if (op1_in_chain)
 {
-  Value_Range vr (TREE_TYPE (op1));
+  vr.set_type (TREE_TYPE (op1));
   if (!compute_operand1_range (vr, handler, lhs, src, vrel_ptr))
 	return false;
-  gimple *src_stmt = SSA_NAME_DEF_STMT (op1);
-  gcc_checking_assert (src_stmt);
-  // Then feed this range back as the LHS of the defining statement.
-  return compute_operand_range (r, src_stmt, vr, name, src, vrel_ptr);
+  src_stmt = SSA_NAME_DEF_STMT (op1);
 }
-  else if (op2_in_chain)
+  else
 {
-  Value_Range vr (TREE_TYPE (op2));
+  gcc_checking_assert (op2_in_chain);
+  vr.set_type (TREE_TYPE (op2));
   if (!compute_operand2_range (vr, handler, lhs, src, vrel_ptr))
 	return false;
-  gimple *src_stmt = SSA_NAME_DEF_STMT (op2);
-  gcc_checking_assert (src_stmt);
-  // Then feed this range back as the LHS of the defining statement.
-  return compute_operand_range (r, src_stmt, vr, name, src, vrel_ptr);
+  src_stmt = SSA_NAME_DEF_STMT (op2);
 }
-  else
-gcc_unreachable ();
 
+  gcc_checking_assert (src_stmt);
+  // Then feed this range back as the LHS of the defining statement.
+  return compute_operand_range (r, src_stmt, vr, name, src, vrel_ptr);
   // If neither operand is derived, this statement tells us nothing.
-  return res;
 }
 
 
-- 
2.40.1



[COMMITTED 4/5] Make compute_operand2_range a leaf call.

2023-07-05 Thread Andrew MacLeod via Gcc-patches
Now operand2 alone is resolved and returned as the result.  Much 
cleaner, and it removes the helper from the recursion stack.


compute_operand_range() will decide if further evaluation is required.

Bootstraps on  x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew
From 298952bcf05d298892e99adba1f4a75af17bc65a Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 5 Jul 2023 13:52:21 -0400
Subject: [PATCH 4/6] Make compute_operand2_range a leaf call.

Rather than creating long call chains, put the onus for finishing
the evaluation on the caller.

	* gimple-range-gori.cc (compute_operand_range): After calling
	compute_operand2_range, recursively call self if needed.
	(compute_operand2_range): Turn into a leaf function.
	(gori_compute::compute_operand1_and_operand2_range): Finish
	operand2 calculation.
	* gimple-range-gori.h (compute_operand2_range): Remove name param.
---
 gcc/gimple-range-gori.cc | 52 +++-
 gcc/gimple-range-gori.h  |  2 +-
 2 files changed, 26 insertions(+), 28 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index b66b9b0398c..b036ed56f02 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -639,7 +639,7 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
   if (op1 == name)
 return compute_operand1_range (r, handler, lhs, src, vrel_ptr);
   if (op2 == name)
-return compute_operand2_range (r, handler, lhs, name, src, vrel_ptr);
+return compute_operand2_range (r, handler, lhs, src, vrel_ptr);
 
   // NAME is not in this stmt, but one of the names in it ought to be
   // derived from it.
@@ -741,7 +741,15 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
   return compute_operand_range (r, src_stmt, vr, name, src, vrel_ptr);
 }
   else if (op2_in_chain)
-res = compute_operand2_range (r, handler, lhs, name, src, vrel_ptr);
+{
+  Value_Range vr (TREE_TYPE (op2));
+  if (!compute_operand2_range (vr, handler, lhs, src, vrel_ptr))
+	return false;
+  gimple *src_stmt = SSA_NAME_DEF_STMT (op2);
+  gcc_checking_assert (src_stmt);
+  // Then feed this range back as the LHS of the defining statement.
+  return compute_operand_range (r, src_stmt, vr, name, src, vrel_ptr);
+}
   else
 gcc_unreachable ();
 
@@ -1188,7 +1196,7 @@ gori_compute::compute_operand1_range (vrange &r,
 bool
 gori_compute::compute_operand2_range (vrange &r,
 				      gimple_range_op_handler &handler,
-				      const vrange &lhs, tree name,
+				      const vrange &lhs,
 				      fur_source &src, value_relation *rel)
 {
   gimple *stmt = handler.stmt ();
@@ -1198,7 +1206,6 @@ gori_compute::compute_operand2_range (vrange &r,
 
   Value_Range op1_range (TREE_TYPE (op1));
   Value_Range op2_range (TREE_TYPE (op2));
-  Value_Range tmp (TREE_TYPE (op2));
 
   src.get_operand (op1_range, op1);
   src.get_operand (op2_range, op2);
@@ -1215,7 +1222,7 @@ gori_compute::compute_operand2_range (vrange &r,
   if (op1 == op2 && gimple_range_ssa_p (op1))
 trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), VREL_EQ);
   // Intersect with range for op2 based on lhs and op1.
-  if (!handler.calc_op2 (tmp, lhs, op1_range, trio))
+  if (!handler.calc_op2 (r, lhs, op1_range, trio))
 return false;
 
   unsigned idx;
@@ -1237,31 +1244,16 @@ gori_compute::compute_operand2_range (vrange &r,
   tracer.print (idx, "Computes ");
   print_generic_expr (dump_file, op2, TDF_SLIM);
   fprintf (dump_file, " = ");
-  tmp.dump (dump_file);
+  r.dump (dump_file);
   fprintf (dump_file, " intersect Known range : ");
   op2_range.dump (dump_file);
   fputc ('\n', dump_file);
 }
   // Intersect the calculated result with the known result and return if done.
-  if (op2 == name)
-{
-  tmp.intersect (op2_range);
-  r = tmp;
-  if (idx)
-	tracer.trailer (idx, " produces ", true, NULL_TREE, r);
-  return true;
-}
-  // If the calculation continues, we're using op2_range as the new LHS.
-  op2_range.intersect (tmp);
-
+  r.intersect (op2_range);
   if (idx)
-tracer.trailer (idx, " produces ", true, op2, op2_range);
-  gimple *src_stmt = SSA_NAME_DEF_STMT (op2);
-  gcc_checking_assert (src_stmt);
-//  gcc_checking_assert (!is_import_p (op2, find.bb));
-
-  // Then feed this range back as the LHS of the defining statement.
-  return compute_operand_range (r, src_stmt, op2_range, name, src, rel);
+tracer.trailer (idx, " produces ", true, op2, r);
+  return true;
 }
 
 // Calculate a range for NAME from both operand positions of S
@@ -1279,15 +1271,21 @@ gori_compute::compute_operand1_and_operand2_range (vrange &r,
 {
   Value_Range op_range (TREE_TYPE (name));
 
+  Value_Range vr (TREE_TYPE (handler.operand2 ()));
   // Calculate a good a range through op2.
-  if (!compute_operand2_range (r, handler, lhs, name, src, rel))
+  if (!compute_operand2_range (vr, handler, lhs, src, rel))
+ret

[COMMITTED 3/5] Make compute_operand1_range a leaf call.

2023-07-05 Thread Andrew MacLeod via Gcc-patches
Now operand1 alone is resolved and returned as the result.  Much 
cleaner, and it removes the helper from the recursion stack.


compute_operand_range() will decide if further evaluation is required.

Bootstraps on  x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew

From 912b5ac49677160aada7a2d862273251406dfca5 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 5 Jul 2023 13:41:50 -0400
Subject: [PATCH 3/6] Make compute_operand1_range a leaf call.

Rather than creating long call chains, put the onus for finishing
the evaluation on the caller.

	* gimple-range-gori.cc (compute_operand_range): After calling
	compute_operand1_range, recursively call self if needed.
	(compute_operand1_range): Turn into a leaf function.
	(gori_compute::compute_operand1_and_operand2_range): Finish
	operand1 calculation.
	* gimple-range-gori.h (compute_operand1_range): Remove name param.
---
 gcc/gimple-range-gori.cc | 49 
 gcc/gimple-range-gori.h  |  2 +-
 2 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index 5429c6e3c1a..b66b9b0398c 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -637,7 +637,7 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
 
   // Handle end of lookup first.
   if (op1 == name)
-return compute_operand1_range (r, handler, lhs, name, src, vrel_ptr);
+return compute_operand1_range (r, handler, lhs, src, vrel_ptr);
   if (op2 == name)
 return compute_operand2_range (r, handler, lhs, name, src, vrel_ptr);
 
@@ -731,7 +731,15 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
 res = compute_operand1_and_operand2_range (r, handler, lhs, name, src,
 	   vrel_ptr);
   else if (op1_in_chain)
-res = compute_operand1_range (r, handler, lhs, name, src, vrel_ptr);
+{
+  Value_Range vr (TREE_TYPE (op1));
+  if (!compute_operand1_range (vr, handler, lhs, src, vrel_ptr))
+	return false;
+  gimple *src_stmt = SSA_NAME_DEF_STMT (op1);
+  gcc_checking_assert (src_stmt);
+  // Then feed this range back as the LHS of the defining statement.
+  return compute_operand_range (r, src_stmt, vr, name, src, vrel_ptr);
+}
   else if (op2_in_chain)
 res = compute_operand2_range (r, handler, lhs, name, src, vrel_ptr);
   else
@@ -1099,7 +1107,7 @@ gori_compute::refine_using_relation (tree op1, vrange &op1_range,
 bool
 gori_compute::compute_operand1_range (vrange &r,
 				      gimple_range_op_handler &handler,
-				      const vrange &lhs, tree name,
+				      const vrange &lhs,
 				      fur_source &src, value_relation *rel)
 {
   gimple *stmt = handler.stmt ();
@@ -1112,7 +1120,6 @@ gori_compute::compute_operand1_range (vrange &r,
 trio = rel->create_trio (lhs_name, op1, op2);
 
   Value_Range op1_range (TREE_TYPE (op1));
-  Value_Range tmp (TREE_TYPE (op1));
   Value_Range op2_range (op2 ? TREE_TYPE (op2) : TREE_TYPE (op1));
 
   // Fetch the known range for op1 in this block.
@@ -1130,7 +1137,7 @@ gori_compute::compute_operand1_range (vrange &r,
   // If op1 == op2, create a new trio for just this call.
   if (op1 == op2 && gimple_range_ssa_p (op1))
 	trio = relation_trio (trio.lhs_op1 (), trio.lhs_op2 (), VREL_EQ);
-  if (!handler.calc_op1 (tmp, lhs, op2_range, trio))
+  if (!handler.calc_op1 (r, lhs, op2_range, trio))
 	return false;
 }
   else
@@ -1138,7 +1145,7 @@ gori_compute::compute_operand1_range (vrange &r,
   // We pass op1_range to the unary operation.  Normally it's a
   // hidden range_for_type parameter, but sometimes having the
   // actual range can result in better information.
-  if (!handler.calc_op1 (tmp, lhs, op1_range, trio))
+  if (!handler.calc_op1 (r, lhs, op1_range, trio))
 	return false;
 }
 
@@ -1161,30 +1168,16 @@ gori_compute::compute_operand1_range (vrange &r,
   tracer.print (idx, "Computes ");
   print_generic_expr (dump_file, op1, TDF_SLIM);
   fprintf (dump_file, " = ");
-  tmp.dump (dump_file);
+  r.dump (dump_file);
   fprintf (dump_file, " intersect Known range : ");
   op1_range.dump (dump_file);
   fputc ('\n', dump_file);
 }
-  // Intersect the calculated result with the known result and return if done.
-  if (op1 == name)
-{
-  tmp.intersect (op1_range);
-  r = tmp;
-  if (idx)
-	tracer.trailer (idx, "produces ", true, name, r);
-  return true;
-}
-  // If the calculation continues, we're using op1_range as the new LHS.
-  op1_range.intersect (tmp);
 
+  r.intersect (op1_range);
   if (idx)
-tracer.trailer (idx, "produces ", true, op1, op1_range);
-  gimple *src_stmt = SSA_NAME_DEF_STMT (op1);
-  gcc_checking_assert (src_stmt);
-
-  // Then feed this range back as the LHS of the defining statement.
-  return compute_operand_range (r, src_stmt, op1_range, name, src, rel);
+tracer.trailer (idx, "produces ", tr

[COMMITTED 2/5] Simplify compute_operand_range for op1 and op2 case.

2023-07-05 Thread Andrew MacLeod via Gcc-patches
This patch simplifies compute_operand1_and_operand2() such that it only 
calls each routine once.  This will simplify the next couple of patches.


It also moves the determination that op1 and op2 have an 
interdependence into compute_operand_range().


Bootstraps on  x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew
From 7276248946d3eae83e5e08fc023163614c9ea9ab Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Wed, 5 Jul 2023 13:36:27 -0400
Subject: [PATCH 2/6] Simplify compute_operand_range for op1 and op2 case.

Move the check for co-dependency between 2 operands into
compute_operand_range, resulting in a much cleaner
compute_operand1_and_operand2_range routine.

	* gimple-range-gori.cc (compute_operand_range): Check for
	operand interdependence when both op1 and op2 are computed.
	(compute_operand1_and_operand2_range): No checks required now.
---
 gcc/gimple-range-gori.cc | 25 +++--
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index b0d13a8ac53..5429c6e3c1a 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -650,6 +650,17 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
   if (!op1_in_chain && !op2_in_chain)
 return false;
 
+  // If either operand is in the def chain of the other (or they are equal), it
+  // will be evaluated twice and can result in an exponential time calculation.
+  // Instead just evaluate the one operand.
+  if (op1_in_chain && op2_in_chain)
+{
+  if (in_chain_p (op1, op2) || op1 == op2)
+	op1_in_chain = false;
+  else if (in_chain_p (op2, op1))
+	op2_in_chain = false;
+}
+
   bool res = false;
   // If the lhs doesn't tell us anything only a relation can possibly enhance
   // the result.
@@ -1275,24 +1286,10 @@ gori_compute::compute_operand1_and_operand2_range (vrange &r,
 {
   Value_Range op_range (TREE_TYPE (name));
 
-  // If op1 is in the def chain of op2, we'll do the work twice to evalaute
-  // op1.  This can result in an exponential time calculation.
-  // Instead just evaluate op2, which will eventualy get to op1.
-  if (in_chain_p (handler.operand1 (), handler.operand2 ()))
-return compute_operand2_range (r, handler, lhs, name, src, rel);
-
-  // Likewise if op2 is in the def chain of op1.
-  if (in_chain_p (handler.operand2 (), handler.operand1 ()))
-return compute_operand1_range (r, handler, lhs, name, src, rel);
-
   // Calculate a good a range through op2.
   if (!compute_operand2_range (r, handler, lhs, name, src, rel))
 return false;
 
-  // If op1 == op2 there is again no need to go further.
-  if (handler.operand1 () == handler.operand2 ())
-return true;
-
   // Now get the range thru op1.
   if (!compute_operand1_range (op_range, handler, lhs, name, src, rel))
 return false;
-- 
2.40.1



[COMMITTED 1/5] Move relation discovery into compute_operand_range

2023-07-05 Thread Andrew MacLeod via Gcc-patches

This is a set of 5 patches which cleans up GORI's compute_operand routines.

This is the mechanism GORI uses to calculate ranges from the bottom of 
the routine back thru definitions in the block to the name that is 
requested.


Currently, compute_operand_range() is called on a stmt, and it divides 
the work based on which operands are used to get back to the requested 
name.  It calls compute_operand1_range or compute_operand2_range or 
compute_operand1_and_operand2_range. If the specified name is not on 
this statement, then a call back to compute_operand_range on the 
definition statement is made.


This means the call chain is recursive, but involves alternating 
functions.  This patch changes compute_operand1_range and 
compute_operand2_range to be leaf functions; compute_operand_range 
is still recursive, but has a much smaller stack footprint, and it 
also becomes a tail call.
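The shape of that restructuring can be shown with a toy walker.  This is purely illustrative (none of it is the GORI code): the helper is a leaf that only computes a value, and the walker recurses only through a single self tail call instead of two functions alternately calling each other.

```cpp
#include <cassert>

// Stand-in for compute_operand[12]_range after the patch: a leaf that
// computes one step and never recurses itself.
static int
leaf_step (int v)
{
  return v / 2;
}

// Stand-in for compute_operand_range: the only recursive function,
// and its recursion is a single tail call, so the compiler can reuse
// the stack frame.
static int
walk_chain (int v, int depth)
{
  if (v <= 1)
    return depth;			// end of the def chain
  int next = leaf_step (v);		// leaf call
  return walk_chain (next, depth + 1);	// tail call
}
```

With the old shape (A calls B, B calls A), neither call is in tail position and every hop in the def chain keeps two frames alive; with this shape the chain walks in constant extra stack once the tail call is optimized.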


I tried removing the recursion, but at this point, removing the 
recursion is a performance hit :-P   Stay tuned on that one.


This patch moves some common code for relation discovery from 
compute_operand[12]range into compute_operand_range.


Bootstraps on  x86_64-pc-linux-gnu  with no regressions.  Pushed.

Andrew
From 290798faef706c335bd346b13771f977ddedb415 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Tue, 4 Jul 2023 11:28:52 -0400
Subject: [PATCH 1/6] Move relation discovery into compute_operand_range

compute_operand1_range and compute_operand2_range were both doing
relation discovery between the 2 operands... move it into a common area.

	* gimple-range-gori.cc (compute_operand_range): Check for
	a relation between op1 and op2 and use that instead.
	(compute_operand1_range): Don't look for a relation override.
	(compute_operand2_range): Ditto.
---
 gcc/gimple-range-gori.cc | 42 +---
 1 file changed, 13 insertions(+), 29 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index 4ee0ae36014..b0d13a8ac53 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -623,6 +623,18 @@ gori_compute::compute_operand_range (vrange &r, gimple *stmt,
   tree op1 = gimple_range_ssa_p (handler.operand1 ());
   tree op2 = gimple_range_ssa_p (handler.operand2 ());
 
+  // If there is a relation between op1 and op2, use it instead as it is
+  // likely to be more applicable.
+  if (op1 && op2)
+{
+  relation_kind k = handler.op1_op2_relation (lhs);
+  if (k != VREL_VARYING)
+	{
+	  vrel.set_relation (k, op1, op2);
+	  vrel_ptr = &vrel;
+	}
+}
+
   // Handle end of lookup first.
   if (op1 == name)
 return compute_operand1_range (r, handler, lhs, name, src, vrel_ptr);
@@ -1079,7 +1091,6 @@ gori_compute::compute_operand1_range (vrange &r,
 				      const vrange &lhs, tree name,
 				      fur_source &src, value_relation *rel)
 {
-  value_relation local_rel;
   gimple *stmt = handler.stmt ();
   tree op1 = handler.operand1 ();
   tree op2 = handler.operand2 ();
@@ -1088,7 +1099,6 @@ gori_compute::compute_operand1_range (vrange &r,
   relation_trio trio;
   if (rel)
 trio = rel->create_trio (lhs_name, op1, op2);
-  relation_kind op_op = trio.op1_op2 ();
 
   Value_Range op1_range (TREE_TYPE (op1));
   Value_Range tmp (TREE_TYPE (op1));
@@ -1102,19 +1112,7 @@ gori_compute::compute_operand1_range (vrange &r,
 {
   src.get_operand (op2_range, op2);
 
-  // If there is a relation betwen op1 and op2, use it instead.
-  // This allows multiple relations to be processed in compound logicals.
-  if (gimple_range_ssa_p (op1) && gimple_range_ssa_p (op2))
-	{
-	  relation_kind k = handler.op1_op2_relation (lhs);
-	  if (k != VREL_VARYING)
-	{
-	  op_op = k;
-	  local_rel.set_relation (op_op, op1, op2);
-	  rel = &local_rel;
-	}
-	}
-
+  relation_kind op_op = trio.op1_op2 ();
   if (op_op != VREL_VARYING)
 	refine_using_relation (op1, op1_range, op2, op2_range, src, op_op);
 
@@ -1189,7 +1187,6 @@ gori_compute::compute_operand2_range (vrange &r,
 				      const vrange &lhs, tree name,
 				      fur_source &src, value_relation *rel)
 {
-  value_relation local_rel;
   gimple *stmt = handler.stmt ();
   tree op1 = handler.operand1 ();
   tree op2 = handler.operand2 ();
@@ -1207,19 +1204,6 @@ gori_compute::compute_operand2_range (vrange &r,
 trio = rel->create_trio (lhs_name, op1, op2);
   relation_kind op_op = trio.op1_op2 ();
 
-  // If there is a relation betwen op1 and op2, use it instead.
-  // This allows multiple relations to be processed in compound logicals.
-  if (gimple_range_ssa_p (op1) && gimple_range_ssa_p (op2))
-{
-  relation_kind k = handler.op1_op2_relation (lhs);
-  if (k != VREL_VARYING)
-	{
-	  op_op = k;
-	  local_rel.set_relation (op_op, op1, op2);
-	  rel = &local_rel;
-	}
-}
-
   if (op_op != VREL_VARYING)
 refine_using_relation (op1, op1_range, op2, op2_range, src, op_op);
 
-- 
2.40.1



Re: Enable ranger for ipa-prop

2023-06-27 Thread Andrew MacLeod via Gcc-patches



On 6/27/23 12:24, Jan Hubicka wrote:

On 6/27/23 09:19, Jan Hubicka wrote:

Hi,
as shown in the testcase (which would eventually be useful for
optimizing std::vector's push_back), ipa-prop can use context dependent ranger
queries for better value range info.

Bootstrapped/regtested x86_64-linux, OK?

Quick question.

When you call enable_ranger(), it gives you a ranger back, but it also sets
the range query for the specified context to that same instance.  So from
that point forward all existing calls to get_range_query(fun) will now use
the context ranger:
enable_ranger (struct function *fun, bool use_imm_uses)
<...>
   gcc_checking_assert (!fun->x_range_query);
   r = new gimple_ranger (use_imm_uses);
   fun->x_range_query = r;
   return r;

So you probably don't have to pass a ranger around?  Or is that ranger you
are passing for a different context?

I don't need to pass the ranger around - I just did not know that.  I thought
the default one was the context-insensitive one; I will simplify the
patch.  I need to look more into how ranger works.



No need. It's magic!

Andrew


PS. well, we tried to provide an interface to make it as seamless as 
possible with the whole range-query thing.

10,000 foot view:

The range_query object (value-range.h) replaces the old 
SSA_NAME_RANGE_INFO macros.  It adds the ability to provide an optional 
context in the form of a stmt or edge to any query.  If no context is 
provided, it simply provides the global value. There are basically 3 
queries:


  virtual bool range_of_expr (vrange &r, tree expr, gimple * = NULL);
  virtual bool range_on_edge (vrange &r, edge, tree expr);
  virtual bool range_of_stmt (vrange &r, gimple *, tree name = NULL);

- range_of_stmt evaluates the DEF of the stmt, but can also evaluate 
things like  "if (x < y)" that have an implicit boolean LHS.  If NAME is 
provided, it needs to match the DEF. That's mostly flexibility for 
dealing with something like multiple defs; you can specify which def.
- range_on_edge provides the range of an ssa-name as it would be valued 
on a specific edge.
- range_of_expr is used to ask for the range of any ssa_name or tree 
expression as it occurs on entry to a specific stmt.  Normally we use 
this to ask for the range of an ssa-name as it's used on a stmt, but it 
can evaluate expression trees as well.


These requests are not limited to names which occur on a stmt; we can 
recompute values by asking for the range of a value as it occurs at other 
locations in the IL, i.e.

x_2 = b_3 + 5
<...>
if (b_3 > 7)
   blah (x_2)
When we ask for the range of x_2 at the call to blah, ranger actually 
recomputes x_2 = b_3 + 5 at the call site by asking for the range of b_3 
on the outgoing edge leading to the block with the call to blah, and 
thus uses b_3 == [8, +INF] to re-evaluate x_2.


Internally, ranger uses the exact same API to evaluate everything that 
external clients use.



The default query object is global_range_query, which ignores any 
location (stmt or edge) information provided, and simply returns the 
global value.  This amounts to the same result as the old 
SSA_NAME_RANGE_INFO request, and when get_range_query () is called, this 
is the default range_query that is provided.


When a pass calls enable_ranger(), the default query is changed to this 
new instance (which supports context information), and any further calls 
to get_range_query() will now invoke ranger instead of the 
global_range_query.  Ranger answers each range question on demand, 
looking only at what it needs to in order to answer it.  This is the 
exact same ranger code base that all the VRP passes use, so you get 
almost the same level of power to answer questions.  There are just a 
couple of things that VRP enables because it does a DOM walk, but they 
are fairly minor for most cases.


If you use the range_query API and do not provide a stmt or an edge, 
then we can't provide contextual range information, and you'll go back 
to getting just global information again.


I think Aldy has converted everything to the new range_query API, 
which means any pass that could benefit from contextual range 
information, in theory, only needs to enable_ranger() and provide a 
context stmt or edge on the range query call.


Just remember to disable it when done :-)

Andrew



Re: Enable ranger for ipa-prop

2023-06-27 Thread Andrew MacLeod via Gcc-patches



On 6/27/23 09:19, Jan Hubicka wrote:

Hi,
as shown in the testcase (which would eventually be useful for
optimizing std::vector's push_back), ipa-prop can use context dependent ranger
queries for better value range info.

Bootstrapped/regtested x86_64-linux, OK?


Quick question.

When you call enable_ranger(), it gives you a ranger back, but it also 
sets the range query for the specified context to that same instance.  
So from that point forward all existing calls to get_range_query(fun) 
will now use the context ranger:


enable_ranger (struct function *fun, bool use_imm_uses)
<...>
  gcc_checking_assert (!fun->x_range_query);
  r = new gimple_ranger (use_imm_uses);
  fun->x_range_query = r;
  return r;

So you probably don't have to pass a ranger around?  Or is that ranger 
you are passing for a different context?



Andrew




[COMMITTED] PR tree-optimization/110251 - Avoid redundant GORI calculations.

2023-06-26 Thread Andrew MacLeod via Gcc-patches
When calculating ranges, GORI evaluates the chain of definitions until 
it finds the desired name.


  _4 = (short unsigned int) c.2_1;
  _5 = _4 + 65535;
  a_lsm.19_30 = a;
  _49 = _4 + 65534;
  _12 = _5 & _49;
  _46 = _12 + 65535;
  _48 = _12 & _46;    <<--
  if (_48 != 0)

When evaluating c.2_1 on the true edge, GORI starts with _48 with a 
range of [1, +INF].


Looking at _48's operands (_12 and _46), note that it depends on both _12 
and _46.  Also note that _46 is itself dependent on _12.


GORI currently simply calculates c.2_1 through both operands.  This means 
_12 will be evaluated back through to c.2_1, then _46 will do the same, 
and the results will be combined.  That means the statements from _12 
back to c.2_1 are actually calculated twice.


This PR produces a long sequence of code with cascading chains of 
dependencies like this that feed each other, leading to 
geometric/exponential growth in calculation time.


This patch identifies the situation of one operand depending on the 
other, and evaluates only the one which includes the other.  In 
the above case, it simply winds back through _46, ignoring the _12 operand 
in the definition of _48.  During the evaluation of _46, we 
eventually get to evaluating _12 anyway, so we lose little, if 
anything.  This results in a much more consistently linear-time 
evaluation.


Bootstraps on x86_64-pc-linux-gnu with no regressions.   Pushed.

Andrew







commit 6246ee062062b53275c229daf8676ccaa535f419
Author: Andrew MacLeod 
Date:   Thu Jun 22 10:00:12 2023 -0400

Avoid redundant GORI calculations.

When GORI evaluates a statement, if operand 1 and 2 are both in the
dependency chain, GORI evaluates the name through both operands sequentially
and combines the results.

If either operand is in the dependency chain of the other, this
evaluation will do the same work twice, for questionable gain.
Instead, simply evaluate only the operand which depends on the other
and keep the evaluation linear in time.

* gimple-range-gori.cc (compute_operand1_and_operand2_range):
Check for interdependence between operands 1 and 2.

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index abc70cd54ee..4ee0ae36014 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -1291,13 +1291,26 @@ gori_compute::compute_operand1_and_operand2_range (vrange &r,
 {
   Value_Range op_range (TREE_TYPE (name));
 
-  // Calculate a good a range for op2.  Since op1 == op2, this will
-  // have already included whatever the actual range of name is.
-  if (!compute_operand2_range (op_range, handler, lhs, name, src, rel))
+  // If op1 is in the def chain of op2, we'll do the work twice to evaluate
+  // op1.  This can result in an exponential time calculation.
+  // Instead just evaluate op2, which will eventually get to op1.
+  if (in_chain_p (handler.operand1 (), handler.operand2 ()))
+return compute_operand2_range (r, handler, lhs, name, src, rel);
+
+  // Likewise if op2 is in the def chain of op1.
+  if (in_chain_p (handler.operand2 (), handler.operand1 ()))
+return compute_operand1_range (r, handler, lhs, name, src, rel);
+
+  // Calculate a good a range through op2.
+  if (!compute_operand2_range (r, handler, lhs, name, src, rel))
 return false;
 
+  // If op1 == op2 there is again no need to go further.
+  if (handler.operand1 () == handler.operand2 ())
+return true;
+
   // Now get the range thru op1.
-  if (!compute_operand1_range (r, handler, lhs, name, src, rel))
+  if (!compute_operand1_range (op_range, handler, lhs, name, src, rel))
 return false;
 
   // Both operands have to be simultaneously true, so perform an intersection.


[PATCH] PR tree-optimization/110266 - Check for integer only complex

2023-06-15 Thread Andrew MacLeod via Gcc-patches
With the expanded capabilities of range-op dispatch, floating point 
complex objects can appear when folding, which they couldn't before.  In 
the processing for extracting integers from complex ints, make sure it 
actually is an integer.


Bootstraps on x86_64-pc-linux-gnu.  Regtesting currently under way.  
Assuming there are no issues, I will push this.


Andrew

From 2ba20a9e7b41fbcf1f03d5447e14b9b7b174fead Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Thu, 15 Jun 2023 11:59:55 -0400
Subject: [PATCH] Check for integer only complex.

With the expanded capabilities of range-op dispatch, floating point
complex objects can appear when folding, which they couldn't before.
In the processing for extracting integers from complex ints, make sure it
is an integer complex.

	PR tree-optimization/110266
	gcc/
	* gimple-range-fold.cc (adjust_imagpart_expr): Check for integer
	complex type.
	(adjust_realpart_expr): Ditto.

	gcc/testsuite/
	* gcc.dg/pr110266.c: New.
---
 gcc/gimple-range-fold.cc|  6 --
 gcc/testsuite/gcc.dg/pr110266.c | 20 
 2 files changed, 24 insertions(+), 2 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr110266.c

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 173d9f386c5..b4018d08d2b 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -506,7 +506,8 @@ adjust_imagpart_expr (vrange , const gimple *stmt)
   && gimple_assign_rhs_code (def_stmt) == COMPLEX_CST)
 {
   tree cst = gimple_assign_rhs1 (def_stmt);
-  if (TREE_CODE (cst) == COMPLEX_CST)
+  if (TREE_CODE (cst) == COMPLEX_CST
+	  && TREE_CODE (TREE_TYPE (TREE_TYPE (cst))) == INTEGER_TYPE)
 	{
 	  wide_int w = wi::to_wide (TREE_IMAGPART (cst));
 	  int_range<1> imag (TREE_TYPE (TREE_IMAGPART (cst)), w, w);
@@ -533,7 +534,8 @@ adjust_realpart_expr (vrange , const gimple *stmt)
   && gimple_assign_rhs_code (def_stmt) == COMPLEX_CST)
 {
   tree cst = gimple_assign_rhs1 (def_stmt);
-  if (TREE_CODE (cst) == COMPLEX_CST)
+  if (TREE_CODE (cst) == COMPLEX_CST
+	  && TREE_CODE (TREE_TYPE (TREE_TYPE (cst))) == INTEGER_TYPE)
 	{
 	  wide_int imag = wi::to_wide (TREE_REALPART (cst));
 	  int_range<2> tmp (TREE_TYPE (TREE_REALPART (cst)), imag, imag);
diff --git a/gcc/testsuite/gcc.dg/pr110266.c b/gcc/testsuite/gcc.dg/pr110266.c
new file mode 100644
index 000..0b2acb5a791
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr110266.c
@@ -0,0 +1,20 @@
+/* { dg-do compile } */
+/* { dg-options "-O2" } */
+
+#include <math.h>
+
+int Hann_i, PsyBufferUpdate_psyInfo_0, PsyBufferUpdate_i;
+double *mdct_data;
+double PsyBufferUpdate_sfreq;
+void PsyBufferUpdate() {
+  if (PsyBufferUpdate_psyInfo_0 == 4)
+for (; Hann_i;)
+  ;
+  {
+double xr_0 = cos(PsyBufferUpdate_psyInfo_0);
+PsyBufferUpdate_sfreq = sin(PsyBufferUpdate_psyInfo_0);
+for (; PsyBufferUpdate_psyInfo_0; PsyBufferUpdate_i++)
+  mdct_data[PsyBufferUpdate_i] = xr_0 * PsyBufferUpdate_sfreq;
+  }
+}
+
-- 
2.40.1



[COMMITTED 12/17] - Add a hybrid MAX_EXPR operator for integer and pointer.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Add a hybrid operator to choose between integer and pointer versions at 
runtime.


This is the last use of the pointer table, so it is also removed.

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From cd194f582c5be3cc91e025e304e2769f61ceb6b6 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:35:18 -0400
Subject: [PATCH 12/17] Add a hybrid MAX_EXPR operator for integer and pointer.

This adds an operator to the unified table for MAX_EXPR which will
select either the pointer or integer version based on the type passed
to the method.  This is for use until we have a separate PRANGE class.

This also removes the pointer table, which is no longer needed.

	* range-op-mixed.h (operator_max): Remove final.
	* range-op-ptr.cc (pointer_table::pointer_table): Remove MAX_EXPR.
	(pointer_table::pointer_table): Remove.
	(class hybrid_max_operator): New.
	(range_op_table::initialize_pointer_ops): Add hybrid_max_operator.
	* range-op.cc (pointer_tree_table): Remove.
	(unified_table::unified_table): Comment out MAX_EXPR.
	(get_op_handler): Remove check of pointer table.
	* range-op.h (class pointer_table): Remove.
---
 gcc/range-op-mixed.h |  6 +++---
 gcc/range-op-ptr.cc  | 30 --
 gcc/range-op.cc  | 10 ++
 gcc/range-op.h   |  9 -
 4 files changed, 25 insertions(+), 30 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index a65935435c2..bdc488b8754 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -636,10 +636,10 @@ class operator_max : public range_operator
 {
 public:
   void update_bitmask (irange &r, const irange &lh,
-  const irange &rh) const final override;
-private:
+  const irange &rh) const override;
+protected:
   void wi_fold (irange &r, tree type, const wide_int &lh_lb,
 		const wide_int &lh_ub, const wide_int &rh_lb,
-		const wide_int &rh_ub) const final override;
+		const wide_int &rh_ub) const override;
 };
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op-ptr.cc b/gcc/range-op-ptr.cc
index 483e43ca994..ea66fe9056b 100644
--- a/gcc/range-op-ptr.cc
+++ b/gcc/range-op-ptr.cc
@@ -157,7 +157,6 @@ pointer_min_max_operator::wi_fold (irange , tree type,
 r.set_varying (type);
 }
 
-
 class pointer_and_operator : public range_operator
 {
 public:
@@ -265,14 +264,6 @@ operator_pointer_diff::op1_op2_relation_effect (irange _range, tree type,
 	rel);
 }
 
-// When PRANGE is implemented, these are all the opcodes which are currently
-// expecting routines with PRANGE signatures.
-
-pointer_table::pointer_table ()
-{
-  set (MAX_EXPR, op_ptr_min_max);
-}
-
 // --
 // Hybrid operators for the 4 operations which integer and pointers share,
 // but which have different implementations.  Simply check the type in
@@ -404,8 +395,26 @@ public:
 }
 } op_hybrid_min;
 
+class hybrid_max_operator : public operator_max
+{
+public:
+  void update_bitmask (irange &r, const irange &lh,
+		   const irange &rh) const final override
+{
+  if (!r.undefined_p () && INTEGRAL_TYPE_P (r.type ()))
+	operator_max::update_bitmask (r, lh, rh);
+}

-
+  void wi_fold (irange &r, tree type, const wide_int &lh_lb,
+		const wide_int &lh_ub, const wide_int &rh_lb,
+		const wide_int &rh_ub) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_max::wi_fold (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
+  else
+	return op_ptr_min_max.wi_fold (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
+}
+} op_hybrid_max;
 
 // Initialize any pointer operators to the primary table
 
@@ -417,4 +426,5 @@ range_op_table::initialize_pointer_ops ()
   set (BIT_AND_EXPR, op_hybrid_and);
   set (BIT_IOR_EXPR, op_hybrid_or);
   set (MIN_EXPR, op_hybrid_min);
+  set (MAX_EXPR, op_hybrid_max);
 }
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 481f3b1324d..046b7691bb6 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -49,8 +49,6 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-ssa-ccp.h"
 #include "range-op-mixed.h"
 
-pointer_table pointer_tree_table;
-
 // Instantiate a range_op_table for unified operations.
 class unified_table : public range_op_table
 {
@@ -124,18 +122,14 @@ unified_table::unified_table ()
   // set (BIT_AND_EXPR, op_bitwise_and);
   // set (BIT_IOR_EXPR, op_bitwise_or);
   // set (MIN_EXPR, op_min);
-  set (MAX_EXPR, op_max);
+  // set (MAX_EXPR, op_max);
 }
 
 // The tables are hidden and accessed via a simple extern function.
 
 range_operator *
-get_op_handler (enum tree_code code, tree type)
+get_op_handler (enum tree_code code, tree)
 {
-  // If this is pointer type and there is pointer specifc routine, use it.
-  if (POINTER_TYPE_P (type) && pointer_tree_table[code])
-return pointer_tree_table[code];
-
   return unified_tree_table[code];
 }
 
diff --git a/gcc/range-op.h b/gcc/range-op.h
index 08c51bace40..15c45137af2 100644
--- a/gcc/range-op.h
+++ b/gcc/range-op.h

[COMMITTED 17/17] PR tree-optimization/110205 - Add some overrides.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Add some missing overrides, and add the dispatch pattern for FII which 
will be used for integer to float conversion.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 1bed4b49302e2fd7bf89426117331ae89ebdc90b Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Mon, 12 Jun 2023 09:47:43 -0400
Subject: [PATCH 17/17] Add some overrides.

	PR tree-optimization/110205
	* range-op-float.cc (range_operator::fold_range): Add default FII
	fold routine.
	(Class operator_gt): Add missing final overrides.
	* range-op.cc (range_op_handler::fold_range): Add RO_FII case.
	(operator_lshift ::update_bitmask): Add final override.
	(operator_rshift ::update_bitmask): Add final override.
	* range-op.h (range_operator::fold_range): Add FII prototype.
---
 gcc/range-op-float.cc | 10 ++
 gcc/range-op-mixed.h  |  9 +
 gcc/range-op.cc   | 10 --
 gcc/range-op.h|  4 
 4 files changed, 27 insertions(+), 6 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 24f2235884f..f5c0cec75c4 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -157,6 +157,16 @@ range_operator::fold_range (irange  ATTRIBUTE_UNUSED,
   return false;
 }
 
+bool
+range_operator::fold_range (frange &r ATTRIBUTE_UNUSED,
+			tree type ATTRIBUTE_UNUSED,
+			const irange &lh ATTRIBUTE_UNUSED,
+			const irange &rh ATTRIBUTE_UNUSED,
+			relation_trio) const
+{
+  return false;
+}
+
 bool
 range_operator::op1_range (frange  ATTRIBUTE_UNUSED,
  tree type ATTRIBUTE_UNUSED,
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index bdc488b8754..6944742ecbc 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -239,26 +239,27 @@ public:
   using range_operator::op1_op2_relation;
   bool fold_range (irange , tree type,
 		   const irange , const irange ,
-		   relation_trio = TRIO_VARYING) const;
+		   relation_trio = TRIO_VARYING) const final override;
   bool fold_range (irange , tree type,
 		   const frange , const frange ,
 		   relation_trio = TRIO_VARYING) const final override;
 
   bool op1_range (irange , tree type,
 		  const irange , const irange ,
-		  relation_trio = TRIO_VARYING) const;
+		  relation_trio = TRIO_VARYING) const final override;
   bool op1_range (frange , tree type,
 		  const irange , const frange ,
 		  relation_trio = TRIO_VARYING) const final override;
 
   bool op2_range (irange , tree type,
 		  const irange , const irange ,
-		  relation_trio = TRIO_VARYING) const;
+		  relation_trio = TRIO_VARYING) const final override;
   bool op2_range (frange , tree type,
 		  const irange , const frange ,
 		  relation_trio = TRIO_VARYING) const final override;
   relation_kind op1_op2_relation (const irange ) const final override;
-  void update_bitmask (irange , const irange , const irange ) const;
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override;
 };
 
 class operator_ge :  public range_operator
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 8a661fdb042..f0dff53ec1e 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -219,6 +219,10 @@ range_op_handler::fold_range (vrange , tree type,
 	return m_operator->fold_range (as_a  (r), type,
    as_a  (lh),
    as_a  (rh), rel);
+  case RO_FII:
+	return m_operator->fold_range (as_a <frange> (r), type,
+   as_a <irange> (lh),
+   as_a <irange> (rh), rel);
   default:
 	return false;
 }
@@ -2401,7 +2405,8 @@ public:
 tree type,
 const wide_int &,
 const wide_int &) const;
-  void update_bitmask (irange &r, const irange &lh, const irange &rh) const
+  void update_bitmask (irange &r, const irange &lh,
+		   const irange &rh) const final override
 { update_known_bitmask (r, LSHIFT_EXPR, lh, rh); }
 } op_lshift;
 
@@ -2432,7 +2437,8 @@ public:
 	   const irange ,
 	   const irange ,
 	   relation_kind rel) const;
-  void update_bitmask (irange &r, const irange &lh, const irange &rh) const
+  void update_bitmask (irange &r, const irange &lh,
+		   const irange &rh) const final override
 { update_known_bitmask (r, RSHIFT_EXPR, lh, rh); }
 } op_rshift;
 
diff --git a/gcc/range-op.h b/gcc/range-op.h
index 3602bc4e123..af94c2756a7 100644
--- a/gcc/range-op.h
+++ b/gcc/range-op.h
@@ -72,6 +72,10 @@ public:
 			   const frange ,
 			   const frange ,
 			   relation_trio = TRIO_VARYING) const;
+  virtual bool fold_range (frange &r, tree type,
+			   const irange &lh,
+			   const irange &rh,
+			   relation_trio = TRIO_VARYING) const;
 
   // Return the range for op[12] in the general case.  LHS is the range for
   // the LHS of the expression, OP[12]is the range for the other
-- 
2.40.1



[COMMITTED 10/17] - Add a hybrid BIT_IOR_EXPR operator for integer and pointer.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Add a hybrid operator to choose between integer and pointer versions at 
runtime.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 80f402e832a2ce402ee1562030d5c67ebc276f7c Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:33:17 -0400
Subject: [PATCH 10/17] Add a hybrid BIT_IOR_EXPR operator for integer and
 pointer.

This adds an operator to the unified table for BIT_IOR_EXPR which will
select either the pointer or integer version based on the type passed
to the method.  This is for use until we have a separate PRANGE class.

	* range-op-mixed.h (operator_bitwise_or): Remove final.
	* range-op-ptr.cc (pointer_table::pointer_table): Remove BIT_IOR_EXPR.
	(class hybrid_or_operator): New.
	(range_op_table::initialize_pointer_ops): Add hybrid_or_operator.
	* range-op.cc (unified_table::unified_table): Comment out BIT_IOR_EXPR.
---
 gcc/range-op-mixed.h | 10 -
 gcc/range-op-ptr.cc  | 52 ++--
 gcc/range-op.cc  |  4 ++--
 3 files changed, 57 insertions(+), 9 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 4177818e4b9..e4852e974c4 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -609,16 +609,16 @@ public:
   using range_operator::op2_range;
   bool op1_range (irange , tree type,
 		  const irange , const irange ,
-		  relation_trio rel = TRIO_VARYING) const final override;
+		  relation_trio rel = TRIO_VARYING) const override;
   bool op2_range (irange , tree type,
 		  const irange , const irange ,
-		  relation_trio rel = TRIO_VARYING) const final override;
+		  relation_trio rel = TRIO_VARYING) const override;
   void update_bitmask (irange , const irange ,
-		   const irange ) const final override;
-private:
+		   const irange ) const override;
+protected:
   void wi_fold (irange , tree type, const wide_int _lb,
 		const wide_int _ub, const wide_int _lb,
-		const wide_int _ub) const final override;
+		const wide_int _ub) const override;
 };
 
 class operator_min : public range_operator
diff --git a/gcc/range-op-ptr.cc b/gcc/range-op-ptr.cc
index 941026994ed..7b22d0bf05b 100644
--- a/gcc/range-op-ptr.cc
+++ b/gcc/range-op-ptr.cc
@@ -184,9 +184,9 @@ pointer_and_operator::wi_fold (irange , tree type,
 
 class pointer_or_operator : public range_operator
 {
+public:
   using range_operator::op1_range;
   using range_operator::op2_range;
-public:
   virtual bool op1_range (irange , tree type,
 			  const irange ,
 			  const irange ,
@@ -270,7 +270,6 @@ operator_pointer_diff::op1_op2_relation_effect (irange _range, tree type,
 
 pointer_table::pointer_table ()
 {
-  set (BIT_IOR_EXPR, op_pointer_or);
   set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
 }
@@ -334,6 +333,54 @@ public:
 }
 } op_hybrid_and;
 
+// Temporary class which dispatches routines to either the INT version or
+// the pointer version depending on the type.  Once PRANGE is a range
+// class, we can remove the hybrid.
+
+class hybrid_or_operator : public operator_bitwise_or
+{
+public:
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::lhs_op1_relation;
+  bool op1_range (irange &r, tree type,
+		  const irange &lhs, const irange &op2,
+		  relation_trio rel = TRIO_VARYING) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_bitwise_or::op1_range (r, type, lhs, op2, rel);
+  else
+	return op_pointer_or.op1_range (r, type, lhs, op2, rel);
+}
+  bool op2_range (irange &r, tree type,
+		  const irange &lhs, const irange &op1,
+		  relation_trio rel = TRIO_VARYING) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_bitwise_or::op2_range (r, type, lhs, op1, rel);
+  else
+	return op_pointer_or.op2_range (r, type, lhs, op1, rel);
+}
+  void update_bitmask (irange &r, const irange &lh,
+		   const irange &rh) const final override
+{
+  if (!r.undefined_p () && INTEGRAL_TYPE_P (r.type ()))
+	operator_bitwise_or::update_bitmask (r, lh, rh);
+}
+
+  void wi_fold (irange &r, tree type, const wide_int &lh_lb,
+		const wide_int &lh_ub, const wide_int &rh_lb,
+		const wide_int &rh_ub) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_bitwise_or::wi_fold (r, type, lh_lb, lh_ub,
+	  rh_lb, rh_ub);
+  else
+	return op_pointer_or.wi_fold (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
+}
+} op_hybrid_or;
+
+
 
 // Initialize any pointer operators to the primary table
 
@@ -343,4 +390,5 @@ range_op_table::initialize_pointer_ops ()
   set (POINTER_PLUS_EXPR, op_pointer_plus);
   set (POINTER_DIFF_EXPR, op_pointer_diff);
   set (BIT_AND_EXPR, op_hybrid_and);
+  set (BIT_IOR_EXPR, op_hybrid_or);
 }
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index dcb922143ce..0a9a3297de7 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -121,8 +121,8 @@ unified_table::unified_table ()
   // is used until there is a pointer range class.  Then we can simply
   // u

[COMMITTED 15/17] - Provide a default range_operator via range_op_handler.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
This provides range_op_handler with a default range_operator, so you no 
longer need to check if it has a valid handler or not.


The validity check now turns into an "is this something other than the 
default operator" check.  It means you can now simply invoke fold 
without checking; i.e. instead of


range_op_handler handler(CONVERT_EXPR);
if (handler &&  handler.fold_range (..))

we can simply write
if (range_op_handler(CONVERT_EXPR).fold_range ())

The new method range_op() will return a pointer to the custom 
range_operator, or NULL if it's the default.  This allows use of 
range_op_handler() to behave as if you were indexing a range table, if 
that happens to be needed.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 3c4399657d35a0b5bf7caeb88c6ddc0461322d3f Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:59:38 -0400
Subject: [PATCH 15/17] Provide a default range_operator via range_op_handler.

range_op_handler now provides a default range_operator for any opcode,
so there is no longer a need to check for a valid operator.

	* gimple-range-op.cc (gimple_range_op_handler): Set m_operator
	manually as there is no access to the default operator.
	(cfn_copysign::fold_range): Don't check for validity.
	(cfn_ubsan::fold_range): Ditto.
	(gimple_range_op_handler::maybe_builtin_call): Don't set to NULL.
	* range-op.cc (default_operator): New.
	(range_op_handler::range_op_handler): Use default_operator
	instead of NULL.
	(range_op_handler::operator bool): Move from header, compare
	against default operator.
	(range_op_handler::range_op): New.
	* range-op.h (range_op_handler::operator bool): Move.
---
 gcc/gimple-range-op.cc | 28 +---
 gcc/range-op.cc| 32 ++--
 gcc/range-op.h |  3 ++-
 3 files changed, 45 insertions(+), 18 deletions(-)

diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
index 4cbc981ee04..021a9108ecf 100644
--- a/gcc/gimple-range-op.cc
+++ b/gcc/gimple-range-op.cc
@@ -120,21 +120,22 @@ gimple_range_op_handler::supported_p (gimple *s)
 // Construct a handler object for statement S.
 
 gimple_range_op_handler::gimple_range_op_handler (gimple *s)
-  : range_op_handler (get_code (s))
 {
+  range_op_handler oper (get_code (s));
   m_stmt = s;
   m_op1 = NULL_TREE;
   m_op2 = NULL_TREE;
 
-  if (m_operator)
+  if (oper)
 switch (gimple_code (m_stmt))
   {
 	case GIMPLE_COND:
 	  m_op1 = gimple_cond_lhs (m_stmt);
 	  m_op2 = gimple_cond_rhs (m_stmt);
 	  // Check that operands are supported types.  One check is enough.
-	  if (!Value_Range::supports_type_p (TREE_TYPE (m_op1)))
-	m_operator = NULL;
+	  if (Value_Range::supports_type_p (TREE_TYPE (m_op1)))
+	m_operator = oper.range_op ();
+	  gcc_checking_assert (m_operator);
 	  return;
 	case GIMPLE_ASSIGN:
 	  m_op1 = gimple_range_base_of_assignment (m_stmt);
@@ -153,7 +154,9 @@ gimple_range_op_handler::gimple_range_op_handler (gimple *s)
 	m_op2 = gimple_assign_rhs2 (m_stmt);
 	  // Check that operands are supported types.  One check is enough.
 	  if ((m_op1 && !Value_Range::supports_type_p (TREE_TYPE (m_op1
-	m_operator = NULL;
+	return;
+	  m_operator = oper.range_op ();
+	  gcc_checking_assert (m_operator);
 	  return;
 	default:
 	  gcc_unreachable ();
@@ -165,6 +168,7 @@ gimple_range_op_handler::gimple_range_op_handler (gimple *s)
 maybe_builtin_call ();
   else
 maybe_non_standard ();
+  gcc_checking_assert (m_operator);
 }
 
 // Calculate what we can determine of the range of this unary
@@ -364,11 +368,10 @@ public:
 			   const frange , relation_trio) const override
   {
 frange neg;
-range_op_handler abs_op (ABS_EXPR);
-range_op_handler neg_op (NEGATE_EXPR);
-if (!abs_op || !abs_op.fold_range (r, type, lh, frange (type)))
+if (!range_op_handler (ABS_EXPR).fold_range (r, type, lh, frange (type)))
   return false;
-if (!neg_op || !neg_op.fold_range (neg, type, r, frange (type)))
+if (!range_op_handler (NEGATE_EXPR).fold_range (neg, type, r,
+		frange (type)))
   return false;
 
 bool signbit;
@@ -1073,14 +1076,11 @@ public:
   virtual bool fold_range (irange , tree type, const irange ,
 			   const irange , relation_trio rel) const
   {
-range_op_handler handler (m_code);
-gcc_checking_assert (handler);
-
 bool saved_flag_wrapv = flag_wrapv;
 // Pretend the arithmetic is wrapping.  If there is any overflow,
 // we'll complain, but will actually do wrapping operation.
 flag_wrapv = 1;
-bool result = handler.fold_range (r, type, lh, rh, rel);
+bool result = range_op_handler (m_code).fold_range (r, type, lh, rh, rel);
 flag_wrapv = saved_flag_wrapv;
 
 // If for both arguments vrp_valueize returned non-NULL, this should
@@ -1230,8 +1230,6 @@ gimple_range_op_handler::maybe_builtin_call ()
 	m_operator = _cfn_constant_p;
   else if (frange::su

[COMMITTED 9/17] - Add a hybrid BIT_AND_EXPR operator for integer and pointer.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Add a hybrid operator to choose between integer and pointer versions at 
runtime.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 8adb8b2fd5797706e9fbb353d52fda123545431d Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:28:40 -0400
Subject: [PATCH 09/17] Add a hybrid BIT_AND_EXPR operator for integer and
 pointer.

This adds an operator to the unified table for BIT_AND_EXPR which will
select either the pointer or integer version based on the type passed
to the method.  This is for use until we have a separate PRANGE class.
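The dispatch pattern the hybrid operator uses can be sketched in isolation. The following is a minimal, self-contained illustration of the idea (check the type at the call site and delegate to either the inherited integer implementation or a pointer-specific singleton); the class and method names mimic the patch, but the types and semantics are simplified stand-ins, not GCC's real API:

```cpp
#include <cassert>

// Stand-in for GCC's tree type check; the real code uses INTEGRAL_TYPE_P.
enum type_kind { INTEGRAL, POINTER };

// Simplified "integer" operator: plain bitwise AND.
struct operator_bitwise_and
{
  virtual ~operator_bitwise_and () = default;
  virtual int wi_fold (type_kind, int lh, int rh) const { return lh & rh; }
};

// Simplified "pointer" operator with different semantics (here: the
// result is 0 if either operand is 0, loosely echoing null handling).
struct pointer_and_operator
{
  int wi_fold (type_kind, int lh, int rh) const
  { return (lh && rh) ? (lh & rh) : 0; }
} op_pointer_and;

// Hybrid: inherit the integer version, check the type at runtime, and
// forward to the appropriate implementation, as hybrid_and_operator does.
struct hybrid_and_operator : operator_bitwise_and
{
  int wi_fold (type_kind t, int lh, int rh) const override
  {
    if (t == INTEGRAL)
      return operator_bitwise_and::wi_fold (t, lh, rh);
    return op_pointer_and.wi_fold (t, lh, rh);
  }
} op_hybrid_and;
```

A single table entry can then point at `op_hybrid_and` and serve both type classes until a dedicated PRANGE class exists.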

	* range-op-mixed.h (operator_bitwise_and): Remove final.
	* range-op-ptr.cc (pointer_table::pointer_table): Remove BIT_AND_EXPR.
	(class hybrid_and_operator): New.
	(range_op_table::initialize_pointer_ops): Add hybrid_and_operator.
	* range-op.cc (unified_table::unified_table): Comment out BIT_AND_EXPR.
---
 gcc/range-op-mixed.h | 12 -
 gcc/range-op-ptr.cc  | 62 +++-
 gcc/range-op.cc  |  9 ---
 3 files changed, 73 insertions(+), 10 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index b188f5a516e..4177818e4b9 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -584,19 +584,19 @@ public:
   using range_operator::lhs_op1_relation;
   bool op1_range (irange , tree type,
 		  const irange , const irange ,
-		  relation_trio rel = TRIO_VARYING) const final override;
+		  relation_trio rel = TRIO_VARYING) const override;
   bool op2_range (irange , tree type,
 		  const irange , const irange ,
-		  relation_trio rel = TRIO_VARYING) const final override;
+		  relation_trio rel = TRIO_VARYING) const override;
   relation_kind lhs_op1_relation (const irange ,
   const irange , const irange ,
-  relation_kind) const final override;
+  relation_kind) const override;
   void update_bitmask (irange , const irange ,
-		   const irange ) const final override;
-private:
+		   const irange ) const override;
+protected:
   void wi_fold (irange , tree type, const wide_int _lb,
 		const wide_int _ub, const wide_int _lb,
-		const wide_int _ub) const final override;
+		const wide_int _ub) const override;
   void simple_op1_range_solver (irange , tree type,
 const irange ,
 const irange ) const;
diff --git a/gcc/range-op-ptr.cc b/gcc/range-op-ptr.cc
index 55c37cc8c86..941026994ed 100644
--- a/gcc/range-op-ptr.cc
+++ b/gcc/range-op-ptr.cc
@@ -270,12 +270,71 @@ operator_pointer_diff::op1_op2_relation_effect (irange _range, tree type,
 
 pointer_table::pointer_table ()
 {
-  set (BIT_AND_EXPR, op_pointer_and);
   set (BIT_IOR_EXPR, op_pointer_or);
   set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
 }
 
+// --
+// Hybrid operators for the 4 operations which integer and pointers share,
+// but which have different implementations.  Simply check the type in
+// the call and choose the appropriate method.
+// Once there is a PRANGE signature, simply add the appropriate
+// prototypes in the rmixed range class, and remove these hybrid classes.
+
+class hybrid_and_operator : public operator_bitwise_and
+{
+public:
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::lhs_op1_relation;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_bitwise_and::op1_range (r, type, lhs, op2, rel);
+  else
+	return false;
+}
+  bool op2_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_bitwise_and::op2_range (r, type, lhs, op1, rel);
+  else
+	return false;
+}
+  relation_kind lhs_op1_relation (const irange ,
+  const irange , const irange ,
+  relation_kind rel) const final override
+{
+  if (!lhs.undefined_p () && INTEGRAL_TYPE_P (lhs.type ()))
+	return operator_bitwise_and::lhs_op1_relation (lhs, op1, op2, rel);
+  else
+	return VREL_VARYING;
+}
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override
+{
+  if (!r.undefined_p () && INTEGRAL_TYPE_P (r.type ()))
+	operator_bitwise_and::update_bitmask (r, lh, rh);
+}
+
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_bitwise_and::wi_fold (r, type, lh_lb, lh_ub,
+	  rh_lb, rh_ub);
+  else
+	return op_pointer_and.wi_fold (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
+}
+} op_hybrid_and;
+
+
 // Initialize any pointer operators to the primary table
 
 void
@@ -283,4 +342,5 @@ range_op_table::initialize_pointer_ops ()
 {
   set (POINTER_PLUS_EXPR,

[COMMITTED 16/17] - Provide interface for non-standard operators.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
This patch removes the hack introduced late last year for the 
non-standard range-op support.


Instead of adding a pointer to a range_operator in the header file,
and then setting the operator from another file via that pointer, the
table itself is extended and we provide new #defines to declare new
operators.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 6d3b6847bcb36221185a6259d19d743f4cfe1b5a Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 17:06:36 -0400
Subject: [PATCH 16/17] Provide interface for non-standard operators.

This removes the hack introduced for WIDEN_MULT, which exported a pointer
to the operator; gimple-range-op.cc then set the operator to this
pointer when it was appropriate.

Instead, we simply change the range-op table to be unsigned indexed,
and add new opcodes to the end of the table, allowing them to be indexed
directly via range_op_handler::range_op.
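The "unsigned-indexed table with extra opcodes past the last tree code" scheme can be shown with a small self-contained sketch. The constants and names below mirror the patch (OP_WIDEN_MULT_SIGNED, RANGE_OP_TABLE_SIZE, m_range_tree) but everything else is a simplified stand-in, assuming a toy operator hierarchy rather than GCC's:

```cpp
#include <cassert>

// Pretend the standard tree codes occupy 0..3; non-standard opcodes are
// appended immediately after, and the table is sized to hold them all.
enum { MAX_TREE_CODES = 4 };
enum { OP_WIDEN_MULT_SIGNED = MAX_TREE_CODES,
       OP_WIDEN_MULT_UNSIGNED,
       RANGE_OP_TABLE_SIZE };

struct range_operator
{
  virtual ~range_operator () = default;
  virtual int fold (int a, int b) const = 0;
};

// Toy widening multiply; the real operator works on wide_ints.
struct widen_mult_signed : range_operator
{
  int fold (int a, int b) const override { return a * b; }
} op_widen_mult_signed;

// Unsigned-indexed table: both tree codes and the new opcodes index it.
struct range_op_table
{
  range_operator *m_range_tree[RANGE_OP_TABLE_SIZE] = {};
  void set (unsigned code, range_operator &op)
  {
    assert (code < RANGE_OP_TABLE_SIZE && !m_range_tree[code]);
    m_range_tree[code] = &op;
  }
  range_operator *operator[] (unsigned code)
  {
    assert (code < RANGE_OP_TABLE_SIZE);
    return m_range_tree[code];
  }
};
```

With this layout, `range_op_handler (OP_WIDEN_MULT_SIGNED)` is an ordinary table lookup, and the exported operator pointers become unnecessary.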

	* gimple-range-op.cc (gimple_range_op_handler::maybe_non_standard):
	Use range_op_handler directly.
	* range-op.cc (range_op_handler::range_op_handler): Unsigned
	param instead of tree-code.
	(ptr_op_widen_plus_signed): Delete.
	(ptr_op_widen_plus_unsigned): Delete.
	(ptr_op_widen_mult_signed): Delete.
	(ptr_op_widen_mult_unsigned): Delete.
	(range_op_table::initialize_integral_ops): Add new opcodes.
	* range-op.h (range_op_handler): Use unsigned.
	(OP_WIDEN_MULT_SIGNED): New.
	(OP_WIDEN_MULT_UNSIGNED): New.
	(OP_WIDEN_PLUS_SIGNED): New.
	(OP_WIDEN_PLUS_UNSIGNED): New.
	(RANGE_OP_TABLE_SIZE): New.
	(range_op_table::operator []): Use unsigned.
	(range_op_table::set): Use unsigned.
	(m_range_tree): Make unsigned.
	(ptr_op_widen_mult_signed): Remove.
	(ptr_op_widen_mult_unsigned): Remove.
	(ptr_op_widen_plus_signed): Remove.
	(ptr_op_widen_plus_unsigned): Remove.
---
 gcc/gimple-range-op.cc | 11 +++
 gcc/range-op.cc| 11 ++-
 gcc/range-op.h | 26 --
 3 files changed, 29 insertions(+), 19 deletions(-)

diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
index 021a9108ecf..72c7b866f90 100644
--- a/gcc/gimple-range-op.cc
+++ b/gcc/gimple-range-op.cc
@@ -1168,8 +1168,11 @@ public:
 void
 gimple_range_op_handler::maybe_non_standard ()
 {
-  range_operator *signed_op = ptr_op_widen_mult_signed;
-  range_operator *unsigned_op = ptr_op_widen_mult_unsigned;
+  range_op_handler signed_op (OP_WIDEN_MULT_SIGNED);
+  gcc_checking_assert (signed_op);
+  range_op_handler unsigned_op (OP_WIDEN_MULT_UNSIGNED);
+  gcc_checking_assert (unsigned_op);
+
   if (gimple_code (m_stmt) == GIMPLE_ASSIGN)
 switch (gimple_assign_rhs_code (m_stmt))
   {
@@ -1195,9 +1198,9 @@ gimple_range_op_handler::maybe_non_standard ()
 	std::swap (m_op1, m_op2);
 
 	  if (signed1 || signed2)
-	m_operator = signed_op;
+	m_operator = signed_op.range_op ();
 	  else
-	m_operator = unsigned_op;
+	m_operator = unsigned_op.range_op ();
 	  break;
 	}
 	default:
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index a271e00fa07..8a661fdb042 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -135,7 +135,7 @@ range_op_handler::range_op_handler ()
 // Create a range_op_handler for CODE.  Use a default operatoer if CODE
 // does not have an entry.
 
-range_op_handler::range_op_handler (tree_code code)
+range_op_handler::range_op_handler (unsigned code)
 {
   m_operator = operator_table[code];
   if (!m_operator)
@@ -1726,7 +1726,6 @@ public:
 			const wide_int _lb,
 			const wide_int _ub) const;
 } op_widen_plus_signed;
-range_operator *ptr_op_widen_plus_signed = _widen_plus_signed;
 
 void
 operator_widen_plus_signed::wi_fold (irange , tree type,
@@ -1760,7 +1759,6 @@ public:
 			const wide_int _lb,
 			const wide_int _ub) const;
 } op_widen_plus_unsigned;
-range_operator *ptr_op_widen_plus_unsigned = _widen_plus_unsigned;
 
 void
 operator_widen_plus_unsigned::wi_fold (irange , tree type,
@@ -2184,7 +2182,6 @@ public:
 			const wide_int _ub)
 const;
 } op_widen_mult_signed;
-range_operator *ptr_op_widen_mult_signed = _widen_mult_signed;
 
 void
 operator_widen_mult_signed::wi_fold (irange , tree type,
@@ -2217,7 +2214,6 @@ public:
 			const wide_int _ub)
 const;
 } op_widen_mult_unsigned;
-range_operator *ptr_op_widen_mult_unsigned = _widen_mult_unsigned;
 
 void
 operator_widen_mult_unsigned::wi_fold (irange , tree type,
@@ -4298,6 +4294,11 @@ range_op_table::initialize_integral_ops ()
   set (IMAGPART_EXPR, op_unknown);
   set (REALPART_EXPR, op_unknown);
   set (ABSU_EXPR, op_absu);
+  set (OP_WIDEN_MULT_SIGNED, op_widen_mult_signed);
+  set (OP_WIDEN_MULT_UNSIGNED, op_widen_mult_unsigned);
+  set (OP_WIDEN_PLUS_SIGNED, op_widen_plus_signed);
+  set (OP_WIDEN_PLUS_UNSIGNED, op_widen_plus_unsigned);
+
 }
 
 #if CHECKING_P
diff --git a/gcc/range-op.h b/gcc/range-op.h
index 8243258eea5..3602bc4e123 100644
--- a/gcc/range-op.h
+++ b/gcc/range-op.h
@@ -185,7 +185,7 @@ class range_op_handler
 {
 public:
   range_op_handler

[COMMITTED 14/17] - Switch from unified table to range_op_table. There can be only one.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Now that the unified table is the only one, remove it and simply use
range_op_table as the class instead of inheriting from it.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 5bb9d2acd1987f788a52a2be9bca10c47033020a Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:56:06 -0400
Subject: [PATCH 14/17] Switch from unified table to range_op_table.  There can
 be only one.

Now that there is only a single range_op_table, make the base table the
only table.

	* range-op.cc (unified_table): Delete.
	(range_op_table operator_table): Instantiate.
	(range_op_table::range_op_table): Rename from unified_table.
	(range_op_handler::range_op_handler): Use range_op_table.
	* range-op.h (range_op_table::operator []): Inline.
	(range_op_table::set): Inline.
---
 gcc/range-op.cc | 14 +-
 gcc/range-op.h  | 33 +++--
 2 files changed, 16 insertions(+), 31 deletions(-)

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 3e8b1222b1c..382f5d50ffa 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -49,13 +49,6 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-ssa-ccp.h"
 #include "range-op-mixed.h"
 
-// Instantiate a range_op_table for unified operations.
-class unified_table : public range_op_table
-{
-  public:
-unified_table ();
-} unified_tree_table;
-
 // Instantiate the operators which apply to multiple types here.
 
 operator_equal op_equal;
@@ -80,9 +73,12 @@ operator_bitwise_or op_bitwise_or;
 operator_min op_min;
 operator_max op_max;
 
+// Instantaite a range operator table.
+range_op_table operator_table;
+
 // Invoke the initialization routines for each class of range.
 
-unified_table::unified_table ()
+range_op_table::range_op_table ()
 {
   initialize_integral_ops ();
   initialize_pointer_ops ();
@@ -134,7 +130,7 @@ range_op_handler::range_op_handler ()
 
 range_op_handler::range_op_handler (tree_code code)
 {
-  m_operator = unified_tree_table[code];
+  m_operator = operator_table[code];
 }
 
 // Create a dispatch pattern for value range discriminators LHS, OP1, and OP2.
diff --git a/gcc/range-op.h b/gcc/range-op.h
index 295e5116dd1..328910d0ec5 100644
--- a/gcc/range-op.h
+++ b/gcc/range-op.h
@@ -266,35 +266,24 @@ extern void wi_set_zero_nonzero_bits (tree type,
 class range_op_table
 {
 public:
-  range_operator *operator[] (enum tree_code code);
-  void set (enum tree_code code, range_operator );
+  range_op_table ();
+  inline range_operator *operator[] (enum tree_code code)
+{
+  gcc_checking_assert (code >= 0 && code < MAX_TREE_CODES);
+  return m_range_tree[code];
+}
 protected:
+  inline void set (enum tree_code code, range_operator )
+{
+  gcc_checking_assert (m_range_tree[code] == NULL);
+  m_range_tree[code] = 
+}
   range_operator *m_range_tree[MAX_TREE_CODES];
   void initialize_integral_ops ();
   void initialize_pointer_ops ();
   void initialize_float_ops ();
 };
 
-
-// Return a pointer to the range_operator instance, if there is one
-// associated with tree_code CODE.
-
-inline range_operator *
-range_op_table::operator[] (enum tree_code code)
-{
-  gcc_checking_assert (code >= 0 && code < MAX_TREE_CODES);
-  return m_range_tree[code];
-}
-
-// Add OP to the handler table for CODE.
-
-inline void
-range_op_table::set (enum tree_code code, range_operator )
-{
-  gcc_checking_assert (m_range_tree[code] == NULL);
-  m_range_tree[code] = 
-}
-
 extern range_operator *ptr_op_widen_mult_signed;
 extern range_operator *ptr_op_widen_mult_unsigned;
 extern range_operator *ptr_op_widen_plus_signed;
-- 
2.40.1



[COMMITTED 6/17] - Move operator_min to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 508645fd461ceb8b743837e24411df2e17bd3950 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:09:58 -0400
Subject: [PATCH 06/17] Move operator_min to the unified range-op table.

	* range-op-mixed.h (class operator_min): Move from...
	* range-op.cc (unified_table::unified_table): Add MIN_EXPR.
	(class operator_min): Move from here.
	(integral_table::integral_table): Remove MIN_EXPR.
---
 gcc/range-op-mixed.h | 11 +++
 gcc/range-op.cc  | 18 +++---
 2 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 8a11d61220c..7bd9b5e1129 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -596,4 +596,15 @@ private:
 		const wide_int _ub) const final override;
 };
 
+class operator_min : public range_operator
+{
+public:
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override;
+private:
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 07e0c88e209..a777fb0d8a3 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -80,6 +80,7 @@ operator_bitwise_not op_bitwise_not;
 operator_bitwise_xor op_bitwise_xor;
 operator_bitwise_and op_bitwise_and;
 operator_bitwise_or op_bitwise_or;
+operator_min op_min;
 
 // Invoke the initialization routines for each class of range.
 
@@ -119,6 +120,7 @@ unified_table::unified_table ()
   // speifc version is provided.
   set (BIT_AND_EXPR, op_bitwise_and);
   set (BIT_IOR_EXPR, op_bitwise_or);
+  set (MIN_EXPR, op_min);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -1980,17 +1982,12 @@ operator_pointer_diff::op1_op2_relation_effect (irange _range, tree type,
 }
 
 
-class operator_min : public range_operator
+void
+operator_min::update_bitmask (irange , const irange ,
+			  const irange ) const
 {
-public:
-  virtual void wi_fold (irange , tree type,
-		const wide_int _lb,
-		const wide_int _ub,
-		const wide_int _lb,
-		const wide_int _ub) const;
-  void update_bitmask (irange , const irange , const irange ) const
-{ update_known_bitmask (r, MIN_EXPR, lh, rh); }
-} op_min;
+  update_known_bitmask (r, MIN_EXPR, lh, rh);
+}
 
 void
 operator_min::wi_fold (irange , tree type,
@@ -4534,7 +4531,6 @@ pointer_or_operator::wi_fold (irange , tree type,
 
 integral_table::integral_table ()
 {
-  set (MIN_EXPR, op_min);
   set (MAX_EXPR, op_max);
 }
 
-- 
2.40.1



[COMMITTED 13/17] - Remove type from range_op_handler table selection

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Lucky 13.  With the unified table complete, it is no longer necessary to 
specify a type when constructing a range_op_handler.  This patch removes 
that requirement.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 8934830333933349d41e62f9fd6a3d21ab71150c Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:41:20 -0400
Subject: [PATCH 13/17] Remove type from range_op_handler table selection

With the unified table complete, we no longer need to specify a type
to choose a table when setting a range_op_handler.
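The before/after at the call sites is essentially "drop the type argument and keep the boolean validity test". A self-contained sketch of the code-only constructor, with a toy table standing in for the real one (names mirror the patch; everything else is illustrative):

```cpp
#include <cassert>

struct range_operator
{
  virtual ~range_operator () = default;
};

// A toy operator table: slot 1 has a handler, the rest do not.
static range_operator op_plus;
static range_operator *operator_table[4] = { nullptr, &op_plus,
                                             nullptr, nullptr };

// After the patch: construction takes only the code; no type is needed
// to pick a table, since there is exactly one.
struct range_op_handler
{
  explicit range_op_handler (unsigned code)
    : m_operator (operator_table[code]) {}

  // Callers still test validity, e.g. "if (!hand) return false;".
  explicit operator bool () const { return m_operator != nullptr; }

  range_operator *m_operator;
};
```

This is why the patch can mechanically strip the type argument from every caller (gori, ipa-cp, tree-data-ref, ...): the constructor's behavior no longer depends on it.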

	* gimple-range-gori.cc (gori_compute::condexpr_adjust): Do not
	pass type.
	* gimple-range-op.cc (get_code): Rename from get_code_and_type
	and simplify.
	(gimple_range_op_handler::supported_p): No need for type.
	(gimple_range_op_handler::gimple_range_op_handler): Ditto.
	(cfn_copysign::fold_range): Ditto.
	(cfn_ubsan::fold_range): Ditto.
	* ipa-cp.cc (ipa_vr_operation_and_type_effects): Ditto.
	* ipa-fnsummary.cc (evaluate_conditions_for_known_args): Ditto.
	* range-op-float.cc (operator_plus::op1_range): Ditto.
	(operator_mult::op1_range): Ditto.
	(range_op_float_tests): Ditto.
	* range-op.cc (get_op_handler): Remove.
	(range_op_handler::set_op_handler): Remove.
	(operator_plus::op1_range): No need for type.
	(operator_minus::op1_range): Ditto.
	(operator_mult::op1_range): Ditto.
	(operator_exact_divide::op1_range): Ditto.
	(operator_cast::op1_range): Ditto.
	(operator_bitwise_not::fold_range): Ditto.
	(operator_negate::fold_range): Ditto.
	* range-op.h (range_op_handler::range_op_handler): Remove type param.
	(range_cast): No need for type.
	(range_op_table::operator[]): Check for enum_code >= 0.
	* tree-data-ref.cc (compute_distributive_range): No need for type.
	* tree-ssa-loop-unswitch.cc (unswitch_predicate): Ditto.
	* value-query.cc (range_query::get_tree_range): Ditto.
	* value-relation.cc (relation_oracle::validate_relation): Ditto.
	* vr-values.cc (range_of_var_in_loop): Ditto.
	(simplify_using_ranges::fold_cond_with_ops): Ditto.
---
 gcc/gimple-range-gori.cc  |  2 +-
 gcc/gimple-range-op.cc| 42 ++-
 gcc/ipa-cp.cc |  6 ++---
 gcc/ipa-fnsummary.cc  |  6 ++---
 gcc/range-op-float.cc |  6 ++---
 gcc/range-op.cc   | 39 
 gcc/range-op.h| 10 +++--
 gcc/tree-data-ref.cc  |  4 ++--
 gcc/tree-ssa-loop-unswitch.cc |  2 +-
 gcc/value-query.cc|  5 ++---
 gcc/value-relation.cc |  2 +-
 gcc/vr-values.cc  |  6 ++---
 12 files changed, 43 insertions(+), 87 deletions(-)

diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index a1c8d51e484..abc70cd54ee 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -1478,7 +1478,7 @@ gori_compute::condexpr_adjust (vrange , vrange , gimple *, tree cond,
   tree type = TREE_TYPE (gimple_assign_rhs1 (cond_def));
   if (!range_compatible_p (type, TREE_TYPE (gimple_assign_rhs2 (cond_def
 return false;
-  range_op_handler hand (gimple_assign_rhs_code (cond_def), type);
+  range_op_handler hand (gimple_assign_rhs_code (cond_def));
   if (!hand)
 return false;
 
diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
index b6b10e47b78..4cbc981ee04 100644
--- a/gcc/gimple-range-op.cc
+++ b/gcc/gimple-range-op.cc
@@ -94,28 +94,14 @@ gimple_range_base_of_assignment (const gimple *stmt)
 
 // If statement is supported by range-ops, set the CODE and return the TYPE.
 
-static tree
-get_code_and_type (gimple *s, enum tree_code )
+static inline enum tree_code
+get_code (gimple *s)
 {
-  tree type = NULL_TREE;
-  code = NOP_EXPR;
-
   if (const gassign *ass = dyn_cast (s))
-{
-  code = gimple_assign_rhs_code (ass);
-  // The LHS of a comparison is always an int, so we must look at
-  // the operands.
-  if (TREE_CODE_CLASS (code) == tcc_comparison)
-	type = TREE_TYPE (gimple_assign_rhs1 (ass));
-  else
-	type = TREE_TYPE (gimple_assign_lhs (ass));
-}
-  else if (const gcond *cond = dyn_cast (s))
-{
-  code = gimple_cond_code (cond);
-  type = TREE_TYPE (gimple_cond_lhs (cond));
-}
-  return type;
+return gimple_assign_rhs_code (ass);
+  if (const gcond *cond = dyn_cast (s))
+return gimple_cond_code (cond);
+  return ERROR_MARK;
 }
 
 // If statement S has a supported range_op handler return TRUE.
@@ -123,9 +109,8 @@ get_code_and_type (gimple *s, enum tree_code )
 bool
 gimple_range_op_handler::supported_p (gimple *s)
 {
-  enum tree_code code;
-  tree type = get_code_and_type (s, code);
-  if (type && range_op_handler (code, type))
+  enum tree_code code = get_code (s);
+  if (range_op_handler (code))
 return true;
   if (is_a  (s) && gimple_range_op_handler (s))
 return true;
@@ -135,14 +120,11 @@ gimple_range_op_handler::supported_p (gimple *s)
 // Construct a handler object for statement S.
 
 gimple_range_op_handler::gimp

[COMMITTED 8/17] - Split pointer based range operators to range-op-ptr.cc

2023-06-12 Thread Andrew MacLeod via Gcc-patches
This patch moves all the pointer-specific code into a new file, 
range-op-ptr.cc.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From cb511d2209fa3a05801983a6965656734c1592c6 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:17:51 -0400
Subject: [PATCH 08/17] Split pointer based range operators to range-op-ptr.cc

Move the pointer table and all pointer-specific operators into a
new file for pointers.

	* Makefile.in (OBJS): Add range-op-ptr.o.
	* range-op-mixed.h (update_known_bitmask): Move prototype here.
	(minus_op1_op2_relation_effect): Move prototype here.
	(wi_includes_zero_p): Move function to here.
	(wi_zero_p): Ditto.
	* range-op.cc (update_known_bitmask): Remove static.
	(wi_includes_zero_p): Move to header.
	(wi_zero_p): Move to header.
	(minus_op1_op2_relation_effect): Remove static.
	(operator_pointer_diff): Move class and routines to range-op-ptr.cc.
	(pointer_plus_operator): Ditto.
	(pointer_min_max_operator): Ditto.
	(pointer_and_operator): Ditto.
	(pointer_or_operator): Ditto.
	(pointer_table): Ditto.
	(range_op_table::initialize_pointer_ops): Ditto.
	* range-op-ptr.cc: New.
---
 gcc/Makefile.in  |   1 +
 gcc/range-op-mixed.h |  25 
 gcc/range-op-ptr.cc  | 286 +++
 gcc/range-op.cc  | 258 +-
 4 files changed, 314 insertions(+), 256 deletions(-)
 create mode 100644 gcc/range-op-ptr.cc

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 0c02f312985..4be82e83b9e 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1588,6 +1588,7 @@ OBJS = \
 	range.o \
 	range-op.o \
 	range-op-float.o \
+	range-op-ptr.o \
 	read-md.o \
 	read-rtl.o \
 	read-rtl-function.o \
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index cd137acd0e6..b188f5a516e 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -22,6 +22,31 @@ along with GCC; see the file COPYING3.  If not see
 #ifndef GCC_RANGE_OP_MIXED_H
 #define GCC_RANGE_OP_MIXED_H
 
+void update_known_bitmask (irange &, tree_code, const irange &, const irange &);
+bool minus_op1_op2_relation_effect (irange _range, tree type,
+const irange &, const irange &,
+relation_kind rel);
+
+
+// Return TRUE if 0 is within [WMIN, WMAX].
+
+inline bool
+wi_includes_zero_p (tree type, const wide_int , const wide_int )
+{
+  signop sign = TYPE_SIGN (type);
+  return wi::le_p (wmin, 0, sign) && wi::ge_p (wmax, 0, sign);
+}
+
+// Return TRUE if [WMIN, WMAX] is the singleton 0.
+
+inline bool
+wi_zero_p (tree type, const wide_int , const wide_int )
+{
+  unsigned prec = TYPE_PRECISION (type);
+  return wmin == wmax && wi::eq_p (wmin, wi::zero (prec));
+}
+
+
 enum bool_range_state { BRS_FALSE, BRS_TRUE, BRS_EMPTY, BRS_FULL };
 bool_range_state get_bool_state (vrange , const vrange , tree val_type);
 
diff --git a/gcc/range-op-ptr.cc b/gcc/range-op-ptr.cc
new file mode 100644
index 000..55c37cc8c86
--- /dev/null
+++ b/gcc/range-op-ptr.cc
@@ -0,0 +1,286 @@
+/* Code for range operators.
+   Copyright (C) 2017-2023 Free Software Foundation, Inc.
+   Contributed by Andrew MacLeod 
+   and Aldy Hernandez .
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "insn-codes.h"
+#include "rtl.h"
+#include "tree.h"
+#include "gimple.h"
+#include "cfghooks.h"
+#include "tree-pass.h"
+#include "ssa.h"
+#include "optabs-tree.h"
+#include "gimple-pretty-print.h"
+#include "diagnostic-core.h"
+#include "flags.h"
+#include "fold-const.h"
+#include "stor-layout.h"
+#include "calls.h"
+#include "cfganal.h"
+#include "gimple-iterator.h"
+#include "gimple-fold.h"
+#include "tree-eh.h"
+#include "gimple-walk.h"
+#include "tree-cfg.h"
+#include "wide-int.h"
+#include "value-relation.h"
+#include "range-op.h"
+#include "tree-ssa-ccp.h"
+#include "range-op-mixed.h"
+
+class pointer_plus_operator : public range_operator
+{
+  using range_operator::op2_range;
+public:
+  virtual void wi_fold (irange , tree type,
+	

[COMMITTED 5/17] - Move operator_bitwise_or to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From a71ee5c2d48691280f76a90e2838d968f45de0c8 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:05:33 -0400
Subject: [PATCH 05/17] Move operator_bitwise_or to the unified range-op table.

	* range-op-mixed.h (class operator_bitwise_or): Move from...
	* range-op.cc (unified_table::unified_table): Add BIT_IOR_EXPR.
	(class operator_bitwise_or): Move from here.
	(integral_table::integral_table): Remove BIT_IOR_EXPR.
---
 gcc/range-op-mixed.h | 19 +++
 gcc/range-op.cc  | 28 +++-
 2 files changed, 26 insertions(+), 21 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index b3d51f8a54e..8a11d61220c 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -577,4 +577,23 @@ private:
 const irange ) const;
 };
 
+class operator_bitwise_or : public range_operator
+{
+public:
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  bool op2_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override;
+private:
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 57bd95a1151..07e0c88e209 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -79,6 +79,7 @@ operator_addr_expr op_addr;
 operator_bitwise_not op_bitwise_not;
 operator_bitwise_xor op_bitwise_xor;
 operator_bitwise_and op_bitwise_and;
+operator_bitwise_or op_bitwise_or;
 
 // Invoke the initialization routines for each class of range.
 
@@ -117,6 +118,7 @@ unified_table::unified_table ()
   // implementation.  These also remain in the pointer table until a pointer
   // speifc version is provided.
   set (BIT_AND_EXPR, op_bitwise_and);
+  set (BIT_IOR_EXPR, op_bitwise_or);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -3608,27 +3610,12 @@ operator_logical_or::op2_range (irange , tree type,
 }
 
 
-class operator_bitwise_or : public range_operator
+void
+operator_bitwise_or::update_bitmask (irange , const irange ,
+ const irange ) const
 {
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-public:
-  virtual bool op1_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual bool op2_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual void wi_fold (irange , tree type,
-		const wide_int _lb,
-		const wide_int _ub,
-		const wide_int _lb,
-		const wide_int _ub) const;
-  void update_bitmask (irange , const irange , const irange ) const
-{ update_known_bitmask (r, BIT_IOR_EXPR, lh, rh); }
-} op_bitwise_or;
+  update_known_bitmask (r, BIT_IOR_EXPR, lh, rh);
+}
 
 void
 operator_bitwise_or::wi_fold (irange , tree type,
@@ -4549,7 +4536,6 @@ integral_table::integral_table ()
 {
   set (MIN_EXPR, op_min);
   set (MAX_EXPR, op_max);
-  set (BIT_IOR_EXPR, op_bitwise_or);
 }
 
 // Initialize any integral operators to the primary table
-- 
2.40.1



[COMMITTED 4/17] - Move operator_bitwise_and to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From f2166fc81194a3e4e9ef185a7404551b410bb752 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:02:09 -0400
Subject: [PATCH 04/17] Move operator_bitwise_and to the unified range-op
 table.

At this point, the remaining 4 integral operations have different
implementations than pointers, so we now check for a pointer table
entry first and, if there is nothing, look at the unified table.
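The new lookup order in get_op_handler can be sketched concisely. In this self-contained illustration, string constants stand in for range_operator pointers and a bool stands in for POINTER_TYPE_P; only the precedence logic (pointer table, then unified, then integral) mirrors the patch:

```cpp
#include <cassert>

enum { N_CODES = 4 };

// Toy tables: slot 0 (think BIT_AND_EXPR) has both a pointer-specific
// and a unified entry; slot 2 (think MIN_EXPR) is still integral-only.
const char *pointer_table[N_CODES]  = { "ptr_and", nullptr, nullptr, nullptr };
const char *unified_table[N_CODES]  = { "and", "xor", nullptr, nullptr };
const char *integral_table[N_CODES] = { nullptr, nullptr, "min", nullptr };

const char *get_op_handler (unsigned code, bool is_pointer)
{
  // A pointer-specific entry wins for pointer types...
  if (is_pointer && pointer_table[code])
    return pointer_table[code];
  // ...otherwise the unified table is authoritative...
  if (unified_table[code])
    return unified_table[code];
  // ...and anything not yet unified falls back to the integral table.
  return integral_table[code];
}
```

So pointer types keep their specialized BIT_AND_EXPR behavior while integer types pick up the unified entry, which is exactly what makes it safe to move operator_bitwise_and before a pointer-specific replacement exists.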

	* range-op-mixed.h (class operator_bitwise_and): Move from...
	* range-op.cc (unified_table::unified_table): Add BIT_AND_EXPR.
	(get_op_handler): Check for a pointer table entry first.
	(class operator_bitwise_and): Move from here.
	(integral_table::integral_table): Remove BIT_AND_EXPR.
---
 gcc/range-op-mixed.h | 27 
 gcc/range-op.cc  | 49 ++--
 2 files changed, 42 insertions(+), 34 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 644473053e0..b3d51f8a54e 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -550,4 +550,31 @@ private:
 		const wide_int _ub, const wide_int _lb,
 		const wide_int _ub) const final override;
 };
+
+class operator_bitwise_and : public range_operator
+{
+public:
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::lhs_op1_relation;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  bool op2_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  relation_kind lhs_op1_relation (const irange ,
+  const irange , const irange ,
+  relation_kind) const final override;
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override;
+private:
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override;
+  void simple_op1_range_solver (irange , tree type,
+const irange ,
+const irange ) const;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 11f576c55c5..57bd95a1151 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -78,6 +78,7 @@ operator_mult op_mult;
 operator_addr_expr op_addr;
 operator_bitwise_not op_bitwise_not;
 operator_bitwise_xor op_bitwise_xor;
+operator_bitwise_and op_bitwise_and;
 
 // Invoke the initialization routines for each class of range.
 
@@ -111,6 +112,11 @@ unified_table::unified_table ()
   set (ADDR_EXPR, op_addr);
   set (BIT_NOT_EXPR, op_bitwise_not);
   set (BIT_XOR_EXPR, op_bitwise_xor);
+
+  // These are in both integer and pointer tables, but pointer has a different
+  // implementation.  These also remain in the pointer table until a pointer
+  // specific version is provided.
+  set (BIT_AND_EXPR, op_bitwise_and);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -118,16 +124,17 @@ unified_table::unified_table ()
 range_operator *
 get_op_handler (enum tree_code code, tree type)
 {
+  // If this is a pointer type and there is a pointer specific routine, use it.
+  if (POINTER_TYPE_P (type) && pointer_tree_table[code])
+return pointer_tree_table[code];
+
   if (unified_tree_table[code])
 {
   // Should not be in any other table if it is in the unified table.
-  gcc_checking_assert (!pointer_tree_table[code]);
   gcc_checking_assert (!integral_tree_table[code]);
   return unified_tree_table[code];
 }
 
-  if (POINTER_TYPE_P (type))
-return pointer_tree_table[code];
   if (INTEGRAL_TYPE_P (type))
 return integral_tree_table[code];
   return NULL;
@@ -3121,37 +3128,12 @@ operator_logical_and::op2_range (irange , tree type,
 }
 
 
-class operator_bitwise_and : public range_operator
+void
+operator_bitwise_and::update_bitmask (irange , const irange ,
+  const irange ) const
 {
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-  using range_operator::lhs_op1_relation;
-public:
-  virtual bool op1_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual bool op2_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual void wi_fold (irange , tree type,
-		const wide_int _lb,
-		const wide_int _ub,
-		const wide_int _lb,
-		const wide_int _ub) const;
-  virtual relation_kind lhs_op1_relation (const irange ,
-	  const irange ,
-	  const irange ,
-	  relation_kind) const;
-  void update_bitmask (irange , const irange , const irange ) const
-{ update_known_bitmask (r, BIT_AND_EXPR, lh, rh); }
-private:
-  void simple_op1_range_solver (irange , tree type,
-const irange ,
-const irange ) const;
-} op_bit

[COMMITTED 11/17] - Add a hybrid MIN_EXPR operator for integer and pointer.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
Add a hybrid operator to choose between integer and pointer versions at 
runtime.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 08f2e419b1e29f114857b3d817904abf3b4891be Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:34:26 -0400
Subject: [PATCH 11/17] Add a hybrid MIN_EXPR operator for integer and pointer.

This adds an operator to the unified table for MIN_EXPR which will
select either the pointer or integer version based on the type passed
to the method.   This is for use until we have a separate PRANGE class.

	* range-op-mixed.h (operator_min): Remove final.
	* range-op-ptr.cc (pointer_table::pointer_table): Remove MIN_EXPR.
	(class hybrid_min_operator): New.
	(range_op_table::initialize_pointer_ops): Add hybrid_min_operator.
	* range-op.cc (unified_table::unified_table): Comment out MIN_EXPR.
---
 gcc/range-op-mixed.h |  6 +++---
 gcc/range-op-ptr.cc  | 28 +++-
 gcc/range-op.cc  |  2 +-
 3 files changed, 31 insertions(+), 5 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index e4852e974c4..a65935435c2 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -625,11 +625,11 @@ class operator_min : public range_operator
 {
 public:
   void update_bitmask (irange , const irange ,
-		   const irange ) const final override;
-private:
+		   const irange ) const override;
+protected:
   void wi_fold (irange , tree type, const wide_int _lb,
 		const wide_int _ub, const wide_int _lb,
-		const wide_int _ub) const final override;
+		const wide_int _ub) const override;
 };
 
 class operator_max : public range_operator
diff --git a/gcc/range-op-ptr.cc b/gcc/range-op-ptr.cc
index 7b22d0bf05b..483e43ca994 100644
--- a/gcc/range-op-ptr.cc
+++ b/gcc/range-op-ptr.cc
@@ -270,7 +270,6 @@ operator_pointer_diff::op1_op2_relation_effect (irange _range, tree type,
 
 pointer_table::pointer_table ()
 {
-  set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
 }
 
@@ -380,6 +379,32 @@ public:
 }
 } op_hybrid_or;
 
+// Temporary class which dispatches routines to either the INT version or
+// the pointer version depending on the type.  Once PRANGE is a range
+// class, we can remove the hybrid.
+
+class hybrid_min_operator : public operator_min
+{
+public:
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override
+{
+  if (!r.undefined_p () && INTEGRAL_TYPE_P (r.type ()))
+	operator_min::update_bitmask (r, lh, rh);
+}
+
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override
+{
+  if (INTEGRAL_TYPE_P (type))
+	return operator_min::wi_fold (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
+  else
+	return op_ptr_min_max.wi_fold (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
+}
+} op_hybrid_min;
+
+
 
 
 // Initialize any pointer operators to the primary table
@@ -391,4 +416,5 @@ range_op_table::initialize_pointer_ops ()
   set (POINTER_DIFF_EXPR, op_pointer_diff);
   set (BIT_AND_EXPR, op_hybrid_and);
   set (BIT_IOR_EXPR, op_hybrid_or);
+  set (MIN_EXPR, op_hybrid_min);
 }
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 0a9a3297de7..481f3b1324d 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -123,7 +123,7 @@ unified_table::unified_table ()
 
   // set (BIT_AND_EXPR, op_bitwise_and);
   // set (BIT_IOR_EXPR, op_bitwise_or);
-  set (MIN_EXPR, op_min);
+  // set (MIN_EXPR, op_min);
   set (MAX_EXPR, op_max);
 }
 
-- 
2.40.1



[COMMITTED 7/17] - Move operator_max to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches
This is the last of the integral operators, so also remove the integral 
table.


Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 6585fa54e0f2a54f1a398b49b5b4b6a9cd6da4ea Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:10:54 -0400
Subject: [PATCH 07/17] Move operator_max to the unified range-op table.

Also remove the integral table.

	* range-op-mixed.h (class operator_max): Move from...
	* range-op.cc (unified_table::unified_table): Add MAX_EXPR.
	(get_op_handler): Remove the integral table.
	(class operator_max): Move from here.
	(integral_table::integral_table): Delete.
	* range-op.h (class integral_table): Delete.
---
 gcc/range-op-mixed.h | 10 ++
 gcc/range-op.cc  | 34 --
 gcc/range-op.h   |  9 -
 3 files changed, 18 insertions(+), 35 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 7bd9b5e1129..cd137acd0e6 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -607,4 +607,14 @@ private:
 		const wide_int _ub) const final override;
 };
 
+class operator_max : public range_operator
+{
+public:
+  void update_bitmask (irange , const irange ,
+  const irange ) const final override;
+private:
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override;
+};
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index a777fb0d8a3..e83f627a722 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -49,7 +49,6 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-ssa-ccp.h"
 #include "range-op-mixed.h"
 
-integral_table integral_tree_table;
 pointer_table pointer_tree_table;
 
 // Instantiate a range_op_table for unified operations.
@@ -81,6 +80,7 @@ operator_bitwise_xor op_bitwise_xor;
 operator_bitwise_and op_bitwise_and;
 operator_bitwise_or op_bitwise_or;
 operator_min op_min;
+operator_max op_max;
 
 // Invoke the initialization routines for each class of range.
 
@@ -121,6 +121,7 @@ unified_table::unified_table ()
   set (BIT_AND_EXPR, op_bitwise_and);
   set (BIT_IOR_EXPR, op_bitwise_or);
   set (MIN_EXPR, op_min);
+  set (MAX_EXPR, op_max);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -132,16 +133,7 @@ get_op_handler (enum tree_code code, tree type)
   if (POINTER_TYPE_P (type) && pointer_tree_table[code])
 return pointer_tree_table[code];
 
-  if (unified_tree_table[code])
-{
-  // Should not be in any other table if it is in the unified table.
-  gcc_checking_assert (!integral_tree_table[code]);
-  return unified_tree_table[code];
-}
-
-  if (INTEGRAL_TYPE_P (type))
-return integral_tree_table[code];
-  return NULL;
+  return unified_tree_table[code];
 }
 
 range_op_handler::range_op_handler ()
@@ -2001,17 +1993,12 @@ operator_min::wi_fold (irange , tree type,
 }
 
 
-class operator_max : public range_operator
+void
+operator_max::update_bitmask (irange , const irange ,
+			  const irange ) const
 {
-public:
-  virtual void wi_fold (irange , tree type,
-		const wide_int _lb,
-		const wide_int _ub,
-		const wide_int _lb,
-		const wide_int _ub) const;
-  void update_bitmask (irange , const irange , const irange ) const
-{ update_known_bitmask (r, MAX_EXPR, lh, rh); }
-} op_max;
+  update_known_bitmask (r, MAX_EXPR, lh, rh);
+}
 
 void
 operator_max::wi_fold (irange , tree type,
@@ -4529,11 +4516,6 @@ pointer_or_operator::wi_fold (irange , tree type,
 r.set_varying (type);
 }
 
-integral_table::integral_table ()
-{
-  set (MAX_EXPR, op_max);
-}
-
 // Initialize any integral operators to the primary table
 
 void
diff --git a/gcc/range-op.h b/gcc/range-op.h
index 0f5ee41f96c..08c51bace40 100644
--- a/gcc/range-op.h
+++ b/gcc/range-op.h
@@ -299,15 +299,6 @@ range_op_table::set (enum tree_code code, range_operator )
   m_range_tree[code] = 
 }
 
-// This holds the range op tables
-
-class integral_table : public range_op_table
-{
-public:
-  integral_table ();
-};
-extern integral_table integral_tree_table;
-
 // Instantiate a range op table for pointer operations.
 
 class pointer_table : public range_op_table
-- 
2.40.1



[COMMITTED 3/17] - Move operator_bitwise_xor to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From cc18db2826c5449e84366644fa461816fa5f3f99 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 16:01:05 -0400
Subject: [PATCH 03/17] Move operator_bitwise_xor to the unified range-op
 table.

	* range-op-mixed.h (class operator_bitwise_xor): Move from...
	* range-op.cc (unified_table::unified_table): Add BIT_XOR_EXPR.
	(class operator_bitwise_xor): Move from here.
	(integral_table::integral_table): Remove BIT_XOR_EXPR.
	(pointer_table::pointer_table): Remove BIT_XOR_EXPR.
---
 gcc/range-op-mixed.h | 23 +++
 gcc/range-op.cc  | 36 +++-
 2 files changed, 30 insertions(+), 29 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index ba04c51a2d8..644473053e0 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -527,4 +527,27 @@ public:
 		  relation_trio rel = TRIO_VARYING) const final override;
 };
 
+class operator_bitwise_xor : public range_operator
+{
+public:
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  bool op2_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  bool op1_op2_relation_effect (irange _range,
+	tree type,
+	const irange _range,
+	const irange _range,
+	relation_kind rel) const;
+  void update_bitmask (irange , const irange ,
+		   const irange ) const final override;
+private:
+  void wi_fold (irange , tree type, const wide_int _lb,
+		const wide_int _ub, const wide_int _lb,
+		const wide_int _ub) const final override;
+};
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 107582a9571..11f576c55c5 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -77,6 +77,7 @@ operator_negate op_negate;
 operator_mult op_mult;
 operator_addr_expr op_addr;
 operator_bitwise_not op_bitwise_not;
+operator_bitwise_xor op_bitwise_xor;
 
 // Invoke the initialization routines for each class of range.
 
@@ -109,6 +110,7 @@ unified_table::unified_table ()
   // integral implementation.
   set (ADDR_EXPR, op_addr);
   set (BIT_NOT_EXPR, op_bitwise_not);
+  set (BIT_XOR_EXPR, op_bitwise_xor);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -3732,33 +3734,12 @@ operator_bitwise_or::op2_range (irange , tree type,
   return operator_bitwise_or::op1_range (r, type, lhs, op1);
 }
 
-
-class operator_bitwise_xor : public range_operator
+void
+operator_bitwise_xor::update_bitmask (irange , const irange ,
+  const irange ) const
 {
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-public:
-  virtual void wi_fold (irange , tree type,
-		const wide_int _lb,
-		const wide_int _ub,
-		const wide_int _lb,
-		const wide_int _ub) const;
-  virtual bool op1_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual bool op2_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual bool op1_op2_relation_effect (irange _range,
-	tree type,
-	const irange _range,
-	const irange _range,
-	relation_kind rel) const;
-  void update_bitmask (irange , const irange , const irange ) const
-{ update_known_bitmask (r, BIT_XOR_EXPR, lh, rh); }
-} op_bitwise_xor;
+  update_known_bitmask (r, BIT_XOR_EXPR, lh, rh);
+}
 
 void
 operator_bitwise_xor::wi_fold (irange , tree type,
@@ -4588,7 +4569,6 @@ integral_table::integral_table ()
   set (MAX_EXPR, op_max);
   set (BIT_AND_EXPR, op_bitwise_and);
   set (BIT_IOR_EXPR, op_bitwise_or);
-  set (BIT_XOR_EXPR, op_bitwise_xor);
 }
 
 // Initialize any integral operators to the primary table
@@ -4618,8 +4598,6 @@ pointer_table::pointer_table ()
   set (BIT_IOR_EXPR, op_pointer_or);
   set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
-
-  set (BIT_XOR_EXPR, op_bitwise_xor);
 }
 
 // Initialize any pointer operators to the primary table
-- 
2.40.1



[COMMITTED 2/17] - Move operator_bitwise_not to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew
From 5bb4c53870db1331592a89119f41beee2b17d832 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 15:59:43 -0400
Subject: [PATCH 02/17] Move operator_bitwise_not to the unified range-op
 table.

	* range-op-mixed.h (class operator_bitwise_not): Move from...
	* range-op.cc (unified_table::unified_table): Add BIT_NOT_EXPR.
	(class operator_bitwise_not): Move from here.
	(integral_table::integral_table): Remove BIT_NOT_EXPR.
	(pointer_table::pointer_table): Remove BIT_NOT_EXPR.
---
 gcc/range-op-mixed.h | 13 +
 gcc/range-op.cc  | 21 +++--
 2 files changed, 16 insertions(+), 18 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index d31b144169d..ba04c51a2d8 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -514,4 +514,17 @@ public:
 		  relation_trio rel = TRIO_VARYING) const final override;
 };
 
+class operator_bitwise_not : public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  bool fold_range (irange , tree type,
+		   const irange , const irange ,
+		   relation_trio rel = TRIO_VARYING) const final override;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 20cc9b0dc9c..107582a9571 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -76,6 +76,7 @@ operator_minus op_minus;
 operator_negate op_negate;
 operator_mult op_mult;
 operator_addr_expr op_addr;
+operator_bitwise_not op_bitwise_not;
 
 // Invoke the initialization routines for each class of range.
 
@@ -105,8 +106,9 @@ unified_table::unified_table ()
   set (MULT_EXPR, op_mult);
 
   // Occur in both integer and pointer tables, but currently share
-  // integral implelmentation.
+  // integral implementation.
   set (ADDR_EXPR, op_addr);
+  set (BIT_NOT_EXPR, op_bitwise_not);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -4080,21 +4082,6 @@ operator_logical_not::op1_range (irange ,
 }
 
 
-class operator_bitwise_not : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-public:
-  virtual bool fold_range (irange , tree type,
-			   const irange ,
-			   const irange ,
-			   relation_trio rel = TRIO_VARYING) const;
-  virtual bool op1_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-} op_bitwise_not;
-
 bool
 operator_bitwise_not::fold_range (irange , tree type,
   const irange ,
@@ -4602,7 +4589,6 @@ integral_table::integral_table ()
   set (BIT_AND_EXPR, op_bitwise_and);
   set (BIT_IOR_EXPR, op_bitwise_or);
   set (BIT_XOR_EXPR, op_bitwise_xor);
-  set (BIT_NOT_EXPR, op_bitwise_not);
 }
 
 // Initialize any integral operators to the primary table
@@ -4633,7 +4619,6 @@ pointer_table::pointer_table ()
   set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
 
-  set (BIT_NOT_EXPR, op_bitwise_not);
   set (BIT_XOR_EXPR, op_bitwise_xor);
 }
 
-- 
2.40.1



[COMMITTED 1/17] Move operator_addr_expr to the unified range-op table.

2023-06-12 Thread Andrew MacLeod via Gcc-patches

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew

From 438f8281ad2d821e09eaf5691d1b76b6f2f39b4c Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Sat, 10 Jun 2023 15:56:15 -0400
Subject: [PATCH 01/17] Move operator_addr_expr to the unified range-op table.

	* range-op-mixed.h (class operator_addr_expr): Move from...
	* range-op.cc (unified_table::unified_table): Add ADDR_EXPR.
	(class operator_addr_expr): Move from here.
	(integral_table::integral_table): Remove ADDR_EXPR.
	(pointer_table::pointer_table): Remove ADDR_EXPR.
---
 gcc/range-op-mixed.h | 13 +
 gcc/range-op.cc  | 23 +--
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 52b8570cb2a..d31b144169d 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -501,4 +501,17 @@ public:
 		relation_kind kind) const final override;
 };
 
+class operator_addr_expr : public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  bool fold_range (irange , tree type,
+		   const irange , const irange ,
+		   relation_trio rel = TRIO_VARYING) const final override;
+  bool op1_range (irange , tree type,
+		  const irange , const irange ,
+		  relation_trio rel = TRIO_VARYING) const final override;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 028631c6851..20cc9b0dc9c 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -75,6 +75,7 @@ operator_abs op_abs;
 operator_minus op_minus;
 operator_negate op_negate;
 operator_mult op_mult;
+operator_addr_expr op_addr;
 
 // Invoke the initialization routines for each class of range.
 
@@ -102,6 +103,10 @@ unified_table::unified_table ()
   set (MINUS_EXPR, op_minus);
   set (NEGATE_EXPR, op_negate);
   set (MULT_EXPR, op_mult);
+
+  // Occur in both integer and pointer tables, but currently share
+  // integral implelmentation.
+  set (ADDR_EXPR, op_addr);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -4366,21 +4371,6 @@ operator_negate::op1_range (irange , tree type,
 }
 
 
-class operator_addr_expr : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-public:
-  virtual bool fold_range (irange , tree type,
-			   const irange ,
-			   const irange ,
-			   relation_trio rel = TRIO_VARYING) const;
-  virtual bool op1_range (irange , tree type,
-			  const irange ,
-			  const irange ,
-			  relation_trio rel = TRIO_VARYING) const;
-} op_addr;
-
 bool
 operator_addr_expr::fold_range (irange , tree type,
 const irange ,
@@ -4613,7 +4603,6 @@ integral_table::integral_table ()
   set (BIT_IOR_EXPR, op_bitwise_or);
   set (BIT_XOR_EXPR, op_bitwise_xor);
   set (BIT_NOT_EXPR, op_bitwise_not);
-  set (ADDR_EXPR, op_addr);
 }
 
 // Initialize any integral operators to the primary table
@@ -4644,8 +4633,6 @@ pointer_table::pointer_table ()
   set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
 
-  set (ADDR_EXPR, op_addr);
-
   set (BIT_NOT_EXPR, op_bitwise_not);
   set (BIT_XOR_EXPR, op_bitwise_xor);
 }
-- 
2.40.1



[COMMITTED 0/17] - Range-op dispatch unification rework

2023-06-12 Thread Andrew MacLeod via Gcc-patches

This patch set completes the range-op dispatch and unification rework.

The first 7 patches move the remainder of the integral table to the 
unified table, and remove the integer table.


The 8th patch moves all the pointer specific code into a new file 
range-op-ptr.cc


Patches 9-12 introduce a "hybrid" operator class for the 4 operations 
where pointers and integers share a TREE_CODE, but have different 
implementations.  An extra hybrid class is introduced in the pointer 
file which inherits from the integer version, and adds new overloads for 
the used methods which look at the type being passed in and do the 
dispatch themselves, either to the inherited integer version or to the 
pointer version of the opcode.


This allows us to have a unified entry for those 4 operators 
(BIT_AND_EXPR, BIT_IOR_EXPR, MIN_EXPR, and MAX_EXPR) and move on.   When 
we introduce a pointer range type (ie PRANGE), we can simply add the 
prange signature to the appropriate range_operator methods, and remove 
the pointer and hybrid classes.
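The hybrid pattern above can be sketched in miniature. This is a hedged illustration, not the committed code: the signatures are simplified to plain longs, and the pointer version returns a sentinel so the dispatch path is observable.

```cpp
#include <cassert>

// Integer implementation of MIN folding (stand-in for operator_min).
struct operator_min
{
  virtual ~operator_min () {}
  virtual long fold (long a, long b, bool is_pointer) const
  { (void) is_pointer; return a < b ? a : b; }
};

// Pointer implementation (stand-in for op_ptr_min_max); returns a
// sentinel here so a caller can tell which path was taken.
struct pointer_min_max
{
  long fold (long, long) const { return -1; }
};

pointer_min_max op_ptr_min_max;

// The hybrid: inherits the integer version, overrides the shared entry
// point, and dispatches on the type at runtime.
struct hybrid_min_operator : operator_min
{
  long fold (long a, long b, bool is_pointer) const override
  {
    if (!is_pointer)
      return operator_min::fold (a, b, is_pointer);  // integer version
    return op_ptr_min_max.fold (a, b);               // pointer version
  }
};

hybrid_min_operator op_hybrid_min;
```

Once a dedicated pointer range class exists, the hybrid subclass simply disappears and the two implementations become overloads on the range type.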


 Patches 13 thru 16 do some tweaking to range_op_handler and how it's 
used. It now provides a default operator under the covers, so you no 
longer need to check if it's valid.   The valid check now simply 
indicates whether it has a custom operator implemented or not. This means 
you can simply write:


if (range_op_handler (CONVERT_EXPR).fold_range (...  ))

without worrying about whether there is an entry.  If there is no 
CONVERT_EXPR implemented, you'll simply get false back from all the calls.


Combined with the previous work, it is now always safe to call any 
range_operator routine via range_op_handler with any set of types for 
vrange parameters (including unsupported types)  on any tree code, and 
you will simply get false back if it isn't implemented.
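The "always safe to call" behaviour comes down to substituting a default operator for a missing table entry. A minimal sketch, with invented names (range_op_handler_sketch, operator_plus_sketch) and int stand-ins for ranges:

```cpp
#include <cassert>

// Base operator whose methods fail by default; unimplemented codes
// resolve to this instead of a null pointer.
struct range_operator
{
  virtual ~range_operator () {}
  virtual bool fold_range (int &r, int op1, int op2) const
  { (void) r; (void) op1; (void) op2; return false; }
};

// A registered operator overrides the default and succeeds.
struct operator_plus_sketch : range_operator
{
  bool fold_range (int &r, int op1, int op2) const override
  { r = op1 + op2; return true; }
};

range_operator default_operator;
operator_plus_sketch plus_op;

// Handler wrapper: substitutes the default operator for a missing
// entry, so every call site can dispatch unconditionally.
struct range_op_handler_sketch
{
  range_operator *m_op;
  explicit range_op_handler_sketch (range_operator *op)
    : m_op (op ? op : &default_operator) {}
  bool fold_range (int &r, int a, int b) const
  { return m_op->fold_range (r, a, b); }
};
```

A call through a handler built from a registered entry folds normally; one built from a missing entry just returns false, exactly as described for the unimplemented CONVERT_EXPR case.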


Andrew



[COMMITTED 15/15] Unify MULT_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

This is the final shared integer/float opcode.

This patch also removes the floating point table and all references to it.

Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew



[COMMITTED 11/15] Unify PLUS_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew
From cc4eaf6f1e1958f920007d4cc7cafb635b5dda64 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:41:28 -0400
Subject: [PATCH 11/31] Unify PLUS_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_plus): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_plus::fold_range): Rename from foperator_plus.
	(operator_plus::op1_range): Ditto.
	(operator_plus::op2_range): Ditto.
	(operator_plus::rv_fold): Ditto.
	(float_table::float_table): Remove PLUS_EXPR.
	* range-op-mixed.h (class operator_plus): Combined from integer
	and float files.
	* range-op.cc (op_plus): New object.
	(unified_table::unified_table): Add PLUS_EXPR.
	(class operator_plus): Move to range-op-mixed.h.
	(integral_table::integral_table): Remove PLUS_EXPR.
	(pointer_table::pointer_table): Remove PLUS_EXPR.
---
 gcc/range-op-float.cc | 94 ---
 gcc/range-op-mixed.h  | 39 ++
 gcc/range-op.cc   | 37 -
 3 files changed, 90 insertions(+), 80 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 11d76f2ef25..bd1b79281d0 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -2254,54 +2254,49 @@ float_widen_lhs_range (tree type, const frange )
   return ret;
 }
 
-class foperator_plus : public range_operator
+bool
+operator_plus::op1_range (frange , tree type, const frange ,
+			  const frange , relation_trio) const
 {
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-public:
-  virtual bool op1_range (frange , tree type,
-			  const frange ,
-			  const frange ,
-			  relation_trio = TRIO_VARYING) const final override
-  {
-if (lhs.undefined_p ())
-  return false;
-range_op_handler minus (MINUS_EXPR, type);
-if (!minus)
-  return false;
-frange wlhs = float_widen_lhs_range (type, lhs);
-return float_binary_op_range_finish (minus.fold_range (r, type, wlhs, op2),
-	 r, type, wlhs);
-  }
-  virtual bool op2_range (frange , tree type,
-			  const frange ,
-			  const frange ,
-			  relation_trio = TRIO_VARYING) const final override
-  {
-return op1_range (r, type, lhs, op1);
-  }
-private:
-  void rv_fold (REAL_VALUE_TYPE , REAL_VALUE_TYPE , bool _nan,
-		tree type,
-		const REAL_VALUE_TYPE _lb,
-		const REAL_VALUE_TYPE _ub,
-		const REAL_VALUE_TYPE _lb,
-		const REAL_VALUE_TYPE _ub,
-		relation_kind) const final override
-  {
-frange_arithmetic (PLUS_EXPR, type, lb, lh_lb, rh_lb, dconstninf);
-frange_arithmetic (PLUS_EXPR, type, ub, lh_ub, rh_ub, dconstinf);
+  if (lhs.undefined_p ())
+return false;
+  range_op_handler minus (MINUS_EXPR, type);
+  if (!minus)
+return false;
+  frange wlhs = float_widen_lhs_range (type, lhs);
+  return float_binary_op_range_finish (minus.fold_range (r, type, wlhs, op2),
+   r, type, wlhs);
+}
 
-// [-INF] + [+INF] = NAN
-if (real_isinf (_lb, true) && real_isinf (_ub, false))
-  maybe_nan = true;
-// [+INF] + [-INF] = NAN
-else if (real_isinf (_ub, false) && real_isinf (_lb, true))
-  maybe_nan = true;
-else
-  maybe_nan = false;
-  }
-} fop_plus;
+bool
+operator_plus::op2_range (frange , tree type,
+			  const frange , const frange ,
+			  relation_trio) const
+{
+  return op1_range (r, type, lhs, op1);
+}
+
+void
+operator_plus::rv_fold (REAL_VALUE_TYPE , REAL_VALUE_TYPE ,
+			bool _nan, tree type,
+			const REAL_VALUE_TYPE _lb,
+			const REAL_VALUE_TYPE _ub,
+			const REAL_VALUE_TYPE _lb,
+			const REAL_VALUE_TYPE _ub,
+			relation_kind) const
+{
+  frange_arithmetic (PLUS_EXPR, type, lb, lh_lb, rh_lb, dconstninf);
+  frange_arithmetic (PLUS_EXPR, type, ub, lh_ub, rh_ub, dconstinf);
+
+  // [-INF] + [+INF] = NAN
+  if (real_isinf (_lb, true) && real_isinf (_ub, false))
+maybe_nan = true;
+  // [+INF] + [-INF] = NAN
+  else if (real_isinf (_ub, false) && real_isinf (_lb, true))
+maybe_nan = true;
+  else
+maybe_nan = false;
+}
 
 
 class foperator_minus : public range_operator
@@ -2317,9 +2312,9 @@ public:
 if (lhs.undefined_p ())
   return false;
 frange wlhs = float_widen_lhs_range (type, lhs);
-return float_binary_op_range_finish (fop_plus.fold_range (r, type, wlhs,
-			  op2),
-	 r, type, wlhs);
+return float_binary_op_range_finish (
+		range_op_handler (PLUS_EXPR).fold_range (r, type, wlhs, op2),
+		r, type, wlhs);
   }
   virtual bool op2_range (frange , tree type,
 			  const frange ,
@@ -2698,7 +2693,6 @@ float_table::float_table ()
 {
   set (ABS_EXPR, fop_abs);
   set (NEGATE_EXPR, fop_negate);
-  set (PLUS_EXPR, fop_plus);
   set (MINUS_EXPR, fop_minus);
   set (MULT_EXPR, fop_mult);
 }
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 9de8479cd24..fbfe3

[COMMITTED 14/15] Unify NEGATE_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew



[COMMITTED 6/15] Unify GT_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Unify the GT_EXPR range operator

Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew
From e5a4bb7c12d00926e0c7bbf0c77dd1be8f23a39a Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:32:25 -0400
Subject: [PATCH 06/31] Unify GT_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_gt): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_gt::fold_range): Rename from foperator_gt.
	(operator_gt::op1_range): Ditto.
	(float_table::float_table): Remove GT_EXPR.
	* range-op-mixed.h (class operator_gt): Combined from integer
	and float files.
	* range-op.cc (op_gt): New object.
	(unified_table::unified_table): Add GT_EXPR.
	(class operator_gt): Move to range-op-mixed.h.
	(gt_op1_op2_relation): Fold into
	operator_gt::op1_op2_relation.
	(integral_table::integral_table): Remove GT_EXPR.
	(pointer_table::pointer_table): Remove GT_EXPR.
	* range-op.h (gt_op1_op2_relation): Delete.
---
 gcc/range-op-float.cc | 52 +--
 gcc/range-op-mixed.h  | 31 ++
 gcc/range-op.cc   | 40 +++--
 gcc/range-op.h|  1 -
 4 files changed, 54 insertions(+), 70 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index a480f1641d2..2f090e75245 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -961,32 +961,10 @@ operator_le::op2_range (frange ,
   return true;
 }
 
-class foperator_gt : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-  using range_operator::op1_op2_relation;
-public:
-  bool fold_range (irange , tree type,
-		   const frange , const frange ,
-		   relation_trio = TRIO_VARYING) const final override;
-  relation_kind op1_op2_relation (const irange ) const final override
-  {
-return gt_op1_op2_relation (lhs);
-  }
-  bool op1_range (frange , tree type,
-		  const irange , const frange ,
-		  relation_trio = TRIO_VARYING) const final override;
-  bool op2_range (frange , tree type,
-		  const irange , const frange ,
-		  relation_trio = TRIO_VARYING) const final override;
-} fop_gt;
-
 bool
-foperator_gt::fold_range (irange , tree type,
-			  const frange , const frange ,
-			  relation_trio rel) const
+operator_gt::fold_range (irange , tree type,
+			 const frange , const frange ,
+			 relation_trio rel) const
 {
   if (frelop_early_resolve (r, type, op1, op2, rel, VREL_GT))
 return true;
@@ -1004,11 +982,11 @@ foperator_gt::fold_range (irange , tree type,
 }
 
 bool
-foperator_gt::op1_range (frange ,
-			 tree type,
-			 const irange ,
-			 const frange ,
-			 relation_trio) const
+operator_gt::op1_range (frange ,
+			tree type,
+			const irange ,
+			const frange ,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -1043,11 +1021,11 @@ foperator_gt::op1_range (frange ,
 }
 
 bool
-foperator_gt::op2_range (frange ,
-			 tree type,
-			 const irange ,
-			 const frange ,
-			 relation_trio) const
+operator_gt::op2_range (frange ,
+			tree type,
+			const irange ,
+			const frange ,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -1723,7 +1701,8 @@ public:
   op1_no_nan.clear_nan ();
 if (op2.maybe_isnan ())
   op2_no_nan.clear_nan ();
-if (!fop_gt.fold_range (r, type, op1_no_nan, op2_no_nan, rel))
+if (!range_op_handler (GT_EXPR).fold_range (r, type, op1_no_nan,
+		op2_no_nan, rel))
   return false;
 // The result is the same as the ordered version when the
 // comparison is true or when the operands cannot be NANs.
@@ -2744,7 +2723,6 @@ float_table::float_table ()
   // All the relational operators are expected to work, because the
   // calculation of ranges on outgoing edges expect the handlers to be
   // present.
-  set (GT_EXPR, fop_gt);
   set (GE_EXPR, fop_ge);
 
   set (ABS_EXPR, fop_abs);
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index dd42d98ca49..1c68d54b085 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -204,4 +204,35 @@ public:
   void update_bitmask (irange &r, const irange &lh,
		   const irange &rh) const final override;
 };
+
+class operator_gt :  public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::op1_op2_relation;
+  bool fold_range (irange &r, tree type,
+		   const irange &op1, const irange &op2,
+		   relation_trio = TRIO_VARYING) const;
+  bool fold_range (irange &r, tree type,
+		   const frange &op1, const frange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+
+  bool op1_range (irange &r, tree type,
+		  const irange &lhs, const irange &op2,
+		  relation_trio = TRIO_VARYING) const;
+  bool op1_range (frange &r, tree type,
+		  const irange &lhs, const frange

[COMMITTED 12/15] Unify ABS_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew



[COMMITTED 13/15] Unify MINUS_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew



[COMMITTED 10/15] Unify operator_cast range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew
From ee46a15733524103a9eda433df5dc44cdc055d73 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:39:54 -0400
Subject: [PATCH 10/31] Unify operator_cast range operator

Move the declaration of the class to the range-op-mixed header, and use it
in the new unified table.

	* range-op-mixed.h (class operator_cast): Combined from integer
	and float files.
	* range-op.cc (op_cast): New object.
	(unified_table::unified_table): Add op_cast
	(class operator_cast): Move to range-op-mixed.h.
	(integral_table::integral_table): Remove op_cast
	(pointer_table::pointer_table): Remove op_cast.
---
 gcc/range-op-mixed.h | 24 
 gcc/range-op.cc  | 34 --
 2 files changed, 28 insertions(+), 30 deletions(-)

diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 5b7fbe89856..9de8479cd24 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -304,4 +304,28 @@ public:
 		   relation_trio = TRIO_VARYING) const final override;
 };
 
+
+class operator_cast: public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::lhs_op1_relation;
+  bool fold_range (irange &r, tree type,
+		   const irange &op1, const irange &op2,
+		   relation_trio rel = TRIO_VARYING) const final override;
+  bool op1_range (irange &r, tree type,
+		  const irange &lhs, const irange &op2,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  relation_kind lhs_op1_relation (const irange &lhs,
+  const irange &op1, const irange &op2,
+  relation_kind) const final override;
+private:
+  bool truncating_cast_p (const irange &inner, const irange &outer) const;
+  bool inside_domain_p (const wide_int &min, const wide_int &max,
+			const irange &outer) const;
+  void fold_pair (irange &r, unsigned index, const irange &inner,
+			   const irange &outer) const;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 31d4e1a1739..7d89b633da3 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -70,6 +70,7 @@ operator_gt op_gt;
 operator_ge op_ge;
 operator_identity op_ident;
 operator_cst op_cst;
+operator_cast op_cast;
 
 // Invoke the initialization routines for each class of range.
 
@@ -90,6 +91,9 @@ unified_table::unified_table ()
   set (OBJ_TYPE_REF, op_ident);
   set (REAL_CST, op_cst);
   set (INTEGER_CST, op_cst);
+  set (NOP_EXPR, op_cast);
+  set (CONVERT_EXPR, op_cast);
+
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -2868,32 +2872,6 @@ operator_rshift::wi_fold (irange &r, tree type,
 }
 
 
-class operator_cast: public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::lhs_op1_relation;
-public:
-  virtual bool fold_range (irange &r, tree type,
-			   const irange &op1,
-			   const irange &op2,
-			   relation_trio rel = TRIO_VARYING) const;
-  virtual bool op1_range (irange &r, tree type,
-			  const irange &lhs,
-			  const irange &op2,
-			  relation_trio rel = TRIO_VARYING) const;
-  virtual relation_kind lhs_op1_relation (const irange &lhs,
-	  const irange &op1,
-	  const irange &op2,
-	  relation_kind) const;
-private:
-  bool truncating_cast_p (const irange &inner, const irange &outer) const;
-  bool inside_domain_p (const wide_int &min, const wide_int &max,
-			const irange &outer) const;
-  void fold_pair (irange &r, unsigned index, const irange &inner,
-			   const irange &outer) const;
-} op_cast;
-
 // Add a partial equivalence between the LHS and op1 for casts.
 
 relation_kind
@@ -4744,8 +4722,6 @@ integral_table::integral_table ()
   set (MIN_EXPR, op_min);
   set (MAX_EXPR, op_max);
   set (MULT_EXPR, op_mult);
-  set (NOP_EXPR, op_cast);
-  set (CONVERT_EXPR, op_cast);
   set (BIT_AND_EXPR, op_bitwise_and);
   set (BIT_IOR_EXPR, op_bitwise_or);
   set (BIT_XOR_EXPR, op_bitwise_xor);
@@ -4784,8 +4760,6 @@ pointer_table::pointer_table ()
   set (MAX_EXPR, op_ptr_min_max);
 
   set (ADDR_EXPR, op_addr);
-  set (NOP_EXPR, op_cast);
-  set (CONVERT_EXPR, op_cast);
 
   set (BIT_NOT_EXPR, op_bitwise_not);
   set (BIT_XOR_EXPR, op_bitwise_xor);
-- 
2.40.1



[COMMITTED 5/15] Unify LE_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Unify the LE_EXPR opcode.

Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew
From 9de70a61ca83d50c35f73eafaaa7276d8f0ad211 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:30:56 -0400
Subject: [PATCH 05/31] Unify LE_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_le): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_le::fold_range): Rename from foperator_le.
	(operator_le::op1_range): Ditto.
	(float_table::float_table): Remove LE_EXPR.
	* range-op-mixed.h (class operator_le): Combined from integer
	and float files.
	* range-op.cc (op_le): New object.
	(unified_table::unified_table): Add LE_EXPR.
	(class operator_le): Move to range-op-mixed.h.
	(le_op1_op2_relation): Fold into
	operator_le::op1_op2_relation.
	(integral_table::integral_table): Remove LE_EXPR.
	(pointer_table::pointer_table): Remove LE_EXPR.
	* range-op.h (le_op1_op2_relation): Delete.
---
 gcc/range-op-float.cc | 52 +--
 gcc/range-op-mixed.h  | 33 +++
 gcc/range-op.cc   | 39 +++-
 gcc/range-op.h|  1 -
 4 files changed, 56 insertions(+), 69 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 1b0ac9a7fc2..a480f1641d2 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -873,32 +873,10 @@ operator_lt::op2_range (frange &r,
   return true;
 }
 
-class foperator_le : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-  using range_operator::op1_op2_relation;
-public:
-  bool fold_range (irange &r, tree type,
-		   const frange &op1, const frange &op2,
-		   relation_trio rel = TRIO_VARYING) const final override;
-  relation_kind op1_op2_relation (const irange &lhs) const final override
-  {
-    return le_op1_op2_relation (lhs);
-  }
-  bool op1_range (frange &r, tree type,
-		  const irange &lhs, const frange &op2,
-		  relation_trio rel = TRIO_VARYING) const final override;
-  bool op2_range (frange &r, tree type,
-		  const irange &lhs, const frange &op1,
-		  relation_trio rel = TRIO_VARYING) const final override;
-} fop_le;
-
 bool
-foperator_le::fold_range (irange &r, tree type,
-			  const frange &op1, const frange &op2,
-			  relation_trio rel) const
+operator_le::fold_range (irange &r, tree type,
+			 const frange &op1, const frange &op2,
+			 relation_trio rel) const
 {
   if (frelop_early_resolve (r, type, op1, op2, rel, VREL_LE))
 return true;
@@ -916,11 +894,11 @@ foperator_le::fold_range (irange &r, tree type,
 }
 
 bool
-foperator_le::op1_range (frange &r,
-			 tree type,
-			 const irange &lhs,
-			 const frange &op2,
-			 relation_trio) const
+operator_le::op1_range (frange &r,
+			tree type,
+			const irange &lhs,
+			const frange &op2,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -949,11 +927,11 @@ foperator_le::op1_range (frange &r,
 }
 
 bool
-foperator_le::op2_range (frange &r,
-			 tree type,
-			 const irange &lhs,
-			 const frange &op1,
-			 relation_trio) const
+operator_le::op2_range (frange &r,
+			tree type,
+			const irange &lhs,
+			const frange &op1,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -1637,7 +1615,8 @@ public:
   op1_no_nan.clear_nan ();
 if (op2.maybe_isnan ())
   op2_no_nan.clear_nan ();
-    if (!fop_le.fold_range (r, type, op1_no_nan, op2_no_nan, rel))
+    if (!range_op_handler (LE_EXPR).fold_range (r, type, op1_no_nan,
+		op2_no_nan, rel))
   return false;
 // The result is the same as the ordered version when the
 // comparison is true or when the operands cannot be NANs.
@@ -2765,7 +2744,6 @@ float_table::float_table ()
   // All the relational operators are expected to work, because the
   // calculation of ranges on outgoing edges expect the handlers to be
   // present.
-  set (LE_EXPR, fop_le);
   set (GT_EXPR, fop_gt);
   set (GE_EXPR, fop_ge);
 
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index bc93ab5be06..dd42d98ca49 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -171,4 +171,37 @@ public:
   void update_bitmask (irange &r, const irange &lh,
		   const irange &rh) const final override;
 };
+
+class operator_le :  public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::op1_op2_relation;
+  bool fold_range (irange &r, tree type,
+		   const irange &op1, const irange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool fold_range (irange &r, tree type,
+		   const frange &op1, const frange &op2,
+		   relation_trio rel = TRIO_VARYING) const final override;
+
+  bool op1_range (irange &r, tree type,
+		  const irange &lhs, const irange &op2,
+		  relation_trio = TRIO_VARYING) const final override;
+  bool op1_range (frange &r, tree type

[COMMITTED 7/15] Unify GE_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Unify the GE_EXPR range operator

Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew
From 364b936b8d82e86c73b2b964d4c8a2c16dcbedf8 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:33:33 -0400
Subject: [PATCH 07/31] Unify GE_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_ge): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_ge::fold_range): Rename from foperator_ge.
	(operator_ge::op1_range): Ditto.
	(float_table::float_table): Remove GE_EXPR.
	* range-op-mixed.h (class operator_ge): Combined from integer
	and float files.
	* range-op.cc (op_ge): New object.
	(unified_table::unified_table): Add GE_EXPR.
	(class operator_ge): Move to range-op-mixed.h.
	(ge_op1_op2_relation): Fold into
	operator_ge::op1_op2_relation.
	(integral_table::integral_table): Remove GE_EXPR.
	(pointer_table::pointer_table): Remove GE_EXPR.
	* range-op.h (ge_op1_op2_relation): Delete.
---
 gcc/range-op-float.cc | 54 +++
 gcc/range-op-mixed.h  | 33 ++
 gcc/range-op.cc   | 39 +++
 gcc/range-op.h|  3 ---
 4 files changed, 55 insertions(+), 74 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 2f090e75245..4faca62c48f 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -1059,32 +1059,10 @@ operator_gt::op2_range (frange &r,
   return true;
 }
 
-class foperator_ge : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-  using range_operator::op1_op2_relation;
-public:
-  bool fold_range (irange &r, tree type,
-		   const frange &op1, const frange &op2,
-		   relation_trio = TRIO_VARYING) const final override;
-  relation_kind op1_op2_relation (const irange &lhs) const final override
-  {
-    return ge_op1_op2_relation (lhs);
-  }
-  bool op1_range (frange &r, tree type,
-		  const irange &lhs, const frange &op2,
-		  relation_trio = TRIO_VARYING) const final override;
-  bool op2_range (frange &r, tree type,
-		  const irange &lhs, const frange &op1,
-		  relation_trio = TRIO_VARYING) const final override;
-} fop_ge;
-
 bool
-foperator_ge::fold_range (irange &r, tree type,
-			  const frange &op1, const frange &op2,
-			  relation_trio rel) const
+operator_ge::fold_range (irange &r, tree type,
+			 const frange &op1, const frange &op2,
+			 relation_trio rel) const
 {
   if (frelop_early_resolve (r, type, op1, op2, rel, VREL_GE))
 return true;
@@ -1102,11 +1080,11 @@ foperator_ge::fold_range (irange &r, tree type,
 }
 
 bool
-foperator_ge::op1_range (frange &r,
-			 tree type,
-			 const irange &lhs,
-			 const frange &op2,
-			 relation_trio) const
+operator_ge::op1_range (frange &r,
+			tree type,
+			const irange &lhs,
+			const frange &op2,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -1137,10 +1115,10 @@ foperator_ge::op1_range (frange &r,
 }
 
 bool
-foperator_ge::op2_range (frange &r, tree type,
-			 const irange &lhs,
-			 const frange &op1,
-			 relation_trio) const
+operator_ge::op2_range (frange &r, tree type,
+			const irange &lhs,
+			const frange &op1,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -1813,7 +1791,8 @@ public:
   op1_no_nan.clear_nan ();
 if (op2.maybe_isnan ())
   op2_no_nan.clear_nan ();
-    if (!fop_ge.fold_range (r, type, op1_no_nan, op2_no_nan, rel))
+    if (!range_op_handler (GE_EXPR).fold_range (r, type, op1_no_nan,
+		op2_no_nan, rel))
   return false;
 // The result is the same as the ordered version when the
 // comparison is true or when the operands cannot be NANs.
@@ -2720,11 +2699,6 @@ float_table::float_table ()
   set (OBJ_TYPE_REF, fop_identity);
   set (REAL_CST, fop_identity);
 
-  // All the relational operators are expected to work, because the
-  // calculation of ranges on outgoing edges expect the handlers to be
-  // present.
-  set (GE_EXPR, fop_ge);
-
   set (ABS_EXPR, fop_abs);
   set (NEGATE_EXPR, fop_negate);
   set (PLUS_EXPR, fop_plus);
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 1c68d54b085..d6cd3683932 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -235,4 +235,37 @@ public:
   relation_kind op1_op2_relation (const irange &lhs) const final override;
   void update_bitmask (irange &r, const irange &lh, const irange &rh) const;
 };
+
+class operator_ge :  public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::op1_op2_relation;
+  bool fold_range (irange &r, tree type,
+		   const irange &op1, const irange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool fold_range (irange &r, tree type,
+		   const frange &op1, const frange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+
+  bool op1_range

[COMMITTED 9/15] Unify operator_cst range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches
This patch moves the CONST operator into the mixed header.  It also sets 
REAL_CST to use this instead, as it has no op1_range routine.



Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew
From 35a580f09eaceda5b0dd370b1e39fe05ba0a154f Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:37:11 -0400
Subject: [PATCH 09/31] Unify operator_cst range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (operator_cst::fold_range): New.
	* range-op-mixed.h (class operator_cst): Move from integer file.
	* range-op.cc (op_cst): New object.
	(unified_table::unified_table): Add op_cst. Also use for REAL_CST.
	(class operator_cst): Move to range-op-mixed.h.
	(integral_table::integral_table): Remove op_cst.
	(pointer_table::pointer_table): Remove op_cst.
---
 gcc/range-op-float.cc |  7 +++
 gcc/range-op-mixed.h  | 12 
 gcc/range-op.cc   | 16 +++-
 3 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index bc8ecc61bce..11d76f2ef25 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -557,6 +557,13 @@ operator_identity::op1_range (frange &r, tree, const frange &lhs,
   return true;
 }
 
+bool
+operator_cst::fold_range (frange &r, tree, const frange &op1,
+			  const frange &, relation_trio) const
+{
+  r = op1;
+  return true;
+}
 
 bool
 operator_equal::op2_range (frange , tree type,
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index f30f7d019ee..5b7fbe89856 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -292,4 +292,16 @@ public:
   relation_kind rel) const final override;
 };
 
+class operator_cst : public range_operator
+{
+public:
+  using range_operator::fold_range;
+  bool fold_range (irange &r, tree type,
+		   const irange &op1, const irange &op2,
+		   relation_trio rel = TRIO_VARYING) const final override;
+  bool fold_range (frange &r, tree type,
+		   const frange &op1, const frange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 70684b4c7f7..31d4e1a1739 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -69,6 +69,7 @@ operator_le op_le;
 operator_gt op_gt;
 operator_ge op_ge;
 operator_identity op_ident;
+operator_cst op_cst;
 
 // Invoke the initialization routines for each class of range.
 
@@ -87,7 +88,8 @@ unified_table::unified_table ()
   set (SSA_NAME, op_ident);
   set (PAREN_EXPR, op_ident);
   set (OBJ_TYPE_REF, op_ident);
-  set (REAL_CST, op_ident);
+  set (REAL_CST, op_cst);
+  set (INTEGER_CST, op_cst);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -4224,16 +4226,6 @@ operator_bitwise_not::op1_range (irange &r, tree type,
 }
 
 
-class operator_cst : public range_operator
-{
-  using range_operator::fold_range;
-public:
-  virtual bool fold_range (irange &r, tree type,
-			   const irange &op1,
-			   const irange &op2,
-			   relation_trio rel = TRIO_VARYING) const;
-} op_integer_cst;
-
 bool
 operator_cst::fold_range (irange , tree type ATTRIBUTE_UNUSED,
 			  const irange ,
@@ -4758,7 +4750,6 @@ integral_table::integral_table ()
   set (BIT_IOR_EXPR, op_bitwise_or);
   set (BIT_XOR_EXPR, op_bitwise_xor);
   set (BIT_NOT_EXPR, op_bitwise_not);
-  set (INTEGER_CST, op_integer_cst);
   set (ABS_EXPR, op_abs);
   set (NEGATE_EXPR, op_negate);
   set (ADDR_EXPR, op_addr);
@@ -4792,7 +4783,6 @@ pointer_table::pointer_table ()
   set (MIN_EXPR, op_ptr_min_max);
   set (MAX_EXPR, op_ptr_min_max);
 
-  set (INTEGER_CST, op_integer_cst);
   set (ADDR_EXPR, op_addr);
   set (NOP_EXPR, op_cast);
   set (CONVERT_EXPR, op_cast);
-- 
2.40.1



[COMMITTED 8/15] Unify Identity range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches
This unifies the identity operation, which is used by SSA_NAME, 
PAREN_EXPR, OBJ_TYPE_REF and REAL_CST.


REAL_CST is using it incorrectly, but this preserves current functionality.  
There will not be an SSA_NAME in the op1 position, so there is no point 
in having an op1_range routine.  That will be corrected in the next patch.


Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew
From 60b00c6f187450e1f3ffac1b64986ae74b8b948b Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:35:24 -0400
Subject: [PATCH 08/31] Unify Identity range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_identity): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_identity::fold_range): Rename from foperator_identity.
	(operator_identity::op1_range): Ditto.
	(float_table::float_table): Remove fop_identity.
	* range-op-mixed.h (class operator_identity): Combined from integer
	and float files.
	* range-op.cc (op_identity): New object.
	(unified_table::unified_table): Add op_identity.
	(class operator_identity): Move to range-op-mixed.h.
	(integral_table::integral_table): Remove identity.
	(pointer_table::pointer_table): Remove identity.
---
 gcc/range-op-float.cc | 40 +++-
 gcc/range-op-mixed.h  | 24 
 gcc/range-op.cc   | 29 +
 3 files changed, 44 insertions(+), 49 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 4faca62c48f..bc8ecc61bce 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -541,27 +541,22 @@ build_gt (frange , tree type, const frange )
 }
 
 
-class foperator_identity : public range_operator
+bool
+operator_identity::fold_range (frange &r, tree, const frange &op1,
+			   const frange &, relation_trio) const
 {
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-public:
-  bool fold_range (frange &r, tree type ATTRIBUTE_UNUSED,
-		   const frange &op1, const frange &op2 ATTRIBUTE_UNUSED,
-		   relation_trio = TRIO_VARYING) const final override
-  {
-    r = op1;
-    return true;
-  }
-  bool op1_range (frange &r, tree type ATTRIBUTE_UNUSED,
-		  const frange &lhs, const frange &op2 ATTRIBUTE_UNUSED,
-		  relation_trio = TRIO_VARYING) const final override
-  {
-    r = lhs;
-    return true;
-  }
-public:
-} fop_identity;
+  r = op1;
+  return true;
+}
+
+bool
+operator_identity::op1_range (frange &r, tree, const frange &lhs,
+			  const frange &, relation_trio) const
+{
+  r = lhs;
+  return true;
+}
+
 
 bool
 operator_equal::op2_range (frange , tree type,
@@ -2694,11 +2689,6 @@ private:
 
 float_table::float_table ()
 {
-  set (SSA_NAME, fop_identity);
-  set (PAREN_EXPR, fop_identity);
-  set (OBJ_TYPE_REF, fop_identity);
-  set (REAL_CST, fop_identity);
-
   set (ABS_EXPR, fop_abs);
   set (NEGATE_EXPR, fop_negate);
   set (PLUS_EXPR, fop_plus);
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index d6cd3683932..f30f7d019ee 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -268,4 +268,28 @@ public:
   void update_bitmask (irange &r, const irange &lh,
		   const irange &rh) const final override;
 };
+
+class operator_identity : public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::lhs_op1_relation;
+  bool fold_range (irange &r, tree type,
+		   const irange &op1, const irange &op2,
+		   relation_trio rel = TRIO_VARYING) const final override;
+  bool fold_range (frange &r, tree type ATTRIBUTE_UNUSED,
+		   const frange &op1, const frange &op2 ATTRIBUTE_UNUSED,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool op1_range (irange &r, tree type,
+		  const irange &lhs, const irange &op2,
+		  relation_trio rel = TRIO_VARYING) const final override;
+  bool op1_range (frange &r, tree type ATTRIBUTE_UNUSED,
+		  const frange &lhs, const frange &op2 ATTRIBUTE_UNUSED,
+		  relation_trio = TRIO_VARYING) const final override;
+  relation_kind lhs_op1_relation (const irange &lhs,
+  const irange &op1, const irange &op2,
+  relation_kind rel) const final override;
+};
+
 #endif // GCC_RANGE_OP_MIXED_H
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index a127da22006..70684b4c7f7 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -68,6 +68,7 @@ operator_lt op_lt;
 operator_le op_le;
 operator_gt op_gt;
 operator_ge op_ge;
+operator_identity op_ident;
 
 // Invoke the initialization routines for each class of range.
 
@@ -83,6 +84,10 @@ unified_table::unified_table ()
   set (LE_EXPR, op_le);
   set (GT_EXPR, op_gt);
   set (GE_EXPR, op_ge);
+  set (SSA_NAME, op_ident);
+  set (PAREN_EXPR, op_ident);
+  set (OBJ_TYPE_REF, op_ident);
+  set (REAL_CST, op_ident);
 }
 
 // The tables are hidden and accessed via a simple extern function.
@@ -4240,26 +4245,6 @@ operator_cst::fold_range (irange &r, tree type ATTRIBUTE_UNUSED,
 }
 

[PATCH 2/15] Unify EQ_EXPR range operator.

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Unify the EQ_EXPR opcode

Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew
From 684959c5c058c2368e65c4c308a2cb3e3912782e Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:18:39 -0400
Subject: [PATCH 02/31] Unify EQ_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_equal): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_equal::fold_range): Rename from foperator_equal.
	(operator_equal::op1_range): Ditto.
	(float_table::float_table): Remove EQ_EXPR.
	* range-op-mixed.h (class operator_equal): Combined from integer
	and float files.
	* range-op.cc (op_equal): New object.
	(unified_table::unified_table): Add EQ_EXPR.
	(class operator_equal): Move to range-op-mixed.h.
	(equal_op1_op2_relation): Fold into
	operator_equal::op1_op2_relation.
	(integral_table::integral_table): Remove EQ_EXPR.
	(pointer_table::pointer_table): Remove EQ_EXPR.
	* range-op.h (equal_op1_op2_relation): Delete.
---
 gcc/range-op-float.cc | 41 ---
 gcc/range-op-mixed.h  | 37 +++
 gcc/range-op.cc   | 45 +--
 gcc/range-op.h|  1 -
 4 files changed, 62 insertions(+), 62 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 8659217659c..98636cec8cf 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -563,35 +563,18 @@ public:
 public:
 } fop_identity;
 
-class foperator_equal : public range_operator
+bool
+operator_equal::op2_range (frange &r, tree type,
+			   const irange &lhs, const frange &op1,
+			   relation_trio rel) const
 {
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-  using range_operator::op1_op2_relation;
-public:
-  bool fold_range (irange &r, tree type,
-		   const frange &op1, const frange &op2,
-		   relation_trio = TRIO_VARYING) const final override;
-  relation_kind op1_op2_relation (const irange &lhs) const final override
-  {
-    return equal_op1_op2_relation (lhs);
-  }
-  bool op1_range (frange &r, tree type,
-		  const irange &lhs, const frange &op2,
-		  relation_trio = TRIO_VARYING) const final override;
-  bool op2_range (frange &r, tree type,
-		  const irange &lhs, const frange &op1,
-		  relation_trio rel = TRIO_VARYING) const final override
-  {
-    return op1_range (r, type, lhs, op1, rel.swap_op1_op2 ());
-  }
-} fop_equal;
+  return op1_range (r, type, lhs, op1, rel.swap_op1_op2 ());
+}
 
 bool
-foperator_equal::fold_range (irange &r, tree type,
-			 const frange &op1, const frange &op2,
-			 relation_trio rel) const
+operator_equal::fold_range (irange &r, tree type,
+			const frange &op1, const frange &op2,
+			relation_trio rel) const
 {
   if (frelop_early_resolve (r, type, op1, op2, rel, VREL_EQ))
 return true;
@@ -644,7 +627,7 @@ foperator_equal::fold_range (irange &r, tree type,
 }
 
 bool
-foperator_equal::op1_range (frange &r, tree type,
+operator_equal::op1_range (frange &r, tree type,
 			const irange &lhs,
 			const frange &op2,
 			relation_trio trio) const
@@ -2021,7 +2004,8 @@ public:
   op1_no_nan.clear_nan ();
 if (op2.maybe_isnan ())
   op2_no_nan.clear_nan ();
-    if (!fop_equal.fold_range (r, type, op1_no_nan, op2_no_nan, rel))
+    if (!range_op_handler (EQ_EXPR).fold_range (r, type, op1_no_nan,
+		op2_no_nan, rel))
   return false;
 // The result is the same as the ordered version when the
 // comparison is true or when the operands cannot be NANs.
@@ -2819,7 +2803,6 @@ float_table::float_table ()
   // All the relational operators are expected to work, because the
   // calculation of ranges on outgoing edges expect the handlers to be
   // present.
-  set (EQ_EXPR, fop_equal);
   set (NE_EXPR, fop_not_equal);
   set (LT_EXPR, fop_lt);
   set (LE_EXPR, fop_le);
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index a78bc2ba59c..79e2cbd8532 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -75,4 +75,41 @@ relop_early_resolve (irange , tree type, const vrange ,
   return false;
 }
 
+// --
+//  Mixed Mode Operators.
+// --
+
+class operator_equal : public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::op1_op2_relation;
+  bool fold_range (irange &r, tree type,
+		   const irange &op1, const irange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool fold_range (irange &r, tree type,
+		   const frange &op1, const frange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+
+  bool op1_range (irange &r, tree type,
+		  const irange &lhs, const irange &op2,
+		  relation_trio = TRIO_VARYING

[COMMITTED 4/15] Unify LT_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Unify the LT_EXPR opcode

Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew
From f7c1366a89edf1ffdd9c495cff544358f2ff395e Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:29:15 -0400
Subject: [PATCH 04/31] Unify LT_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_lt): Remove.  Move prototypes
	to range-op-mixed.h
	(operator_lt::fold_range): Rename from foperator_lt.
	(operator_lt::op1_range): Ditto.
	(float_table::float_table): Remove LT_EXPR.
	* range-op-mixed.h (class operator_lt): Combined from integer
	and float files.
	* range-op.cc (op_lt): New object.
	(unified_table::unified_table): Add LT_EXPR.
	(class operator_lt): Move to range-op-mixed.h.
	(lt_op1_op2_relation): Fold into
	operator_lt::op1_op2_relation.
	(integral_table::integral_table): Remove LT_EXPR.
	(pointer_table::pointer_table): Remove LT_EXPR.
	* range-op.h (lt_op1_op2_relation): Delete.
---
 gcc/range-op-float.cc | 52 +--
 gcc/range-op-mixed.h  | 30 +
 gcc/range-op.cc   | 39 +++-
 gcc/range-op.h|  1 -
 4 files changed, 53 insertions(+), 69 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index ec24167a8c5..1b0ac9a7fc2 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -779,32 +779,10 @@ operator_not_equal::op1_range (frange &r, tree type,
   return true;
 }
 
-class foperator_lt : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::op2_range;
-  using range_operator::op1_op2_relation;
-public:
-  bool fold_range (irange &r, tree type,
-		   const frange &op1, const frange &op2,
-		   relation_trio = TRIO_VARYING) const final override;
-  relation_kind op1_op2_relation (const irange &lhs) const final override
-  {
-    return lt_op1_op2_relation (lhs);
-  }
-  bool op1_range (frange &r, tree type,
-		  const irange &lhs, const frange &op2,
-		  relation_trio = TRIO_VARYING) const final override;
-  bool op2_range (frange &r, tree type,
-		  const irange &lhs, const frange &op1,
-		  relation_trio = TRIO_VARYING) const final override;
-} fop_lt;
-
 bool
-foperator_lt::fold_range (irange &r, tree type,
-			  const frange &op1, const frange &op2,
-			  relation_trio rel) const
+operator_lt::fold_range (irange &r, tree type,
+			 const frange &op1, const frange &op2,
+			 relation_trio rel) const
 {
   if (frelop_early_resolve (r, type, op1, op2, rel, VREL_LT))
 return true;
@@ -822,11 +800,11 @@ foperator_lt::fold_range (irange &r, tree type,
 }
 
 bool
-foperator_lt::op1_range (frange &r,
-			 tree type,
-			 const irange &lhs,
-			 const frange &op2,
-			 relation_trio) const
+operator_lt::op1_range (frange &r,
+			tree type,
+			const irange &lhs,
+			const frange &op2,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -859,11 +837,11 @@ foperator_lt::op1_range (frange &r,
 }
 
 bool
-foperator_lt::op2_range (frange &r,
-			 tree type,
-			 const irange &lhs,
-			 const frange &op1,
-			 relation_trio) const
+operator_lt::op2_range (frange &r,
+			tree type,
+			const irange &lhs,
+			const frange &op1,
+			relation_trio) const
 {
   switch (get_bool_state (r, lhs, type))
 {
@@ -1547,7 +1525,8 @@ public:
       op1_no_nan.clear_nan ();
     if (op2.maybe_isnan ())
       op2_no_nan.clear_nan ();
-    if (!fop_lt.fold_range (r, type, op1_no_nan, op2_no_nan, rel))
+    if (!range_op_handler (LT_EXPR).fold_range (r, type, op1_no_nan,
+						op2_no_nan, rel))
       return false;
     // The result is the same as the ordered version when the
     // comparison is true or when the operands cannot be NANs.
@@ -2786,7 +2765,6 @@ float_table::float_table ()
   // All the relational operators are expected to work, because the
   // calculation of ranges on outgoing edges expect the handlers to be
   // present.
-  set (LT_EXPR, fop_lt);
   set (LE_EXPR, fop_le);
   set (GT_EXPR, fop_gt);
   set (GE_EXPR, fop_ge);
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 03a988d9c8a..bc93ab5be06 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -141,4 +141,34 @@ public:
   void update_bitmask (irange &r, const irange &lh,
 		       const irange &rh) const final override;
 };
+
+class operator_lt :  public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::op1_op2_relation;
+  bool fold_range (irange &r, tree type,
+		   const irange &op1, const irange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool fold_range (irange &r, tree type,
+		   const frange &op1, const frange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool op1_range (irange &r, tree type,
+		  const irange &lhs, const irange &op2,
+		  relation_trio = TRIO_VARYING) const final override;
+  bool op1_range

[COMMITTED 3/15] Unify NE_EXPR range operator

2023-06-09 Thread Andrew MacLeod via Gcc-patches

Unify the NE_EXPR opcode

Bootstrap on x86_64-pc-linux-gnu and pass all regressions. Pushed.

Andrew
From cb409a3b3367109944ff332899ec401dc60f678c Mon Sep 17 00:00:00 2001
From: Andrew MacLeod 
Date: Fri, 9 Jun 2023 13:25:49 -0400
Subject: [PATCH 03/31] Unify NE_EXPR range operator

Move the declaration of the class to the range-op-mixed header, add the
floating point prototypes as well, and use it in the new unified table.

	* range-op-float.cc (foperator_not_equal): Remove.  Move prototypes
	to range-op-mixed.h.
	(operator_not_equal::fold_range): Rename from foperator_not_equal.
	(operator_not_equal::op1_range): Ditto.
	(float_table::float_table): Remove NE_EXPR.
	* range-op-mixed.h (class operator_not_equal): Combined from integer
	and float files.
	* range-op.cc (op_equal): New object.
	(unified_table::unified_table): Add NE_EXPR.
	(class operator_not_equal): Move to range-op-mixed.h.
	(not_equal_op1_op2_relation): Fold into
	operator_not_equal::op1_op2_relation.
	(integral_table::integral_table): Remove NE_EXPR.
	(pointer_table::pointer_table): Remove NE_EXPR.
	* range-op.h (not_equal_op1_op2_relation): Delete.
---
 gcc/range-op-float.cc | 36 +---
 gcc/range-op-mixed.h  | 29 +
 gcc/range-op.cc   | 41 ++---
 gcc/range-op.h|  1 -
 4 files changed, 48 insertions(+), 59 deletions(-)

diff --git a/gcc/range-op-float.cc b/gcc/range-op-float.cc
index 98636cec8cf..ec24167a8c5 100644
--- a/gcc/range-op-float.cc
+++ b/gcc/range-op-float.cc
@@ -675,28 +675,10 @@ operator_equal::op1_range (frange &r, tree type,
   return true;
 }
 
-class foperator_not_equal : public range_operator
-{
-  using range_operator::fold_range;
-  using range_operator::op1_range;
-  using range_operator::op1_op2_relation;
-public:
-  bool fold_range (irange &r, tree type,
-		   const frange &op1, const frange &op2,
-		   relation_trio rel = TRIO_VARYING) const final override;
-  relation_kind op1_op2_relation (const irange &lhs) const final override
-  {
-    return not_equal_op1_op2_relation (lhs);
-  }
-  bool op1_range (frange &r, tree type,
-		  const irange &lhs, const frange &op2,
-		  relation_trio = TRIO_VARYING) const final override;
-} fop_not_equal;
-
 bool
-foperator_not_equal::fold_range (irange &r, tree type,
-				 const frange &op1, const frange &op2,
-				 relation_trio rel) const
+operator_not_equal::fold_range (irange &r, tree type,
+				const frange &op1, const frange &op2,
+				relation_trio rel) const
 {
   if (frelop_early_resolve (r, type, op1, op2, rel, VREL_NE))
 return true;
@@ -750,10 +732,10 @@ foperator_not_equal::fold_range (irange &r, tree type,
 }
 
 bool
-foperator_not_equal::op1_range (frange &r, tree type,
-				const irange &lhs,
-				const frange &op2,
-				relation_trio trio) const
+operator_not_equal::op1_range (frange &r, tree type,
+			       const irange &lhs,
+			       const frange &op2,
+			       relation_trio trio) const
 {
   relation_kind rel = trio.op1_op2 ();
   switch (get_bool_state (r, lhs, type))
@@ -2086,7 +2068,8 @@ public:
       op1_no_nan.clear_nan ();
     if (op2.maybe_isnan ())
       op2_no_nan.clear_nan ();
-    if (!fop_not_equal.fold_range (r, type, op1_no_nan, op2_no_nan, rel))
+    if (!range_op_handler (NE_EXPR).fold_range (r, type, op1_no_nan,
+						op2_no_nan, rel))
       return false;
     // The result is the same as the ordered version when the
     // comparison is true or when the operands cannot be NANs.
@@ -2803,7 +2786,6 @@ float_table::float_table ()
   // All the relational operators are expected to work, because the
   // calculation of ranges on outgoing edges expect the handlers to be
   // present.
-  set (NE_EXPR, fop_not_equal);
   set (LT_EXPR, fop_lt);
   set (LE_EXPR, fop_le);
   set (GT_EXPR, fop_gt);
diff --git a/gcc/range-op-mixed.h b/gcc/range-op-mixed.h
index 79e2cbd8532..03a988d9c8a 100644
--- a/gcc/range-op-mixed.h
+++ b/gcc/range-op-mixed.h
@@ -112,4 +112,33 @@ public:
 		       const irange &rh) const final override;
 };
 
+class operator_not_equal : public range_operator
+{
+public:
+  using range_operator::fold_range;
+  using range_operator::op1_range;
+  using range_operator::op2_range;
+  using range_operator::op1_op2_relation;
+  bool fold_range (irange &r, tree type,
+		   const irange &op1, const irange &op2,
+		   relation_trio = TRIO_VARYING) const final override;
+  bool fold_range (irange &r, tree type,
+		   const frange &op1, const frange &op2,
+		   relation_trio rel = TRIO_VARYING) const final override;
+
+  bool op1_range (irange &r, tree type,
+		  const irange &lhs, const irange &op2,
+		  relation_trio = TRIO_VARYING) const final override;
+  bool op1_range (frange &r, tree type,
+		  const irange &lhs, const frange &op2,
+		  relation_trio = TRIO_VARYING) const final override;
+
+  bool op2_range (irange &r, tree type,
+		  const irange &lhs, const irange &op1,
+		  relation_trio = TRIO_VARYING) const final override;
+
+  relation_kind op1_op2_relation (const irange &lhs) const final override;
+  void update_bitmask (irange &r, const
