This PR demonstrates another case where trying to save a few cycles
bites us in the end.
As mentioned in the PR, it was causing an infinite loop in ranger's
cache. The cache propagation mechanism expects to converge, but
instead we were flip-flopping between two values.
Starting with: edge 421->413 :[irange] unsigned int [0, 0][4, 6]
MASK 0x4 VALUE 0x0
The [5, 6] part of that range is impossible due to the mask and should
have been removed. When this value is propagated through a few blocks,
it is eventually transformed into [irange] unsigned int [0, 0][4, 6]
MASK 0x7 VALUE 0x0, which now incorporates those invalid values.
When this value is fed back in, we again get [irange] unsigned int
[0, 0][4, 6] MASK 0x4 VALUE 0x0 (which again should have been
[0, 0][4, 4]), and the algorithm continues to oscillate between these
two values.
If those extraneous values had been removed from the range as they are
supposed to be, we would have had edge 421->413 :[irange] unsigned int
[0, 0][4, 4] MASK 0x4 VALUE 0x0, and then iteration converges as it
should and there is no problem.
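To make the mask arithmetic concrete, here is a tiny standalone sketch
(mine, not part of the patch) of why only 0 and 4 survive, assuming the
usual irange_bitmask semantics where a set mask bit means that bit is
unknown and every cleared mask bit must match VALUE:

  /* Hypothetical illustration, not from the patch: enumerate [0, 6]
     and keep only the values consistent with MASK 0x4 VALUE 0x0.  */
  #include <stdio.h>

  int main (void)
  {
    unsigned mask = 0x4, value = 0x0;
    for (unsigned v = 0; v <= 6; v++)
      /* Known bits (those cleared in mask) must agree with VALUE.  */
      if ((v & ~mask) == (value & ~mask))
        printf ("%u\n", v);   /* Prints 0 and 4.  */
    return 0;
  }

So the subrange [4, 6] must shrink to [4, 4], which is exactly the
trimming the patch now forces whenever the bitmask changes.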
This is very similar to
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=119712, and it's time to
simply eliminate this issue.
This patch changes the irange::intersect_bitmask () routine to adjust
the range bounds to match the bitmask whenever the bitmask changes at
all.
The performance impact on VRP is under 0.5%, and this eliminates this
particular problem going forward... presumably.
Unfortunately, I could not find a way to create a reduced testcase for
this. However, PR 121987 turns out to be this same problem, and it
DOES have a reduced testcase, so I incorporated that as the testcase.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=121468 is related to
bitmasks, but is a different issue.
Bootstraps on x86_64-pc-linux-gnu with no new regressions.
Pushed.
Andrew
From f8d4c829f10229a2d5ab5268870e93f7ac4b55e0 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod <[email protected]>
Date: Thu, 2 Oct 2025 11:51:01 -0400
Subject: [PATCH] If a range's bitmask changes, reflect it in the bounds.
Rather than trying to be smart, if the bitmask changes, adjust all range
bounds to satisfy the bitmask requirements.
PR tree-optimization/121206
gcc/
* value-range.cc (irange::intersect_bitmask): Always call
set_range_from_bitmask if the bitmask changes.
gcc/testsuite/
* gcc.dg/pr121987.c: New.
---
gcc/testsuite/gcc.dg/pr121987.c | 16 ++++++++++++++++
gcc/value-range.cc | 11 ++++-------
2 files changed, 20 insertions(+), 7 deletions(-)
create mode 100644 gcc/testsuite/gcc.dg/pr121987.c
diff --git a/gcc/testsuite/gcc.dg/pr121987.c b/gcc/testsuite/gcc.dg/pr121987.c
new file mode 100644
index 00000000000..04845a4d8ca
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr121987.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+int a, b, c, d;
+int main() {
+ unsigned long e = 10000000000;
+ unsigned f;
+ int g;
+ while (a) {
+ c = 1;
+ d = f;
+ f = ~(~(~(b && g) % a * ~e) << c);
+ b = e && f % e + ~f;
+ g = a;
+ }
+ return 0;
+}
diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index dc6909e77c5..d34a2623db4 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -2552,22 +2552,19 @@ irange::intersect_bitmask (const irange &r)
{
gcc_checking_assert (!undefined_p () && !r.undefined_p ());
+ // If the bitmasks are the same, do nothing.
if (m_bitmask == r.m_bitmask)
return false;
irange_bitmask bm = get_bitmask ();
irange_bitmask save = bm;
bm.intersect (r.get_bitmask ());
- // Use ths opportunity to make sure mask always reflects the
- // best mask we have.
- m_bitmask = bm;
- // Updating m_bitmask may still yield a semantic bitmask (as
- // returned by get_bitmask) which is functionally equivalent to what
- // we originally had. In which case, there's still no change.
- if (save == bm || save == get_bitmask ())
+ // If the new mask is the same, there is no change.
+ if (m_bitmask == bm)
return false;
+ m_bitmask = bm;
if (!set_range_from_bitmask ())
normalize_kind ();
if (flag_checking)
--
2.45.0