When performing final value replacement we guard against exponential
(temporary) code growth due to unsharing of trees (SCEV relies
heavily on tree sharing).  The following relaxes this limit a tiny
bit to cover a few more optimizations and adds comments describing
what the real fix would be.
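
As an illustration (a hand-written sketch, not actual compiler output),
final value replacement computes closed-form final values for the
reductions in a loop like the one in the new testcase; the sketch below
shows roughly what the loop collapses to, and how sub-expressions can
end up shared between the SCEV-derived trees, which is what the growth
guard is about:

  /* Hypothetical, hand-written sketch of the expected transform for the
     testcase below; for illustration only.  */
  int a, b, n;
  int w;
  void fun1 (int t)
  {
    /* a gains w and b loses w on each of the 100 iterations, so a + b is
       invariant and t accumulates 100 * (a + b).  Sub-expressions such as
       100 * w may end up shared between the replacement trees; unsharing
       such trees is the (temporary) growth being limited.  */
    n = t + 100 * (a + b);
    a = a + 100 * w;
    b = b - 100 * w;
  }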

Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.

        PR tree-optimization/113080
        * tree-scalar-evolution.cc (expression_expensive_p): Allow
        a tiny bit of growth due to expansion of shared trees.
        (final_value_replacement_loop): Add comment.

        * gcc.dg/tree-ssa/sccp-3.c: New testcase.
---
 gcc/testsuite/gcc.dg/tree-ssa/sccp-3.c | 20 ++++++++++++++++++++
 gcc/tree-scalar-evolution.cc           | 11 ++++++++++-
 2 files changed, 30 insertions(+), 1 deletion(-)
 create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/sccp-3.c

diff --git a/gcc/testsuite/gcc.dg/tree-ssa/sccp-3.c b/gcc/testsuite/gcc.dg/tree-ssa/sccp-3.c
new file mode 100644
index 00000000000..b8c67427f16
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/sccp-3.c
@@ -0,0 +1,20 @@
+/* PR/113080 */
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+int a,b,n;
+int w;
+void fun1(int t)
+{
+  for(int i=0;i<100;i++)
+    {
+      a+=w;
+      b-=w;
+      t+=a+b;
+    }
+  n=t;
+}
+
+/* We should apply final value replacement to all reductions and
+   elide the loop.  */
+/* { dg-final { scan-tree-dump-times "<bb" 1 "optimized" } } */
diff --git a/gcc/tree-scalar-evolution.cc b/gcc/tree-scalar-evolution.cc
index 94250b143b3..d743402d1c8 100644
--- a/gcc/tree-scalar-evolution.cc
+++ b/gcc/tree-scalar-evolution.cc
@@ -3529,7 +3529,12 @@ expression_expensive_p (tree expr, bool *cond_overflow_p)
   uint64_t expanded_size = 0;
   *cond_overflow_p = false;
   return (expression_expensive_p (expr, cond_overflow_p, cache, expanded_size)
-         || expanded_size > cache.elements ());
+         /* ???  Both the explicit unsharing and gimplification of expr will
+            expand shared trees to multiple copies.
+            Guard against exponential growth by counting the visits and
+            comparing against the number of original nodes.  Allow a tiny
+            bit of duplication to catch some additional optimizations.  */
+         || expanded_size > (cache.elements () + 1));
 }
 
 /* Match.pd function to match bitwise inductive expression.
@@ -3867,6 +3872,10 @@ final_value_replacement_loop (class loop *loop)
          fprintf (dump_file, "\n");
        }
       any = true;
+      /* ???  Here we'd like to have an unshare_expr that would assign
+        shared sub-trees to new temporary variables either gimplified
+        to a GIMPLE sequence or to a statement list (keeping this a
+        GENERIC interface).  */
       def = unshare_expr (def);
       remove_phi_node (&psi, false);
 
-- 
2.35.3
