https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92868

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org

--- Comment #2 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
That code has lots of issues.  Some are in patch form below, noted as I was
going through them:
gimple_call_alloc_size: for the case when size is INTEGER_CST and it is the
only alloc_size argument, I don't see a reason not to return size earlier;
there is no need for wi::to_wide.  I also don't see a point in computing
rng1[0] and rng2[0] when nothing really uses them (write-only elements).  Plus
some formatting fixes and a typo in a comment.

I don't understand why you are using wi::sign_mask; the normal test for a
negative wide_int, when it is treated as a signed integer of the corresponding
precision, is wi::neg_p, which is more readable and descriptive.  If something
is tested on wide ints with extra precision and the ::from was UNSIGNED, that
is of course not possible, but sign_mask will not work in that case either.
And if it is ::from with SIGNED, neg_p should work fine too.
There is a weird:
          wide_int declsize = wi::to_wide (size);
          if (wi::sign_mask (dstoffrng[0]) > 0)
            declsize += dstoffrng[0];
That condition is never true; sign_mask only ever returns 0 or -1:
  return (HOST_WIDE_INT) (high) < 0 ? -1 : 0;
None of this fixes the ICE.  How exactly to fix that depends on whether *poff
can be something other than INTEGER_CST or not.  If it can only be
INTEGER_CST, then whatever code is setting *poff to a non-INTEGER_CST should
instead punt, or set it to some safe value.  If it can be anything, then while
it is fine to call functions like integer_zerop etc. on it, tree_int_cst_sgn
requires its argument to be an INTEGER_CST, so there needs to be
TREE_CODE (*poff) == INTEGER_CST && tree_int_cst_sgn (*poff) < 0 instead.  Or
perhaps you want !tree_expr_nonnegative_p (*poff) instead?

--- gcc/builtins.c.jj   2019-12-05 09:47:23.178710510 +0100
+++ gcc/builtins.c      2019-12-09 15:52:55.951404452 +0100
@@ -3746,36 +3746,33 @@ gimple_call_alloc_size (gimple *stmt)
     }

   tree size = gimple_call_arg (stmt, argidx1);
+  if (argidx2 > nargs && TREE_CODE (size) == INTEGER_CST)
+    return size;

   wide_int rng1[2];
   if (TREE_CODE (size) == INTEGER_CST)
-    rng1[0] = rng1[1] = wi::to_wide (size);
+    rng1[1] = wi::to_wide (size);
   else if (TREE_CODE (size) != SSA_NAME
           || get_range_info (size, rng1, rng1 + 1) != VR_RANGE)
     return NULL_TREE;

-  if (argidx2 > nargs && TREE_CODE (size) == INTEGER_CST)
-    return size;
-
   /* To handle ranges do the math in wide_int and return the product
      of the upper bounds as a constant.  Ignore anti-ranges.  */
-  tree n = argidx2 < nargs ? gimple_call_arg (stmt, argidx2) : integer_one_node;
+  tree n
+    = argidx2 < nargs ? gimple_call_arg (stmt, argidx2) : integer_one_node;
   wide_int rng2[2];
   if (TREE_CODE (n) == INTEGER_CST)
-    rng2[0] = rng2[1] = wi::to_wide (n);
+    rng2[1] = wi::to_wide (n);
   else if (TREE_CODE (n) != SSA_NAME
           || get_range_info (n, rng2, rng2 + 1) != VR_RANGE)
     return NULL_TREE;

-  /* Extend to the maximum precsion to avoid overflow.  */
+  /* Extend to the maximum precision to avoid overflow.  */
   const int prec = ADDR_MAX_PRECISION;
-  rng1[0] = wide_int::from (rng1[0], prec, UNSIGNED);
   rng1[1] = wide_int::from (rng1[1], prec, UNSIGNED);
-  rng2[0] = wide_int::from (rng2[0], prec, UNSIGNED);
   rng2[1] = wide_int::from (rng2[1], prec, UNSIGNED);

   /* Return the lesser of SIZE_MAX and the product of the upper bounds.  */
-  rng1[0] = rng1[0] * rng2[0];
   rng1[1] = rng1[1] * rng2[1];
   tree size_max = TYPE_MAX_VALUE (sizetype);
   if (wi::gtu_p (rng1[1], wi::to_wide (size_max, prec)))
@@ -3853,7 +3850,7 @@ compute_objsize (tree dest, int ostype,
                  /* Ignore negative offsets for now.  For others,
                     use the lower bound as the most optimistic
                     estimate of the (remaining) size.  */
-                 if (wi::sign_mask (wioff))
+                 if (wi::neg_p (wioff))
                    ;
                  else if (wi::ltu_p (wioff, wisiz))
                    {
@@ -3882,9 +3879,8 @@ compute_objsize (tree dest, int ostype,

                      /* Ignore negative offsets for now.  For others,
                         use the lower bound as the most optimistic
-                        estimate of the (remaining)size.  */
-                     if (wi::sign_mask (min)
-                         || wi::sign_mask (max))
+                        estimate of the (remaining) size.  */
+                     if (wi::neg_p (min) || wi::neg_p (max))
                        ;
                      else if (wi::ltu_p (min, wisiz))
                        {
@@ -3912,8 +3908,7 @@ compute_objsize (tree dest, int ostype,
   if (!ostype)
     return NULL_TREE;

-  if (TREE_CODE (dest) == ARRAY_REF
-      || TREE_CODE (dest) == MEM_REF)
+  if (TREE_CODE (dest) == ARRAY_REF || TREE_CODE (dest) == MEM_REF)
     {
       tree ref = TREE_OPERAND (dest, 0);
       tree off = TREE_OPERAND (dest, 1);
@@ -3972,7 +3967,7 @@ compute_objsize (tree dest, int ostype,
              *poff = size_binop (PLUS_EXPR, *poff, off);
            }

-         if (wi::sign_mask (offrng[0]) >= 0)
+         if (!wi::neg_p (offrng[0]))
            {
              if (TREE_CODE (size) != INTEGER_CST)
                return NULL_TREE;
@@ -4007,7 +4002,7 @@ compute_objsize (tree dest, int ostype,
            declsize += dstoffrng[0];

          offrng[1] += dstoffrng[1];
-         if (wi::sign_mask (offrng[1]) < 0)
+         if (wi::neg_p (offrng[1]))
            return size_zero_node;

          return wide_int_to_tree (sizetype, declsize);
