https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81162

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|ASSIGNED                    |NEW
                 CC|                            |rguenth at gcc dot gnu.org,
                   |                            |wschmidt at gcc dot gnu.org
           Assignee|rguenth at gcc dot gnu.org  |unassigned at gcc dot gnu.org

--- Comment #5 from Richard Biener <rguenth at gcc dot gnu.org> ---
So the i1 && i2 case goes to FAIL.  After VRP1:

  # RANGE [0, 255] NONZERO 255
  _2 = (int) uc.0_1;
...
  if (i1 && i2)
  <bb 5> [100.00%] [count: INV]:
  # RANGE [0, 1]
  # iftmp.1_14 = PHI <1(3), 0(4)>
  # RANGE [0, 1] NONZERO 1
  _7 = (int) iftmp.1_14;
  # RANGE [0, 256] NONZERO 511
  _10 = _7 + _2;
  # RANGE [-256, 0]
  _11 = 0 - _10;
  _12 = _11 ^ -21096;
  # RANGE ~[2147483648, 18446744071562067967]
  _13 = (long long unsigned int) _12;
  if (_13 <= 9031239389974324562)
    OK
  else
    FAIL

the unfolded 0 - _10 gets cleaned up by forwprop3 to

  # RANGE [-256, 0]
  _11 = -_10;

PRE then takes the opportunity to optimize this to

  <bb 5> [100.00%] [count: INV]:
  # RANGE [0, 1]
  # iftmp.1_14 = PHI <1(4), 0(3), 0(2)>
  # RANGE [-256, 0]
  # prephitmp_27 = PHI <_26(4), _3(3), _3(2)>
  _12 = prephitmp_27 ^ -21096;
  # RANGE ~[2147483648, 18446744071562067967]
  _13 = (long long unsigned int) _12;
  if (_13 <= 9031239389974324562)

which still looks correct to me.  Then slsr seems to fold

  <bb 2> [100.00%] [count: INV]:
  uc.0_1 = uc;
  # RANGE [0, 255] NONZERO 255
  _2 = (int) uc.0_1;
  # RANGE [-255, 0]
  _3 = -_2;
  # RANGE [2147483392, 2147483647] NONZERO 2147483647
  _17 = _3 + 2147483647;
...
   <bb 4> [25.00%] [count: INV]:
   _24 = _2 + 1;
-  _26 = -_24;
+  _26 = _17 - -2147483648;

somehow, which looks really odd, and from which VRP2 rightfully derives

_26: [2147483647, +INF]

Strength reduction candidate vector:

  1  [2] _3 = -_2;
     MULT : (_2 + 0) * -1 : int
     basis: 0  dependent: 2  sibling: 0
     next-interp: 0  dead-savings: 0

  2  [2] _17 = _3 + 2147483647;
     MULT : (_2 + -2147483647) * -1 : int
     basis: 1  dependent: 4  sibling: 0
     next-interp: 0  dead-savings: 0

I think this is an invalid candidate given the undefined-overflow semantics of
signed integer arithmetic.  We have to be very careful here, or emit all new
expressions in unsigned arithmetic.

  3  [4] _24 = _2 + 1;
     ADD  : _2 + (1 * 1) : int
     basis: 0  dependent: 0  sibling: 0
     next-interp: 0  dead-savings: 0

  4  [4] _26 = -_24;
     MULT : (_2 + 1) * -1 : int
     basis: 2  dependent: 0  sibling: 0
     next-interp: 0  dead-savings: 4

That looks ok, but we seem to end up relating this to candidate 2 (its basis):

Strength reduction candidate chains:

_2 -> 1 -> 4 -> 3 -> 2
_12 -> 5 -> 6


Processing dependency tree rooted at 1.
Replacing: _26 = -_24;
With: _26 = _17 - -2147483648;

Cost-wise, I guess the win is that _24 is eliminated by the transform.

Bill?
