https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92758

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Target|                            |powerpc
             Status|UNCONFIRMED                 |ASSIGNED
   Last reconfirmed|                            |2019-12-03
           Assignee|unassigned at gcc dot gnu.org      |rguenth at gcc dot gnu.org
   Target Milestone|---                         |10.0
     Ever confirmed|0                           |1

--- Comment #1 from Richard Biener <rguenth at gcc dot gnu.org> ---
OK, so the testcases look like

testf_00 (__vector float x)
{
  vector(4) int _1;
  vector(4) int _2;
  int _4;
  __vector float _5;

  <bb 2> :
  _1 = VIEW_CONVERT_EXPR<__vector signed int>(x_3(D));
  _4 = BIT_FIELD_REF <x_3(D), 32, 0>;
  _2 = {_4, _4, _4, _4};
  _5 = VIEW_CONVERT_EXPR<__vector float>(_2);
  return _5;
}

which we previously optimized to

  <bb 2> :
  _1 = VIEW_CONVERT_EXPR<__vector signed int>(x_3(D));
  _4 = BIT_FIELD_REF <x_3(D), 32, 0>;
  _7 = VIEW_CONVERT_EXPR<vector(4) int>(x_3(D));
  _8 = VEC_PERM_EXPR <_7, _7, { 0, 0, 0, 0 }>;
  _5 = VIEW_CONVERT_EXPR<__vector float>(_8);
  return _5;

So it seems Power does not have a special splat instruction for this after all.
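For reference, the two GIMPLE forms above correspond roughly to the following C sketch using the GNU vector extensions (the function names are made up for illustration; `__builtin_shuffle` with a zero mask is how the `VEC_PERM_EXPR <_7, _7, { 0, 0, 0, 0 }>` splat would be spelled in source):

```c
typedef int   v4si __attribute__ ((vector_size (16)));
typedef float v4sf __attribute__ ((vector_size (16)));

/* Uniform-CTOR form: extract lane 0 and rebuild the vector with a
   constructor, matching the BIT_FIELD_REF + {_4, _4, _4, _4} GIMPLE.  */
v4sf
splat0_ctor (v4sf x)
{
  v4si xi = (v4si) x;            /* VIEW_CONVERT_EXPR */
  int lane0 = xi[0];             /* BIT_FIELD_REF <x, 32, 0> */
  v4si r = { lane0, lane0, lane0, lane0 };
  return (v4sf) r;
}

/* VEC_PERM_EXPR form: the same splat expressed as a permute with a
   constant all-zeros selector.  */
v4sf
splat0_perm (v4sf x)
{
  v4si xi = (v4si) x;
  v4si mask = { 0, 0, 0, 0 };
  return (v4sf) __builtin_shuffle (xi, mask);
}
```

Both return a vector with lane 0 of the input broadcast to every lane; the bug is about which of the two forms the target can expand to better code.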

I'll simply revert the change to not consider uniform vector CTORs.
