A gentle reminder to review this patch.

Thanks,

Manjunath S Matti

On 11/12/25 2:57 pm, Manjunath S Matti wrote:
This patch removes the mask OPTION_MASK_P8_VECTOR from rs6000_isa_flags,
removes all remaining uses of OPTION_MASK_P8_VECTOR, and replaces all uses
of TARGET_P8_VECTOR with (TARGET_POWER8 && TARGET_VSX) or
(TARGET_POWER8 && TARGET_ALTIVEC), according to the context.

gcc/ChangeLog:

         * config/rs6000/vsx.md (*vspltisw_v2di_split): Move to ...
         * config/rs6000/altivec.md (*vspltisw_v2di_split): ... here. Replace
         TARGET_P8_VECTOR with TARGET_POWER8 && TARGET_ALTIVEC.
         (mulv4si3_p8, altivec_vmsumudm, p8_vmrgew_<mode>, p8_vmrgow_<mode>,
         p8_vmrgew_<mode>_direct, p8_vmrgow_<mode>_direct,
         vec_widen_umult_even_v4si, vec_widen_smult_even_v4si,
         vec_widen_umult_odd_v4si, vec_widen_smult_odd_v4si,
         altivec_vmuleuw, altivec_vmulouw, altivec_vmulesw, altivec_vmulosw,
         altivec_vpermxor, vec_unpacku_float_lo_v8hi, *p9v_ctz<mode>2,
         p8v_vgbbd, altivec_vbpermq, altivec_vbpermq2, bcd<bcd_add_sub>_<mode>,
         *bcd<bcd_add_sub>_test_<mode>, *bcd<bcd_add_sub>_test2_<mode>,
         bcd<bcd_add_sub>_<code>_<mode>, *bcdinvalid_<mode>, bcdinvalid_<mode>,
         bcdshift_v16qi, define_peephole2 to combine a bcdadd/bcdsub): Replace
         TARGET_P8_VECTOR with TARGET_POWER8 && TARGET_ALTIVEC.
         (mulv4si3): Replace TARGET_P8_VECTOR with TARGET_POWER8.
         * config/rs6000/constraints.md (define_constraint wB): Replace
         TARGET_P8_VECTOR with TARGET_POWER8 && TARGET_ALTIVEC.
         (define_constraint wM): Replace TARGET_P8_VECTOR with TARGET_POWER8
         && TARGET_VSX.
         * config/rs6000/crypto.md (crypto_vpmsum<CR_char>): Replace
         TARGET_P8_VECTOR with TARGET_POWER8 && TARGET_ALTIVEC.
         (crypto_vpermxor_<mode>): Likewise.
         * config/rs6000/predicates.md (vector_shift_constant): Replace
         TARGET_P8_VECTOR with TARGET_POWER8 && TARGET_VSX.
         * config/rs6000/rs6000-builtin.cc (rs6000_builtin_is_supported):
         Replace TARGET_P8_VECTOR with TARGET_POWER8 && TARGET_VSX for ENB_P8V.
         * config/rs6000/rs6000-c.cc (rs6000_target_modify_macros): Remove some
         stale comments on TARGET_P8_VECTOR setting, replace the check for
         OPTION_MASK_P8_VECTOR with checks for OPTION_MASK_POWER8 and
         OPTION_MASK_VSX.
         * config/rs6000/rs6000-cpus.def (ISA_2_7_MASKS_SERVER): Remove
         OPTION_MASK_P8_VECTOR.
         (OTHER_VSX_VECTOR_MASKS): Likewise.
         (POWERPC_MASKS): Likewise.
         * config/rs6000/rs6000-string.cc (emit_final_compare_vec): Replace
         TARGET_P8_VECTOR with TARGET_POWER8 && TARGET_VSX.
         (expand_block_compare): Replace TARGET_P8_VECTOR with TARGET_POWER8
         since TARGET_EFFICIENT_UNALIGNED_VSX ensures VSX is enabled.
         (expand_strn_compare): Likewise.
         * config/rs6000/rs6000.cc (rs6000_clone_map): Use OPTION_MASK_POWER8
         rather than OPTION_MASK_P8_VECTOR for arch_2_07.
         (rs6000_setup_reg_addr_masks): Replace !TARGET_P8_VECTOR with
         !TARGET_POWER8 || !TARGET_VSX.
         (rs6000_init_hard_regno_mode_ok): Replace TARGET_P8_VECTOR with
         TARGET_POWER8 for rs6000_vector_unit as VSX is guaranteed, and
         replace TARGET_P8_VECTOR with TARGET_POWER8 && TARGET_VSX in the
         else branch.
         (rs6000_option_override_internal): Remove OPTION_MASK_P8_VECTOR use,
         simplify TARGET_P8_VECTOR || TARGET_POWER8 to TARGET_POWER8, remove
         the handling of TARGET_P8_VECTOR when !TARGET_VSX, and adjust the
         handling of TARGET_P9_VECTOR when !TARGET_P8_VECTOR by checking
         !TARGET_VSX instead.
         (vspltisw_vupkhsw_constant_p): Replace !TARGET_P8_VECTOR check with
         !TARGET_POWER8 and !TARGET_ALTIVEC checks.
         (output_vec_const_move): Replace TARGET_P8_VECTOR with TARGET_POWER8
         and TARGET_VSX.
         (rs6000_expand_vector_init): Likewise.
         (rs6000_secondary_reload_simple_move): Likewise.
         (rs6000_preferred_reload_class): Likewise.
         (rs6000_expand_vector_set): Replace TARGET_P8_VECTOR with TARGET_POWER8
         and TARGET_ALTIVEC.
         (altivec_expand_vec_perm_le): Likewise.
         (altivec_expand_vec_perm_const): Replace OPTION_MASK_P8_VECTOR with
         OPTION_MASK_VSX | OPTION_MASK_POWER8 and adjust the code that checks
         the patterns' masks.
         (rs6000_opt_masks): Remove entry for power8-vector.
         * config/rs6000/rs6000.h (TARGET_DIRECT_MOVE): Replace TARGET_P8_VECTOR
         with TARGET_POWER8 && TARGET_VSX.
         (TARGET_XSCVDPSPN): Likewise.
         (TARGET_XSCVSPDPN): Likewise.
         (TARGET_VADDUQM): Replace TARGET_P8_VECTOR with TARGET_POWER8 &&
         TARGET_ALTIVEC.
         * config/rs6000/rs6000.md (isa attr p8v): Replace TARGET_P8_VECTOR
         with TARGET_POWER8 && TARGET_VSX.
         (define_mode_iterator ALTIVEC_DFORM): Likewise.
         (floatsi<mode>2_lfiwax_mem, floatunssi<mode>2_lfiwzx_mem,
         *fix<uns>_trunc<SFDF:mode><QHSI:mode>2_mem, fixuns_trunc<mode>si2,
         define_split SI to DI sign_extend, define_peephole2 for SF unless
         register move): Replace TARGET_P8_VECTOR with TARGET_POWER8 &&
         TARGET_VSX.
         (eqv<mode>3, nand<mode>3, iorn<mode>3, *boolc<mode>3_internal1,
         *boolc<mode>3_internal2, *boolcc<mode>3_internal1,
         *boolcc<mode>3_internal2, *eqv<mode>3_internal1,
         *eqv<mode>3_internal2): Replace TARGET_P8_VECTOR with TARGET_POWER8
         && TARGET_ALTIVEC, and simplify the split condition with && where
         possible.
         * config/rs6000/rs6000.opt (mpower8-vector): Remove from
         rs6000_isa_flags.
         * config/rs6000/vector.md (cr6_test_for_lt_reverse): Replace
         TARGET_P8_VECTOR with TARGET_POWER8 && TARGET_ALTIVEC.
         (ctz<mode>2): Likewise.
---
  gcc/config/rs6000/altivec.md        | 90 +++++++++++++++++++----------
  gcc/config/rs6000/constraints.md    |  4 +-
  gcc/config/rs6000/crypto.md         |  4 +-
  gcc/config/rs6000/predicates.md     |  2 +-
  gcc/config/rs6000/rs6000-builtin.cc |  2 +-
  gcc/config/rs6000/rs6000-c.cc       | 35 +----------
  gcc/config/rs6000/rs6000-cpus.def   |  3 -
  gcc/config/rs6000/rs6000-string.cc  | 14 ++---
  gcc/config/rs6000/rs6000.cc         | 48 ++++++++-------
  gcc/config/rs6000/rs6000.h          | 10 ++--
  gcc/config/rs6000/rs6000.md         | 53 ++++++++---------
  gcc/config/rs6000/rs6000.opt        |  2 +-
  gcc/config/rs6000/vector.md         |  4 +-
  gcc/config/rs6000/vsx.md            | 24 --------
  14 files changed, 132 insertions(+), 163 deletions(-)

diff --git a/gcc/config/rs6000/altivec.md b/gcc/config/rs6000/altivec.md
index 3336b0c75dd..056b1e413cb 100644
--- a/gcc/config/rs6000/altivec.md
+++ b/gcc/config/rs6000/altivec.md
@@ -800,7 +800,7 @@
    [(set (match_operand:V4SI 0 "register_operand" "=v")
          (mult:V4SI (match_operand:V4SI 1 "register_operand" "v")
                     (match_operand:V4SI 2 "register_operand" "v")))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vmuluwm %0,%1,%2"
    [(set_attr "type" "veccomplex")])
@@ -819,7 +819,7 @@
    rtx low_product;
    rtx high_product;
-  if (TARGET_P8_VECTOR)
+  if (TARGET_POWER8)
      {
        emit_insn (gen_mulv4si3_p8 (operands[0], operands[1], operands[2]));
        DONE;
@@ -1018,7 +1018,7 @@
                      (match_operand:V2DI 2 "register_operand" "v")
                      (match_operand:V1TI 3 "register_operand" "v")]
                     UNSPEC_VMSUMUDM))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vmsumudm %0,%1,%2,%3"
    [(set_attr "type" "veccomplex")])
@@ -1484,7 +1484,7 @@
            (match_operand:VSX_W 2 "register_operand" "v"))
          (parallel [(const_int 0) (const_int 4)
                     (const_int 2) (const_int 6)])))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
  {
    if (BYTES_BIG_ENDIAN)
      return "vmrgew %0,%1,%2";
@@ -1501,7 +1501,7 @@
            (match_operand:VSX_W 2 "register_operand" "v"))
          (parallel [(const_int 1) (const_int 5)
                     (const_int 3) (const_int 7)])))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
  {
    if (BYTES_BIG_ENDIAN)
      return "vmrgow %0,%1,%2";
@@ -1532,7 +1532,7 @@
        (unspec:VSX_W [(match_operand:VSX_W 1 "register_operand" "v")
                       (match_operand:VSX_W 2 "register_operand" "v")]
                     UNSPEC_VMRGEW_DIRECT))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vmrgew %0,%1,%2"
    [(set_attr "type" "vecperm")])
@@ -1541,7 +1541,7 @@
        (unspec:VSX_W [(match_operand:VSX_W 1 "register_operand" "v")
                       (match_operand:VSX_W 2 "register_operand" "v")]
                     UNSPEC_VMRGOW_DIRECT))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vmrgow %0,%1,%2"
    [(set_attr "type" "vecperm")])
@@ -1601,7 +1601,7 @@
    [(use (match_operand:V2DI 0 "register_operand"))
     (use (match_operand:V4SI 1 "register_operand"))
     (use (match_operand:V4SI 2 "register_operand"))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
  {
   if (BYTES_BIG_ENDIAN)
      emit_insn (gen_altivec_vmuleuw (operands[0], operands[1], operands[2]));
@@ -1627,7 +1627,7 @@
    [(use (match_operand:V2DI 0 "register_operand"))
     (use (match_operand:V4SI 1 "register_operand"))
     (use (match_operand:V4SI 2 "register_operand"))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
  {
    if (BYTES_BIG_ENDIAN)
      emit_insn (gen_altivec_vmulesw (operands[0], operands[1], operands[2]));
@@ -1705,7 +1705,7 @@
    [(use (match_operand:V2DI 0 "register_operand"))
     (use (match_operand:V4SI 1 "register_operand"))
     (use (match_operand:V4SI 2 "register_operand"))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
  {
    if (BYTES_BIG_ENDIAN)
      emit_insn (gen_altivec_vmulouw (operands[0], operands[1], operands[2]));
@@ -1731,7 +1731,7 @@
    [(use (match_operand:V2DI 0 "register_operand"))
     (use (match_operand:V4SI 1 "register_operand"))
     (use (match_operand:V4SI 2 "register_operand"))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
  {
    if (BYTES_BIG_ENDIAN)
      emit_insn (gen_altivec_vmulosw (operands[0], operands[1], operands[2]));
@@ -1830,7 +1830,7 @@
         (unspec:V2DI [(match_operand:V4SI 1 "register_operand" "v")
                       (match_operand:V4SI 2 "register_operand" "v")]
                      UNSPEC_VMULEUW))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vmuleuw %0,%1,%2"
    [(set_attr "type" "veccomplex")])
@@ -1848,7 +1848,7 @@
         (unspec:V2DI [(match_operand:V4SI 1 "register_operand" "v")
                       (match_operand:V4SI 2 "register_operand" "v")]
                      UNSPEC_VMULOUW))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vmulouw %0,%1,%2"
    [(set_attr "type" "veccomplex")])
@@ -1866,7 +1866,7 @@
         (unspec:V2DI [(match_operand:V4SI 1 "register_operand" "v")
                       (match_operand:V4SI 2 "register_operand" "v")]
                      UNSPEC_VMULESW))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vmulesw %0,%1,%2"
    [(set_attr "type" "veccomplex")])
@@ -1884,7 +1884,7 @@
         (unspec:V2DI [(match_operand:V4SI 1 "register_operand" "v")
                       (match_operand:V4SI 2 "register_operand" "v")]
                      UNSPEC_VMULOSW))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vmulosw %0,%1,%2"
    [(set_attr "type" "veccomplex")])
@@ -2181,7 +2181,7 @@
  ;; VSPLTISW or XXSPLTIB to load up the constant, and not worry about the bits
  ;; that the vector shift instructions will not use.
  (define_mode_iterator VSHIFT_MODE     [(V4SI "TARGET_P9_VECTOR")
-                                        (V2DI "TARGET_P8_VECTOR")])
+                                        (V2DI "TARGET_POWER8 && TARGET_ALTIVEC")])
(define_code_iterator vshift_code [ashift ashiftrt lshiftrt])
  (define_code_attr vshift_attr         [(ashift   "ashift")
@@ -2194,7 +2194,7 @@
         (match_operand:VSHIFT_MODE 1 "register_operand" "v")
         (match_operand:VSHIFT_MODE 2 "vector_shift_constant" "")))
     (clobber (match_scratch:VSHIFT_MODE 3 "=&v"))]
-  "((<MODE>mode == V2DImode && TARGET_P8_VECTOR)
+  "((<MODE>mode == V2DImode && TARGET_POWER8 && TARGET_ALTIVEC)
      || (<MODE>mode == V4SImode && TARGET_P9_VECTOR))"
    "#"
    "&& 1"
@@ -2216,7 +2216,7 @@
    [(set (match_operand:VSHIFT_MODE 0 "register_operand" "=v")
        (unspec:VSHIFT_MODE [(match_operand 1 "const_int_operand" "n")]
                            UNSPEC_VECTOR_SHIFT))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
  {
    if (UINTVAL (operands[1]) <= 15)
      return "vspltisw %0,%1";
@@ -2754,6 +2754,32 @@
  }
    [(set_attr "type" "vecperm")])
+(define_insn_and_split "*vspltisw_v2di_split"
+  [(set (match_operand:V2DI 0 "altivec_register_operand" "=v")
+       (match_operand:V2DI 1 "vspltisw_vupkhsw_constant_split" "W"))]
+  "TARGET_POWER8 && TARGET_ALTIVEC
+   && vspltisw_vupkhsw_constant_split (operands[1], V2DImode)"
+  "#"
+  "&& 1"
+  [(const_int 0)]
+{
+  rtx op0 = operands[0];
+  rtx op1 = operands[1];
+  rtx tmp = can_create_pseudo_p ()
+           ? gen_reg_rtx (V4SImode)
+           : gen_lowpart (V4SImode, op0);
+  int value;
+
+  vspltisw_vupkhsw_constant_p (op1, V2DImode, &value);
+  emit_insn (gen_altivec_vspltisw (tmp, GEN_INT (value)));
+  emit_insn (gen_altivec_vupkhsw_direct (op0, tmp));
+
+  DONE;
+}
+  [(set_attr "type" "vecperm")
+   (set_attr "length" "8")])
+
+
  /* The cbranch_optab doesn't allow FAIL, so old cpus which are
     inefficient on unaligned vsx are disabled as the cost is high
     for unaligned load/store.  */
@@ -4132,7 +4158,7 @@
     (use (match_operand:V16QI 1 "register_operand"))
     (use (match_operand:V16QI 2 "register_operand"))
     (use (match_operand:V16QI 3 "register_operand"))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
  {
    if (!BYTES_BIG_ENDIAN)
      {
@@ -4364,7 +4390,7 @@
  (define_insn "*p8v_clz<mode>2"
    [(set (match_operand:VI2 0 "register_operand" "=v")
        (clz:VI2 (match_operand:VI2 1 "register_operand" "v")))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vclz<wd> %0,%1"
    [(set_attr "type" "vecsimple")])
@@ -4394,7 +4420,7 @@
  (define_insn "*p8v_popcount<mode>2"
    [(set (match_operand:VI2 0 "register_operand" "=v")
          (popcount:VI2 (match_operand:VI2 1 "register_operand" "v")))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vpopcnt<wd> %0,%1"
    [(set_attr "type" "vecsimple")])
@@ -4413,7 +4439,7 @@
    [(set (match_operand:V16QI 0 "register_operand" "=v")
        (unspec:V16QI [(match_operand:V16QI 1 "register_operand" "v")]
                      UNSPEC_VGBBD))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vgbbd %0,%1"
    [(set_attr "type" "vecsimple")])
@@ -4504,7 +4530,7 @@
        (unspec:V2DI [(match_operand:V16QI 1 "register_operand" "v")
                      (match_operand:V16QI 2 "register_operand" "v")]
                     UNSPEC_VBPERMQ))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vbpermq %0,%1,%2"
    [(set_attr "type" "vecperm")])
@@ -4514,7 +4540,7 @@
        (unspec:V16QI [(match_operand:V16QI 1 "register_operand" "v")
                       (match_operand:V16QI 2 "register_operand" "v")]
                      UNSPEC_VBPERMQ))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vbpermq %0,%1,%2"
    [(set_attr "type" "vecperm")])
@@ -4586,7 +4612,7 @@
                      (match_operand:QI 3 "const_0_to_1_operand" "n")]
                     UNSPEC_BCD_ADD_SUB))
     (clobber (reg:CCFP CR6_REGNO))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "bcd<bcd_add_sub>. %0,%1,%2,%3"
    [(set_attr "type" "vecsimple")])
@@ -4604,7 +4630,7 @@
                      UNSPEC_BCD_ADD_SUB)
         (match_operand:V2DF 4 "zero_constant" "j")))
     (clobber (match_scratch:VBCD 0 "=v"))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "bcd<bcd_add_sub>. %0,%1,%2,%3"
    [(set_attr "type" "vecsimple")])
@@ -4621,7 +4647,7 @@
                       (match_dup 3)]
                      UNSPEC_BCD_ADD_SUB)
         (match_operand:V2DF 4 "zero_constant" "j")))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "bcd<bcd_add_sub>. %0,%1,%2,%3"
    [(set_attr "type" "vecsimple")])
@@ -4719,7 +4745,7 @@
     (set (match_operand:SI 0 "register_operand")
        (BCD_TEST:SI (reg:CCFP CR6_REGNO)
                     (const_int 0)))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
  {
    operands[4] = CONST0_RTX (V2DFmode);
  })
@@ -4731,7 +4757,7 @@
                      UNSPEC_BCDSUB)
         (match_operand:V2DF 2 "zero_constant" "j")))
     (clobber (match_scratch:VBCD 0 "=v"))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "bcdsub. %0,%1,%1,0"
    [(set_attr "type" "vecsimple")])
@@ -4745,7 +4771,7 @@
     (set (match_operand:SI 0 "register_operand")
        (unordered:SI (reg:CCFP CR6_REGNO)
                      (const_int 0)))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
  {
    operands[2] = CONST0_RTX (V2DFmode);
  })
@@ -4757,7 +4783,7 @@
                       (match_operand:QI 3 "const_0_to_1_operand" "n")]
                     UNSPEC_BCDSHIFT))
     (clobber (reg:CCFP CR6_REGNO))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "bcds. %0,%1,%2,%3"
    [(set_attr "type" "vecsimple")])
@@ -4813,7 +4839,7 @@
                                 UNSPEC_BCD_ADD_SUB)
                    (match_operand:V2DF 4 "zero_constant")))
              (clobber (match_operand:V1TI 5 "register_operand"))])]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    [(parallel [(set (match_dup 0)
                   (unspec:V1TI [(match_dup 1)
                                 (match_dup 2)
diff --git a/gcc/config/rs6000/constraints.md b/gcc/config/rs6000/constraints.md
index 4875895e4f5..4db386184b0 100644
--- a/gcc/config/rs6000/constraints.md
+++ b/gcc/config/rs6000/constraints.md
@@ -104,7 +104,7 @@
    "@internal Signed 5-bit constant integer that can be loaded into an
     Altivec register."
    (and (match_code "const_int")
-       (match_test "TARGET_P8_VECTOR")
+       (match_test "TARGET_POWER8 && TARGET_ALTIVEC")
         (match_operand 0 "s5bit_cint_operand")))
(define_constraint "wE"
@@ -127,7 +127,7 @@
  (define_constraint "wM"
    "@internal Match vector constant with all 1's if the XXLORC instruction
     is available."
-  (and (match_test "TARGET_P8_VECTOR")
+  (and (match_test "TARGET_POWER8 && TARGET_VSX")
         (match_operand 0 "all_ones_constant")))
;; ISA 3.0 vector d-form addresses
diff --git a/gcc/config/rs6000/crypto.md b/gcc/config/rs6000/crypto.md
index 11e472ba00e..75ef6817b07 100644
--- a/gcc/config/rs6000/crypto.md
+++ b/gcc/config/rs6000/crypto.md
@@ -77,7 +77,7 @@
        (unspec:CR_mode [(match_operand:CR_mode 1 "register_operand" "v")
                         (match_operand:CR_mode 2 "register_operand" "v")]
                        UNSPEC_VPMSUM))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vpmsum<CR_char> %0,%1,%2"
    [(set_attr "type" "crypto")])
@@ -88,7 +88,7 @@
                         (match_operand:CR_mode 2 "register_operand" "v")
                         (match_operand:CR_mode 3 "register_operand" "v")]
                        UNSPEC_VPERMXOR))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
    "vpermxor %0,%1,%2,%3"
    [(set_attr "type" "vecperm")])
diff --git a/gcc/config/rs6000/predicates.md b/gcc/config/rs6000/predicates.md
index 5133dacd794..2adb43ebdaa 100644
--- a/gcc/config/rs6000/predicates.md
+++ b/gcc/config/rs6000/predicates.md
@@ -885,7 +885,7 @@
    if (mode == V2DImode)
      {
        min_value = 0;
-      if (!TARGET_P8_VECTOR)
+      if (!(TARGET_POWER8 && TARGET_VSX))
        return 0;
      }
    else if (mode == V4SImode)
diff --git a/gcc/config/rs6000/rs6000-builtin.cc b/gcc/config/rs6000/rs6000-builtin.cc
index bc1580f051b..9fe05b97f41 100644
--- a/gcc/config/rs6000/rs6000-builtin.cc
+++ b/gcc/config/rs6000/rs6000-builtin.cc
@@ -167,7 +167,7 @@ rs6000_builtin_is_supported (enum rs6000_gen_builtins fncode)
      case ENB_P8:
        return TARGET_POWER8;
      case ENB_P8V:
-      return TARGET_P8_VECTOR;
+      return TARGET_POWER8 && TARGET_VSX;
      case ENB_P9:
        return TARGET_MODULO;
      case ENB_P9_64:
diff --git a/gcc/config/rs6000/rs6000-c.cc b/gcc/config/rs6000/rs6000-c.cc
index d3b0a566821..270bdbd40b3 100644
--- a/gcc/config/rs6000/rs6000-c.cc
+++ b/gcc/config/rs6000/rs6000-c.cc
@@ -473,12 +473,7 @@ rs6000_target_modify_macros (bool define_p, HOST_WIDE_INT flags)
        if (rs6000_aix_extabi)
        rs6000_define_or_undefine_macro (define_p, "__EXTABI__");
      }
-  /* Note that the OPTION_MASK_VSX flag is automatically turned on in
-     the following conditions:
-     1. TARGET_P8_VECTOR is explicitly turned on and the OPTION_MASK_VSX
-        was not explicitly turned off.  Hereafter, the OPTION_MASK_VSX
-        flag is considered to have been explicitly turned on.
-     Note that the OPTION_MASK_VSX flag is automatically turned off in
+  /* Note that the OPTION_MASK_VSX flag is automatically turned off in
       the following conditions:
       1. The operating system does not support saving of AltiVec
        registers (OS_MISSING_ALTIVEC).
@@ -507,33 +502,9 @@ rs6000_target_modify_macros (bool define_p, HOST_WIDE_INT flags)
        /* Tell the user that our HTM insn patterns act as memory barriers.  */
        rs6000_define_or_undefine_macro (define_p, "__TM_FENCE__");
      }
-  /* Note that the OPTION_MASK_P8_VECTOR flag is automatically turned
-     on in the following conditions:
-     1. TARGET_P9_VECTOR is explicitly turned on and
-        OPTION_MASK_P8_VECTOR is not explicitly turned off.
-        Hereafter, the OPTION_MASK_P8_VECTOR flag is considered to
-        have been turned off explicitly.
-     Note that the OPTION_MASK_P8_VECTOR flag is automatically turned
-     off in the following conditions:
-     1. If any of TARGET_HARD_FLOAT, TARGET_ALTIVEC, or TARGET_VSX
-       were turned off explicitly and OPTION_MASK_P8_VECTOR flag was
-       not turned on explicitly.
-     2. If TARGET_ALTIVEC is turned off.  Hereafter, the
-       OPTION_MASK_P8_VECTOR flag is considered to have been turned off
-       explicitly.
-     3. If TARGET_VSX is turned off and OPTION_MASK_P8_VECTOR was not
-        explicitly enabled.  If TARGET_VSX is explicitly enabled, the
-        OPTION_MASK_P8_VECTOR flag is hereafter also considered to
-       have been turned off explicitly.  */
-  if ((flags & OPTION_MASK_P8_VECTOR) != 0)
+  if ((flags & OPTION_MASK_POWER8) != 0 && (flags & OPTION_MASK_VSX) != 0)
      rs6000_define_or_undefine_macro (define_p, "__POWER8_VECTOR__");
-  /* Note that the OPTION_MASK_P9_VECTOR flag is automatically turned
-     off in the following conditions:
-     1. If TARGET_P8_VECTOR is turned off and OPTION_MASK_P9_VECTOR is
-        not turned on explicitly. Hereafter, if OPTION_MASK_P8_VECTOR
-        was turned on explicitly, the OPTION_MASK_P9_VECTOR flag is
-        also considered to have been turned off explicitly.
-     Note that the OPTION_MASK_P9_VECTOR is automatically turned on
+  /* Note that the OPTION_MASK_P9_VECTOR is automatically turned on
       in the following conditions:
       1. If TARGET_P9_MINMAX was turned on explicitly.
          Hereafter, THE OPTION_MASK_P9_VECTOR flag is considered to
diff --git a/gcc/config/rs6000/rs6000-cpus.def b/gcc/config/rs6000/rs6000-cpus.def
index 4a1037616d7..b5e210cdaf0 100644
--- a/gcc/config/rs6000/rs6000-cpus.def
+++ b/gcc/config/rs6000/rs6000-cpus.def
@@ -48,7 +48,6 @@
     system.  */
  #define ISA_2_7_MASKS_SERVER  (ISA_2_6_MASKS_SERVER                   \
                                 | OPTION_MASK_POWER8                   \
-                                | OPTION_MASK_P8_VECTOR                \
                                 | OPTION_MASK_CRYPTO                   \
                                 | OPTION_MASK_EFFICIENT_UNALIGNED_VSX  \
                                 | OPTION_MASK_QUAD_MEMORY              \
@@ -86,7 +85,6 @@
  /* Flags that need to be turned off if -mno-vsx.  */
  #define OTHER_VSX_VECTOR_MASKS        (OPTION_MASK_EFFICIENT_UNALIGNED_VSX    \
                                 | OPTION_MASK_FLOAT128_KEYWORD         \
-                                | OPTION_MASK_P8_VECTOR                \
                                 | OPTION_MASK_CRYPTO                   \
                                 | OPTION_MASK_P9_VECTOR                \
                                 | OPTION_MASK_FLOAT128_HW              \
@@ -131,7 +129,6 @@
                                 | OPTION_MASK_NO_UPDATE                \
                                 | OPTION_MASK_POWER8                   \
                                 | OPTION_MASK_P8_FUSION                \
-                                | OPTION_MASK_P8_VECTOR                \
                                 | OPTION_MASK_P9_MINMAX                \
                                 | OPTION_MASK_P9_MISC                  \
                                 | OPTION_MASK_P9_VECTOR                \
diff --git a/gcc/config/rs6000/rs6000-string.cc b/gcc/config/rs6000/rs6000-string.cc
index 3d2911ca08a..e3267f4f11a 100644
--- a/gcc/config/rs6000/rs6000-string.cc
+++ b/gcc/config/rs6000/rs6000-string.cc
@@ -847,7 +847,7 @@ emit_final_compare_vec (rtx str1, rtx str2, rtx result,
      }
    else
      {
-      gcc_assert (TARGET_P8_VECTOR);
+      gcc_assert (TARGET_POWER8 && TARGET_VSX);
        rtx diffix = gen_reg_rtx (DImode);
        rtx result_gbbd = gen_reg_rtx (V16QImode);
        /* Since each byte of the input is either 00 or FF, the bytes in
@@ -2005,10 +2005,10 @@ expand_block_compare (rtx operands[])
       at least POWER8.  That way we can rely on overlapping compares to
       do the final comparison of less than 16 bytes.  Also I do not
       want to deal with making this work for 32 bits.  In addition, we
-     have to make sure that we have at least P8_VECTOR (we don't allow
-     P9_VECTOR without P8_VECTOR).  */
+     have to make sure that we have at least P8 vector support (we
+     don't allow P9 vector without P8 vector support).  */
    int use_vec = (bytes >= 33 && !TARGET_32BIT
-                && TARGET_EFFICIENT_UNALIGNED_VSX && TARGET_P8_VECTOR);
+                && TARGET_EFFICIENT_UNALIGNED_VSX && TARGET_POWER8);
/* We don't want to generate too much code. The loop code can take
       over for lengths greater than 31 bytes.  */
@@ -2473,10 +2473,10 @@ expand_strn_compare (rtx operands[], int no_length)
       at least POWER8.  That way we can rely on overlapping compares to
       do the final comparison of less than 16 bytes.  Also I do not
       want to deal with making this work for 32 bits.  In addition, we
-     have to make sure that we have at least P8_VECTOR (we don't allow
-     P9_VECTOR without P8_VECTOR).  */
+     have to make sure that we have at least P8 vector support (we
+     don't allow P9 vector without P8 vector support).  */
    int use_vec = (bytes >= 16 && !TARGET_32BIT
-                && TARGET_EFFICIENT_UNALIGNED_VSX && TARGET_P8_VECTOR);
+                && TARGET_EFFICIENT_UNALIGNED_VSX && TARGET_POWER8);
if (use_vec)
      required_align = 16;
diff --git a/gcc/config/rs6000/rs6000.cc b/gcc/config/rs6000/rs6000.cc
index bf899adc531..49b93948e6b 100644
--- a/gcc/config/rs6000/rs6000.cc
+++ b/gcc/config/rs6000/rs6000.cc
@@ -259,7 +259,7 @@ static const struct clone_map rs6000_clone_map[CLONE_MAX] = {
    { 0,                                "" },         /* Default options.  */
    { OPTION_MASK_CMPB,         "arch_2_05" },        /* ISA 2.05 (power6).  */
    { OPTION_MASK_POPCNTD,      "arch_2_06" },        /* ISA 2.06 (power7).  */
-  { OPTION_MASK_P8_VECTOR,     "arch_2_07" },        /* ISA 2.07 (power8).  */
+  { OPTION_MASK_POWER8,                "arch_2_07" },        /* ISA 2.07 (power8).  */
    { OPTION_MASK_P9_VECTOR,    "arch_3_00" },        /* ISA 3.0 (power9).  */
    { OPTION_MASK_POWER10,      "arch_3_1" }, /* ISA 3.1 (power10).  */
  };
@@ -2672,7 +2672,7 @@ rs6000_setup_reg_addr_masks (void)
                  && !VECTOR_ALIGNMENT_P (m2)
                  && !complex_p
                  && (m != E_DFmode || !TARGET_VSX)
-                 && (m != E_SFmode || !TARGET_P8_VECTOR)
+                 && (m != E_SFmode || !TARGET_VSX || !TARGET_POWER8)
                  && !small_int_vsx_p)
                {
                  addr_mask |= RELOAD_REG_PRE_INCDEC;
@@ -2911,12 +2911,12 @@ rs6000_init_hard_regno_mode_ok (bool global_init_p)
      {
        rs6000_vector_mem[V2DImode] = VECTOR_VSX;
        rs6000_vector_unit[V2DImode]
-       = (TARGET_P8_VECTOR) ? VECTOR_P8_VECTOR : VECTOR_NONE;
+       = (TARGET_POWER8) ? VECTOR_P8_VECTOR : VECTOR_NONE;
        rs6000_vector_align[V2DImode] = align64;
rs6000_vector_mem[V1TImode] = VECTOR_VSX;
        rs6000_vector_unit[V1TImode]
-       = (TARGET_P8_VECTOR) ? VECTOR_P8_VECTOR : VECTOR_NONE;
+       = (TARGET_POWER8) ? VECTOR_P8_VECTOR : VECTOR_NONE;
        rs6000_vector_align[V1TImode] = 128;
      }
@@ -2929,7 +2929,7 @@ rs6000_init_hard_regno_mode_ok (bool global_init_p)
      }
/* SFmode, see if we want to use the VSX unit. */
-  if (TARGET_P8_VECTOR)
+  if (TARGET_POWER8 && TARGET_VSX)
      {
        rs6000_vector_unit[SFmode] = VECTOR_VSX;
        rs6000_vector_align[SFmode] = 32;
@@ -3147,7 +3147,7 @@ rs6000_init_hard_regno_mode_ok (bool global_init_p)
        reg_addr[DFmode].scalar_in_vmx_p = true;
        reg_addr[DImode].scalar_in_vmx_p = true;
-      if (TARGET_P8_VECTOR)
+      if (TARGET_POWER8 && TARGET_VSX)
        {
          reg_addr[SFmode].scalar_in_vmx_p = true;
          reg_addr[SImode].scalar_in_vmx_p = true;
@@ -3867,8 +3867,7 @@ rs6000_option_override_internal (bool global_init_p)
        && (rs6000_isa_flags_explicit & (OPTION_MASK_SOFT_FLOAT
                                       | OPTION_MASK_ALTIVEC
                                       | OPTION_MASK_VSX)) != 0)
-    rs6000_isa_flags &= ~((OPTION_MASK_P8_VECTOR | OPTION_MASK_CRYPTO)
-                        & ~rs6000_isa_flags_explicit);
+    rs6000_isa_flags &= ~(OPTION_MASK_CRYPTO & ~rs6000_isa_flags_explicit);
if (TARGET_DEBUG_REG || TARGET_DEBUG_TARGET)
      rs6000_print_isa_options (stderr, 0, "before defaults", rs6000_isa_flags);
@@ -3911,7 +3910,7 @@ rs6000_option_override_internal (bool global_init_p)
        else
        rs6000_isa_flags |= ISA_3_0_MASKS_SERVER;
      }
-  else if (TARGET_P8_VECTOR || TARGET_POWER8 || TARGET_CRYPTO)
+  else if (TARGET_POWER8 || TARGET_CRYPTO)
      rs6000_isa_flags |= (ISA_2_7_MASKS_SERVER & ~ignore_masks);
    else if (TARGET_VSX)
      rs6000_isa_flags |= (ISA_2_6_MASKS_SERVER & ~ignore_masks);
@@ -3960,9 +3959,6 @@ rs6000_option_override_internal (bool global_init_p)
       based on either !TARGET_VSX or !TARGET_ALTIVEC concise.  */
    gcc_assert (TARGET_ALTIVEC || !TARGET_VSX);
-  if (TARGET_P8_VECTOR && !TARGET_VSX)
-    rs6000_isa_flags &= ~OPTION_MASK_P8_VECTOR;
-
    if (TARGET_DFP && !TARGET_HARD_FLOAT)
      {
        if (rs6000_isa_flags_explicit & OPTION_MASK_DFP)
@@ -4047,14 +4043,14 @@ rs6000_option_override_internal (bool global_init_p)
      rs6000_isa_flags |= OPTION_MASK_P8_FUSION_SIGN;
/* ISA 3.0 vector instructions include ISA 2.07. */
-  if (TARGET_P9_VECTOR && !TARGET_P8_VECTOR)
+  if (TARGET_P9_VECTOR && !TARGET_VSX)
      rs6000_isa_flags &= ~OPTION_MASK_P9_VECTOR;
/* Set -mallow-movmisalign to explicitly on if we have full ISA 2.07
       support. If we only have ISA 2.06 support, and the user did not specify
       the switch, leave it set to -1 so the movmisalign patterns are enabled,
       but we don't enable the full vectorization support  */
-  if (TARGET_ALLOW_MOVMISALIGN == -1 && TARGET_P8_VECTOR)
+  if (TARGET_ALLOW_MOVMISALIGN == -1 && TARGET_POWER8 && TARGET_VSX)
      TARGET_ALLOW_MOVMISALIGN = 1;
else if (TARGET_ALLOW_MOVMISALIGN && !TARGET_VSX)
@@ -6649,7 +6645,10 @@ vspltisw_vupkhsw_constant_p (rtx op, machine_mode mode, int *constant_ptr)
    HOST_WIDE_INT value;
    rtx elt;
- if (!TARGET_P8_VECTOR)
+  if (!TARGET_POWER8)
+    return false;
+
+  if (!TARGET_ALTIVEC)
      return false;
if (mode != V2DImode)
@@ -6705,7 +6704,7 @@ output_vec_const_move (rtx *operands)
          else if (dest_vmx_p)
            return "vspltisw %0,-1";
- else if (TARGET_P8_VECTOR)
+         else if (TARGET_POWER8 && TARGET_VSX)
            return "xxlorc %x0,%x0,%x0";
else
@@ -6954,7 +6953,7 @@ rs6000_expand_vector_init (rtx target, rtx vals)
        }
        else
        {
-         if (TARGET_P8_VECTOR && TARGET_POWERPC64)
+         if (TARGET_POWER8 && TARGET_VSX && TARGET_POWERPC64)
            {
              rtx tmp_sf[4];
              rtx tmp_si[4];
@@ -7559,7 +7558,7 @@ rs6000_expand_vector_set (rtx target, rtx val, rtx elt_rtx)
             that future fusion opportunities can kick in, but must
             generate VNOR elsewhere.  */
          rtx notx = gen_rtx_NOT (V16QImode, force_reg (V16QImode, x));
-         rtx iorx = (TARGET_P8_VECTOR
+         rtx iorx = (TARGET_POWER8 && TARGET_ALTIVEC
                      ? gen_rtx_IOR (V16QImode, notx, notx)
                      : gen_rtx_AND (V16QImode, notx, notx));
          rtx tmp = gen_reg_rtx (V16QImode);
@@ -12657,7 +12656,7 @@ rs6000_secondary_reload_simple_move (enum rs6000_reg_type to_type,
        }
/* ISA 2.07: MTVSRWZ or MFVSRWZ. */
-      if (TARGET_P8_VECTOR)
+      if (TARGET_POWER8 && TARGET_VSX)
        {
          if (mode == SImode)
            return true;
@@ -13408,7 +13407,7 @@ rs6000_preferred_reload_class (rtx x, enum reg_class rclass)
                 VSPLTI<x>.  */
              if (value == -1)
                {
-                 if (TARGET_P8_VECTOR)
+                 if (TARGET_POWER8 && TARGET_VSX)
                    return rclass;
                  else if (rclass == ALTIVEC_REGS || rclass == VSX_REGS)
                    return ALTIVEC_REGS;
@@ -23419,7 +23418,7 @@ altivec_expand_vec_perm_le (rtx operands[4])
        /* Invert the selector with a VNAND if available, else a VNOR.
         The VNAND is preferred for future fusion opportunities.  */
        notx = gen_rtx_NOT (V16QImode, sel);
-      iorx = (TARGET_P8_VECTOR
+      iorx = (TARGET_POWER8 && TARGET_ALTIVEC
              ? gen_rtx_IOR (V16QImode, notx, notx)
              : gen_rtx_AND (V16QImode, notx, notx));
        emit_insn (gen_rtx_SET (norreg, iorx));
@@ -23485,11 +23484,11 @@ altivec_expand_vec_perm_const (rtx target, rtx op0, rtx op1,
       BYTES_BIG_ENDIAN ? CODE_FOR_altivec_vmrglw_direct_v4si_be
                      : CODE_FOR_altivec_vmrghw_direct_v4si_le,
       {8, 9, 10, 11, 24, 25, 26, 27, 12, 13, 14, 15, 28, 29, 30, 31}},
-    {OPTION_MASK_P8_VECTOR,
+    {OPTION_MASK_VSX | OPTION_MASK_POWER8,
       BYTES_BIG_ENDIAN ? CODE_FOR_p8_vmrgew_v4sf_direct
                      : CODE_FOR_p8_vmrgow_v4sf_direct,
       {0, 1, 2, 3, 16, 17, 18, 19, 8, 9, 10, 11, 24, 25, 26, 27}},
-    {OPTION_MASK_P8_VECTOR,
+    {OPTION_MASK_VSX | OPTION_MASK_POWER8,
       BYTES_BIG_ENDIAN ? CODE_FOR_p8_vmrgow_v4sf_direct
                      : CODE_FOR_p8_vmrgew_v4sf_direct,
       {4, 5, 6, 7, 20, 21, 22, 23, 12, 13, 14, 15, 28, 29, 30, 31}},
@@ -23597,7 +23596,7 @@ altivec_expand_vec_perm_const (rtx target, rtx op0, rtx op1,
      {
        bool swapped;
- if ((patterns[j].mask & rs6000_isa_flags) == 0)
+      if ((patterns[j].mask & rs6000_isa_flags) != patterns[j].mask)
        continue;
elt = patterns[j].perm[0];
@@ -24480,7 +24479,6 @@ static struct rs6000_opt_mask const rs6000_opt_masks[] =
    { "popcntd",                      OPTION_MASK_POPCNTD,            false, true  },
    { "power8-fusion",                OPTION_MASK_P8_FUSION,          false, true  },
    { "power8-fusion-sign",   OPTION_MASK_P8_FUSION_SIGN,     false, true  },
-  { "power8-vector",         OPTION_MASK_P8_VECTOR,          false, true  },
    { "power9-minmax",                OPTION_MASK_P9_MINMAX,          false, true  },
    { "power9-misc",          OPTION_MASK_P9_MISC,            false, true  },
    { "power9-vector",                OPTION_MASK_P9_VECTOR,          false, true  },
diff --git a/gcc/config/rs6000/rs6000.h b/gcc/config/rs6000/rs6000.h
index db6112a09e1..fb15cd48553 100644
--- a/gcc/config/rs6000/rs6000.h
+++ b/gcc/config/rs6000/rs6000.h
@@ -467,11 +467,11 @@ extern int rs6000_vector_align[];
  #define TARGET_EXTSWSLI       (TARGET_MODULO && TARGET_POWERPC64)
  #define TARGET_MADDLD TARGET_MODULO
-/* TARGET_DIRECT_MOVE is redundant to TARGET_P8_VECTOR, so alias it to that. */
-#define TARGET_DIRECT_MOVE     TARGET_P8_VECTOR
-#define TARGET_XSCVDPSPN       TARGET_P8_VECTOR
-#define TARGET_XSCVSPDPN       TARGET_P8_VECTOR
-#define TARGET_VADDUQM         (TARGET_P8_VECTOR && TARGET_POWERPC64)
+#define TARGET_DIRECT_MOVE     (TARGET_POWER8 && TARGET_VSX)
+#define TARGET_XSCVDPSPN       (TARGET_POWER8 && TARGET_VSX)
+#define TARGET_XSCVSPDPN       (TARGET_POWER8 && TARGET_VSX)
+#define TARGET_VADDUQM         (TARGET_POWER8 && TARGET_ALTIVEC \
+                                && TARGET_POWERPC64)
  #define TARGET_DIRECT_MOVE_128        (TARGET_P9_VECTOR && TARGET_DIRECT_MOVE_64BIT)
  #define TARGET_VEXTRACTUB     (TARGET_P9_VECTOR && TARGET_DIRECT_MOVE_64BIT)
diff --git a/gcc/config/rs6000/rs6000.md b/gcc/config/rs6000/rs6000.md
index ff085bf9bb1..d1b3c8b9b32 100644
--- a/gcc/config/rs6000/rs6000.md
+++ b/gcc/config/rs6000/rs6000.md
@@ -399,7 +399,7 @@
       (const_int 1)
(and (eq_attr "isa" "p8v")
-         (match_test "TARGET_P8_VECTOR"))
+         (match_test "TARGET_POWER8 && TARGET_VSX"))
       (const_int 1)
(and (eq_attr "isa" "p9")
@@ -854,7 +854,7 @@
  ;;    D-form load to FPR register & move to Altivec register
  ;;    Move Altivec register to FPR register and store
  (define_mode_iterator ALTIVEC_DFORM [DF
-                                    (SF "TARGET_P8_VECTOR")
+                                    (SF "TARGET_POWER8 && TARGET_VSX")
                                     (DI "TARGET_POWERPC64")])
(include "darwin.md")
@@ -1189,7 +1189,7 @@
  (define_split
    [(set (match_operand:DI 0 "altivec_register_operand")
        (sign_extend:DI (match_operand:SI 1 "altivec_register_operand")))]
-  "TARGET_P8_VECTOR && !TARGET_P9_VECTOR && reload_completed"
+  "TARGET_POWER8 && TARGET_VSX && !TARGET_P9_VECTOR && reload_completed"
    [(const_int 0)]
  {
    rtx dest = operands[0];
@@ -5925,7 +5925,7 @@
    operands[1] = rs6000_force_indexed_or_indirect_mem (operands[1]);
    if (GET_CODE (operands[2]) == SCRATCH)
      operands[2] = gen_reg_rtx (DImode);
-  if (TARGET_P8_VECTOR)
+  if (TARGET_POWER8 && TARGET_VSX)
      emit_insn (gen_extendsidi2 (operands[2], operands[1]));
    else
      emit_insn (gen_lfiwax (operands[2], operands[1]));
@@ -6021,7 +6021,7 @@
    operands[1] = rs6000_force_indexed_or_indirect_mem (operands[1]);
    if (GET_CODE (operands[2]) == SCRATCH)
      operands[2] = gen_reg_rtx (DImode);
-  if (TARGET_P8_VECTOR)
+  if (TARGET_POWER8 && TARGET_VSX)
      emit_insn (gen_zero_extendsidi2 (operands[2], operands[1]));
    else
      emit_insn (gen_lfiwzx (operands[2], operands[1]));
@@ -6435,7 +6435,8 @@
    [(set (match_operand:QHSI 0 "memory_operand" "=Z")
        (any_fix:QHSI (match_operand:SFDF 1 "gpc_reg_operand" "wa")))
     (clobber (match_scratch:SI 2 "=wa"))]
-    "(<QHSI:MODE>mode == SImode && TARGET_P8_VECTOR) || TARGET_P9_VECTOR"
+    "(<QHSI:MODE>mode == SImode && TARGET_POWER8 && TARGET_VSX)
+     || TARGET_P9_VECTOR"
    "#"
    "&& reload_completed"
    [(set (match_dup 2)
@@ -6453,7 +6454,7 @@
        (unsigned_fix:SI (match_operand:SFDF 1 "gpc_reg_operand")))]
    "TARGET_HARD_FLOAT && TARGET_FCTIWUZ && TARGET_STFIWX"
  {
-  if (!TARGET_P8_VECTOR)
+  if (!TARGET_POWER8 || !TARGET_VSX)
      {
        emit_insn (gen_fixuns_trunc<mode>si2_stfiwx (operands[0], operands[1]));
        DONE;
@@ -7347,7 +7348,8 @@
        (not:BOOL_128
         (xor:BOOL_128 (match_operand:BOOL_128 1 "vlogical_operand")
                       (match_operand:BOOL_128 2 "vlogical_operand"))))]
-  "<MODE>mode == TImode || <MODE>mode == PTImode || TARGET_P8_VECTOR"
+  "<MODE>mode == TImode || <MODE>mode == PTImode
+   || (TARGET_POWER8 && TARGET_ALTIVEC)"
    "")
;; Rewrite nand into canonical form
@@ -7356,7 +7358,8 @@
        (ior:BOOL_128
         (not:BOOL_128 (match_operand:BOOL_128 1 "vlogical_operand"))
         (not:BOOL_128 (match_operand:BOOL_128 2 "vlogical_operand"))))]
-  "<MODE>mode == TImode || <MODE>mode == PTImode || TARGET_P8_VECTOR"
+  "<MODE>mode == TImode || <MODE>mode == PTImode
+   || (TARGET_POWER8 && TARGET_ALTIVEC)"
    "")
;; The canonical form is to have the negated element first, so we need to
@@ -7366,7 +7369,8 @@
        (ior:BOOL_128
         (not:BOOL_128 (match_operand:BOOL_128 2 "vlogical_operand"))
         (match_operand:BOOL_128 1 "vlogical_operand")))]
-  "<MODE>mode == TImode || <MODE>mode == PTImode || TARGET_P8_VECTOR"
+  "<MODE>mode == TImode || <MODE>mode == PTImode
+   || (TARGET_POWER8 && TARGET_ALTIVEC)"
    "")
;; 128-bit logical operations insns and split operations
@@ -7448,7 +7452,7 @@
         [(not:BOOL_128
           (match_operand:BOOL_128 2 "vlogical_operand" "<BOOL_REGS_OP2>"))
          (match_operand:BOOL_128 1 "vlogical_operand" "<BOOL_REGS_OP1>")]))]
-  "TARGET_P8_VECTOR || (GET_CODE (operands[3]) == AND)"
+  "(TARGET_POWER8 && TARGET_ALTIVEC) || (GET_CODE (operands[3]) == AND)"
  {
    if (TARGET_VSX && vsx_register_operand (operands[0], <MODE>mode))
      return "xxl%q3 %x0,%x1,%x2";
@@ -7458,8 +7462,7 @@
return "#";
  }
-  "(TARGET_P8_VECTOR || (GET_CODE (operands[3]) == AND))
-   && reload_completed && int_reg_operand (operands[0], <MODE>mode)"
+  "&& reload_completed && int_reg_operand (operands[0], <MODE>mode)"
    [(const_int 0)]
  {
    rs6000_split_logical (operands, GET_CODE (operands[3]), false, false, true);
@@ -7485,9 +7488,9 @@
         [(not:TI2
           (match_operand:TI2 2 "int_reg_operand" "r,0,r"))
          (match_operand:TI2 1 "int_reg_operand" "r,r,0")]))]
-  "!TARGET_P8_VECTOR && (GET_CODE (operands[3]) != AND)"
+  "!(TARGET_POWER8 && TARGET_ALTIVEC) && (GET_CODE (operands[3]) != AND)"
    "#"
-  "reload_completed && !TARGET_P8_VECTOR && (GET_CODE (operands[3]) != AND)"
+  "&& reload_completed"
    [(const_int 0)]
  {
    rs6000_split_logical (operands, GET_CODE (operands[3]), false, false, true);
@@ -7508,7 +7511,7 @@
           (match_operand:BOOL_128 1 "vlogical_operand" "<BOOL_REGS_OP1>"))
          (not:BOOL_128
           (match_operand:BOOL_128 2 "vlogical_operand" "<BOOL_REGS_OP2>"))]))]
-  "TARGET_P8_VECTOR || (GET_CODE (operands[3]) == AND)"
+  "(TARGET_POWER8 && TARGET_ALTIVEC) || (GET_CODE (operands[3]) == AND)"
  {
    if (TARGET_VSX && vsx_register_operand (operands[0], <MODE>mode))
      return "xxl%q3 %x0,%x1,%x2";
@@ -7518,8 +7521,7 @@
return "#";
  }
-  "(TARGET_P8_VECTOR || (GET_CODE (operands[3]) == AND))
-   && reload_completed && int_reg_operand (operands[0], <MODE>mode)"
+  "&& reload_completed && int_reg_operand (operands[0], <MODE>mode)"
    [(const_int 0)]
  {
    rs6000_split_logical (operands, GET_CODE (operands[3]), false, true, true);
@@ -7546,9 +7548,9 @@
           (match_operand:TI2 1 "int_reg_operand" "r,0,r"))
          (not:TI2
           (match_operand:TI2 2 "int_reg_operand" "r,r,0"))]))]
-  "!TARGET_P8_VECTOR && (GET_CODE (operands[3]) != AND)"
+  "!(TARGET_POWER8 && TARGET_ALTIVEC) && (GET_CODE (operands[3]) != AND)"
    "#"
-  "reload_completed && !TARGET_P8_VECTOR && (GET_CODE (operands[3]) != AND)"
+  "&& reload_completed"
    [(const_int 0)]
  {
    rs6000_split_logical (operands, GET_CODE (operands[3]), false, true, true);
@@ -7569,7 +7571,7 @@
         (xor:BOOL_128
          (match_operand:BOOL_128 1 "vlogical_operand" "<BOOL_REGS_OP1>")
          (match_operand:BOOL_128 2 "vlogical_operand" "<BOOL_REGS_OP2>"))))]
-  "TARGET_P8_VECTOR"
+  "TARGET_POWER8 && TARGET_ALTIVEC"
  {
    if (TARGET_VSX && vsx_register_operand (operands[0], <MODE>mode))
      return "xxleqv %x0,%x1,%x2";
@@ -7579,8 +7581,7 @@
return "#";
  }
-  "TARGET_P8_VECTOR && reload_completed
-   && int_reg_operand (operands[0], <MODE>mode)"
+  "&& reload_completed && int_reg_operand (operands[0], <MODE>mode)"
    [(const_int 0)]
  {
    rs6000_split_logical (operands, XOR, true, false, false);
@@ -7606,9 +7607,9 @@
         (xor:TI2
          (match_operand:TI2 1 "int_reg_operand" "r,0,r")
          (match_operand:TI2 2 "int_reg_operand" "r,r,0"))))]
-  "!TARGET_P8_VECTOR"
+  "!(TARGET_POWER8 && TARGET_ALTIVEC)"
    "#"
-  "reload_completed && !TARGET_P8_VECTOR"
+  "&& reload_completed"
    [(const_int 0)]
  {
    rs6000_split_logical (operands, XOR, true, false, false);
@@ -10551,7 +10552,7 @@
        (match_operand:SF 1 "any_operand"))
     (set (match_operand:SF 2 "gpc_reg_operand")
        (match_dup 0))]
-  "!TARGET_P8_VECTOR
+  "!(TARGET_POWER8 && TARGET_VSX)
     && peep2_reg_dead_p (2, operands[0])"
    [(set (match_dup 2) (match_dup 1))])
diff --git a/gcc/config/rs6000/rs6000.opt b/gcc/config/rs6000/rs6000.opt
index 88cf16ca581..34f780ce84f 100644
--- a/gcc/config/rs6000/rs6000.opt
+++ b/gcc/config/rs6000/rs6000.opt
@@ -487,7 +487,7 @@ Target Undocumented Mask(P8_FUSION_SIGN) Var(rs6000_isa_flags)
  Allow sign extension in fusion operations.
mpower8-vector
-Target Undocumented Mask(P8_VECTOR) Var(rs6000_isa_flags) WarnRemoved
+Target Undocumented WarnRemoved
  Use vector and scalar instructions added in ISA 2.07.
mpower10-fusion
diff --git a/gcc/config/rs6000/vector.md b/gcc/config/rs6000/vector.md
index f5797387ca7..fbe9648713e 100644
--- a/gcc/config/rs6000/vector.md
+++ b/gcc/config/rs6000/vector.md
@@ -1035,7 +1035,7 @@
  (define_expand "clz<mode>2"
    [(set (match_operand:VEC_I 0 "register_operand")
        (clz:VEC_I (match_operand:VEC_I 1 "register_operand")))]
-  "TARGET_P8_VECTOR")
+  "TARGET_POWER8 && TARGET_ALTIVEC")
;; Vector count trailing zeros
  (define_expand "ctz<mode>2"
@@ -1047,7 +1047,7 @@
  (define_expand "popcount<mode>2"
    [(set (match_operand:VEC_I 0 "register_operand")
          (popcount:VEC_I (match_operand:VEC_I 1 "register_operand")))]
-  "TARGET_P8_VECTOR")
+  "TARGET_POWER8 && TARGET_ALTIVEC")
;; Vector parity
  (define_expand "parity<mode>2"
diff --git a/gcc/config/rs6000/vsx.md b/gcc/config/rs6000/vsx.md
index 4d47833c944..332a2664551 100644
--- a/gcc/config/rs6000/vsx.md
+++ b/gcc/config/rs6000/vsx.md
@@ -1207,30 +1207,6 @@
    [(set_attr "type" "vecperm")
     (set_attr "length" "8")])
-(define_insn_and_split "*vspltisw_v2di_split"
-  [(set (match_operand:V2DI 0 "altivec_register_operand" "=v")
-       (match_operand:V2DI 1 "vspltisw_vupkhsw_constant_split" "W"))]
-  "TARGET_P8_VECTOR && vspltisw_vupkhsw_constant_split (operands[1], V2DImode)"
-  "#"
-  "&& 1"
-  [(const_int 0)]
-{
-  rtx op0 = operands[0];
-  rtx op1 = operands[1];
-  rtx tmp = can_create_pseudo_p ()
-           ? gen_reg_rtx (V4SImode)
-           : gen_lowpart (V4SImode, op0);
-  int value;
-
-  vspltisw_vupkhsw_constant_p (op1, V2DImode, &value);
-  emit_insn (gen_altivec_vspltisw (tmp, GEN_INT (value)));
-  emit_insn (gen_altivec_vupkhsw_direct (op0, tmp));
-
-  DONE;
-}
-  [(set_attr "type" "vecperm")
-   (set_attr "length" "8")])
-
;; Prefer using vector registers over GPRs. Prefer using ISA 3.0's XXSPLTISB
  ;; or Altivec VSPLITW 0/-1 over XXLXOR/XXLORC to set a register to all 0's or
