This patch adds support for the SVE_AES2 and SSVE_AES extensions,
including the following new instructions:

- PMULL (multi-vector, outputting to a pair of vectors)
- PMLAL (multi-vector, accumulating into a pair of vectors)
- AESE (indexed, two and four register variants)
- AESD (indexed, two and four register variants)
- AESEMC (indexed, two and four register variants)
- AESDIMC (indexed, two and four register variants)

It also makes the existing SVE2 AES instructions (AESE, AESD, AESMC,
AESIMC, PMULLB, PMULLT and their pair variants) available in streaming
SVE mode when SSVE_AES is enabled.
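As an illustration (not part of the patch), the new two-vector indexed forms
could be used along these lines.  The exact ACLE spellings follow the builtin
names added below; the function name, `state` and `keys` are hypothetical:

```c
#include <arm_sve.h>

/* Sketch only: apply one AESE round and one fused AESE+AESMC round to a
   pair of state vectors, taking the 128-bit round keys from lanes 0 and 1
   of KEYS.  Requires +sve-aes2 (or +ssve-aes when in streaming mode).  */
svuint8x2_t
encrypt_rounds_sketch (svuint8x2_t state, svuint8_t keys)
{
  state = svaese_lane_u8_x2 (state, keys, 0);   /* AESE, indexed.  */
  state = svaesemc_lane_u8_x2 (state, keys, 1); /* AESEMC, indexed.  */
  return state;
}
```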

gcc/ChangeLog:

        * config/aarch64/aarch64-c.cc (aarch64_update_cpp_builtins): Define
        __ARM_FEATURE_SVE_AES2 and __ARM_FEATURE_SSVE_AES.
        * config/aarch64/aarch64-sve-builtins-functions.h
        (unspec_based_aes_lane_function): New typedef.
        * config/aarch64/aarch64-sve-builtins-shapes.cc
        (binary_tuple_uint64_n_def): New shape.
        (ternary_tuple_uint64_n_def): New shape.
        (binary_aes_lane_def): New shape.
        * config/aarch64/aarch64-sve-builtins-shapes.h
        (binary_aes_lane): New declaration.
        (binary_tuple_uint64_n): Likewise.
        (ternary_tuple_uint64_n): Likewise.
        * config/aarch64/aarch64-sve-builtins-sve2.cc
        (svaese_lane): New function.
        (svaesd_lane): Likewise.
        (svaesemc_lane): Likewise.
        (svaesdimc_lane): Likewise.
        (svpmull_pair): Likewise.
        (svpmlal_pair): Likewise.
        * config/aarch64/aarch64-sve-builtins-sve2.def: Add new builtins for
        AES lane, PMULL pair and PMLAL pair.  Update existing SVE2 AES
        builtins to be streaming-compatible.
        * config/aarch64/aarch64-sve-builtins-sve2.h
        (svaese_lane): New declaration.
        (svaesd_lane): Likewise.
        (svaesemc_lane): Likewise.
        (svaesdimc_lane): Likewise.
        (svpmull_pair): Likewise.
        (svpmlal_pair): Likewise.
        * config/aarch64/aarch64-sve2.md
        (aarch64_sve2_aes<aes_op>): Use TARGET_SSVE2_AES.
        (aarch64_sve2_aes<aesmc_op>): Likewise.
        (*aarch64_sve2_aese_fused): Likewise.
        (*aarch64_sve2_aesd_fused): Likewise.
        (@aarch64_sve_<optab><mode>): Likewise, for both the PMULL and
        PMULL-pair patterns.
        (aarch64_sve_pmull_pair): New pattern.
        (aarch64_sve_pmlal_pair): New pattern.
        (@aarch64_sve2_aes<aes_op>_lane<mode>): New pattern.
        (@aarch64_sve2_aes<aesemc_op>_lane<mode>): New pattern.
        * config/aarch64/aarch64.h (TARGET_SVE2_AES): Remove
        TARGET_NON_STREAMING guard.
        (TARGET_SSVE2_AES): New macro.
        (TARGET_SVE_AES2): New macro.
        (TARGET_SSVE_AES): New macro.
        * config/aarch64/iterators.md (SVE_QIx24): New mode iterator.
        (UNSPEC_PMULL_PAIR): New unspec.
        (UNSPEC_PMLAL_PAIR): Likewise.
        (UNSPEC_AESEMC): Likewise.
        (UNSPEC_AESDIMC): Likewise.
        (CRYPTO_AESMCI): New int iterator.
        (aesemc_op): New int attribute.
        * config/aarch64/predicates.md (const_0_to_3_operand): New predicate.

gcc/testsuite/ChangeLog:

        * g++.target/aarch64/sve/aarch64-ssve.exp: Add ssve-aes to
        target pragma.
        * gcc.target/aarch64/pragma_cpp_predefs_5.c: Add tests for
        __ARM_FEATURE_SSVE_AES.
        * gcc.target/aarch64/sve/acle/asm/test_sve_acle.h
        (TEST_XN_INDEXED): New test macro.
        * gcc.target/aarch64/sve2/acle/asm/aesd_lane_u8.c: New test.
        * gcc.target/aarch64/sve2/acle/asm/aesd_u8.c: Update for
        streaming-compatible support.
        * gcc.target/aarch64/sve2/acle/asm/aesdimc_lane_u8.c: New test.
        * gcc.target/aarch64/sve2/acle/asm/aese_lane_u8.c: New test.
        * gcc.target/aarch64/sve2/acle/asm/aese_u8.c: Update for
        streaming-compatible support.
        * gcc.target/aarch64/sve2/acle/asm/aesemc_lane_u8.c: New test.
        * gcc.target/aarch64/sve2/acle/asm/aesimc_u8.c: Update for
        streaming-compatible support.
        * gcc.target/aarch64/sve2/acle/asm/aesmc_u8.c: Likewise.
        * gcc.target/aarch64/sve2/acle/asm/pmlal_pair_u64.c: New test.
        * gcc.target/aarch64/sve2/acle/asm/pmull_pair_u64.c: New test.
        * gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u32.c: Update for
        streaming-compatible support.
        * gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u64.c: Likewise.
        * gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u8.c: Likewise.
        * gcc.target/aarch64/sve2/acle/asm/pmullb_u16.c: Likewise.
        * gcc.target/aarch64/sve2/acle/asm/pmullb_u64.c: Likewise.
        * gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u32.c: Likewise.
        * gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u64.c: Likewise.
        * gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u8.c: Likewise.
        * gcc.target/aarch64/sve2/acle/asm/pmullt_u16.c: Likewise.
        * gcc.target/aarch64/sve2/acle/asm/pmullt_u64.c: Likewise.
        * lib/target-supports.exp: Add ssve-aes to exts_sve2.
---
 gcc/config/aarch64/aarch64-c.cc               |   3 +
 .../aarch64/aarch64-sve-builtins-functions.h  |   3 +
 .../aarch64/aarch64-sve-builtins-shapes.cc    | 142 ++++++++++++++++++
 .../aarch64/aarch64-sve-builtins-shapes.h     |   3 +
 .../aarch64/aarch64-sve-builtins-sve2.cc      |   8 +
 .../aarch64/aarch64-sve-builtins-sve2.def     |  14 +-
 .../aarch64/aarch64-sve-builtins-sve2.h       |   6 +
 gcc/config/aarch64/aarch64-sve2.md            |  90 ++++++++++-
 gcc/config/aarch64/aarch64.h                  |  17 ++-
 gcc/config/aarch64/iterators.md               |  10 ++
 gcc/config/aarch64/predicates.md              |   4 +
 .../g++.target/aarch64/sve/aarch64-ssve.exp   |  14 +-
 .../gcc.target/aarch64/pragma_cpp_predefs_5.c |  18 +++
 .../aarch64/sve/acle/asm/test_sve_acle.h      |  15 ++
 .../aarch64/sve2/acle/asm/aesd_lane_u8.c      |  67 +++++++++
 .../aarch64/sve2/acle/asm/aesd_u8.c           |   5 +-
 .../aarch64/sve2/acle/asm/aesdimc_lane_u8.c   |  67 +++++++++
 .../aarch64/sve2/acle/asm/aese_lane_u8.c      |  67 +++++++++
 .../aarch64/sve2/acle/asm/aese_u8.c           |   5 +-
 .../aarch64/sve2/acle/asm/aesemc_lane_u8.c    |  67 +++++++++
 .../aarch64/sve2/acle/asm/aesimc_u8.c         |   5 +-
 .../aarch64/sve2/acle/asm/aesmc_u8.c          |   5 +-
 .../aarch64/sve2/acle/asm/pmlal_pair_u64.c    |  69 +++++++++
 .../aarch64/sve2/acle/asm/pmull_pair_u64.c    |  45 ++++++
 .../aarch64/sve2/acle/asm/pmullb_pair_u32.c   |   4 +
 .../aarch64/sve2/acle/asm/pmullb_pair_u64.c   |   5 +-
 .../aarch64/sve2/acle/asm/pmullb_pair_u8.c    |   4 +
 .../aarch64/sve2/acle/asm/pmullb_u16.c        |   4 +
 .../aarch64/sve2/acle/asm/pmullb_u64.c        |   3 +
 .../aarch64/sve2/acle/asm/pmullt_pair_u32.c   |   4 +
 .../aarch64/sve2/acle/asm/pmullt_pair_u64.c   |   5 +-
 .../aarch64/sve2/acle/asm/pmullt_pair_u8.c    |   3 +
 .../aarch64/sve2/acle/asm/pmullt_u16.c        |   4 +
 .../aarch64/sve2/acle/asm/pmullt_u64.c        |   4 +
 gcc/testsuite/lib/target-supports.exp         |   2 +-
 35 files changed, 761 insertions(+), 30 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesd_lane_u8.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesdimc_lane_u8.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aese_lane_u8.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesemc_lane_u8.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmlal_pair_u64.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmull_pair_u64.c

diff --git a/gcc/config/aarch64/aarch64-c.cc b/gcc/config/aarch64/aarch64-c.cc
index a55028dcd0a..4874ab4acf4 100644
--- a/gcc/config/aarch64/aarch64-c.cc
+++ b/gcc/config/aarch64/aarch64-c.cc
@@ -225,6 +225,7 @@ aarch64_update_cpp_builtins (cpp_reader *pfile)
                        "__ARM_FEATURE_SVE_B16B16", pfile);
   aarch64_def_or_undef (TARGET_SVE2, "__ARM_FEATURE_SVE2", pfile);
   aarch64_def_or_undef (TARGET_SVE2_AES, "__ARM_FEATURE_SVE2_AES", pfile);
+  aarch64_def_or_undef (TARGET_SVE_AES2, "__ARM_FEATURE_SVE_AES2", pfile);
   aarch64_def_or_undef (TARGET_SVE2_BITPERM,
                        "__ARM_FEATURE_SVE2_BITPERM", pfile);
   aarch64_def_or_undef (TARGET_SVE2_SHA3, "__ARM_FEATURE_SVE2_SHA3", pfile);
@@ -320,6 +321,8 @@ aarch64_update_cpp_builtins (cpp_reader *pfile)
                        "__ARM_FEATURE_SSVE_BITPERM", pfile);
   aarch64_def_or_undef (AARCH64_HAVE_ISA (SSVE_FEXPA),
                        "__ARM_FEATURE_SSVE_FEXPA", pfile);
+  aarch64_def_or_undef (AARCH64_HAVE_ISA (SSVE_AES), "__ARM_FEATURE_SSVE_AES",
+                       pfile);
 
   // Function multi-versioning defines
   aarch64_def_or_undef (targetm.has_ifunc_p (),
diff --git a/gcc/config/aarch64/aarch64-sve-builtins-functions.h b/gcc/config/aarch64/aarch64-sve-builtins-functions.h
index 521bea72c25..8c67d60707c 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins-functions.h
+++ b/gcc/config/aarch64/aarch64-sve-builtins-functions.h
@@ -397,6 +397,9 @@ typedef unspec_based_function_exact_insn<code_for_aarch64_sve_sub>
 typedef unspec_based_function_exact_insn<code_for_aarch64_sve_sub_lane>
   unspec_based_sub_lane_function;
 
+typedef unspec_based_function_exact_insn<code_for_aarch64_sve2_aes_lane>
+  unspec_based_aes_lane_function;
+
 /* A function that has conditional and unconditional forms, with both
    forms being associated with a single unspec each.  */
 class cond_or_uncond_unspec_function : public function_base
diff --git a/gcc/config/aarch64/aarch64-sve-builtins-shapes.cc b/gcc/config/aarch64/aarch64-sve-builtins-shapes.cc
index d9a03686314..1af0f9717b1 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins-shapes.cc
+++ b/gcc/config/aarch64/aarch64-sve-builtins-shapes.cc
@@ -1502,6 +1502,148 @@ struct binary_int_opt_single_n_def : public overloaded_base<0>
 };
 SHAPE (binary_int_opt_single_n)
 
+/* sv<t0>x<g>_t svfoo[_t0_g](sv<t0>_t, sv<t0>_t)
+   sv<t0>x<g>_t svfoo[_n_t0_g](sv<t0>_t, <t0>_t).  */
+struct binary_tuple_uint64_n_def : public overloaded_base<0>
+{
+  bool explicit_group_suffix_p () const override { return false; }
+
+  void
+  build (function_builder &b, const function_group_info &group) const override
+  {
+    b.add_overloaded_functions (group, MODE_none);
+    build_all (b, "t0,v0,v0", group, MODE_none);
+    build_all (b, "t0,v0,s0", group, MODE_n);
+  }
+
+  tree
+  resolve (function_resolver &r) const override
+  {
+    if (!r.check_num_arguments (2))
+      return error_mark_node;
+
+    if (!r.require_vector_type (0, VECTOR_TYPE_svuint64_t))
+      return error_mark_node;
+
+    mode_suffix_index mode;
+    if (r.scalar_argument_p (1))
+      {
+        if (!r.require_scalar_type (1, "uint64_t"))
+          return error_mark_node;
+        mode = MODE_n;
+      }
+    else
+      {
+        if (!r.require_vector_type (1, VECTOR_TYPE_svuint64_t))
+          return error_mark_node;
+        mode = MODE_none;
+      }
+
+    return r.resolve_to (mode, { TYPE_SUFFIX_u64, 2 });
+  }
+};
+SHAPE (binary_tuple_uint64_n)
+
+/* sv<t0>x<g>_t svfoo[_t0_g](sv<t0>x<g>_t, sv<t0>_t, sv<t0>_t)
+   sv<t0>x<g>_t svfoo[_n_t0_g](sv<t0>x<g>_t, sv<t0>_t, <t0>_t).  */
+struct ternary_tuple_uint64_n_def : public overloaded_base<0>
+{
+  bool explicit_group_suffix_p () const override { return false; }
+
+  void
+  build (function_builder &b, const function_group_info &group) const override
+  {
+    b.add_overloaded_functions (group, MODE_none);
+    build_all (b, "t0,t0,v0,v0", group, MODE_none);
+    build_all (b, "t0,t0,v0,s0", group, MODE_n);
+  }
+
+  tree
+  resolve (function_resolver &r) const override
+  {
+    if (!r.check_num_arguments (3))
+      return error_mark_node;
+
+    sve_type type = r.infer_sve_type (0);
+    if (!type)
+      return error_mark_node;
+
+    if (type.num_vectors != 2)
+      {
+       r.report_incorrect_num_vectors (0, type, 2);
+       return error_mark_node;
+      }
+
+    if (!r.require_vector_type (1, VECTOR_TYPE_svuint64_t))
+      return error_mark_node;
+
+    mode_suffix_index mode;
+    if (r.scalar_argument_p (2))
+      {
+        if (!r.require_scalar_type (2, "uint64_t"))
+          return error_mark_node;
+        mode = MODE_n;
+      }
+    else
+      {
+        if (!r.require_vector_type (2, VECTOR_TYPE_svuint64_t))
+          return error_mark_node;
+        mode = MODE_none;
+      }
+
+    return r.resolve_to (mode, type);
+  }
+};
+SHAPE (ternary_tuple_uint64_n)
+
+/* svuint8x2_t svaes<...>_lane[_u8_x2] (svuint8x2_t zdn, svuint8_t zm, uint64_t
+   index)
+   and
+   svuint8x4_t svaes<...>_lane[_u8_x4] (svuint8x4_t zdn, svuint8_t zm, uint64_t
+   index)
+
+   where the index is in the range [0, 3].  */
+struct binary_aes_lane_def : public overloaded_base<0>
+{
+  bool explicit_group_suffix_p () const override { return false; }
+
+  void
+  build (function_builder &b, const function_group_info &group) const override
+  {
+    b.add_overloaded_functions (group, MODE_none);
+    build_all (b, "t0,t0,v0,su64", group, MODE_none);
+  }
+
+  tree
+  resolve (function_resolver &r) const override
+  {
+    if (!r.check_num_arguments (3))
+      return error_mark_node;
+
+    sve_type type = r.infer_sve_type (0);
+    if (!type)
+      return error_mark_node;
+
+    if (type.num_vectors != 2 && type.num_vectors != 4)
+      return error_mark_node;
+
+    if (!r.require_vector_type (1, VECTOR_TYPE_svuint8_t))
+      return error_mark_node;
+
+    if (!r.require_integer_immediate (2))
+      return error_mark_node;
+
+    return r.resolve_to (MODE_none, type);
+  }
+
+  bool
+  check (function_checker &c) const override
+  {
+    return c.require_immediate_lane_index (2, 0, 4);
+  }
+};
+SHAPE (binary_aes_lane)
+
 /* sv<t0>_t svfoo_<t0>(sv<t0>_t, sv<t0>_t, uint64_t)
 
    where the final argument is an integer constant expression in the
diff --git a/gcc/config/aarch64/aarch64-sve-builtins-shapes.h b/gcc/config/aarch64/aarch64-sve-builtins-shapes.h
index 1aafbd05a94..343c267a1cd 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins-shapes.h
+++ b/gcc/config/aarch64/aarch64-sve-builtins-shapes.h
@@ -85,6 +85,7 @@ namespace aarch64_sve
     extern const function_shape *const binary_int_opt_single_n;
     extern const function_shape *const binary_lane;
     extern const function_shape *const binary_long_lane;
+    extern const function_shape *const binary_aes_lane;
     extern const function_shape *const binary_long_opt_n;
     extern const function_shape *const binary_n;
     extern const function_shape *const binary_narrowb_opt_n;
@@ -95,6 +96,7 @@ namespace aarch64_sve
     extern const function_shape *const binary_rotate;
     extern const function_shape *const binary_scalar;
     extern const function_shape *const binary_single;
+    extern const function_shape *const binary_tuple_uint64_n;
     extern const function_shape *const binary_to_uint;
     extern const function_shape *const binary_uint;
     extern const function_shape *const binary_uint_n;
@@ -238,6 +240,7 @@ namespace aarch64_sve
     extern const function_shape *const ternary_uintq_intq_lane;
     extern const function_shape *const ternary_uintq_intq_opt_n;
     extern const function_shape *const ternary_za_uint_dual_single;
+    extern const function_shape *const ternary_tuple_uint64_n;
     extern const function_shape *const tmad;
     extern const function_shape *const unary;
     extern const function_shape *const unary_convert;
diff --git a/gcc/config/aarch64/aarch64-sve-builtins-sve2.cc b/gcc/config/aarch64/aarch64-sve-builtins-sve2.cc
index 5ea08056ae3..33cd75784ae 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins-sve2.cc
+++ b/gcc/config/aarch64/aarch64-sve-builtins-sve2.cc
@@ -1045,6 +1045,12 @@ FUNCTION (svaesd, fixed_insn_function, (CODE_FOR_aarch64_sve2_aesd))
 FUNCTION (svaese, fixed_insn_function, (CODE_FOR_aarch64_sve2_aese))
 FUNCTION (svaesimc, fixed_insn_function, (CODE_FOR_aarch64_sve2_aesimc))
 FUNCTION (svaesmc, fixed_insn_function, (CODE_FOR_aarch64_sve2_aesmc))
+FUNCTION (svaese_lane, unspec_based_aes_lane_function, (-1, UNSPEC_AESE, -1))
+FUNCTION (svaesd_lane, unspec_based_aes_lane_function, (-1, UNSPEC_AESD, -1))
+FUNCTION (svaesemc_lane, unspec_based_aes_lane_function, (-1, UNSPEC_AESEMC, \
+  -1))
+FUNCTION (svaesdimc_lane, unspec_based_aes_lane_function, (-1, UNSPEC_AESDIMC, \
+  -1))
 FUNCTION (svamax, faminmaximpl, (UNSPEC_COND_FAMAX, UNSPEC_FAMAX))
 FUNCTION (svamin, faminmaximpl, (UNSPEC_COND_FAMIN, UNSPEC_FAMIN))
 FUNCTION (svandqv, reduction, (UNSPEC_ANDQV, UNSPEC_ANDQV, -1))
@@ -1172,6 +1178,8 @@ FUNCTION (svpmullb, unspec_based_function, (-1, UNSPEC_PMULLB, -1))
 FUNCTION (svpmullb_pair, unspec_based_function, (-1, UNSPEC_PMULLB_PAIR, -1))
 FUNCTION (svpmullt, unspec_based_function, (-1, UNSPEC_PMULLT, -1))
 FUNCTION (svpmullt_pair, unspec_based_function, (-1, UNSPEC_PMULLT_PAIR, -1))
+FUNCTION (svpmull_pair, fixed_insn_function, (CODE_FOR_aarch64_sve_pmull_pair))
+FUNCTION (svpmlal_pair, fixed_insn_function, (CODE_FOR_aarch64_sve_pmlal_pair))
 FUNCTION (svpsel_lane, svpsel_lane_impl,)
 FUNCTION (svqabs, rtx_code_function, (SS_ABS, UNKNOWN, UNKNOWN))
 FUNCTION (svqcadd, svqcadd_impl,)
diff --git a/gcc/config/aarch64/aarch64-sve-builtins-sve2.def b/gcc/config/aarch64/aarch64-sve-builtins-sve2.def
index 1a55de890cf..47f01a19c2e 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins-sve2.def
+++ b/gcc/config/aarch64/aarch64-sve-builtins-sve2.def
@@ -195,8 +195,8 @@ DEF_SVE_FUNCTION (svstnt1w_scatter, store_scatter_index_restricted, d_integer, i
 DEF_SVE_FUNCTION (svstnt1w_scatter, store_scatter_offset_restricted, d_integer, implicit)
 #undef REQUIRED_EXTENSIONS
 
-#define REQUIRED_EXTENSIONS nonstreaming_sve (AARCH64_FL_SVE2 \
-                                             | AARCH64_FL_SVE_AES)
+#define REQUIRED_EXTENSIONS streaming_compatible (AARCH64_FL_SVE2 \
+                               | AARCH64_FL_SVE_AES, AARCH64_FL_SSVE_AES)
 DEF_SVE_FUNCTION (svaesd, binary, b_unsigned, none)
 DEF_SVE_FUNCTION (svaese, binary, b_unsigned, none)
 DEF_SVE_FUNCTION (svaesimc, unary, b_unsigned, none)
@@ -205,6 +205,16 @@ DEF_SVE_FUNCTION (svpmullb_pair, binary_opt_n, d_unsigned, none)
 DEF_SVE_FUNCTION (svpmullt_pair, binary_opt_n, d_unsigned, none)
 #undef REQUIRED_EXTENSIONS
 
+#define REQUIRED_EXTENSIONS \
+  streaming_compatible (AARCH64_FL_SVE_AES2, AARCH64_FL_SSVE_AES)
+DEF_SVE_FUNCTION_GS (svaese_lane, binary_aes_lane, b_unsigned, x24, none)
+DEF_SVE_FUNCTION_GS (svaesd_lane, binary_aes_lane, b_unsigned, x24, none)
+DEF_SVE_FUNCTION_GS (svaesemc_lane, binary_aes_lane, b_unsigned, x24, none)
+DEF_SVE_FUNCTION_GS (svaesdimc_lane, binary_aes_lane, b_unsigned, x24, none)
+DEF_SVE_FUNCTION_GS (svpmull_pair, binary_tuple_uint64_n, d_unsigned, x2, none)
+DEF_SVE_FUNCTION_GS (svpmlal_pair, ternary_tuple_uint64_n, d_unsigned, x2, none)
+#undef REQUIRED_EXTENSIONS
+
 #define REQUIRED_EXTENSIONS streaming_compatible (AARCH64_FL_SVE2 \
                                                  | AARCH64_FL_SVE_BITPERM, \
                                                  AARCH64_FL_SSVE_BITPERM)
diff --git a/gcc/config/aarch64/aarch64-sve-builtins-sve2.h b/gcc/config/aarch64/aarch64-sve-builtins-sve2.h
index b2f2698b880..48f2e80baae 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins-sve2.h
+++ b/gcc/config/aarch64/aarch64-sve-builtins-sve2.h
@@ -45,6 +45,10 @@ namespace aarch64_sve
     extern const function_base *const svaese;
     extern const function_base *const svaesimc;
     extern const function_base *const svaesmc;
+    extern const function_base *const svaese_lane;
+    extern const function_base *const svaesd_lane;
+    extern const function_base *const svaesemc_lane;
+    extern const function_base *const svaesdimc_lane;
     extern const function_base *const svandqv;
     extern const function_base *const svbcax;
     extern const function_base *const svbdep;
@@ -143,6 +147,8 @@ namespace aarch64_sve
     extern const function_base *const svpmullb_pair;
     extern const function_base *const svpmullt;
     extern const function_base *const svpmullt_pair;
+    extern const function_base *const svpmull_pair;
+    extern const function_base *const svpmlal_pair;
     extern const function_base *const svpsel_lane;
     extern const function_base *const svqabs;
     extern const function_base *const svqcadd;
diff --git a/gcc/config/aarch64/aarch64-sve2.md b/gcc/config/aarch64/aarch64-sve2.md
index 1970f882bfe..1f1a2c919bf 100644
--- a/gcc/config/aarch64/aarch64-sve2.md
+++ b/gcc/config/aarch64/aarch64-sve2.md
@@ -4062,6 +4062,8 @@ (define_insn "*cond_<sve_fp_op><mode>_strict"
 ;; - PMUL
 ;; - PMULLB
 ;; - PMULLT
+;; - PMULL
+;; - PMLAL
 ;; -------------------------------------------------------------------------
 
 ;; Uniform PMUL.
@@ -4084,7 +4086,7 @@ (define_insn "@aarch64_sve_<optab><mode>"
          [(match_operand:<VNARROW> 1 "register_operand" "w")
           (match_operand:<VNARROW> 2 "register_operand" "w")]
          SVE2_PMULL))]
-  "TARGET_SVE2"
+  "TARGET_SSVE2_AES"
   "<sve_int_op>\t%0.<Vetype>, %1.<Ventype>, %2.<Ventype>"
   [(set_attr "sve_type" "sve_int_pmul")]
 )
@@ -4098,11 +4100,46 @@ (define_insn "@aarch64_sve_<optab><mode>"
          [(match_operand:SVE2_PMULL_PAIR_I 1 "register_operand" "w")
           (match_operand:SVE2_PMULL_PAIR_I 2 "register_operand" "w")]
          SVE2_PMULL_PAIR))]
-  "TARGET_SVE2"
+  "TARGET_SSVE2_AES"
   "<sve_int_op>\t%0.<Vewtype>, %1.<Vetype>, %2.<Vetype>"
   [(set_attr "sve_type" "sve_int_pmul")]
 )
 
+;; PMULL, outputting to a pair of vectors.
+;;   PMULL { <Zd1>.Q-<Zd2>.Q }, <Zn>.D, <Zm>.D
+;;     <Zd1> must be a multiple of 2 (0, 2, ..., 30)
+;;     <Zd2> must be Zd1 + 1
+;;     <Zn> (0-31) scalable vector
+;;     <Zm> (0-31) scalable vector or a uint64_t (broadcast to a vector)
+(define_insn "aarch64_sve_pmull_pair"
+  [(set (match_operand:VNx4DI 0 "aligned_register_operand" "=Uw2")
+       (unspec:VNx4DI
+         [(match_operand:VNx2DI 1 "register_operand" "w")
+          (match_operand:VNx2DI 2 "register_operand" "w")]
+         UNSPEC_PMULL_PAIR))]
+  "TARGET_SSVE_AES"
+  "pmull\t{%S0.q - %T0.q}, %1.d, %2.d"
+  [(set_attr "sve_type" "sve_int_pmul")]
+)
+
+;; PMLAL, accumulating into a pair of vectors.
+;;   PMLAL { <Zda1>.Q-<Zda2>.Q }, <Zn>.D, <Zm>.D
+;;     <Zda1> must be a multiple of 2 (0, 2, ..., 30)
+;;     <Zda2> must be Zda1 + 1
+;;     <Zn> (0-31) scalable vector
+;;     <Zm> (0-31) scalable vector or a uint64_t (broadcast to a vector)
+(define_insn "aarch64_sve_pmlal_pair"
+  [(set (match_operand:VNx4DI 0 "aligned_register_operand" "=Uw2")
+       (unspec:VNx4DI
+         [(match_operand:VNx4DI 1 "aligned_register_operand" "0")
+          (match_operand:VNx2DI 2 "register_operand" "w")
+          (match_operand:VNx2DI 3 "register_operand" "w")]
+         UNSPEC_PMLAL_PAIR))]
+  "TARGET_SSVE_AES"
+  "pmlal\t{%S0.q - %T0.q}, %2.d, %3.d"
+  [(set_attr "sve_type" "sve_int_pmul")]
+)
+
 ;; =========================================================================
 ;; == Comparisons and selects
 ;; =========================================================================
@@ -4709,6 +4746,10 @@ (define_insn "@aarch64_sve_luti<LUTI_BITS><mode>"
 ;; - AESE
 ;; - AESIMC
 ;; - AESMC
+;; - AESD (indexed, two registers and four registers)
+;; - AESE (indexed, two registers and four registers)
+;; - AESEMC (indexed, two registers and four registers)
+;; - AESDIMC (indexed, two registers and four registers)
 ;; -------------------------------------------------------------------------
 
 ;; AESD and AESE.
@@ -4719,7 +4760,7 @@ (define_insn "aarch64_sve2_aes<aes_op>"
             (match_operand:VNx16QI 1 "register_operand" "%0")
             (match_operand:VNx16QI 2 "register_operand" "w"))]
           CRYPTO_AES))]
-  "TARGET_SVE2_AES"
+  "TARGET_SSVE2_AES"
   "aes<aes_op>\t%0.b, %0.b, %2.b"
   [(set_attr "type" "crypto_aese")]
 )
@@ -4730,7 +4771,7 @@ (define_insn "aarch64_sve2_aes<aesmc_op>"
        (unspec:VNx16QI
          [(match_operand:VNx16QI 1 "register_operand" "0")]
          CRYPTO_AESMC))]
-  "TARGET_SVE2_AES"
+  "TARGET_SSVE2_AES"
   "aes<aesmc_op>\t%0.b, %0.b"
   [(set_attr "type" "crypto_aesmc")]
 )
@@ -4749,7 +4790,7 @@ (define_insn "*aarch64_sve2_aese_fused"
                (match_operand:VNx16QI 2 "register_operand" "w"))]
             UNSPEC_AESE)]
          UNSPEC_AESMC))]
-  "TARGET_SVE2_AES && aarch64_fusion_enabled_p (AARCH64_FUSE_AES_AESMC)"
+  "TARGET_SSVE2_AES && aarch64_fusion_enabled_p (AARCH64_FUSE_AES_AESMC)"
   "aese\t%0.b, %0.b, %2.b\;aesmc\t%0.b, %0.b"
   [(set_attr "type" "crypto_aese")
    (set_attr "length" "8")]
@@ -4764,12 +4805,49 @@ (define_insn "*aarch64_sve2_aesd_fused"
                (match_operand:VNx16QI 2 "register_operand" "w"))]
             UNSPEC_AESD)]
          UNSPEC_AESIMC))]
-  "TARGET_SVE2_AES && aarch64_fusion_enabled_p (AARCH64_FUSE_AES_AESMC)"
+  "TARGET_SSVE2_AES && aarch64_fusion_enabled_p (AARCH64_FUSE_AES_AESMC)"
   "aesd\t%0.b, %0.b, %2.b\;aesimc\t%0.b, %0.b"
   [(set_attr "type" "crypto_aese")
    (set_attr "length" "8")]
 )
 
+;; AESE and AESD, indexed, two registers and four registers.
+;;   AES<E/D> { <Zdn1>.B-<Zdn(2/4)>.B }, { <Zdn1>.B-<Zdn(2/4)>.B }, <Zm>.Q[<index>]
+;;     <Zdn1> must be a multiple of 2 (0, 2, ..., 30) or 4 (0, 4, ..., 28)
+;;     <Zdn(2/4)> must be Zdn1 + 1 or Zdn1 + 3
+;;     <Zm> (0-31)
+;;     <index> (0-3)
+
+(define_insn "@aarch64_sve2_aes<aes_op>_lane<mode>"
+  [(set (match_operand:SVE_QIx24 0 "aligned_register_operand" "=Uw<vector_count>")
+       (unspec:SVE_QIx24
+         [(match_operand:SVE_QIx24 1 "aligned_register_operand" "0")
+          (match_operand:VNx16QI 2 "register_operand" "w")
+          (match_operand:SI 3 "const_0_to_3_operand")]
+         CRYPTO_AES))]
+  "TARGET_SSVE_AES"
+  "aes<aes_op>\t%0, %0, %2.q[%3]"
+  [(set_attr "type" "crypto_aese")]
+)
+
+;; AESEMC and AESDIMC, indexed, two registers and four registers.
+;;   AES<EMC/DIMC> { <Zdn1>.B-<Zdn(2/4)>.B }, { <Zdn1>.B-<Zdn(2/4)>.B }, <Zm>.Q[<index>]
+;;     <Zdn1> must be a multiple of 2 (0, 2, ..., 30) or 4 (0, 4, ..., 28)
+;;     <Zdn(2/4)> must be Zdn1 + 1 or Zdn1 + 3
+;;     <Zm> (0-31)
+
+(define_insn "@aarch64_sve2_aes<aesemc_op>_lane<mode>"
+  [(set (match_operand:SVE_QIx24 0 "aligned_register_operand" "=Uw<vector_count>")
+       (unspec:SVE_QIx24
+         [(match_operand:SVE_QIx24 1 "aligned_register_operand" "0")
+          (match_operand:VNx16QI 2 "register_operand" "w")
+          (match_operand:SI 3 "const_0_to_3_operand")]
+         CRYPTO_AESMCI))]
+  "TARGET_SSVE_AES"
+  "aes<aesemc_op>\t%0, %0, %2.q[%3]"
+  [(set_attr "type" "crypto_aesmc")]
+)
+
 ;; -------------------------------------------------------------------------
 ;; ---- Optional SHA-3 extensions
 ;; -------------------------------------------------------------------------
diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
index 0a53b42b199..7a411ad9b76 100644
--- a/gcc/config/aarch64/aarch64.h
+++ b/gcc/config/aarch64/aarch64.h
@@ -292,8 +292,21 @@ constexpr auto AARCH64_FL_DEFAULT_ISA_MODE ATTRIBUTE_UNUSED
 
 /* SVE2 AES instructions, enabled through +sve2-aes.  */
 #define TARGET_SVE2_AES (AARCH64_HAVE_ISA (SVE2) \
-                        && AARCH64_HAVE_ISA (SVE_AES) \
-                        && TARGET_NON_STREAMING)
+                        && AARCH64_HAVE_ISA (SVE_AES))
+
+/* SVE2 AES instructions, enabled through +sve2-aes for non-streaming
+   mode and +ssve-aes for streaming mode.  */
+#define TARGET_SSVE2_AES ((TARGET_SVE2_AES) \
+               && (AARCH64_HAVE_ISA (SSVE_AES) || TARGET_NON_STREAMING))
+
+/* SVE_AES2 instructions, enabled through +sve-aes2.  */
+#define TARGET_SVE_AES2 (AARCH64_HAVE_ISA (SVE2) \
+                        && AARCH64_HAVE_ISA (SVE_AES2))
+
+/* SVE AES2 instructions enabled through +sve-aes2 for non-streaming
+   and +ssve-aes for streaming.  */
+#define TARGET_SSVE_AES ((TARGET_SVE_AES2) \
+               && (AARCH64_HAVE_ISA (SSVE_AES) || TARGET_NON_STREAMING))
 
 /* Checks if FEAT_SVE2_BitPerm is supported which is aliased to
    SVE2 + SVE_BitPerm.  */
diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md
index a469d082bbb..7972c7a650b 100644
--- a/gcc/config/aarch64/iterators.md
+++ b/gcc/config/aarch64/iterators.md
@@ -697,6 +697,9 @@ (define_mode_iterator SVE_SI [VNx2SI VNx4SI])
 
 (define_mode_iterator SVE_DIx24 [VNx4DI VNx8DI])
 
+;; SVE integer vector modes with 2 or 4 vectors of 8-bit elements.
+(define_mode_iterator SVE_QIx24 [VNx32QI VNx64QI])
+
 ;; SVE modes with 2 or 4 elements.
 (define_mode_iterator SVE_24 [VNx2QI VNx2HI VNx2HF VNx2BF VNx2SI VNx2SF
                              VNx2DI VNx2DF
@@ -887,6 +890,8 @@ (define_c_enum "unspec"
     UNSPEC_SQDMULH     ; Used in aarch64-simd.md.
     UNSPEC_SQRDMULH    ; Used in aarch64-simd.md.
     UNSPEC_PMUL                ; Used in aarch64-simd.md.
+    UNSPEC_PMULL_PAIR  ; Used in aarch64-sve2.md.
+    UNSPEC_PMLAL_PAIR  ; Used in aarch64-sve2.md.
     UNSPEC_FMULX       ; Used in aarch64-simd.md.
     UNSPEC_USQADD      ; Used in aarch64-simd.md.
     UNSPEC_SUQADD      ; Used in aarch64-simd.md.
@@ -938,6 +943,8 @@ (define_c_enum "unspec"
     UNSPEC_AESD         ; Used in aarch64-simd.md.
     UNSPEC_AESMC        ; Used in aarch64-simd.md.
     UNSPEC_AESIMC       ; Used in aarch64-simd.md.
+    UNSPEC_AESEMC       ; Used in aarch64-sve2.md.
+    UNSPEC_AESDIMC      ; Used in aarch64-sve2.md.
     UNSPEC_SHA1C       ; Used in aarch64-simd.md.
     UNSPEC_SHA1M        ; Used in aarch64-simd.md.
     UNSPEC_SHA1P        ; Used in aarch64-simd.md.
@@ -3546,6 +3553,8 @@ (define_int_iterator CRC [UNSPEC_CRC32B UNSPEC_CRC32H UNSPEC_CRC32W
 
 (define_int_iterator CRYPTO_AES [UNSPEC_AESE UNSPEC_AESD])
 (define_int_iterator CRYPTO_AESMC [UNSPEC_AESMC UNSPEC_AESIMC])
+;; Indexed versions
+(define_int_iterator CRYPTO_AESMCI [UNSPEC_AESEMC UNSPEC_AESDIMC])
 
 (define_int_iterator CRYPTO_SHA1 [UNSPEC_SHA1C UNSPEC_SHA1M UNSPEC_SHA1P])
 
@@ -4681,6 +4690,7 @@ (define_int_attr crc_mode [(UNSPEC_CRC32B "QI") (UNSPEC_CRC32H "HI")
 
 (define_int_attr aes_op [(UNSPEC_AESE "e") (UNSPEC_AESD "d")])
 (define_int_attr aesmc_op [(UNSPEC_AESMC "mc") (UNSPEC_AESIMC "imc")])
+(define_int_attr aesemc_op [(UNSPEC_AESEMC "emc") (UNSPEC_AESDIMC "dimc")])
 
 (define_int_attr sha1_op [(UNSPEC_SHA1C "c") (UNSPEC_SHA1P "p")
                          (UNSPEC_SHA1M "m")])
diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
index 40b0e8b9f02..89e392367a9 100644
--- a/gcc/config/aarch64/predicates.md
+++ b/gcc/config/aarch64/predicates.md
@@ -50,6 +50,10 @@ (define_predicate "const0_to_1_operand"
   (and (match_code "const_int")
        (match_test "IN_RANGE (INTVAL (op), 0, 1)")))
 
+(define_predicate "const_0_to_3_operand"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), 0, 3)")))
+
 (define_predicate "const_0_to_7_operand"
   (and (match_code "const_int")
        (match_test "IN_RANGE (INTVAL (op), 0, 7)")))
diff --git a/gcc/testsuite/g++.target/aarch64/sve/aarch64-ssve.exp b/gcc/testsuite/g++.target/aarch64/sve/aarch64-ssve.exp
index 1ba48591ef8..baba7282b97 100644
--- a/gcc/testsuite/g++.target/aarch64/sve/aarch64-ssve.exp
+++ b/gcc/testsuite/g++.target/aarch64/sve/aarch64-ssve.exp
@@ -37,7 +37,7 @@ gcc_parallel_test_enable 0
 set preamble {
 #include <arm_sve.h>
 
-#pragma GCC target "+i8mm+f32mm+f64mm+sve2+sve2-bitperm+sve2-sm4+sve2-aes+sve2-sha3+sme+ssve-bitperm+sme2p2+ssve-fexpa"
+#pragma GCC target "+i8mm+f32mm+f64mm+sve2+sve2-bitperm+sve2-sm4+sve2-aes+sve2-sha3+sme+ssve-bitperm+sme2p2+ssve-fexpa+ssve-aes"
 
 extern svbool_t &pred;
 
@@ -150,6 +150,12 @@ set streaming_ok {
     u8 = svbgrp (u8, u8)
     u32 = svcompact (pred, u32)
     f32 = svexpa (u32)
+    u8 = svaesd (u8, u8)
+    u8 = svaese (u8, u8)
+    u8 = svaesimc (u8)
+    u8 = svaesmc (u8)
+    u64 = svpmullb_pair (u64, u64)
+    u64 = svpmullt_pair (u64, u64)
 }
 
 # This order follows the list in the SME manual.
@@ -162,10 +168,6 @@ set nonstreaming_only {
     u64 = svadrw_index (u64, u64)
     u32 = svadrd_index (u32, u32)
     u64 = svadrd_index (u64, u64)
-    u8 = svaesd (u8, u8)
-    u8 = svaese (u8, u8)
-    u8 = svaesimc (u8)
-    u8 = svaesmc (u8)
     f32 = svbfmmla (f32, bf16, bf16)
     f32 = svadda (pred, 1.0f, f32)
     f32 = svmmla (f32, f32, f32)
@@ -264,8 +266,6 @@ set nonstreaming_only {
     u32 = svldnt1_gather_offset_u32 (pred, u32, 1)
     pred = svmatch (pred, u8, u8)
     pred = svnmatch (pred, u8, u8)
-    u64 = svpmullb_pair (u64, u64)
-    u64 = svpmullt_pair (u64, u64)
     svprfb_gather_offset (pred, void_ptr, u64, SV_PLDL1KEEP)
     svprfb_gather_offset (pred, u64, 1, SV_PLDL1KEEP)
     svprfd_gather_index (pred, void_ptr, u64, SV_PLDL1KEEP)
diff --git a/gcc/testsuite/gcc.target/aarch64/pragma_cpp_predefs_5.c b/gcc/testsuite/gcc.target/aarch64/pragma_cpp_predefs_5.c
index bb83ab434b5..708411c4dba 100644
--- a/gcc/testsuite/gcc.target/aarch64/pragma_cpp_predefs_5.c
+++ b/gcc/testsuite/gcc.target/aarch64/pragma_cpp_predefs_5.c
@@ -25,6 +25,10 @@
 #error "__ARM_FEATURE_SVE2_BitPerm is defined but should not be!"
 #endif
 
+#ifdef __ARM_FEATURE_SSVE_AES
+#error "__ARM_FEATURE_SSVE_AES is defined but should not be!"
+#endif
+
 #ifdef __ARM_FEATURE_SM4
 #error "__ARM_FEATURE_SM4 is defined but should not be!"
 #endif
@@ -225,6 +229,20 @@
 #endif
 #pragma GCC pop_options
 
+#pragma GCC push_options
+#pragma GCC target "arch=armv8-a+sve2+sve2-aes+ssve-aes"
+#ifndef __ARM_FEATURE_SSVE_AES
+#error "__ARM_FEATURE_SSVE_AES is not defined but should be!"
+#endif
+#pragma GCC pop_options
+
+#pragma GCC push_options
+#pragma GCC target "arch=armv8-a+sve2+sve-aes2+ssve-aes"
+#ifndef __ARM_FEATURE_SSVE_AES
+#error "__ARM_FEATURE_SSVE_AES is not defined but should be!"
+#endif
+#pragma GCC pop_options
+
 int
 foo (int a)
 {
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/test_sve_acle.h b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/test_sve_acle.h
index 8d4ed537c87..9a2e48c6df3 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/test_sve_acle.h
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/test_sve_acle.h
@@ -680,6 +680,21 @@
     __asm volatile ("" :: "w" (RES));                          \
   }
 
+#define TEST_XN_INDEXED(NAME, TTYPE, VTYPE, CODE1, CODE2)      \
+       PROTO (NAME, TTYPE, (TTYPE t, VTYPE v)) \
+       {       \
+          register TTYPE z0 __asm ("z0");      \
+          register TTYPE z1 __asm ("z1");      \
+          register TTYPE z2 __asm ("z2");      \
+          register TTYPE z3 __asm ("z3");      \
+          register VTYPE z4 __asm ("z4");      \
+          register VTYPE z5 __asm ("z5");      \
+          register VTYPE z6 __asm ("z6");      \
+          register VTYPE z7 __asm ("z7");      \
+          INVOKE (CODE1, CODE2);       \
+          return t;    \
+       }
+
 #define TEST_DUAL_XN(NAME, TTYPE1, TTYPE2, RES, CODE1, CODE2)  \
   PROTO (NAME, void, ())                                       \
   {                                                            \
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesd_lane_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesd_lane_u8.c
new file mode 100644
index 00000000000..de313a0e80d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesd_lane_u8.c
@@ -0,0 +1,67 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
+/* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
+
+#include "test_sve_acle.h"
+
+#pragma GCC target "+sve-aes2+ssve-aes"
+
+/*
+** test_aesd_lane_u8_x2:
+**     aesd[[:space:]]+{z0.b - z1.b}, {z0.b - z1.b}, z2.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesd_lane_u8_x2, svuint8x2_t, svuint8_t,
+               t = svaesd_lane_u8_x2 (t, v, 0),
+               t = svaesd_lane (t, v, 0))
+
+/*
+** test_aesd_lane_u8_x4:
+**     aesd[[:space:]]+{z0.b - z3.b}, {z0.b - z3.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesd_lane_u8_x4, svuint8x4_t, svuint8_t,
+               t = svaesd_lane_u8_x4 (t, v, 0),
+               t = svaesd_lane (t, v, 0))
+
+/*
+** test_aesd_lane_u8_x2_regs:
+**     aesd[[:space:]]+{z0.b - z1.b}, {z0.b - z1.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesd_lane_u8_x2_regs, svuint8x2_t, svuint8_t,
+               t = svaesd_lane_u8_x2 (z0, z4, 0),
+               t = svaesd_lane (z0, z4, 0))
+
+/*
+** test_aesd_lane_u8_x4_regs:
+**     aesd[[:space:]]+{z0.b - z3.b}, {z0.b - z3.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesd_lane_u8_x4_regs, svuint8x4_t, svuint8_t,
+               t = svaesd_lane_u8_x4 (z0, z4, 0),
+               t = svaesd_lane (z0, z4, 0))
+
+/*
+** test_aesd_lane_u8_x2_regs_mov:
+**     mov     z0.d, z3.d
+**     mov     z1.d, z4.d
+**     aesd[[:space:]]+{z0.b - z1.b}, {z0.b - z1.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesd_lane_u8_x2_regs_mov, svuint8x2_t, svuint8_t,
+               t = svaesd_lane_u8_x2 (z3, z4, 0),
+               t = svaesd_lane (z3, z4, 0))
+
+/*
+** test_aesd_lane_u8_x4_regs_mov:
+**     mov     z0.d, z3.d
+**     mov     z1.d, z4.d
+**     mov     z2.d, z5.d
+**     mov     z3.d, z6.d
+**     aesd[[:space:]]+{z0.b - z3.b}, {z0.b - z3.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesd_lane_u8_x4_regs_mov, svuint8x4_t, svuint8_t,
+               t = svaesd_lane_u8_x4 (z3, z4, 0),
+               t = svaesd_lane (z3, z4, 0))
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesd_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesd_u8.c
index 65ba09471ac..f46c4adb378 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesd_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesd_u8.c
@@ -1,9 +1,10 @@
-/* { dg-skip-if "" { *-*-* } { "-DSTREAMING_COMPATIBLE" } { "" } } */
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
-#pragma GCC target "+sve2-aes"
+#pragma GCC target "+sve2-aes+ssve-aes"
 
 /*
 ** aesd_u8_tied1:
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesdimc_lane_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesdimc_lane_u8.c
new file mode 100644
index 00000000000..10ebd485613
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesdimc_lane_u8.c
@@ -0,0 +1,67 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
+/* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
+
+#include "test_sve_acle.h"
+
+#pragma GCC target "+sve-aes2+ssve-aes"
+
+/*
+** test_aesdimc_lane_u8_x2:
+**     aesdimc[[:space:]]+{z0.b - z1.b}, {z0.b - z1.b}, z2.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesdimc_lane_u8_x2, svuint8x2_t, svuint8_t,
+               t = svaesdimc_lane_u8_x2 (t, v, 0),
+               t = svaesdimc_lane (t, v, 0))
+
+/*
+** test_aesdimc_lane_u8_x4:
+**     aesdimc[[:space:]]+{z0.b - z3.b}, {z0.b - z3.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesdimc_lane_u8_x4, svuint8x4_t, svuint8_t,
+               t = svaesdimc_lane_u8_x4 (t, v, 0),
+               t = svaesdimc_lane (t, v, 0))
+
+/*
+** test_aesdimc_lane_u8_x2_regs:
+**     aesdimc[[:space:]]+{z0.b - z1.b}, {z0.b - z1.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesdimc_lane_u8_x2_regs, svuint8x2_t, svuint8_t,
+               t = svaesdimc_lane_u8_x2 (z0, z4, 0),
+               t = svaesdimc_lane (z0, z4, 0))
+
+/*
+** test_aesdimc_lane_u8_x4_regs:
+**     aesdimc[[:space:]]+{z0.b - z3.b}, {z0.b - z3.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesdimc_lane_u8_x4_regs, svuint8x4_t, svuint8_t,
+               t = svaesdimc_lane_u8_x4 (z0, z4, 0),
+               t = svaesdimc_lane (z0, z4, 0))
+
+/*
+** test_aesdimc_lane_u8_x2_regs_mov:
+**     mov     z0.d, z3.d
+**     mov     z1.d, z4.d
+**     aesdimc[[:space:]]+{z0.b - z1.b}, {z0.b - z1.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesdimc_lane_u8_x2_regs_mov, svuint8x2_t, svuint8_t,
+               t = svaesdimc_lane_u8_x2 (z3, z4, 0),
+               t = svaesdimc_lane (z3, z4, 0))
+
+/*
+** test_aesdimc_lane_u8_x4_regs_mov:
+**     mov     z0.d, z3.d
+**     mov     z1.d, z4.d
+**     mov     z2.d, z5.d
+**     mov     z3.d, z6.d
+**     aesdimc[[:space:]]+{z0.b - z3.b}, {z0.b - z3.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesdimc_lane_u8_x4_regs_mov, svuint8x4_t, svuint8_t,
+               t = svaesdimc_lane_u8_x4 (z3, z4, 0),
+               t = svaesdimc_lane (z3, z4, 0))
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aese_lane_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aese_lane_u8.c
new file mode 100644
index 00000000000..9c083c917c2
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aese_lane_u8.c
@@ -0,0 +1,67 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
+/* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
+
+#include "test_sve_acle.h"
+
+#pragma GCC target "+sve-aes2+ssve-aes"
+
+/*
+** test_aese_lane_u8_x2:
+**     aese[[:space:]]+{z0.b - z1.b}, {z0.b - z1.b}, z2.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aese_lane_u8_x2, svuint8x2_t, svuint8_t,
+               t = svaese_lane_u8_x2 (t, v, 0),
+               t = svaese_lane (t, v, 0))
+
+/*
+** test_aese_lane_u8_x4:
+**     aese[[:space:]]+{z0.b - z3.b}, {z0.b - z3.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aese_lane_u8_x4, svuint8x4_t, svuint8_t,
+               t = svaese_lane_u8_x4 (t, v, 0),
+               t = svaese_lane (t, v, 0))
+
+/*
+** test_aese_lane_u8_x2_regs:
+**     aese[[:space:]]+{z0.b - z1.b}, {z0.b - z1.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aese_lane_u8_x2_regs, svuint8x2_t, svuint8_t,
+               t = svaese_lane_u8_x2 (z0, z4, 0),
+               t = svaese_lane (z0, z4, 0))
+
+/*
+** test_aese_lane_u8_x4_regs:
+**     aese[[:space:]]+{z0.b - z3.b}, {z0.b - z3.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aese_lane_u8_x4_regs, svuint8x4_t, svuint8_t,
+               t = svaese_lane_u8_x4 (z0, z4, 0),
+               t = svaese_lane (z0, z4, 0))
+
+/*
+** test_aese_lane_u8_x2_regs_mov:
+**     mov     z0.d, z3.d
+**     mov     z1.d, z4.d
+**     aese[[:space:]]+{z0.b - z1.b}, {z0.b - z1.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aese_lane_u8_x2_regs_mov, svuint8x2_t, svuint8_t,
+               t = svaese_lane_u8_x2 (z3, z4, 0),
+               t = svaese_lane (z3, z4, 0))
+
+/*
+** test_aese_lane_u8_x4_regs_mov:
+**     mov     z0.d, z3.d
+**     mov     z1.d, z4.d
+**     mov     z2.d, z5.d
+**     mov     z3.d, z6.d
+**     aese[[:space:]]+{z0.b - z3.b}, {z0.b - z3.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aese_lane_u8_x4_regs_mov, svuint8x4_t, svuint8_t,
+               t = svaese_lane_u8_x4 (z3, z4, 0),
+               t = svaese_lane (z3, z4, 0))
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aese_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aese_u8.c
index f902c3c1d32..385830c7bf5 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aese_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aese_u8.c
@@ -1,9 +1,10 @@
-/* { dg-skip-if "" { *-*-* } { "-DSTREAMING_COMPATIBLE" } { "" } } */
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
-#pragma GCC target "+sve2-aes"
+#pragma GCC target "+sve2-aes+ssve-aes"
 
 /*
 ** aese_u8_tied1:
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesemc_lane_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesemc_lane_u8.c
new file mode 100644
index 00000000000..d2460432e3c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesemc_lane_u8.c
@@ -0,0 +1,67 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
+/* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
+
+#include "test_sve_acle.h"
+
+#pragma GCC target "+sve-aes2+ssve-aes"
+
+/*
+** test_aesemc_lane_u8_x2:
+**     aesemc[[:space:]]+{z0.b - z1.b}, {z0.b - z1.b}, z2.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesemc_lane_u8_x2, svuint8x2_t, svuint8_t,
+               t = svaesemc_lane_u8_x2 (t, v, 0),
+               t = svaesemc_lane (t, v, 0))
+
+/*
+** test_aesemc_lane_u8_x4:
+**     aesemc[[:space:]]+{z0.b - z3.b}, {z0.b - z3.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesemc_lane_u8_x4, svuint8x4_t, svuint8_t,
+               t = svaesemc_lane_u8_x4 (t, v, 0),
+               t = svaesemc_lane (t, v, 0))
+
+/*
+** test_aesemc_lane_u8_x2_regs:
+**     aesemc[[:space:]]+{z0.b - z1.b}, {z0.b - z1.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesemc_lane_u8_x2_regs, svuint8x2_t, svuint8_t,
+               t = svaesemc_lane_u8_x2 (z0, z4, 0),
+               t = svaesemc_lane (z0, z4, 0))
+
+/*
+** test_aesemc_lane_u8_x4_regs:
+**     aesemc[[:space:]]+{z0.b - z3.b}, {z0.b - z3.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesemc_lane_u8_x4_regs, svuint8x4_t, svuint8_t,
+               t = svaesemc_lane_u8_x4 (z0, z4, 0),
+               t = svaesemc_lane (z0, z4, 0))
+
+/*
+** test_aesemc_lane_u8_x2_regs_mov:
+**     mov     z0.d, z3.d
+**     mov     z1.d, z4.d
+**     aesemc[[:space:]]+{z0.b - z1.b}, {z0.b - z1.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesemc_lane_u8_x2_regs_mov, svuint8x2_t, svuint8_t,
+               t = svaesemc_lane_u8_x2 (z3, z4, 0),
+               t = svaesemc_lane (z3, z4, 0))
+
+/*
+** test_aesemc_lane_u8_x4_regs_mov:
+**     mov     z0.d, z3.d
+**     mov     z1.d, z4.d
+**     mov     z2.d, z5.d
+**     mov     z3.d, z6.d
+**     aesemc[[:space:]]+{z0.b - z3.b}, {z0.b - z3.b}, z4.q\[0\]
+**     ret
+*/
+TEST_XN_INDEXED (test_aesemc_lane_u8_x4_regs_mov, svuint8x4_t, svuint8_t,
+               t = svaesemc_lane_u8_x4 (z3, z4, 0),
+               t = svaesemc_lane (z3, z4, 0))
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesimc_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesimc_u8.c
index dab06b79a95..904c3b7b54c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesimc_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesimc_u8.c
@@ -1,9 +1,10 @@
-/* { dg-skip-if "" { *-*-* } { "-DSTREAMING_COMPATIBLE" } { "" } } */
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
-#pragma GCC target "+sve2-aes"
+#pragma GCC target "+sve2-aes+ssve-aes"
 
 /*
 ** aesimc_u8_tied1:
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesmc_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesmc_u8.c
index 7e7cc65be5d..83450d73c0a 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesmc_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/aesmc_u8.c
@@ -1,9 +1,10 @@
-/* { dg-skip-if "" { *-*-* } { "-DSTREAMING_COMPATIBLE" } { "" } } */
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
-#pragma GCC target "+sve2-aes"
+#pragma GCC target "+sve2-aes+ssve-aes"
 
 /*
 ** aesmc_u8_tied1:
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmlal_pair_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmlal_pair_u64.c
new file mode 100644
index 00000000000..55eedc7d46c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmlal_pair_u64.c
@@ -0,0 +1,69 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
+/* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
+
+#include "test_sve_acle.h"
+
+#pragma GCC target "+sve-aes2+ssve-aes"
+
+/*
+**test_pmlal_pair_u64:
+**     pmlal[[:space:]]+{z0.q - z1.q}, z2.d, z2.d
+**     ret
+*/
+TEST_XN_INDEXED(test_pmlal_pair_u64, svuint64x2_t, svuint64_t,
+            t = svpmlal_pair_u64_x2(t, v, v), 
+            t = svpmlal_pair(t, v, v))
+
+/*
+**test_pmlal_pair_n_u64_regs:
+**     pmlal[[:space:]]+{z0.q - z1.q}, z4.d, z5.d
+**     ret
+*/
+TEST_XN_INDEXED(test_pmlal_pair_n_u64_regs, svuint64x2_t, svuint64_t,
+            t = svpmlal_pair_u64_x2(z0, z4, z5), 
+            t = svpmlal_pair(z0, z4, z5))
+
+/*
+**test_pmlal_pair_n_u64_regs_imm:
+**     movi[[:space:]]+d31, #0
+**     pmlal[[:space:]]+{z0.q - z1.q}, z4.d, z31.d
+**     ret
+*/
+TEST_XN_INDEXED(test_pmlal_pair_n_u64_regs_imm, svuint64x2_t, svuint64_t,
+            t = svpmlal_pair_n_u64_x2(z0, z4, 0x0), 
+            t = svpmlal_pair(z0, z4, 0x0))
+
+/*
+**test_pmlal_pair_n_u64_regs_imm_1:
+**     mov[[:space:]]+z31.d, #65535
+**     pmlal[[:space:]]+{z0.q - z1.q}, z4.d, z31.d
+**     ret
+*/
+TEST_XN_INDEXED(test_pmlal_pair_n_u64_regs_imm_1, svuint64x2_t, svuint64_t,
+            t = svpmlal_pair_n_u64_x2(z0, z4, 0xFFFF), 
+            t = svpmlal_pair(z0, z4, 0xFFFF))
+
+/*
+**test_pmlal_pair_n_u64_regs_mov:
+**     mov[[:space:]]+z0.d, z3.d
+**     mov[[:space:]]+z1.d, z4.d
+**     movi[[:space:]]+d31, #0
+**     pmlal[[:space:]]+{z0.q - z1.q}, z4.d, z31.d
+**     ret
+*/
+TEST_XN_INDEXED(test_pmlal_pair_n_u64_regs_mov, svuint64x2_t, svuint64_t,
+            t = svpmlal_pair_n_u64_x2(z3, z4, 0x0),
+            t = svpmlal_pair(z3, z4, 0x0))
+
+/*
+**test_pmlal_pair_n_u64_regs_mov_1:
+**     mov[[:space:]]+z0.d, z3.d
+**     mov[[:space:]]+z1.d, z4.d
+**     mov[[:space:]]+z31.d, #65535
+**     pmlal[[:space:]]+{z0.q - z1.q}, z4.d, z31.d
+**     ret
+*/
+TEST_XN_INDEXED(test_pmlal_pair_n_u64_regs_mov_1, svuint64x2_t, svuint64_t,
+            t = svpmlal_pair_n_u64_x2(z3, z4, 0xFFFF),
+            t = svpmlal_pair(z3, z4, 0xFFFF))
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmull_pair_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmull_pair_u64.c
new file mode 100644
index 00000000000..f7347d99ec8
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmull_pair_u64.c
@@ -0,0 +1,45 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
+/* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
+
+#include "test_sve_acle.h"
+
+#pragma GCC target "+sve-aes2+ssve-aes"
+
+/*
+**test_pmull_pair_u64:
+**     pmull[[:space:]]+{z0.q - z1.q}, z2.d, z2.d
+**     ret
+*/
+TEST_XN_INDEXED(test_pmull_pair_u64, svuint64x2_t, svuint64_t,
+            t = svpmull_pair_u64_x2(v, v), 
+            t = svpmull_pair(v, v))
+
+/*
+**test_pmull_pair_n_u64:
+**     movi[[:space:]]+d0, #0
+**     pmull[[:space:]]+{z0.q - z1.q}, z2.d, z0.d
+**     ret
+*/
+TEST_XN_INDEXED(test_pmull_pair_n_u64, svuint64x2_t, svuint64_t, 
+            t = svpmull_pair_n_u64_x2(v, 0x0), 
+            t = svpmull_pair(v, 0x0))
+
+/*
+**test_pmull_pair_u64_regs:
+**     pmull[[:space:]]+{z0.q - z1.q}, z4.d, z5.d
+**     ret
+*/
+TEST_XN_INDEXED(test_pmull_pair_u64_regs, svuint64x2_t, svuint64_t, 
+            t = svpmull_pair_u64_x2(z4, z5), 
+            t = svpmull_pair(z4, z5))
+
+/*
+**test_pmull_pair_n_u64_regs:
+**     movi[[:space:]]+d0, #0
+**     pmull[[:space:]]+{z0.q - z1.q}, z4.d, z0.d
+**     ret
+*/
+TEST_XN_INDEXED(test_pmull_pair_n_u64_regs, svuint64x2_t, svuint64_t, 
+            t = svpmull_pair_n_u64_x2(z4, 0x0), 
+            t = svpmull_pair(z4, 0x0))
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u32.c
index f627fca5d6c..4dd504615bb 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u32.c
@@ -1,7 +1,11 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
+#pragma GCC target "+sve2-aes+ssve-aes"
+
 /*
 ** pmullb_pair_u32_tied1:
 **     pmullb  z0\.d, z0\.s, z1\.s
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u64.c
index 1fd85e0ce80..f705488578a 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u64.c
@@ -1,9 +1,10 @@
-/* { dg-skip-if "" { *-*-* } { "-DSTREAMING_COMPATIBLE" } { "" } } */
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
-#pragma GCC target "+sve2-aes"
+#pragma GCC target "+sve2-aes+ssve-aes"
 
 /*
 ** pmullb_pair_u64_tied1:
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u8.c
index f41ae800390..af50d887474 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_pair_u8.c
@@ -1,7 +1,11 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
+#pragma GCC target "+sve2-aes+ssve-aes"
+
 /*
 ** pmullb_pair_u8_tied1:
 **     pmullb  z0\.h, z0\.b, z1\.b
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_u16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_u16.c
index f960fcadf06..cf7c9b531fb 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_u16.c
@@ -1,7 +1,11 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
+#pragma GCC target "+sve2-aes+ssve-aes"
+
 /*
 ** pmullb_u16_tied1:
 **     pmullb  z0\.h, z0\.b, z1\.b
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_u64.c
index 3e6698a8afa..6bb0e2248a0 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullb_u64.c
@@ -1,7 +1,10 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
+#pragma GCC target "+sve2-aes+ssve-aes"
 /*
 ** pmullb_u64_tied1:
 **     pmullb  z0\.d, z0\.s, z1\.s
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u32.c
index ed0a54767e7..465041d92b7 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u32.c
@@ -1,7 +1,11 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
+#pragma GCC target "+sve2-aes+ssve-aes"
+
 /*
 ** pmullt_pair_u32_tied1:
 **     pmullt  z0\.d, z0\.s, z1\.s
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u64.c
index 300d885abb0..7805bb98539 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u64.c
@@ -1,9 +1,10 @@
-/* { dg-skip-if "" { *-*-* } { "-DSTREAMING_COMPATIBLE" } { "" } } */
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
-#pragma GCC target "+sve2-aes"
+#pragma GCC target "+sve2-aes+ssve-aes"
 
 /*
 ** pmullt_pair_u64_tied1:
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u8.c
index 580f34a86fc..b3e3ba704b1 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_pair_u8.c
@@ -1,7 +1,10 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
+#pragma GCC target "+sve2-aes+ssve-aes"
 /*
 ** pmullt_pair_u8_tied1:
 **     pmullt  z0\.h, z0\.b, z1\.b
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_u16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_u16.c
index 52ddb40e576..ff712082639 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_u16.c
@@ -1,7 +1,11 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
+#pragma GCC target "+sve2-aes+ssve-aes"
+
 /*
 ** pmullt_u16_tied1:
 **     pmullt  z0\.h, z0\.b, z1\.b
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_u64.c
index 0821e97e378..16db886a35c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/pmullt_u64.c
@@ -1,7 +1,11 @@
+/* { dg-do assemble { target aarch64_asm_ssve-aes_ok } } */
+/* { dg-do compile { target { ! aarch64_asm_ssve-aes_ok } } } */
 /* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
 
 #include "test_sve_acle.h"
 
+#pragma GCC target "+sve2-aes+ssve-aes"
+
 /*
 ** pmullt_u64_tied1:
 **     pmullt  z0\.d, z0\.s, z1\.s
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 6ffa40fb1dd..a9c13bdb86e 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -12750,7 +12750,7 @@ set exts_sve2 {
     "sme-f8f16" "sme-f8f32"
     "sme-b16b16" "sme-f16f16" "sme-i16i64" "sme" "sme2" "sme2p1" "sme2p2"
     "ssve-fp8dot2" "ssve-fp8dot4" "ssve-fp8fma" "sve-bfscale"
-    "sme-tmop" "ssve-fexpa" "ssve-bitperm"
+    "sme-tmop" "ssve-fexpa" "ssve-bitperm" "ssve-aes"
 }
 
 foreach { aarch64_ext } $exts {
-- 
2.43.0