On Mon, 13 Mar 2023 at 13:03, Richard Biener <rguent...@suse.de> wrote:
>
> On Fri, 10 Mar 2023, Richard Sandiford wrote:
>
> > Sorry for the slow reply.
> >
> > Prathamesh Kulkarni <prathamesh.kulka...@linaro.org> writes:
> > > Unfortunately it regresses code-gen for the following case:
> > >
> > > svint32_t f(int32x4_t x)
> > > {
> > >   return svdupq_s32 (x[0], x[1], x[2], x[3]);
> > > }
> > >
> > > -O2 code-gen with trunk:
> > > f:
> > >         dup     z0.q, z0.q[0]
> > >         ret
> > >
> > > -O2 code-gen with patch:
> > > f:
> > >         dup     s1, v0.s[1]
> > >         mov    v2.8b, v0.8b
> > >         ins     v1.s[1], v0.s[3]
> > >         ins     v2.s[1], v0.s[2]
> > >         zip1    v0.4s, v2.4s, v1.4s
> > >         dup     z0.q, z0.q[0]
> > >         ret
> > >
> > > IIUC, svdupq_impl::expand uses aarch64_expand_vector_init
> > > to initialize the "base 128-bit vector" and then uses dupq to replicate it.
> > >
> > > Without the patch, aarch64_expand_vector_init generates fallback code,
> > > and combine then optimizes a sequence of vec_merge/vec_select pairs
> > > into an assignment:
> > >
> > > (insn 7 3 8 2 (set (reg:SI 99)
> > >         (vec_select:SI (reg/v:V4SI 97 [ x ])
> > >             (parallel [
> > >                     (const_int 1 [0x1])
> > >                 ]))) "bar.c":6:10 2592 {aarch64_get_lanev4si}
> > >      (nil))
> > >
> > > (insn 13 9 15 2 (set (reg:V4SI 102)
> > >         (vec_merge:V4SI (vec_duplicate:V4SI (reg:SI 99))
> > >             (reg/v:V4SI 97 [ x ])
> > >             (const_int 2 [0x2]))) "bar.c":6:10 1794 {aarch64_simd_vec_setv4si}
> > >      (expr_list:REG_DEAD (reg:SI 99)
> > >         (expr_list:REG_DEAD (reg/v:V4SI 97 [ x ])
> > >             (nil))))
> > >
> > > into:
> > > Trying 7 -> 13:
> > >     7: r99:SI=vec_select(r97:V4SI,parallel)
> > >    13: r102:V4SI=vec_merge(vec_duplicate(r99:SI),r97:V4SI,0x2)
> > >       REG_DEAD r99:SI
> > >       REG_DEAD r97:V4SI
> > > Successfully matched this instruction:
> > > (set (reg:V4SI 102)
> > >     (reg/v:V4SI 97 [ x ]))
> > >
> > > which eventually results in:
> > > (note 2 25 3 2 NOTE_INSN_DELETED)
> > > (note 3 2 7 2 NOTE_INSN_FUNCTION_BEG)
> > > (note 7 3 8 2 NOTE_INSN_DELETED)
> > > (note 8 7 9 2 NOTE_INSN_DELETED)
> > > (note 9 8 13 2 NOTE_INSN_DELETED)
> > > (note 13 9 15 2 NOTE_INSN_DELETED)
> > > (note 15 13 17 2 NOTE_INSN_DELETED)
> > > (note 17 15 18 2 NOTE_INSN_DELETED)
> > > (note 18 17 22 2 NOTE_INSN_DELETED)
> > > (insn 22 18 23 2 (parallel [
> > >             (set (reg/i:VNx4SI 32 v0)
> > >                 (vec_duplicate:VNx4SI (reg:V4SI 108)))
> > >             (clobber (scratch:VNx16BI))
> > >         ]) "bar.c":7:1 5202 {aarch64_vec_duplicate_vqvnx4si_le}
> > >      (expr_list:REG_DEAD (reg:V4SI 108)
> > >         (nil)))
> > > (insn 23 22 0 2 (use (reg/i:VNx4SI 32 v0)) "bar.c":7:1 -1
> > >      (nil))
> > >
> > > I was wondering if we should add the above special case of assigning
> > > target = vec in aarch64_expand_vector_init if the initializer is
> > > { vec[0], vec[1], ... }?
> >
> > I'm not sure it will be easy to detect that.  Won't the inputs to
> > aarch64_expand_vector_init just be plain registers?  It's not a
> > good idea in general to search for definitions of registers
> > during expansion.
> >
> > It would be nice to fix this by lowering svdupq into:
> >
> > (a) a constructor for a 128-bit vector
> > (b) a duplication of the 128-bit vector to fill an SVE vector
> >
> > But I'm not sure what the best way of doing (b) would be.
> > In RTL we can use vec_duplicate, but I don't think gimple
> > has an equivalent construct.  Maybe Richi has some ideas.
>
> On GIMPLE it would be
>
>  _1 = { a, ... }; // (a)
>  _2 = { _1, ... }; // (b)
>
> but I'm not sure whether (b), a VL CTOR of fixed-len(?) sub-vectors,
> is possible.  But at least a CTOR of vectors is what we use to
> concat vectors.
>
> With the recent relaxing of VEC_PERM inputs it's also possible to
> express (b) with a VEC_PERM:
>
>  _2 = VEC_PERM <_1, _1, { 0, 1, 2, 3, 0, 1, 2, 3, ... }>
>
> but again I'm not sure if that repeating 0, 1, 2, 3 is expressible
> for VL vectors (maybe we'd allow "wrapping" here, I'm not sure).
>
Hi,
Thanks for the suggestions, and sorry for the late response in turn.
The attached patch tries to fix the issue by explicitly constructing a CTOR
from svdupq's arguments and then using a VEC_PERM_EXPR whose VL mask encodes
the elements {0, 1, ..., nargs-1} with npatterns == nargs and
nelts_per_pattern == 1, so that the base vector is replicated across the
whole SVE vector.
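For instance, for nargs == 4 this amounts to (a condensed sketch of the
mask construction from the patch below; lhs_len stands for
TYPE_VECTOR_SUBPARTS of the SVE result type):

  /* 4 patterns, 1 element per pattern: encodes the VL series
     {0, 1, 2, 3, 0, 1, 2, 3, ...} over all lhs_len elements.  */
  vec_perm_builder sel (lhs_len, 4, 1);
  for (int i = 0; i < 4; i++)
    sel.quick_push (i);
  /* A single 4-element input vector, so the indices wrap modulo 4.  */
  vec_perm_indices indices (sel, 1, 4);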

So for example, for the above case,
svint32_t f_32(int32x4_t x)
{
  return svdupq_s32 (x[0], x[1], x[2], x[3]);
}

forwprop1 lowers it to:
  svint32_t _6;
  vector(4) int _8;
 <bb 2> :
  _1 = BIT_FIELD_REF <x_5(D), 32, 0>;
  _2 = BIT_FIELD_REF <x_5(D), 32, 32>;
  _3 = BIT_FIELD_REF <x_5(D), 32, 64>;
  _4 = BIT_FIELD_REF <x_5(D), 32, 96>;
  _8 = {_1, _2, _3, _4};
  _6 = VEC_PERM_EXPR <_8, _8, { 0, 1, 2, 3, ... }>;
  return _6;

which is then eventually optimized to:
  svint32_t _6;
  <bb 2> [local count: 1073741824]:
  _6 = VEC_PERM_EXPR <x_5(D), x_5(D), { 0, 1, 2, 3, ... }>;
  return _6;

code-gen:
f_32:
        dup     z0.q, z0.q[0]
        ret
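
For completeness, the fold also fires when the arguments are not lane
extracts at all, e.g. plain scalars (a hypothetical variant, not part of
the testcase below):

svint32_t g_32 (int a, int b, int c, int d)
{
  return svdupq_s32 (a, b, c, d);
}

in which case the CTOR {a, b, c, d} survives and only the replication of
the 128-bit base vector goes through the VEC_PERM_EXPR.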

Does it look OK?

Thanks,
Prathamesh
> Richard.
>
> > We're planning to implement the ACLE's Neon-SVE bridge:
> > https://github.com/ARM-software/acle/blob/main/main/acle.md#neon-sve-bridge
> > and so we'll need (b) to implement the svdup_neonq functions.
> >
> > Thanks,
> > Richard
> >
>
> --
> Richard Biener <rguent...@suse.de>
> SUSE Software Solutions Germany GmbH, Frankenstrasse 146, 90461 Nuernberg,
> Germany; GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman;
> HRB 36809 (AG Nuernberg)
[SVE] Fold svdupq to VEC_PERM_EXPR if elements are not constant.

gcc/ChangeLog:
        * config/aarch64/aarch64-sve-builtins-base.cc
        (svdupq_impl::fold_nonconst_dupq): New method.
        (svdupq_impl::fold): Call fold_nonconst_dupq.

gcc/testsuite/ChangeLog:
        * gcc.target/aarch64/sve/acle/general/dupq_11.c: New test.

diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
index cd9cace3c9b..3de79060619 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc
+++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
@@ -817,6 +817,62 @@ public:
 
 class svdupq_impl : public quiet<function_base>
 {
+private:
+  gimple *
+  fold_nonconst_dupq (gimple_folder &f, unsigned factor) const
+  {
+    /* Lower lhs = svdupq (arg0, arg1, ..., arg<N-1>) into:
+       tmp = {arg0, arg1, ..., arg<N-1>}
+       lhs = VEC_PERM_EXPR <tmp, tmp, {0, 1, ..., N-1, 0, 1, ..., N-1, ...}>.  */
+
+    /* TODO: Revisit to handle factor > 1 by padding with zeros.  */
+    if (factor > 1)
+      return NULL;
+
+    if (BYTES_BIG_ENDIAN)
+      return NULL;
+
+    tree lhs = gimple_call_lhs (f.call);
+    if (TREE_CODE (lhs) != SSA_NAME)
+      return NULL;
+
+    tree lhs_type = TREE_TYPE (lhs);
+    tree elt_type = TREE_TYPE (lhs_type);
+    scalar_mode elt_mode = GET_MODE_INNER (TYPE_MODE (elt_type));
+    machine_mode vq_mode = aarch64_vq_mode (elt_mode).require ();
+    tree vq_type = build_vector_type_for_mode (elt_type, vq_mode);
+
+    unsigned nargs = gimple_call_num_args (f.call);
+    vec<constructor_elt, va_gc> *v;
+    vec_alloc (v, nargs);
+    for (unsigned i = 0; i < nargs; i++)
+      CONSTRUCTOR_APPEND_ELT (v, NULL_TREE, gimple_call_arg (f.call, i));
+    tree vec = build_constructor (vq_type, v);
+
+    tree access_type
+      = build_aligned_type (vq_type, TYPE_ALIGN (elt_type));
+    tree tmp = make_ssa_name_fn (cfun, access_type, 0);
+    gimple *g = gimple_build_assign (tmp, vec);
+
+    gimple_seq stmts = NULL;
+    gimple_seq_add_stmt_without_update (&stmts, g);
+
+    int source_nelts = TYPE_VECTOR_SUBPARTS (access_type).to_constant ();
+    poly_uint64 lhs_len = TYPE_VECTOR_SUBPARTS (lhs_type);
+    vec_perm_builder sel (lhs_len, source_nelts, 1);
+    for (int i = 0; i < source_nelts; i++)
+      sel.quick_push (i);
+
+    vec_perm_indices indices (sel, 1, source_nelts);
+    tree mask_type = build_vector_type (ssizetype, lhs_len);
+    tree mask = vec_perm_indices_to_tree (mask_type, indices);
+
+    gimple *g2 = gimple_build_assign (lhs, VEC_PERM_EXPR, tmp, tmp, mask);
+    gimple_seq_add_stmt_without_update (&stmts, g2);
+    gsi_replace_with_seq (f.gsi, stmts, false);
+    return g2;
+  }
+
 public:
   gimple *
   fold (gimple_folder &f) const override
@@ -832,7 +888,7 @@ public:
       {
        tree elt = gimple_call_arg (f.call, i);
        if (!CONSTANT_CLASS_P (elt))
-         return NULL;
+         return fold_nonconst_dupq (f, factor);
        builder.quick_push (elt);
        for (unsigned int j = 1; j < factor; ++j)
          builder.quick_push (build_zero_cst (TREE_TYPE (vec_type)));
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_11.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_11.c
new file mode 100644
index 00000000000..f19f8deb1e5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/general/dupq_11.c
@@ -0,0 +1,31 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -fdump-tree-optimized" } */
+
+#include <arm_sve.h>
+#include <arm_neon.h>
+
+svint8_t f_s8(int8x16_t x)
+{
+  return svdupq_s8 (x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7],
+                   x[8], x[9], x[10], x[11], x[12], x[13], x[14], x[15]);
+}
+
+svint16_t f_s16(int16x8_t x)
+{
+  return svdupq_s16 (x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7]);
+}
+
+svint32_t f_s32(int32x4_t x)
+{
+  return svdupq_s32 (x[0], x[1], x[2], x[3]);
+}
+
+svint64_t f_s64(int64x2_t x)
+{
+  return svdupq_s64 (x[0], x[1]);
+}
+
+/* { dg-final { scan-tree-dump "VEC_PERM_EXPR" "optimized" } } */
+/* { dg-final { scan-tree-dump-not "svdupq" "optimized" } } */
+
+/* { dg-final { scan-assembler-times {\tdup\tz[0-9]+\.q, z[0-9]+\.q\[0\]\n} 4 } } */
