On Tue, 17 Jan 2023 at 18:29, Richard Sandiford
<richard.sandif...@arm.com> wrote:
>
> Prathamesh Kulkarni <prathamesh.kulka...@linaro.org> writes:
> > Hi Richard,
> > For the following (contrived) test:
> >
> > #include <arm_neon.h>
> >
> > int32x4_t foo(int32x4_t v)
> > {
> >   v[3] = 0;
> >   return v;
> > }
> >
> > -O2 code-gen:
> > foo:
> >         fmov    s1, wzr
> >         ins     v0.s[3], v1.s[0]
> >         ret
> >
> > I suppose we can instead emit the following code-gen?
> > foo:
> >      ins v0.s[3], wzr
> >      ret
> >
> > combine produces:
> > Failed to match this instruction:
> > (set (reg:V4SI 95 [ v ])
> >     (vec_merge:V4SI (const_vector:V4SI [
> >                 (const_int 0 [0]) repeated x4
> >             ])
> >         (reg:V4SI 97)
> >         (const_int 8 [0x8])))
> >
> > (A set bit i in the vec_merge mask selects lane i from the first operand,
> > here the zero vector, so mask 0x8 zeroes lane 3.)
> >
> > So, I wrote the following pattern to match the above insn:
> > (define_insn "aarch64_simd_vec_set_zero<mode>"
> >   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >         (vec_merge:VALL_F16
> >             (match_operand:VALL_F16 1 "const_dup0_operand" "w")
> >             (match_operand:VALL_F16 3 "register_operand" "0")
> >             (match_operand:SI 2 "immediate_operand" "i")))]
> >   "TARGET_SIMD"
> >   {
> >     int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> >     operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> >     return "ins\\t%0.<Vetype>[%p2], wzr";
> >   }
> > )
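> >
> > (A minimal standalone sketch of the lane computation above, assuming
> > ENDIAN_LANE_N's usual big-endian remapping of NUNITS - 1 - N; "mask" is
> > the vec_merge selector, 8 in this example:
> >
> > /* Map a single-bit vec_merge mask to the lane index "ins" needs.  */
> > static int
> > mask_to_lane (unsigned long long mask, int nunits, int big_endian)
> > {
> >   int bit = __builtin_ctzll (mask);           /* exact_log2 of a power of 2 */
> >   return big_endian ? nunits - 1 - bit : bit; /* what ENDIAN_LANE_N does */
> > }
> >
> > so mask_to_lane (8, 4, 0) == 3, matching v0.s[3] above.)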
> >
> > which now matches the above insn produced by combine.
> > However, in the reload dump, a new insn is created to load
> > (const_vector (const_int 0)) into a register,
> > which results in:
> > (insn 19 8 13 2 (set (reg:V4SI 33 v1 [99])
> >         (const_vector:V4SI [
> >                 (const_int 0 [0]) repeated x4
> >             ])) "wzr-test.c":8:1 1269 {*aarch64_simd_movv4si}
> >      (nil))
> > (insn 13 19 14 2 (set (reg/i:V4SI 32 v0)
> >         (vec_merge:V4SI (reg:V4SI 33 v1 [99])
> >             (reg:V4SI 32 v0 [97])
> >             (const_int 8 [0x8]))) "wzr-test.c":8:1 1808 {aarch64_simd_vec_set_zerov4si}
> >      (nil))
> >
> > and eventually the code-gen:
> > foo:
> >         movi    v1.4s, 0
> >         ins     v0.s[3], wzr
> >         ret
> >
> > To get rid of the redundant assignment of 0 to v1, I tried splitting the
> > above pattern as in the attached patch, which emits the desired code-gen:
> > foo:
> >         ins     v0.s[3], wzr
> >         ret
> >
> > However, I am not sure if this is the right approach. Could you suggest
> > whether it'd be possible to get rid of UNSPEC_SETZERO in the patch?
>
> The problem is with the "w" constraint on operand 1, which tells LRA
> to force the zero into an FPR.  It should work if you remove the
> constraint.
Ah indeed, sorry about that; changing the constraint works.
Does the attached patch look OK after bootstrap+test?
Since we're in stage-4, would it be OK to commit now, or should I queue it for stage-1?
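For the 64-bit element case, a hypothetical test like the one below should
now use xzr via <vwcore>zr (illustration only, not part of the patch):

#include <arm_neon.h>

int64x2_t bar (int64x2_t v)
{
  v[1] = 0;   /* expected: ins v0.d[1], xzr */
  return v;
}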

Thanks,
Prathamesh


>
> Also, I think you'll need to use <vwcore>zr for the zero, so that
> it uses xzr for 64-bit elements.
>
> I think this and the existing patterns ought to test
> exact_log2 (INTVAL (operands[2])) >= 0 in the insn condition,
> since there's no guarantee that RTL optimisations won't form
> vec_merges that have other masks.
>
> Thanks,
> Richard
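For reference (illustration only): exact_log2 returns -1 unless exactly one
bit is set, so the new insn condition rejects multi-lane merges:

  exact_log2 (8) == 3    /* 0b1000: single lane, accepted   */
  exact_log2 (5) == -1   /* 0b0101: lanes 0 and 2, rejected */
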
[aarch64] Use wzr/xzr for assigning 0 to vector element.

gcc/ChangeLog:
        * config/aarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
        New pattern.
        * config/aarch64/predicates.md (const_dup0_operand): New.

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 104088f67d2..8e54ee4e886 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -1083,6 +1083,20 @@
   [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
 )
 
+(define_insn "aarch64_simd_vec_set_zero<mode>"
+  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
+       (vec_merge:VALL_F16
+           (match_operand:VALL_F16 1 "const_dup0_operand" "i")
+           (match_operand:VALL_F16 3 "register_operand" "0")
+           (match_operand:SI 2 "immediate_operand" "i")))]
+  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
+  {
+    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
+    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
+    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
+  }
+)
+
 (define_insn "@aarch64_simd_vec_copy_lane<mode>"
   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
        (vec_merge:VALL_F16
diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
index ff7f73d3f30..901fa1bd7f9 100644
--- a/gcc/config/aarch64/predicates.md
+++ b/gcc/config/aarch64/predicates.md
@@ -49,6 +49,13 @@
   return CONST_INT_P (op) && IN_RANGE (INTVAL (op), 1, 3);
 })
 
+(define_predicate "const_dup0_operand"
+  (match_code "const_vector")
+{
+  op = unwrap_const_vec_duplicate (op);
+  return CONST_INT_P (op) && rtx_equal_p (op, const0_rtx);
+})
+
 (define_predicate "subreg_lowpart_operator"
   (ior (match_code "truncate")
        (and (match_code "subreg")
