Hi,

New bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94398 

With -mstrict-align, aarch64_builtin_support_vector_misalignment returns false
when the misalignment factor is unknown at compile time.
vect_supportable_dr_alignment then returns dr_unaligned_unsupported, which
triggers the ICE.  I have pasted the call trace in the bug report.
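
For reference, the relevant logic in the hook looks roughly like this
(paraphrased from memory, so please check config/aarch64/aarch64.c for the
exact form):

/* Sketch of the target hook: with -mstrict-align (STRICT_ALIGNMENT) it
   rejects any access whose misalignment factor is unknown (-1).  */
static bool
aarch64_builtin_support_vector_misalignment (machine_mode mode,
					     const_tree type,
					     int misalignment,
					     bool is_packed)
{
  if (TARGET_SIMD && STRICT_ALIGNMENT)
    {
      /* No movmisalign pattern for this mode -> unsupported.  */
      if (optab_handler (movmisalign_optab, mode) == CODE_FOR_nothing)
	return false;

      /* Misalignment factor is unknown at compile time.  */
      if (misalignment == -1)
	return false;
    }
  return default_builtin_support_vector_misalignment (mode, type,
						      misalignment, is_packed);
}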

vect_supportable_dr_alignment is expected to return either dr_aligned or
dr_unaligned_supported for masked operations, but it currently only recognises
the internal functions IFN_MASK_LOAD and IFN_MASK_STORE.
For this test case we emit a masked gather load (IFN_MASK_GATHER_LOAD), which
therefore falls through to the target's alignment hook.
As backends have their own vector misalignment support policy, I think this is
better handled in the shared auto-vectorization code.
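
For illustration, a reduced loop of the following shape (my own sketch, not
the exact testcase from the PR) should get vectorized with a masked gather
load when SVE and full-loop masking are enabled, e.g. with something along
the lines of -O2 -ftree-vectorize -march=armv8.2-a+sve -mstrict-align:

/* Hypothetical reduced example; the real testcase is in PR94398.
   The conditional indexed load x[idx[i]] becomes an IFN_MASK_GATHER_LOAD
   under SVE, which vect_supportable_dr_alignment does not recognise.  */
int
f (int *restrict x, int *restrict idx, int *restrict c, int n)
{
  int sum = 0;
  for (int i = 0; i < n; ++i)
    if (c[i])
      sum += x[idx[i]];
  return sum;
}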

Proposed fix:
diff --git a/gcc/tree-vect-data-refs.c b/gcc/tree-vect-data-refs.c
index 0192aa6..67d3345 100644
--- a/gcc/tree-vect-data-refs.c
+++ b/gcc/tree-vect-data-refs.c
@@ -6509,11 +6509,26 @@ vect_supportable_dr_alignment (dr_vec_info *dr_info,

   /* For now assume all conditional loads/stores support unaligned
      access without any special code.  */
-  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
-    if (gimple_call_internal_p (stmt)
-       && (gimple_call_internal_fn (stmt) == IFN_MASK_LOAD
-           || gimple_call_internal_fn (stmt) == IFN_MASK_STORE))
-      return dr_unaligned_supported;
+  gcall *call = dyn_cast <gcall *> (stmt_info->stmt);
+  if (call && gimple_call_internal_p (call))
+    {
+      internal_fn ifn = gimple_call_internal_fn (call);
+      switch (ifn)
+       {
+         case IFN_MASK_LOAD:
+         case IFN_MASK_LOAD_LANES:
+         case IFN_MASK_GATHER_LOAD:
+         case IFN_MASK_STORE:
+         case IFN_MASK_STORE_LANES:
+         case IFN_MASK_SCATTER_STORE:
+           return dr_unaligned_supported;
+         default:
+           break;
+       }
+    }
+
+  if (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo))
+    return dr_unaligned_supported;

   if (loop_vinfo)
     {

Suggestions?

Thanks,
Felix
