Hi!

> -----Original Message-----
> From: Richard Sandiford [mailto:richard.sandif...@arm.com]
> Sent: Monday, March 30, 2020 8:08 PM
> To: Yangfei (Felix) <felix.y...@huawei.com>
> Cc: gcc-patches@gcc.gnu.org; rguent...@suse.de
> Subject: Re: [PATCH] ICE: in vectorizable_load, at tree-vect-stmts.c:9173
> 
> "Yangfei (Felix)" <felix.y...@huawei.com> writes:
> > Hi!
> >> -----Original Message-----
> >> From: Yangfei (Felix)
> >> Sent: Monday, March 30, 2020 5:28 PM
> >> To: gcc-patches@gcc.gnu.org
> >> Cc: 'rguent...@suse.de' <rguent...@suse.de>
> >> Subject: [PATCH] ICE: in vectorizable_load, at tree-vect-stmts.c:9173
> >>
> >> Hi,
> >>
> >> New bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94398
> >>
> >> With -mstrict-align, aarch64_builtin_support_vector_misalignment
> >> returns false when the misalignment factor is unknown at compile time.
> >> Then vect_supportable_dr_alignment returns dr_unaligned_unsupported,
> >> which triggers the ICE.  I have pasted the call trace on the bug report.
> >>
> >> vect_supportable_dr_alignment is expected to return either dr_aligned
> >> or dr_unaligned_supported for masked operations.
> >> But it seems that this function only handles the internal functions
> >> IFN_MASK_LOAD & IFN_MASK_STORE, while we are emitting a masked
> >> gather load for this test case.
> >> As backends have their own vector misalignment support policies, I
> >> suppose this is better handled in the shared auto-vect code.
> >>
> >
> > I can only reply here to comment on the bug, as my account got locked
> > by the bugzilla system for now.
> 
> Sorry to hear that.  What was the reason?

Looks like it got filtered by SpamAssassin.  The admin has helped unlock it.

> > The way Richard (rsandifo) suggests also works for me, and I agree it
> > should be more consistent and better for compile time.
> > One question here: when will an IFN_MASK_LOAD/IFN_MASK_STORE be
> > passed to vect_supportable_dr_alignment?
> 
> I'd expect that to happen in the same cases as for unmasked load/stores.
> It certainly will for unconditional loads and stores that get masked
> via full-loop masking.
> 
> In principle, the same rules apply to both masked and unmasked accesses.
> But for SVE, both masked and unmasked accesses should support misalignment,
> which is why I think the current target hook is probably wrong for SVE +
> -mstrict-align.
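
For context, the special case I mentioned above in
vect_supportable_dr_alignment looks roughly like this (paraphrasing
tree-vect-data-refs.c from memory, so details may differ):

  /* For now assume all conditional loads/stores support unaligned
     access without any special code.  */
  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
    if (gimple_call_internal_p (stmt)
        && (gimple_call_internal_fn (stmt) == IFN_MASK_LOAD
            || gimple_call_internal_fn (stmt) == IFN_MASK_STORE))
      return dr_unaligned_supported;

So a masked gather load (IFN_MASK_GATHER_LOAD) falls through to the
normal alignment checks and ends up asking the target hook.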

I stopped looking further into the backend when I saw that the target
hook aarch64_builtin_support_vector_misalignment makes no distinction
between different types of access.
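
For reference, the hook looks roughly like this (quoting aarch64.c from
memory, so the exact code may differ):

static bool
aarch64_builtin_support_vector_misalignment (machine_mode mode,
                                             const_tree type,
                                             int misalignment,
                                             bool is_packed)
{
  if (TARGET_SIMD && STRICT_ALIGNMENT)
    {
      /* Reject if there is no misaligned move pattern for the mode.  */
      if (optab_handler (movmisalign_optab, mode) == CODE_FOR_nothing)
        return false;

      /* Reject when the misalignment factor is unknown at compile
         time; this is the case we hit here.  */
      if (misalignment == -1)
        return false;
    }
  return default_builtin_support_vector_misalignment (mode, type,
                                                      misalignment,
                                                      is_packed);
}

It rejects any access with unknown misalignment under -mstrict-align,
masked or not, which matches your point about the hook probably being
wrong for SVE + -mstrict-align.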

> > @@ -8051,8 +8051,15 @@ vectorizable_store (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
> >    auto_vec<tree> dr_chain (group_size);
> >    oprnds.create (group_size);
> >
> > -  alignment_support_scheme
> > -    = vect_supportable_dr_alignment (first_dr_info, false);
> > +  /* Strided accesses perform only component accesses, alignment
> > +     is irrelevant for them.  */
> > +  if (STMT_VINFO_STRIDED_P (first_dr_info->stmt)
> > +      && !STMT_VINFO_GROUPED_ACCESS (first_dr_info->stmt))
> 
> I think this should be based on memory_access_type ==
> VMAT_GATHER_SCATTER instead.  At this point, STMT_VINFO_* describes
> properties of the original scalar access(es) while memory_access_type
> describes the vector implementation strategy.  It's the latter that matters
> here.
> 
> Same thing for loads.

Yes, I have modified it accordingly.  Please find the adapted patch
attached.  Bootstrapped and tested on aarch64-linux-gnu.  The newly
added test fails without the patch and passes with it.
I think I need a sponsor if this patch can go in separately.
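
To summarize the shape of the change without opening the attachment: in
vectorizable_store (and the analogous spot in vectorizable_load), the
alignment scheme is now chosen roughly as below; the attached patch is
the authoritative version:

  /* Gather-scatter accesses compute their own addresses, so their
     alignment is irrelevant here.  */
  if (memory_access_type == VMAT_GATHER_SCATTER)
    alignment_support_scheme = dr_unaligned_supported;
  else
    alignment_support_scheme
      = vect_supportable_dr_alignment (first_dr_info, false);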

Thanks,
Felix

Attachment: pr94398-v1.patch
