https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113583

--- Comment #1 from JuzheZhong <juzhe.zhong at rivai dot ai> ---
It's interesting: with Clang, only the RISC-V target can vectorize it.

I think there are 2 topics:

1. Support vectorization of this code in the loop vectorizer.
2. Transform gather/scatter into strided load/store for RISC-V.

For the 2nd topic: LLVM does it with a RISC-V target-specific lowering pass:

RISC-V gather/scatter lowering (riscv-gather-scatter-lowering)

This is the RISC-V LLVM backend code:

  if (II->getIntrinsicID() == Intrinsic::masked_gather)
    Call = Builder.CreateIntrinsic(
        Intrinsic::riscv_masked_strided_load,
        {DataType, BasePtr->getType(), Stride->getType()},
        {II->getArgOperand(3), BasePtr, Stride, II->getArgOperand(2)});
  else
    Call = Builder.CreateIntrinsic(
        Intrinsic::riscv_masked_strided_store,
        {DataType, BasePtr->getType(), Stride->getType()},
        {II->getArgOperand(0), BasePtr, Stride, II->getArgOperand(3)});

I once tried to support strided load/store in the GCC loop vectorizer,
but that approach seems to be unacceptable.  Maybe we can support strided
loads/stores by leveraging the LLVM approach?

Btw, the LLVM RISC-V gather/scatter lowering doesn't do a perfect job here:

        vid.v   v8
        vmul.vx v8, v8, a3
....

        vsoxei64.v      v10, (s2), v14

This is an ordered indexed store, which is very costly in hardware.
It should be an unordered indexed store or a strided store.

Anyway, I think we should first investigate how to support vectorization of lbm
in the loop vectorizer.
