https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108410

--- Comment #5 from Richard Biener <rguenth at gcc dot gnu.org> ---
Btw, for the case where we can use the same mask compare type as we use for
the IV (so we know we can represent all required values) we can elide the
saturation.  So for example

void foo (double * __restrict a, double *b, double *c, int n)
{
  for (int i = 0; i < n; ++i)
    a[i] = b[i] + c[i];
}

can produce

        testl   %ecx, %ecx
        jle     .L5
        vmovdqa .LC0(%rip), %ymm3
        vpbroadcastd    %ecx, %ymm2
        xorl    %eax, %eax
        subl    $8, %ecx
        vpcmpud $6, %ymm3, %ymm2, %k1
        .p2align 4
        .p2align 3
.L3:
        vmovupd (%rsi,%rax), %zmm1{%k1}
        vmovupd (%rdx,%rax), %zmm0{%k1}
        movl    %ecx, %r8d
        vaddpd  %zmm1, %zmm0, %zmm2{%k1}{z}
        addl    $8, %r8d
        vmovupd %zmm2, (%rdi,%rax){%k1}
        vpbroadcastd    %ecx, %ymm2
        addq    $64, %rax
        subl    $8, %ecx
        vpcmpud $6, %ymm3, %ymm2, %k1
        cmpl    $8, %r8d
        ja      .L3
        vzeroupper
.L5:
        ret

That should work as long as the data size is larger than or matches the IV
size, which is hopefully the case for all FP testcases.  The trick is going
to be to make this visible to costing - I'm not sure we get to decide whether
to use masking or not when we do not want to decide between vector sizes
(the x86 backend picks the first successful one).  For SVE it's either
masking (with SVE modes) or not masking (with NEON modes), so it's
decided based on mode rather than as an additional knob.
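For contrast, a hypothetical case where the data size is smaller than the IV
size (so the scheme above would presumably not apply directly) would be

void bar (char * __restrict a, char *b, char *c, int n)
{
  /* With char elements a 512-bit vector has 64 lanes, so a single
     vector of IV-sized (32-bit) elements presumably no longer covers
     the mask compare and the saturation cannot simply be elided.
     Hypothetical example, only to illustrate the size condition.  */
  for (int i = 0; i < n; ++i)
    a[i] = b[i] + c[i];
}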

Performance-wise the above is likely still slower than not using masking
plus a masked epilog, but it would actually save on code size for -Os
or -O2.  Of course for code size we might want to stick to SSE/AVX
for the smaller encoding.

Note we have to watch out for all-zero masks for masked stores since
that's very slow (for a reason unknown to me).  When we have a stmt
split to multiple vector stmts it's not uncommon (esp. for the epilog)
for one of them to have an all-zero bit mask.  For the loop case and
.MASK_STORE we emit branchy code for this, but we might want to avoid
the situation by costing (and not using a masked loop/epilog in that
case).
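As a rough scalar model of the branchy variant around a masked store (the
helper below is purely illustrative and hypothetical, not how the backend
actually expands .MASK_STORE):

/* Skip the store entirely when the mask is all zero, since the
   all-zero masked store is the slow case.  Hypothetical helper,
   for illustration only.  */
static void
masked_store_guarded (double *dst, const double *src,
                      const unsigned char *mask, int nlanes)
{
  int any = 0;
  for (int i = 0; i < nlanes; ++i)
    any |= mask[i];
  if (!any)
    return;  /* branch around the all-zero masked store */
  for (int i = 0; i < nlanes; ++i)
    if (mask[i])
      dst[i] = src[i];
}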
