https://bugs.llvm.org/show_bug.cgi?id=34642

            Bug ID: 34642
           Summary: [x86] missed optimizations for shift-counts: (32-n)
                    can be (-n)
           Product: new-bugs
           Version: trunk
          Hardware: PC
                OS: Linux
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: new bugs
          Assignee: unassignedb...@nondot.org
          Reporter: pe...@cordes.ca
                CC: llvm-bugs@lists.llvm.org

unsigned shift(unsigned a, unsigned n){
        n = (32-n);
        // n &= 31;
        return a << n;
}

// https://godbolt.org/g/YFKxQw
// clang version 6.0.0 (trunk 313348) -march=skylake -O3

        mov     eax, 32           # -mno-bmi2 version is essentially the same
        sub     eax, esi
        shlx    eax, edi, eax
        ret

But with the n &= 31; line uncommented, we get the obviously better:

        neg     esi
        shlx    eax, edi, esi
        ret

(shlx masks the shift count the same way shl r,cl does: & 31 for 32-bit
operand size.  The -mno-bmi2 code-gen is equivalent.)
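
To sanity-check the transform (my own illustration, not part of the report):
since the hardware masks the count to 5 bits, 32-n and -n are congruent mod 32
and therefore select the same shift amount for every n:

#include <assert.h>

// Illustration only: (32-n) & 31 == (-n) & 31 for all n, because
// 32-n and -n are congruent mod 32 under the 5-bit count mask.
int main(void){
        for (unsigned n = 0; n < 64; n++)
                assert(((32u - n) & 31) == ((0u - n) & 31));
        return 0;
}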

So clang / llvm doesn't take full advantage of the masking behaviour of x86
shift counts for shlx, shl r,cl, or presumably bt / bts (which also wrap
around at the destination register size when the destination is a register).
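
For comparison, a source-level sketch (mine, not from the report) that spells
the count as a masked negation; assuming clang canonicalizes (32-n)&31 to
(-n)&31, as the n &= 31 variant above suggests, this should compile straight
to the neg + shlx sequence:

// Hypothetical variant for illustration: the count written as (-n & 31)
// matches the masked form above, so it should emit neg + shlx directly.
unsigned shift_neg(unsigned a, unsigned n){
        return a << (-n & 31);
}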
