http://llvm.org/bugs/show_bug.cgi?id=8980

           Summary: clang -O3 generates horrible code for std::bitset
           Product: libraries
           Version: trunk
          Platform: PC
        OS/Version: All
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: Scalar Optimizations
        AssignedTo: [email protected]
        ReportedBy: [email protected]
                CC: [email protected]


Created an attachment (id=6006)
 --> (http://llvm.org/bugs/attachment.cgi?id=6006)
IR output from clang -O3

For this piece of C++:

===
#include <bitset>
#include <cstddef>

bool foo(unsigned *a, std::size_t asize, unsigned *b, std::size_t bsize) {
  std::bitset<32> bits;

  // Set a bit for each value in a.
  for (std::size_t i = 0; i != asize; ++i)
    bits[a[i]] = true;

  // Return true if any value in b has its bit set.
  for (std::size_t i = 0; i != bsize; ++i)
    if (bits[b[i]])
      return true;
  return false;
}
===

clang -O3 (with libstdc++ 4.2) generates much worse IR for the first loop than
llvm-gcc -O3 does. It somehow manages to split the computation for each word
into three and/or/and operations with large constants (or seven with 64-bit
words).
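
For comparison, a hand-written version using a plain 32-bit mask shows what the two bitset loops should boil down to after optimization. This is a sketch, not code from the report; `foo_manual` is a hypothetical name, and like the original it assumes every input value is less than 32:

```cpp
#include <cstddef>

// Equivalent of foo() with std::bitset<32> replaced by a raw unsigned mask.
// Each loop body should compile to a single shift plus an or/and.
bool foo_manual(const unsigned *a, std::size_t asize,
                const unsigned *b, std::size_t bsize) {
  unsigned bits = 0;

  // Set a bit for each value in a.
  for (std::size_t i = 0; i != asize; ++i)
    bits |= 1u << a[i];

  // Return true if any value in b has its bit set.
  for (std::size_t i = 0; i != bsize; ++i)
    if (bits & (1u << b[i]))
      return true;
  return false;
}
```

The IR for the std::bitset version ought to reduce to the same shift-and-mask pattern, rather than the and/or/and sequences with large constants described above.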
