https://bugs.llvm.org/show_bug.cgi?id=41761

            Bug ID: 41761
           Summary: Missed optimization: store coalescing
           Product: new-bugs
           Version: trunk
          Hardware: PC
                OS: All
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: new bugs
          Assignee: unassignedb...@nondot.org
          Reporter: cos...@gmail.com
                CC: htmldevelo...@gmail.com, llvm-bugs@lists.llvm.org

LLVM coalesces consecutive byte loads into larger loads when appropriate, but
doesn't do so for stores. This is unfortunate -- having these optimizations
would allow me to write platform-independent code for loading/storing
{little,big}-endian integers. Without the optimizations, I need to write an
explicit fast path for little-endian platforms.

The details below are also in the Godbolt example at
https://godbolt.org/z/45S0ID


The following functions each get optimized to a single mov instruction on x86_64.

uint32_t DecodeFixed32(const char* ptr) noexcept {
  const uint8_t* buffer = reinterpret_cast<const uint8_t*>(ptr);
  return ((static_cast<uint32_t>(buffer[0])) |
          (static_cast<uint32_t>(buffer[1]) << 8) |
          (static_cast<uint32_t>(buffer[2]) << 16) |
          (static_cast<uint32_t>(buffer[3]) << 24));
}

uint64_t DecodeFixed64(const char* ptr) noexcept {
  const uint8_t* buffer = reinterpret_cast<const uint8_t*>(ptr);
  return ((static_cast<uint64_t>(buffer[0])) |
          (static_cast<uint64_t>(buffer[1]) << 8) |
          (static_cast<uint64_t>(buffer[2]) << 16) |
          (static_cast<uint64_t>(buffer[3]) << 24) |
          (static_cast<uint64_t>(buffer[4]) << 32) |
          (static_cast<uint64_t>(buffer[5]) << 40) |
          (static_cast<uint64_t>(buffer[6]) << 48) |
          (static_cast<uint64_t>(buffer[7]) << 56));
}


However, the following functions do not get optimized to mov instructions.

void EncodeFixed32(char* dst, uint32_t value) noexcept {
  uint8_t* buffer = reinterpret_cast<uint8_t*>(dst);
  buffer[0] = static_cast<uint8_t>(value);
  buffer[1] = static_cast<uint8_t>(value >> 8);
  buffer[2] = static_cast<uint8_t>(value >> 16);
  buffer[3] = static_cast<uint8_t>(value >> 24);
}

void EncodeFixed64(char* dst, uint64_t value) noexcept {
  uint8_t* buffer = reinterpret_cast<uint8_t*>(dst);
  buffer[0] = static_cast<uint8_t>(value);
  buffer[1] = static_cast<uint8_t>(value >> 8);
  buffer[2] = static_cast<uint8_t>(value >> 16);
  buffer[3] = static_cast<uint8_t>(value >> 24);
  buffer[4] = static_cast<uint8_t>(value >> 32);
  buffer[5] = static_cast<uint8_t>(value >> 40);
  buffer[6] = static_cast<uint8_t>(value >> 48);
  buffer[7] = static_cast<uint8_t>(value >> 56);
}
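For comparison, the little-endian fast path mentioned above can be written with a fixed-size memcpy, which major compilers do fold into a single store. This is a sketch, not part of the original report; the function name EncodeFixed32Fast and the use of the GCC/Clang __BYTE_ORDER__ predefined macros are my own choices here.

```cpp
#include <cstdint>
#include <cstring>

// Sketch of the explicit little-endian fast path that the portable
// byte-by-byte version is meant to make unnecessary.
void EncodeFixed32Fast(char* dst, uint32_t value) noexcept {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
  // The fixed-size memcpy is typically folded into one 32-bit store.
  std::memcpy(dst, &value, sizeof(value));
#else
  // Portable fallback: emit the bytes in little-endian order explicitly.
  uint8_t* buffer = reinterpret_cast<uint8_t*>(dst);
  buffer[0] = static_cast<uint8_t>(value);
  buffer[1] = static_cast<uint8_t>(value >> 8);
  buffer[2] = static_cast<uint8_t>(value >> 16);
  buffer[3] = static_cast<uint8_t>(value >> 24);
#endif
}
```

Having to maintain both branches is exactly the duplication that store coalescing of the byte-by-byte form would avoid.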


GCC 8 and above optimize both the loads and the stores; earlier versions only
optimize the loads. MSVC optimizes neither the loads nor the stores.
