This is an automated email from the ASF dual-hosted git repository.

alexey pushed a commit to branch branch-1.18.x
in repository https://gitbox.apache.org/repos/asf/kudu.git


The following commit(s) were added to refs/heads/branch-1.18.x by this push:
     new 9ea189054 KUDU-3736 fix SIGSEGV in codegen with libgcc-11.5.0-10+
9ea189054 is described below

commit 9ea189054ff1fb6c63366e27f3670d75d637f15d
Author: Alexey Serbin <[email protected]>
AuthorDate: Fri Jan 30 00:53:45 2026 -0800

    KUDU-3736 fix SIGSEGV in codegen with libgcc-11.5.0-10+
    
    Former libgcc versions used a mutex-protected linked list to store
    information on EH frames.  To address the scalability bottleneck of
    the __{register,deregister}_frame() API in that implementation, the
    new implementation in unwind-dw2-fde.c [1] and related headers
    now uses a read-optimized b-tree approach [2].  The b-tree uses
    the start address of a memory range as its key.
    
    While the lock and the related bottleneck are now gone, the b-tree-based
    implementation relies on a few invariants regarding the properties of
    the frames/sections being (de)registered.  In particular, there is a
    range invariant (a.k.a. the "non-overlapping rule") which the new
    implementation depends upon when updating and rebalancing the b-tree
    while inserting/removing elements.  This rule is spelled out in the
    commit description at [3]:
    
      * There must not be two frames that have overlapping address ranges
    
    While the former implementation might have tolerated violations of this
    rule, the new b-tree implementation is susceptible to UB (undefined
    behavior) in such cases because the tree logic assumes the presence of
    clear range boundaries for its separators.
    
    As it turns out, this applies not only to a pair of particular frame
    description entries (FDEs), but also to the whole range span of FDEs
    that come along with the corresponding CIE entry in the data structure
    that is supplied to __{register,deregister}_frame() invocations. In
    particular, in classify_object_over_fdes() the span is calculated by
    going over each FDE of the supplied CIE/FDE chunk, and, as a result,
    get_pc_range() returns an address range spanning from
    min(beginning address of all FDEs) up to max(end address of all FDEs).
    
    In turn, the implementation of RuntimeDyld's SectionMemoryManager
    in LLVM uses mmap() with the MAP_ANON and MAP_PRIVATE flags to allocate
    memory for jitted object sections, including the .eh_frame section.
    That results in quite arbitrary placement of the mmap-ed/allocated
    memory ranges, since the placement of an allocated range isn't controlled
    by the MCJIT execution engine beyond providing an 'address hint' for the
    placement of a newly allocated memory range.  The kernel is free to use
    any range in the process's address space that it finds appropriate
    given the size of the requested memory range if its first attempt to
    establish the new mapping at the closest memory page boundary fails
    because there is an existing memory mapping at that address already [4].
    
    The address space of a running Kudu tablet server process might become
    fragmented when many jitted code references are kept alive, while
    others have been used and then purged out of the codegen cache, or
    dropped by already completed scanners, compaction operations, etc.
    Eventually, it's possible to end up with a memory layout like the one
    illustrated below:
    
      object A sections:         [.....][...][...]
      object B sections:  [.....]                 [...][...]
    
    The .eh_frame section contents for the layout above wouldn't comply with
    the "non-overlapping rule" for the FDE spans, and the new libgcc
    implementation could receive SIGSEGV in an attempt to register the
    .eh_frame section for one of those objects, or while updating the
    b-tree upon deregistering the corresponding .eh_frame section.
    
    The situation described above manifested itself in Kudu tablet servers
    crashing with SIGSEGV on RHEL9 with libgcc 11.5.0-11 installed.
    The libgcc package of version 11.5.0-11 came with the updates pushed
    into the RedHat package repos along with the newly released RHEL 9.7
    on 2025-11-12.  From the libgcc package changelog [5] it's clear that
    RedHat's libgcc switched to the b-tree-based EH frame implementation
    in version 11.5.0-10, and it's here to stay.
    
    This patch addresses the issue of Kudu tablet servers crashing in the
    codegen by providing a custom implementation of the section memory
    manager for LLVM's MCJIT execution engine.  In essence, it reserves
    a memory area for the subsequent allocations of all the codegenned
    objects' sections, so they are now guaranteed to be adjacent and
    localized in the pre-allocated memory area.  This approach ensures
    that the ranges of all the FDEs provided to libgcc's
    __{register,deregister}_frame() API cannot interleave with the ranges
    or range spans of any other FDEs registered by a Kudu tablet server
    during its life cycle.
    
    [1] https://github.com/gcc-mirror/gcc/blob/1c0305d7aea53d788f3f74ca9a2bd9fb764c0109/libgcc/unwind-dw2-fde.c
    [2] https://github.com/gcc-mirror/gcc/blob/1c0305d7aea53d788f3f74ca9a2bd9fb764c0109/libgcc/unwind-dw2-btree.h
    [3] https://gcc.gnu.org/cgit/gcc/commit/libgcc/unwind-dw2-btree.h?id=21109b37e8585a7a1b27650fcbf1749380016108
    [4] https://www.man7.org/linux/man-pages/man2/mmap.2.html
    [5] https://rhel.pkgs.org/9/red-hat-ubi-baseos-x86_64/libgcc-11.5.0-11.el9.x86_64.rpm.html
    
    Change-Id: I691d2f442c3148f235847c4c8e56767577804b1a
    Reviewed-on: http://gerrit.cloudera.org:8080/23925
    Reviewed-by: Marton Greber <[email protected]>
    Tested-by: Marton Greber <[email protected]>
    Reviewed-by: Ashwani Raina <[email protected]>
    (cherry picked from commit 0efbebd487e7d0c7f21ab0d5a91e0fa70046af83)
      Conflicts:
        src/kudu/codegen/jit_frame_manager.h
    Reviewed-on: http://gerrit.cloudera.org:8080/23933
    Tested-by: Alexey Serbin <[email protected]>
    Reviewed-by: Abhishek Chennaka <[email protected]>
---
 src/kudu/codegen/jit_frame_manager.cc              | 293 ++++++++++++++++++---
 src/kudu/codegen/jit_frame_manager.h               |  94 +++++--
 src/kudu/codegen/module_builder.cc                 |   3 +-
 thirdparty/download-thirdparty.sh                  |   6 +-
 .../patches/llvm-section-mm-extra-methods.patch    |  66 +++++
 .../patches/llvm-section-mm-memory-mapper.patch    |  84 ++++++
 6 files changed, 482 insertions(+), 64 deletions(-)

diff --git a/src/kudu/codegen/jit_frame_manager.cc b/src/kudu/codegen/jit_frame_manager.cc
index 5adefa258..e230cfa93 100644
--- a/src/kudu/codegen/jit_frame_manager.cc
+++ b/src/kudu/codegen/jit_frame_manager.cc
@@ -17,68 +17,283 @@
 
 #include "kudu/codegen/jit_frame_manager.h"
 
+#include <sys/mman.h>
+
+#include <cerrno>
 #include <cstdint>
-#include <iterator>
+#include <ostream>
+#include <system_error>
+#include <utility>
 
+#include <glog/logging.h>
 #include <llvm/ExecutionEngine/SectionMemoryManager.h>
+#include <llvm/Support/Memory.h>
+#include <llvm/Support/Process.h>
 
-// External symbols from libgcc/libunwind.
-extern "C" void __register_frame(void*);  // NOLINT(bugprone-reserved-identifier)
-extern "C" void __deregister_frame(void*);// NOLINT(bugprone-reserved-identifier)
+#include "kudu/gutil/casts.h"
+#include "kudu/gutil/port.h"
 
 using llvm::SectionMemoryManager;
-using llvm::StringRef;
-using std::lock_guard;
+using llvm::sys::Memory;
+using llvm::sys::MemoryBlock;
+using llvm::sys::Process;
 
 namespace kudu {
 namespace codegen {
 
-// Initialize the static mutex
-std::mutex JITFrameManager::kRegistrationMutex;
+namespace {
 
-JITFrameManager::~JITFrameManager() {
-  // Be explicit about avoiding the virtual dispatch: invoke
-  // deregisterEHFramesImpl() instead of deregisterEHFrames().
-  deregisterEHFramesImpl();
+int getPosixProtectionFlags(unsigned flags) {
+  switch (flags & Memory::MF_RWE_MASK) {
+  case Memory::MF_READ:
+    return PROT_READ;
+  case Memory::MF_WRITE:
+    return PROT_WRITE;
+  case Memory::MF_READ | Memory::MF_WRITE:
+    return PROT_READ | PROT_WRITE;
+  case Memory::MF_READ | Memory::MF_EXEC:
+    return PROT_READ | PROT_EXEC;
+  case Memory::MF_READ | Memory::MF_WRITE | Memory::MF_EXEC:
+    return PROT_READ | PROT_WRITE | PROT_EXEC;
+  case Memory::MF_EXEC:
+    return PROT_EXEC;
+  default:
+    LOG(DFATAL) << "unsupported LLVM memory protection flags";
+    return PROT_NONE;
+  }
+}
+
+} // anonymous namespace
+
+
+JITFrameManager::CustomMapper::CustomMapper()
+    : memory_range_bytes_left_(0) {
 }
 
-uint8_t* JITFrameManager::allocateCodeSection(uintptr_t size,
-                                              unsigned alignment,
-                                              unsigned section_id,
-                                              StringRef section_name) {
-    // Add extra padding for EH frame section: it's zeroed out later upon
-    // registerEHFrames() calls.
-    if (section_name == ".eh_frame") {
-      size += 4;
+// This implementation of allocateMappedMemory() method is modeled after LLVM's
+// llvm::sys::Memory::allocateMappedMemory() in lib/Support/Unix/Memory.inc.
+// One important difference is treating the 'near_block' memory address hint
+// (if present) as the exact address for the result memory mapping, adding
+// MAP_FIXED to the corresponding flags for the mmap() call.
+MemoryBlock JITFrameManager::CustomMapper::allocateMappedMemory(
+    SectionMemoryManager::AllocationPurpose /*purpose*/,
+    size_t num_bytes,
+    const MemoryBlock* const near_block,
+    unsigned protection_flags,
+    std::error_code& ec) {
+
+  ec = std::error_code();
+  if (num_bytes == 0) {
+    return {};
+  }
+
+  // Is this a request to pre-allocate memory range for subsequent allocations?
+  const bool is_pre_allocation = !near_block ||
+      (near_block->base() == nullptr && near_block->allocatedSize() == 0);
+
+  uintptr_t start = near_block ? reinterpret_cast<uintptr_t>(near_block->base()) +
+                                     near_block->allocatedSize()
+                               : 0;
+  int mm_flags = MAP_PRIVATE | MAP_ANON;
+  if (!is_pre_allocation) {
+    DCHECK_NE(0, start);
+    mm_flags |= MAP_FIXED;
+  }
+
+  // If the start address is non-zero, this must be a follow-up allocation
+  // within the pre-allocated memory range. Vice versa, a pre-allocation request
+  // doesn't provide valid start address.
+  DCHECK((start != 0 && !is_pre_allocation) ^ (start == 0 && is_pre_allocation));
+
+  // Memory is allocated/mapped in multiples of the memory page size.
+  static const size_t page_size = Process::getPageSizeEstimate();
+  const size_t num_pages = (num_bytes + page_size - 1) / page_size;
+
+  // Use the near hint and the page size to set a page-aligned starting address.
+  if (start && start % page_size) {
+    // Move the start address up to the nearest page boundary.
+    start += page_size - (start % page_size);
+  }
+  const size_t size_bytes = page_size * num_pages;
+
+  if (is_pre_allocation) {
+    if (PREDICT_FALSE(isPreAllocatedRangeSet())) {
+      // Once the pre-allocated range is set, all the subsequent requests
+      // must provide non-null 'near_block' for allocations within the range.
+      LOG(DFATAL) << "must not attempt pre-allocating memory multiple times";
+      return {};
+    }
+    DCHECK_EQ(0, memory_range_bytes_left_);
+  } else {
+    // If this is a follow-up request, the information on the pre-allocated
+    // range must already be set.
+    if (PREDICT_FALSE(!isPreAllocatedRangeSet())) {
+      LOG(DFATAL) << "must set pre-allocated memory range first";
+      return {};
+    }
+    // For subsequent allocations, make sure there is still enough pre-allocated
+    // memory left to accommodate a new memory range of the requested size.
+    if (PREDICT_FALSE(memory_range_bytes_left_ < size_bytes)) {
+      LOG(DFATAL) << "insufficient pre-allocated memory";
+      return {};
+    }
+
+    // Check the provided 'near_block' hint for sanity: it must be within
+    // the pre-allocated area.
+    if (PREDICT_FALSE(reinterpret_cast<uintptr_t>(start) <
+                      reinterpret_cast<uintptr_t>(memory_range_.base()))) {
+      LOG(DFATAL) << "'near_hint' start address is beyond pre-allocated range";
+      return {};
+    }
+    if (PREDICT_FALSE(reinterpret_cast<uintptr_t>(start) + size_bytes >
+                      reinterpret_cast<uintptr_t>(memory_range_.base()) +
+                          memory_range_.allocatedSize())) {
+      LOG(DFATAL) << "invalid 'near_hint'";
+      return {};
+    }
+    // Make sure the requested range with the provided 'near_block' hint
+    // wouldn't clobber previously allocated range. As for the address hints
+    // provided with the 'near_block' parameter, start addresses for new memory
+    // ranges must increase.
+    if (PREDICT_FALSE(reinterpret_cast<uintptr_t>(start) <
+                      reinterpret_cast<uintptr_t>(prev_range_.base()) +
+                          prev_range_.allocatedSize())) {
+      LOG(DFATAL) << "'near_hint' would clobber previously allocated range";
+      return {};
+    }
+  }
+
+  const int protect = getPosixProtectionFlags(protection_flags);
+  void* addr = ::mmap(
+      reinterpret_cast<void*>(start), size_bytes, protect, mm_flags, -1, 0);
+  if (PREDICT_FALSE(addr == MAP_FAILED)) {
+    const int err = errno;
+    ec = std::error_code(err, std::generic_category());
+    return {};
+  }
+
+  MemoryBlock result(addr, size_bytes);
+  // Update the amount of the pre-allocated memory left.
+  if (!is_pre_allocation) {
+    memory_range_bytes_left_ -= size_bytes;
+    prev_range_ = result;
+  }
+
+  // Rely on protectMappedMemory to invalidate instruction cache.
+  if (protection_flags & Memory::MF_EXEC) {
+    ec = Memory::protectMappedMemory(result, protection_flags);
+    if (ec) {
+      return {};
     }
-    return SectionMemoryManager::allocateCodeSection(
-        size, alignment, section_id, section_name);
   }
 
-void JITFrameManager::registerEHFrames(uint8_t* addr,
-                                       uint64_t /*load_addr*/,
-                                       size_t size) {
-  lock_guard guard(kRegistrationMutex);
+  return result;
+}
+
+std::error_code JITFrameManager::CustomMapper::releaseMappedMemory(MemoryBlock& m) {
+  return Memory::releaseMappedMemory(m);
+}
+
+std::error_code JITFrameManager::CustomMapper::protectMappedMemory(
+    const MemoryBlock& block, unsigned flags) {
+  return Memory::protectMappedMemory(block, flags);
+}
+
+void JITFrameManager::CustomMapper::setPreAllocatedRange(
+    const llvm::sys::MemoryBlock& range) {
+  // This should be called at most once per CustomMapper instance.
+  DCHECK_EQ(nullptr, memory_range_.base());
+  DCHECK_EQ(0, memory_range_.allocatedSize());
+  memory_range_ = range;
 
-  // libgcc expects a null-terminated list of FDEs: write 4 zero bytes in the
-  // end of the allocated section.
-  auto* terminator = reinterpret_cast<uint32_t*>(addr + size);
-  *terminator = 0;
+  DCHECK_EQ(0, memory_range_bytes_left_);
+  memory_range_bytes_left_ = range.allocatedSize();
+}
+
+bool JITFrameManager::CustomMapper::isPreAllocatedRangeSet() const {
+  return memory_range_.base() != nullptr &&
+         memory_range_.allocatedSize() != 0;
+}
 
-  __register_frame(addr);
-  registered_frames_.push_back(addr);
+// TODO(aserbin): find a way to get rid of down_cast;
+//                delegating constructor didn't help
+JITFrameManager::JITFrameManager(std::unique_ptr<CustomMapper> mm)
+    : SectionMemoryManager(std::move(mm)),
+      mm_(down_cast<CustomMapper*>(this->getMemoryMapper())) {
 }
 
-void JITFrameManager::deregisterEHFrames() {
-  return deregisterEHFramesImpl();
+JITFrameManager::~JITFrameManager() {
+  // Release the memory mapping if it's been successfully allocated.
+  if (preallocated_block_.base() != nullptr &&
+      preallocated_block_.allocatedSize() != 0) {
+    if (mm_->releaseMappedMemory(preallocated_block_)) {
+      LOG(WARNING) << "JITFrameManager: could not release pre-allocated memory";
+    }
+  }
 }
 
-void JITFrameManager::deregisterEHFramesImpl() {
-  lock_guard guard(kRegistrationMutex);
-  for (auto it = registered_frames_.rbegin(); it != registered_frames_.rend(); ++it) {
-    __deregister_frame(*it);
+void JITFrameManager::reserveAllocationSpace(uintptr_t code_size,
+                                             uint32_t code_align,
+                                             uintptr_t ro_data_size,
+                                             uint32_t ro_data_align,
+                                             uintptr_t rw_data_size,
+                                             uint32_t rw_data_align) {
+  // This can be called only once per JITFrameManager instance.
+  DCHECK_EQ(nullptr, preallocated_block_.base());
+  DCHECK_EQ(0, preallocated_block_.allocatedSize());
+
+  DCHECK_NE(0, code_align);
+  DCHECK_NE(0, ro_data_align);
+  DCHECK_NE(0, rw_data_align);
+
+  static const size_t page_size = Process::getPageSizeEstimate();
+
+  constexpr auto align_up = [](uintptr_t size, uint32_t alignment) {
+    return alignment * ((size + alignment - 1) / alignment);
+  };
+
+  const auto code_required_size_bytes = align_up(code_size, code_align);
+  const auto ro_data_required_size_bytes = align_up(ro_data_size, ro_data_align);
+  const auto rw_data_required_size_bytes = align_up(rw_data_size, rw_data_align);
+
+  // Extra safety margin: pre-allocate 2 times more, aligning up to the memory
+  // page size for each section type.
+  const size_t required_size_bytes = 2 * (
+      align_up(code_required_size_bytes, page_size) +
+      align_up(ro_data_required_size_bytes, page_size) +
+      align_up(rw_data_required_size_bytes, page_size));
+
+  // Reserve enough memory for the jitted sections to avoid fragmentation and
+  // eliminate the risk of interleaving with any other memory allocations by
+  // the process. Use the returned address as a hint for subsequent smaller
+  // allocations performed by the loader for the segments of the jitted object
+  // being loaded. Those smaller allocations will re-map consecutive chunks of
+  // the pre-allocated MAP_ANON region using exact addresses and MAP_FIXED flag.
+  // This approach guarantees reserving the pre-allocated range exclusively for
+  // this JITFrameManager's subsequent activity, so it's guaranteed to have
+  // no interleaving with any other memory areas allocated by this process.
+  std::error_code ec;
+  preallocated_block_ = mm_->allocateMappedMemory(
+      SectionMemoryManager::AllocationPurpose::RWData,
+      required_size_bytes,
+      nullptr,
+      Memory::MF_READ | Memory::MF_WRITE,
+      ec);
+  if (ec) {
+    LOG(DFATAL) << "JITFrameManager: memory pre-allocation failed";
+  } else {
+    // Providing memory block 'mb_near' with the correct address and 0 size to
+    // the SectionMemoryManager for subsequent memory allocations. This is to
+    // point 'mb_near.base() + mb_near.allocatedSize()' to the start address
+    // of the pre-allocated memory area. So, when RuntimeDyld requests this
+    // JITFrameManager instance to allocate memory for the sections of the
+    // jitted object being loaded, the memory for each section is allocated
+    // strictly within the reserved memory area.
+    DCHECK_NE(nullptr, preallocated_block_.base());
+    DCHECK_NE(0, preallocated_block_.allocatedSize());
+    setNearHintMB(MemoryBlock(preallocated_block_.base(), 0));
+    mm_->setPreAllocatedRange(preallocated_block_);
   }
-  registered_frames_.clear();
 }
 
 } // namespace codegen
diff --git a/src/kudu/codegen/jit_frame_manager.h b/src/kudu/codegen/jit_frame_manager.h
index 7d1099e20..96fdbed27 100644
--- a/src/kudu/codegen/jit_frame_manager.h
+++ b/src/kudu/codegen/jit_frame_manager.h
@@ -18,39 +18,89 @@
 
 #include <cstddef>
 #include <cstdint>
-#include <deque>
-#include <mutex>
+#include <memory>
+#include <system_error>
 
-#include "llvm/ExecutionEngine/SectionMemoryManager.h"
-#include <llvm/ADT/StringRef.h>
+#include <llvm/ExecutionEngine/SectionMemoryManager.h>
+#include <llvm/Support/Memory.h>
 
 namespace kudu {
 namespace codegen {
 
-class JITFrameManager : public llvm::SectionMemoryManager {
+class JITFrameManager final : public llvm::SectionMemoryManager {
  public:
-  JITFrameManager() = default;
-  ~JITFrameManager() override;
 
-  // Override to add space for the 4-byte null terminator.
-  uint8_t* allocateCodeSection(uintptr_t size,
-                               unsigned alignment,
-                               unsigned section_id,
-                               llvm::StringRef section_name) override;
+  // This implementation of the SectionMemoryManager::MemoryMapper interface
+  // is used by SectionMemoryManager to request memory pages from the OS.
+  // For the documentation of the interface, see in-line docs
+  // for LLVM's SectionMemoryManager::MemoryMapper in SectionMemoryManager.h.
+  class CustomMapper final : public SectionMemoryManager::MemoryMapper {
+   public:
+    CustomMapper();
+    ~CustomMapper() override = default;
 
-  void registerEHFrames(uint8_t* addr, uint64_t load_addr, size_t size) override;
-  void deregisterEHFrames() override;
+    llvm::sys::MemoryBlock allocateMappedMemory(
+        SectionMemoryManager::AllocationPurpose /*purpose*/,
+        size_t num_bytes,
+        // NOLINTNEXTLINE(readability-avoid-const-params-in-decls)
+        const llvm::sys::MemoryBlock* const near_block,
+        unsigned protection_flags,
+        std::error_code& ec) override;
 
- private:
-  void deregisterEHFramesImpl();
+    std::error_code releaseMappedMemory(llvm::sys::MemoryBlock& m) override;
+
+    std::error_code protectMappedMemory(const llvm::sys::MemoryBlock& block,
+                                        unsigned flags) override;
+
+    // Store the information on the pre-allocated memory range. This
+    // information is used to check for the range of subsequent memory
+    // allocations when allocateMappedMemory() is called with non-null
+    // 'near_block' argument.
+    void setPreAllocatedRange(const llvm::sys::MemoryBlock& range);
+
+   private:
+    // Whether a valid pre-allocated memory range has been set.
+    bool isPreAllocatedRangeSet() const;
+
+    // The pre-allocated range that all the allocations with non-null
+    // 'near_block' must fit into.
+    llvm::sys::MemoryBlock memory_range_;
 
-  // Mutex to prevent races in libgcc/libunwind. Since it should work across
-  // multiple instances, it's a static one.
-  static std::mutex kRegistrationMutex;
+    // Previously allocated memory block within the pre-allocated area.
+    // It's used to make sure follow-up allocation requests with provided
+    // 'near_hint' memory block don't overlap with already allocated ranges.
+    llvm::sys::MemoryBlock prev_range_;
+
+    // The number of remaining bytes in the pre-allocated memory range.
+    // Each call to the allocateMappedMemory() method with non-null 'near_block'
+    // decrements this by the size of the newly allocated memory block.
+    int64_t memory_range_bytes_left_;
+  };
+
+  explicit JITFrameManager(std::unique_ptr<CustomMapper> mm);
+  ~JITFrameManager() override;
+
+  // This custom memory manager reserves/allocates memory for object sections
+  // to be loaded in advance.
+  bool needsToReserveAllocationSpace() override {
+    return true;
+  }
+
+  // Reserve the memory to provide at least the specified amount of memory for
+  // object sections.
+  void reserveAllocationSpace(uintptr_t code_size,
+                              uint32_t code_align,
+                              uintptr_t ro_data_size,
+                              uint32_t ro_data_align,
+                              uintptr_t rw_data_size,
+                              uint32_t rw_data_align) override;
+ private:
+  // This is a non-owning pointer to the memory mapper object that's passed to
+  // the constructor and then to the base SectionMemoryManager object.
+  CustomMapper* const mm_;
 
-  // Container to keep track of registered frames: this information is necessary
-  // for unregistring all of them.
-  std::deque<uint8_t*> registered_frames_;
+  // The result of memory pre-allocation performed by reserveAllocationSpace().
+  llvm::sys::MemoryBlock preallocated_block_;
 };
 
 } // namespace codegen
diff --git a/src/kudu/codegen/module_builder.cc b/src/kudu/codegen/module_builder.cc
index 3e33840b2..76ad31cfa 100644
--- a/src/kudu/codegen/module_builder.cc
+++ b/src/kudu/codegen/module_builder.cc
@@ -333,7 +333,8 @@ Status ModuleBuilder::Compile(unique_ptr<ExecutionEngine>* out) {
 #endif
   Module* module = module_.get();
   EngineBuilder ebuilder(std::move(module_));
-  ebuilder.setMCJITMemoryManager(std::make_unique<JITFrameManager>());
+  ebuilder.setMCJITMemoryManager(std::make_unique<JITFrameManager>(
+      std::make_unique<JITFrameManager::CustomMapper>()));
   ebuilder.setErrorStr(&str);
   ebuilder.setOptLevel(opt_level);
   ebuilder.setMCPU(llvm::sys::getHostCPUName());
diff --git a/thirdparty/download-thirdparty.sh b/thirdparty/download-thirdparty.sh
index b699dbe02..22b8f345f 100755
--- a/thirdparty/download-thirdparty.sh
+++ b/thirdparty/download-thirdparty.sh
@@ -336,7 +336,7 @@ fetch_and_patch \
  $PYTHON_SOURCE \
  $PYTHON_PATCHLEVEL
 
-LLVM_PATCHLEVEL=8
+LLVM_PATCHLEVEL=10
 fetch_and_patch \
  llvm-${LLVM_VERSION}-iwyu-${IWYU_VERSION}.src.tar.gz \
  $LLVM_SOURCE \
@@ -355,7 +355,9 @@ fetch_and_patch \
  "patch -p1 < $TP_DIR/patches/llvm-is-convertible-00.patch" \
  "patch -p1 < $TP_DIR/patches/llvm-is-convertible-01.patch" \
  "patch -p1 < $TP_DIR/patches/llvm-chrono-duration-00.patch" \
- "patch -p1 < $TP_DIR/patches/llvm-chrono-duration-01.patch"
+ "patch -p1 < $TP_DIR/patches/llvm-chrono-duration-01.patch" \
+ "patch -p1 < $TP_DIR/patches/llvm-section-mm-memory-mapper.patch" \
+ "patch -p1 < $TP_DIR/patches/llvm-section-mm-extra-methods.patch"
 
 LZ4_PATCHLEVEL=0
 fetch_and_patch \
diff --git a/thirdparty/patches/llvm-section-mm-extra-methods.patch b/thirdparty/patches/llvm-section-mm-extra-methods.patch
new file mode 100644
index 000000000..636c4fd42
--- /dev/null
+++ b/thirdparty/patches/llvm-section-mm-extra-methods.patch
@@ -0,0 +1,66 @@
+--- a/lib/ExecutionEngine/SectionMemoryManager.cpp     2026-01-31 09:06:29.326138922 -0800
++++ b/lib/ExecutionEngine/SectionMemoryManager.cpp     2026-01-31 09:13:40.829420553 -0800
+@@ -102,7 +102,7 @@
+   // interleaving.
+   std::error_code ec;
+   sys::MemoryBlock MB = MMapper->allocateMappedMemory(
+-      Purpose, RequiredSize, &MemGroup.Near,
++      Purpose, RequiredSize, &NearHintMB,
+       sys::Memory::MF_READ | sys::Memory::MF_WRITE, ec);
+   if (ec) {
+     // FIXME: Add error propagation to the interface.
+@@ -110,7 +110,7 @@
+   }
+ 
+   // Save this address as the basis for our next request
+-  MemGroup.Near = MB;
++  NearHintMB = MB;
+ 
+   // Remember that we allocated this memory
+   MemGroup.AllocatedMem.push_back(MB);
+@@ -267,4 +267,12 @@
+   }
+ }
+ 
++SectionMemoryManager::SectionMemoryManager(std::unique_ptr<MemoryMapper> OwnedMM)
++    : MMapper(OwnedMM.get()), OwnedMMapper(std::move(OwnedMM)) {
++}
++
++void SectionMemoryManager::setNearHintMB(sys::MemoryBlock MB) {
++  NearHintMB = std::move(MB);
++}
++
+ } // namespace llvm
+--- a/include/llvm/ExecutionEngine/SectionMemoryManager.h      2026-01-31 09:06:17.088074261 -0800
++++ b/include/llvm/ExecutionEngine/SectionMemoryManager.h      2026-01-31 09:12:50.779155873 -0800
+@@ -105,6 +105,11 @@
+   /// memory mapper.  If \p MM is nullptr then a default memory mapper is used
+   /// that directly calls into the operating system.
+   SectionMemoryManager(MemoryMapper *MM = nullptr);
++
++  /// Creates a SectionMemoryManager instance with \p OwnedMM as the associated
++  /// memory mapper, taking ownership of the memory mapper object.
++  SectionMemoryManager(std::unique_ptr<MemoryMapper> OwnedMM);
++
+   SectionMemoryManager(const SectionMemoryManager &) = delete;
+   void operator=(const SectionMemoryManager &) = delete;
+   ~SectionMemoryManager() override;
+@@ -149,6 +154,10 @@
+   /// This method is called from finalizeMemory.
+   virtual void invalidateInstructionCache();
+ 
++  MemoryMapper *getMemoryMapper() { return MMapper; }
++
++  void setNearHintMB(sys::MemoryBlock MB);
++
+ private:
+   struct FreeMemBlock {
+     // The actual block of free memory
+@@ -187,6 +196,7 @@
+   MemoryGroup RODataMem;
+   MemoryMapper *MMapper;
+   std::unique_ptr<MemoryMapper> OwnedMMapper;
++  sys::MemoryBlock NearHintMB;
+ };
+ 
+ } // end namespace llvm
diff --git a/thirdparty/patches/llvm-section-mm-memory-mapper.patch b/thirdparty/patches/llvm-section-mm-memory-mapper.patch
new file mode 100644
index 000000000..3add1f80e
--- /dev/null
+++ b/thirdparty/patches/llvm-section-mm-memory-mapper.patch
@@ -0,0 +1,84 @@
+commit 9ce06411994e9bcaa98c219c7dc34c2824353a81
+Author: Stefan Gränitz <[email protected]>
+Date:   Wed Jul 5 14:28:47 2023 +0200
+
+    [Kaleidoscope] Fix race condition in order-of-destruction between SectionMemoryManager and its MemoryMapper
+    
+    SectionMemoryManager's default memory mapper used to be a global static
+    object. If the SectionMemoryManager itself is a global static
+    object, it might be destroyed after its memory mapper and thus couldn't
+    use it from the destructor.
+    
+    The Kaleidoscope tutorial reproduced this situation with MSVC for a long time.
+    Since 47f5c54f997a59bb2c65 it's triggered with GCC as well. The solution from
+    this patch was proposed in the existing review https://reviews.llvm.org/D107087
+    before, but it didn't move forward.
+    
+    Reviewed By: nikic
+    
+    Differential Revision: https://reviews.llvm.org/D154338
+
+diff --git a/include/llvm/ExecutionEngine/SectionMemoryManager.h b/include/llvm/ExecutionEngine/SectionMemoryManager.h
+index 455efc9f9001..fa1b2355528d 100644
+--- a/include/llvm/ExecutionEngine/SectionMemoryManager.h
++++ b/include/llvm/ExecutionEngine/SectionMemoryManager.h
+@@ -185,7 +185,8 @@ private:
+   MemoryGroup CodeMem;
+   MemoryGroup RWDataMem;
+   MemoryGroup RODataMem;
+-  MemoryMapper &MMapper;
++  MemoryMapper *MMapper;
++  std::unique_ptr<MemoryMapper> OwnedMMapper;
+ };
+ 
+ } // end namespace llvm
+diff --git a/lib/ExecutionEngine/SectionMemoryManager.cpp b/lib/ExecutionEngine/SectionMemoryManager.cpp
+index b23e33039c35..436888730bfb 100644
+--- a/lib/ExecutionEngine/SectionMemoryManager.cpp
++++ b/lib/ExecutionEngine/SectionMemoryManager.cpp
+@@ -101,7 +101,7 @@ uint8_t *SectionMemoryManager::allocateSection(
+   // FIXME: Initialize the Near member for each memory group to avoid
+   // interleaving.
+   std::error_code ec;
+-  sys::MemoryBlock MB = MMapper.allocateMappedMemory(
++  sys::MemoryBlock MB = MMapper->allocateMappedMemory(
+       Purpose, RequiredSize, &MemGroup.Near,
+       sys::Memory::MF_READ | sys::Memory::MF_WRITE, ec);
+   if (ec) {
+@@ -204,7 +204,7 @@ std::error_code
+ SectionMemoryManager::applyMemoryGroupPermissions(MemoryGroup &MemGroup,
+                                                   unsigned Permissions) {
+   for (sys::MemoryBlock &MB : MemGroup.PendingMem)
+-    if (std::error_code EC = MMapper.protectMappedMemory(MB, Permissions))
++    if (std::error_code EC = MMapper->protectMappedMemory(MB, Permissions))
+       return EC;
+ 
+   MemGroup.PendingMem.clear();
+@@ -234,7 +234,7 @@ void SectionMemoryManager::invalidateInstructionCache() {
+ SectionMemoryManager::~SectionMemoryManager() {
+   for (MemoryGroup *Group : {&CodeMem, &RWDataMem, &RODataMem}) {
+     for (sys::MemoryBlock &Block : Group->AllocatedMem)
+-      MMapper.releaseMappedMemory(Block);
++      MMapper->releaseMappedMemory(Block);
+   }
+ }
+ 
+@@ -263,11 +263,14 @@ public:
+     return sys::Memory::releaseMappedMemory(M);
+   }
+ };
+-
+-DefaultMMapper DefaultMMapperInstance;
+ } // namespace
+ 
+-SectionMemoryManager::SectionMemoryManager(MemoryMapper *MM)
+-    : MMapper(MM ? *MM : DefaultMMapperInstance) {}
++SectionMemoryManager::SectionMemoryManager(MemoryMapper *UnownedMM)
++    : MMapper(UnownedMM), OwnedMMapper(nullptr) {
++  if (!MMapper) {
++    OwnedMMapper = std::make_unique<DefaultMMapper>();
++    MMapper = OwnedMMapper.get();
++  }
++}
+ 
+ } // namespace llvm
