Title: [241847] trunk/Source/bmalloc
Revision: 241847
Author: [email protected]
Date: 2019-02-20 16:03:17 -0800 (Wed, 20 Feb 2019)

Log Message

[bmalloc] bmalloc::Heap is allocated even though we use system malloc mode
https://bugs.webkit.org/show_bug.cgi?id=194836

Reviewed by Mark Lam.

Previously, bmalloc::Heap held a DebugHeap pointer and delegated allocation and deallocation to the
debug heap. However, bmalloc::Heap is large, and we would like to avoid initializing it when running
in system malloc mode.

This patch extracts DebugHeap out of bmalloc::Heap and logically places it at the boundary of
bmalloc::api. bmalloc::api delegates allocation and deallocation to DebugHeap when DebugHeap is
enabled; otherwise, it uses bmalloc's usual mechanism. The challenge is keeping bmalloc's fast
path fast.
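The api-boundary check can be sketched as follows. This is a minimal, hypothetical sketch, not the actual bmalloc sources: `isDebugHeapEnabled` here reads an environment variable invented for illustration, and plain `::malloc` stands in for both the debug heap's allocator and bmalloc's fast path. Only `DebugHeap::tryGet` and `debugHeapCache` mirror names from this patch.

```cpp
#include <cstdlib>

// Stand-in for bmalloc's DebugHeap; the real class wraps a malloc zone on Darwin.
class DebugHeap {
public:
    void* malloc(size_t size) { return ::malloc(size); }
    static DebugHeap* tryGet();
};

// Process-wide cache so the "is the debug heap enabled?" check is a single
// load after the first call.
static DebugHeap* debugHeapCache { nullptr };

// Hypothetical stand-in for Environment::isDebugHeapEnabled(); the env var
// name is an assumption for this sketch.
static bool isDebugHeapEnabled()
{
    return std::getenv("USE_DEBUG_HEAP") != nullptr;
}

DebugHeap* DebugHeap::tryGet()
{
    if (debugHeapCache)
        return debugHeapCache;
    if (isDebugHeapEnabled()) {
        static DebugHeap heap; // real code uses PerProcess<DebugHeap>::get()
        debugHeapCache = &heap;
        return debugHeapCache;
    }
    return nullptr;
}

// api-boundary allocation: delegate to the debug heap when enabled,
// otherwise fall through to bmalloc's usual mechanism.
void* apiMalloc(size_t size)
{
    if (DebugHeap* debugHeap = DebugHeap::tryGet())
        return debugHeap->malloc(size);
    return ::malloc(size); // placeholder for the bmalloc fast path
}
```

Because the cached pointer is checked first, repeated calls cost one load and one branch regardless of whether the debug heap is on, which is what keeps the slow-path checks cheap.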

1. For IsoHeaps, we use a technique similar to the one used in Cache. If debug mode is enabled, we
   always take the slow path of IsoHeap allocation and keep IsoTLS::get() returning nullptr. In the
   slow path, we simply fall back to the usual bmalloc::api::tryMalloc implementation. This is
   efficient because bmalloc continues using its fast path.

2. For the other APIs, like freeLargeVirtual, we simply add a DebugHeap check, because these APIs
   themselves take a fair amount of time, so the cost of the check does not matter.
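The IsoHeap arrangement in point 1 can be sketched like this. It is a hypothetical simplification: `s_mallocFallbackState` and `determineMallocFallbackState` mirror names from IsoTLS, while `isDebugHeapEnabled` and the `std::malloc` fallbacks are stand-ins invented for the sketch.

```cpp
#include <cstdlib>
#include <mutex>

enum class MallocFallbackState { Undecided, FallBackToMalloc, DoNotFallBack };

// Mirrors IsoTLS's fallback state: decided once per process.
static MallocFallbackState s_mallocFallbackState { MallocFallbackState::Undecided };

// Hypothetical stand-in for Environment::isDebugHeapEnabled().
static bool isDebugHeapEnabled()
{
    return std::getenv("USE_DEBUG_HEAP") != nullptr;
}

static void determineMallocFallbackState()
{
    static std::once_flag onceFlag;
    std::call_once(onceFlag, [] {
        s_mallocFallbackState = isDebugHeapEnabled()
            ? MallocFallbackState::FallBackToMalloc
            : MallocFallbackState::DoNotFallBack;
    });
}

// Sketch of the IsoHeap slow path: with the debug heap enabled, the fallback
// state routes every IsoHeap allocation through the plain malloc API, so
// IsoTLS::get() can keep returning nullptr and the fast path stays untouched.
void* isoAllocateSlow(size_t size)
{
    determineMallocFallbackState();
    if (s_mallocFallbackState == MallocFallbackState::FallBackToMalloc)
        return std::malloc(size); // stand-in for bmalloc::api::tryMalloc
    // Otherwise the real code installs per-thread IsoTLS entries and
    // allocates from the IsoHeap; plain malloc is a placeholder here.
    return std::malloc(size);
}
```

The key property is that the fast path never consults the debug-heap flag at all: when the debug heap is on, the per-thread TLS is simply never populated, so every allocation naturally lands on the slow path.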

* bmalloc/Allocator.cpp:
(bmalloc::Allocator::reallocateImpl):
* bmalloc/Cache.cpp:
(bmalloc::Cache::tryAllocateSlowCaseNullCache):
(bmalloc::Cache::allocateSlowCaseNullCache):
(bmalloc::Cache::deallocateSlowCaseNullCache):
(bmalloc::Cache::tryReallocateSlowCaseNullCache):
(bmalloc::Cache::reallocateSlowCaseNullCache):
(): Deleted.
(bmalloc::debugHeap): Deleted.
* bmalloc/DebugHeap.cpp:
* bmalloc/DebugHeap.h:
(bmalloc::DebugHeap::tryGet):
* bmalloc/Heap.cpp:
(bmalloc::Heap::Heap):
(bmalloc::Heap::footprint):
(bmalloc::Heap::tryAllocateLarge):
(bmalloc::Heap::deallocateLarge):
* bmalloc/Heap.h:
(bmalloc::Heap::debugHeap): Deleted.
* bmalloc/IsoTLS.cpp:
(bmalloc::IsoTLS::IsoTLS):
(bmalloc::IsoTLS::isUsingDebugHeap): Deleted.
(bmalloc::IsoTLS::debugMalloc): Deleted.
(bmalloc::IsoTLS::debugFree): Deleted.
* bmalloc/IsoTLS.h:
* bmalloc/IsoTLSInlines.h:
(bmalloc::IsoTLS::allocateSlow):
(bmalloc::IsoTLS::deallocateSlow):
* bmalloc/ObjectType.cpp:
(bmalloc::objectType):
* bmalloc/ObjectType.h:
* bmalloc/Scavenger.cpp:
(bmalloc::Scavenger::Scavenger):
* bmalloc/bmalloc.cpp:
(bmalloc::api::tryLargeZeroedMemalignVirtual):
(bmalloc::api::freeLargeVirtual):
(bmalloc::api::scavenge):
(bmalloc::api::isEnabled):
(bmalloc::api::setScavengerThreadQOSClass):
(bmalloc::api::commitAlignedPhysical):
(bmalloc::api::decommitAlignedPhysical):
(bmalloc::api::enableMiniMode):

Modified Paths

Diff

Modified: trunk/Source/bmalloc/ChangeLog (241846 => 241847)


--- trunk/Source/bmalloc/ChangeLog	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/ChangeLog	2019-02-21 00:03:17 UTC (rev 241847)
@@ -1,3 +1,71 @@
+2019-02-19  Yusuke Suzuki  <[email protected]>
+
+        [bmalloc] bmalloc::Heap is allocated even though we use system malloc mode
+        https://bugs.webkit.org/show_bug.cgi?id=194836
+
+        Reviewed by Mark Lam.
+
+        Previously, bmalloc::Heap held a DebugHeap pointer and delegated allocation and deallocation to the
+        debug heap. However, bmalloc::Heap is large, and we would like to avoid initializing it when running
+        in system malloc mode.
+
+        This patch extracts DebugHeap out of bmalloc::Heap and logically places it at the boundary of
+        bmalloc::api. bmalloc::api delegates allocation and deallocation to DebugHeap when DebugHeap is
+        enabled; otherwise, it uses bmalloc's usual mechanism. The challenge is keeping bmalloc's fast
+        path fast.
+
+        1. For IsoHeaps, we use a technique similar to the one used in Cache. If debug mode is enabled, we
+           always take the slow path of IsoHeap allocation and keep IsoTLS::get() returning nullptr. In the
+           slow path, we simply fall back to the usual bmalloc::api::tryMalloc implementation. This is
+           efficient because bmalloc continues using its fast path.
+
+        2. For the other APIs, like freeLargeVirtual, we simply add a DebugHeap check, because these APIs
+           themselves take a fair amount of time, so the cost of the check does not matter.
+
+        * bmalloc/Allocator.cpp:
+        (bmalloc::Allocator::reallocateImpl):
+        * bmalloc/Cache.cpp:
+        (bmalloc::Cache::tryAllocateSlowCaseNullCache):
+        (bmalloc::Cache::allocateSlowCaseNullCache):
+        (bmalloc::Cache::deallocateSlowCaseNullCache):
+        (bmalloc::Cache::tryReallocateSlowCaseNullCache):
+        (bmalloc::Cache::reallocateSlowCaseNullCache):
+        (): Deleted.
+        (bmalloc::debugHeap): Deleted.
+        * bmalloc/DebugHeap.cpp:
+        * bmalloc/DebugHeap.h:
+        (bmalloc::DebugHeap::tryGet):
+        * bmalloc/Heap.cpp:
+        (bmalloc::Heap::Heap):
+        (bmalloc::Heap::footprint):
+        (bmalloc::Heap::tryAllocateLarge):
+        (bmalloc::Heap::deallocateLarge):
+        * bmalloc/Heap.h:
+        (bmalloc::Heap::debugHeap): Deleted.
+        * bmalloc/IsoTLS.cpp:
+        (bmalloc::IsoTLS::IsoTLS):
+        (bmalloc::IsoTLS::isUsingDebugHeap): Deleted.
+        (bmalloc::IsoTLS::debugMalloc): Deleted.
+        (bmalloc::IsoTLS::debugFree): Deleted.
+        * bmalloc/IsoTLS.h:
+        * bmalloc/IsoTLSInlines.h:
+        (bmalloc::IsoTLS::allocateSlow):
+        (bmalloc::IsoTLS::deallocateSlow):
+        * bmalloc/ObjectType.cpp:
+        (bmalloc::objectType):
+        * bmalloc/ObjectType.h:
+        * bmalloc/Scavenger.cpp:
+        (bmalloc::Scavenger::Scavenger):
+        * bmalloc/bmalloc.cpp:
+        (bmalloc::api::tryLargeZeroedMemalignVirtual):
+        (bmalloc::api::freeLargeVirtual):
+        (bmalloc::api::scavenge):
+        (bmalloc::api::isEnabled):
+        (bmalloc::api::setScavengerThreadQOSClass):
+        (bmalloc::api::commitAlignedPhysical):
+        (bmalloc::api::decommitAlignedPhysical):
+        (bmalloc::api::enableMiniMode):
+
 2019-02-20  Andy Estes  <[email protected]>
 
         [Xcode] Add SDKVariant.xcconfig to various Xcode projects

Modified: trunk/Source/bmalloc/bmalloc/Allocator.cpp (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/Allocator.cpp	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/Allocator.cpp	2019-02-21 00:03:17 UTC (rev 241847)
@@ -102,9 +102,9 @@
 void* Allocator::reallocateImpl(void* object, size_t newSize, bool crashOnFailure)
 {
     size_t oldSize = 0;
-    switch (objectType(m_heap.kind(), object)) {
+    switch (objectType(m_heap, object)) {
     case ObjectType::Small: {
-        BASSERT(objectType(m_heap.kind(), nullptr) == ObjectType::Small);
+        BASSERT(objectType(m_heap, nullptr) == ObjectType::Small);
         if (!object)
             break;
 

Modified: trunk/Source/bmalloc/bmalloc/Cache.cpp (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/Cache.cpp	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/Cache.cpp	2019-02-21 00:03:17 UTC (rev 241847)
@@ -32,8 +32,6 @@
 
 namespace bmalloc {
 
-static DebugHeap* debugHeapCache { nullptr };
-
 void Cache::scavenge(HeapKind heapKind)
 {
     PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
@@ -46,17 +44,6 @@
     caches->at(heapKind).deallocator().scavenge();
 }
 
-static BINLINE DebugHeap* debugHeap()
-{
-    if (debugHeapCache)
-        return debugHeapCache;
-    if (PerProcess<Environment>::get()->isDebugHeapEnabled()) {
-        debugHeapCache = PerProcess<DebugHeap>::get();
-        return debugHeapCache;
-    }
-    return nullptr;
-}
-
 Cache::Cache(HeapKind heapKind)
     : m_deallocator(PerProcess<PerHeapKind<Heap>>::get()->at(heapKind))
     , m_allocator(PerProcess<PerHeapKind<Heap>>::get()->at(heapKind), m_deallocator)
@@ -66,9 +53,9 @@
 
 BNO_INLINE void* Cache::tryAllocateSlowCaseNullCache(HeapKind heapKind, size_t size)
 {
-    if (auto* heap = debugHeap()) {
+    if (auto* debugHeap = DebugHeap::tryGet()) {
         constexpr bool crashOnFailure = false;
-        return heap->malloc(size, crashOnFailure);
+        return debugHeap->malloc(size, crashOnFailure);
     }
     return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).allocator().tryAllocate(size);
 }
@@ -75,9 +62,9 @@
 
 BNO_INLINE void* Cache::allocateSlowCaseNullCache(HeapKind heapKind, size_t size)
 {
-    if (auto* heap = debugHeap()) {
+    if (auto* debugHeap = DebugHeap::tryGet()) {
         constexpr bool crashOnFailure = true;
-        return heap->malloc(size, crashOnFailure);
+        return debugHeap->malloc(size, crashOnFailure);
     }
     return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).allocator().allocate(size);
 }
@@ -84,9 +71,9 @@
 
 BNO_INLINE void* Cache::tryAllocateSlowCaseNullCache(HeapKind heapKind, size_t alignment, size_t size)
 {
-    if (auto* heap = debugHeap()) {
+    if (auto* debugHeap = DebugHeap::tryGet()) {
         constexpr bool crashOnFailure = false;
-        return heap->memalign(alignment, size, crashOnFailure);
+        return debugHeap->memalign(alignment, size, crashOnFailure);
     }
     return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).allocator().tryAllocate(alignment, size);
 }
@@ -93,9 +80,9 @@
 
 BNO_INLINE void* Cache::allocateSlowCaseNullCache(HeapKind heapKind, size_t alignment, size_t size)
 {
-    if (auto* heap = debugHeap()) {
+    if (auto* debugHeap = DebugHeap::tryGet()) {
         constexpr bool crashOnFailure = true;
-        return heap->memalign(alignment, size, crashOnFailure);
+        return debugHeap->memalign(alignment, size, crashOnFailure);
     }
     return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).allocator().allocate(alignment, size);
 }
@@ -102,8 +89,8 @@
 
 BNO_INLINE void Cache::deallocateSlowCaseNullCache(HeapKind heapKind, void* object)
 {
-    if (auto* heap = debugHeap()) {
-        heap->free(object);
+    if (auto* debugHeap = DebugHeap::tryGet()) {
+        debugHeap->free(object);
         return;
     }
     PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).deallocator().deallocate(object);
@@ -111,9 +98,9 @@
 
 BNO_INLINE void* Cache::tryReallocateSlowCaseNullCache(HeapKind heapKind, void* object, size_t newSize)
 {
-    if (auto* heap = debugHeap()) {
+    if (auto* debugHeap = DebugHeap::tryGet()) {
         constexpr bool crashOnFailure = false;
-        return heap->realloc(object, newSize, crashOnFailure);
+        return debugHeap->realloc(object, newSize, crashOnFailure);
     }
     return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).allocator().tryReallocate(object, newSize);
 }
@@ -120,9 +107,9 @@
 
 BNO_INLINE void* Cache::reallocateSlowCaseNullCache(HeapKind heapKind, void* object, size_t newSize)
 {
-    if (auto* heap = debugHeap()) {
+    if (auto* debugHeap = DebugHeap::tryGet()) {
         constexpr bool crashOnFailure = true;
-        return heap->realloc(object, newSize, crashOnFailure);
+        return debugHeap->realloc(object, newSize, crashOnFailure);
     }
     return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).allocator().reallocate(object, newSize);
 }

Modified: trunk/Source/bmalloc/bmalloc/DebugHeap.cpp (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/DebugHeap.cpp	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/DebugHeap.cpp	2019-02-21 00:03:17 UTC (rev 241847)
@@ -33,6 +33,8 @@
 #include <thread>
 
 namespace bmalloc {
+
+DebugHeap* debugHeapCache { nullptr };
     
 #if BOS(DARWIN)
 

Modified: trunk/Source/bmalloc/bmalloc/DebugHeap.h (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/DebugHeap.h	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/DebugHeap.h	2019-02-21 00:03:17 UTC (rev 241847)
@@ -25,7 +25,9 @@
 
 #pragma once
 
+#include "Environment.h"
 #include "Mutex.h"
+#include "PerProcess.h"
 #include <mutex>
 #include <unordered_map>
 
@@ -47,6 +49,8 @@
     void* memalignLarge(size_t alignment, size_t);
     void freeLarge(void* base);
 
+    static DebugHeap* tryGet();
+
 private:
 #if BOS(DARWIN)
     malloc_zone_t* m_zone;
@@ -58,4 +62,16 @@
     std::unordered_map<void*, size_t> m_sizeMap;
 };
 
+extern BEXPORT DebugHeap* debugHeapCache;
+BINLINE DebugHeap* DebugHeap::tryGet()
+{
+    if (debugHeapCache)
+        return debugHeapCache;
+    if (PerProcess<Environment>::get()->isDebugHeapEnabled()) {
+        debugHeapCache = PerProcess<DebugHeap>::get();
+        return debugHeapCache;
+    }
+    return nullptr;
+}
+
 } // namespace bmalloc

Modified: trunk/Source/bmalloc/bmalloc/Environment.h (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/Environment.h	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/Environment.h	2019-02-21 00:03:17 UTC (rev 241847)
@@ -32,7 +32,7 @@
 
 class Environment {
 public:
-    Environment(std::lock_guard<Mutex>&);
+    BEXPORT Environment(std::lock_guard<Mutex>&);
     
     bool isDebugHeapEnabled() { return m_isDebugHeapEnabled; }
 

Modified: trunk/Source/bmalloc/bmalloc/Heap.cpp (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/Heap.cpp	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/Heap.cpp	2019-02-21 00:03:17 UTC (rev 241847)
@@ -47,7 +47,6 @@
 Heap::Heap(HeapKind kind, std::lock_guard<Mutex>&)
     : m_kind(kind)
     , m_vmPageSizePhysical(vmPageSizePhysical())
-    , m_debugHeap(nullptr)
 {
     RELEASE_BASSERT(vmPageSizePhysical() >= smallPageSize);
     RELEASE_BASSERT(vmPageSize() >= vmPageSizePhysical());
@@ -55,22 +54,20 @@
     initializeLineMetadata();
     initializePageMetadata();
     
-    if (PerProcess<Environment>::get()->isDebugHeapEnabled())
-        m_debugHeap = PerProcess<DebugHeap>::get();
-    else {
-        Gigacage::ensureGigacage();
+    BASSERT(!PerProcess<Environment>::get()->isDebugHeapEnabled());
+
+    Gigacage::ensureGigacage();
 #if GIGACAGE_ENABLED
-        if (usingGigacage()) {
-            RELEASE_BASSERT(gigacageBasePtr());
-            uint64_t random[2];
-            cryptoRandom(reinterpret_cast<unsigned char*>(random), sizeof(random));
-            size_t size = roundDownToMultipleOf(vmPageSize(), gigacageSize() - (random[0] % Gigacage::maximumCageSizeReductionForSlide));
-            ptrdiff_t offset = roundDownToMultipleOf(vmPageSize(), random[1] % (gigacageSize() - size));
-            void* base = reinterpret_cast<unsigned char*>(gigacageBasePtr()) + offset;
-            m_largeFree.add(LargeRange(base, size, 0, 0));
-        }
+    if (usingGigacage()) {
+        RELEASE_BASSERT(gigacageBasePtr());
+        uint64_t random[2];
+        cryptoRandom(reinterpret_cast<unsigned char*>(random), sizeof(random));
+        size_t size = roundDownToMultipleOf(vmPageSize(), gigacageSize() - (random[0] % Gigacage::maximumCageSizeReductionForSlide));
+        ptrdiff_t offset = roundDownToMultipleOf(vmPageSize(), random[1] % (gigacageSize() - size));
+        void* base = reinterpret_cast<unsigned char*>(gigacageBasePtr()) + offset;
+        m_largeFree.add(LargeRange(base, size, 0, 0));
+    }
 #endif
-    }
     
     m_scavenger = PerProcess<Scavenger>::get();
 }
@@ -153,7 +150,6 @@
 
 size_t Heap::footprint()
 {
-    BASSERT(!m_debugHeap);
     return m_footprint;
 }
 
@@ -555,9 +551,6 @@
 
     BASSERT(isPowerOfTwo(alignment));
     
-    if (m_debugHeap)
-        return m_debugHeap->memalignLarge(alignment, size);
-    
     m_scavenger->didStartGrowing();
     
     size_t roundedSize = size ? roundUpToMultipleOf(largeAlignment, size) : largeAlignment;
@@ -626,9 +619,6 @@
 
 void Heap::deallocateLarge(std::unique_lock<Mutex>&, void* object)
 {
-    if (m_debugHeap)
-        return m_debugHeap->freeLarge(object);
-
     size_t size = m_largeAllocated.remove(object);
     m_largeFree.add(LargeRange(object, size, size, size));
     m_freeableMemory += size;

Modified: trunk/Source/bmalloc/bmalloc/Heap.h (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/Heap.h	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/Heap.h	2019-02-21 00:03:17 UTC (rev 241847)
@@ -63,8 +63,6 @@
     
     HeapKind kind() const { return m_kind; }
     
-    DebugHeap* debugHeap() { return m_debugHeap; }
-
     void allocateSmallBumpRanges(std::unique_lock<Mutex>&, size_t sizeClass,
         BumpAllocator&, BumpRangeCache&, LineCache&);
     void derefSmallLine(std::unique_lock<Mutex>&, Object, LineCache&);
@@ -145,7 +143,6 @@
     Map<Chunk*, ObjectType, ChunkHash> m_objectTypes;
 
     Scavenger* m_scavenger { nullptr };
-    DebugHeap* m_debugHeap { nullptr };
 
     size_t m_footprint { 0 };
     size_t m_freeableMemory { 0 };

Modified: trunk/Source/bmalloc/bmalloc/IsoTLS.cpp (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/IsoTLS.cpp	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/IsoTLS.cpp	2019-02-21 00:03:17 UTC (rev 241847)
@@ -25,7 +25,6 @@
 
 #include "IsoTLS.h"
 
-#include "DebugHeap.h"
 #include "Environment.h"
 #include "Gigacage.h"
 #include "IsoTLSEntryInlines.h"
@@ -55,6 +54,7 @@
 
 IsoTLS::IsoTLS()
 {
+    BASSERT(!PerProcess<Environment>::get()->isDebugHeapEnabled());
 }
 
 IsoTLS* IsoTLS::ensureEntries(unsigned offset)
@@ -174,30 +174,6 @@
         });
 }
 
-bool IsoTLS::isUsingDebugHeap()
-{
-    return PerProcess<Environment>::get()->isDebugHeapEnabled();
-}
-
-auto IsoTLS::debugMalloc(size_t size) -> DebugMallocResult
-{
-    DebugMallocResult result;
-    if ((result.usingDebugHeap = isUsingDebugHeap())) {
-        constexpr bool crashOnFailure = true;
-        result.ptr = PerProcess<DebugHeap>::get()->malloc(size, crashOnFailure);
-    }
-    return result;
-}
-
-bool IsoTLS::debugFree(void* p)
-{
-    if (isUsingDebugHeap()) {
-        PerProcess<DebugHeap>::get()->free(p);
-        return true;
-    }
-    return false;
-}
-
 void IsoTLS::determineMallocFallbackState()
 {
     static std::once_flag onceFlag;

Modified: trunk/Source/bmalloc/bmalloc/IsoTLS.h (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/IsoTLS.h	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/IsoTLS.h	2019-02-21 00:03:17 UTC (rev 241847)
@@ -103,16 +103,6 @@
     
     BEXPORT static void determineMallocFallbackState();
     
-    static bool isUsingDebugHeap();
-    
-    struct DebugMallocResult {
-        void* ptr { nullptr };
-        bool usingDebugHeap { false };
-    };
-    
-    BEXPORT static DebugMallocResult debugMalloc(size_t);
-    BEXPORT static bool debugFree(void*);
-    
     IsoTLSEntry* m_lastEntry { nullptr };
     unsigned m_extent { 0 };
     unsigned m_capacity { 0 };

Modified: trunk/Source/bmalloc/bmalloc/IsoTLSInlines.h (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/IsoTLSInlines.h	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/IsoTLSInlines.h	2019-02-21 00:03:17 UTC (rev 241847)
@@ -25,6 +25,7 @@
 
 #pragma once
 
+#include "Environment.h"
 #include "IsoHeapImpl.h"
 #include "IsoTLS.h"
 #include "bmalloc.h"
@@ -94,9 +95,8 @@
         break;
     }
     
-    auto debugMallocResult = debugMalloc(Config::objectSize);
-    if (debugMallocResult.usingDebugHeap)
-        return debugMallocResult.ptr;
+    // If debug heap is enabled, s_mallocFallbackState becomes MallocFallbackState::FallBackToMalloc.
+    BASSERT(!PerProcess<Environment>::get()->isDebugHeapEnabled());
     
     IsoTLS* tls = ensureHeapAndEntries(handle);
     
@@ -138,8 +138,8 @@
         break;
     }
     
-    if (debugFree(p))
-        return;
+    // If debug heap is enabled, s_mallocFallbackState becomes MallocFallbackState::FallBackToMalloc.
+    BASSERT(!PerProcess<Environment>::get()->isDebugHeapEnabled());
     
     RELEASE_BASSERT(handle.isInitialized());
     

Modified: trunk/Source/bmalloc/bmalloc/ObjectType.cpp (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/ObjectType.cpp	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/ObjectType.cpp	2019-02-21 00:03:17 UTC (rev 241847)
@@ -32,7 +32,7 @@
 
 namespace bmalloc {
 
-ObjectType objectType(HeapKind kind, void* object)
+ObjectType objectType(Heap& heap, void* object)
 {
     if (mightBeLarge(object)) {
         if (!object)
@@ -39,7 +39,7 @@
             return ObjectType::Small;
 
         std::unique_lock<Mutex> lock(Heap::mutex());
-        if (PerProcess<PerHeapKind<Heap>>::getFastCase()->at(kind).isLarge(lock, object))
+        if (heap.isLarge(lock, object))
             return ObjectType::Large;
     }
     

Modified: trunk/Source/bmalloc/bmalloc/ObjectType.h (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/ObjectType.h	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/ObjectType.h	2019-02-21 00:03:17 UTC (rev 241847)
@@ -32,9 +32,11 @@
 
 namespace bmalloc {
 
+class Heap;
+
 enum class ObjectType : unsigned char { Small, Large };
 
-ObjectType objectType(HeapKind, void*);
+ObjectType objectType(Heap&, void*);
 
 inline bool mightBeLarge(void* object)
 {

Modified: trunk/Source/bmalloc/bmalloc/Scavenger.cpp (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/Scavenger.cpp	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/Scavenger.cpp	2019-02-21 00:03:17 UTC (rev 241847)
@@ -67,8 +67,7 @@
 
 Scavenger::Scavenger(std::lock_guard<Mutex>&)
 {
-    if (PerProcess<Environment>::get()->isDebugHeapEnabled())
-        return;
+    BASSERT(!PerProcess<Environment>::get()->isDebugHeapEnabled());
 
 #if BOS(DARWIN)
     auto queue = dispatch_queue_create("WebKit Malloc Memory Pressure Handler", DISPATCH_QUEUE_SERIAL);

Modified: trunk/Source/bmalloc/bmalloc/bmalloc.cpp (241846 => 241847)


--- trunk/Source/bmalloc/bmalloc/bmalloc.cpp	2019-02-20 23:34:50 UTC (rev 241846)
+++ trunk/Source/bmalloc/bmalloc/bmalloc.cpp	2019-02-21 00:03:17 UTC (rev 241847)
@@ -25,6 +25,7 @@
 
 #include "bmalloc.h"
 
+#include "DebugHeap.h"
 #include "Environment.h"
 #include "PerProcess.h"
 
@@ -50,11 +51,13 @@
     RELEASE_BASSERT(alignment >= requiredAlignment);
     RELEASE_BASSERT(size >= requestedSize);
 
-    kind = mapToActiveHeapKind(kind);
-    Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
+    void* result;
+    if (auto* debugHeap = DebugHeap::tryGet())
+        result = debugHeap->memalignLarge(alignment, size);
+    else {
+        kind = mapToActiveHeapKind(kind);
+        Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
 
-    void* result;
-    {
         std::unique_lock<Mutex> lock(Heap::mutex());
         result = heap.tryAllocateLarge(lock, alignment, size);
         if (result) {
@@ -73,6 +76,10 @@
 
 void freeLargeVirtual(void* object, size_t size, HeapKind kind)
 {
+    if (auto* debugHeap = DebugHeap::tryGet()) {
+        debugHeap->freeLarge(object);
+        return;
+    }
     kind = mapToActiveHeapKind(kind);
     Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
     std::unique_lock<Mutex> lock(Heap::mutex());
@@ -85,7 +92,8 @@
 {
     scavengeThisThread();
 
-    PerProcess<Scavenger>::get()->scavenge();
+    if (!DebugHeap::tryGet())
+        PerProcess<Scavenger>::get()->scavenge();
 }
 
 bool isEnabled(HeapKind)
@@ -96,6 +104,8 @@
 #if BOS(DARWIN)
 void setScavengerThreadQOSClass(qos_class_t overrideClass)
 {
+    if (DebugHeap::tryGet())
+        return;
     std::unique_lock<Mutex> lock(Heap::mutex());
     PerProcess<Scavenger>::get()->setScavengerThreadQOSClass(overrideClass);
 }
@@ -105,8 +115,8 @@
 {
     vmValidatePhysical(object, size);
     vmAllocatePhysicalPages(object, size);
-    Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
-    heap.externalCommit(object, size);
+    if (!DebugHeap::tryGet())
+        PerProcess<PerHeapKind<Heap>>::get()->at(kind).externalCommit(object, size);
 }
 
 void decommitAlignedPhysical(void* object, size_t size, HeapKind kind)
@@ -113,13 +123,14 @@
 {
     vmValidatePhysical(object, size);
     vmDeallocatePhysicalPages(object, size);
-    Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
-    heap.externalDecommit(object, size);
+    if (!DebugHeap::tryGet())
+        PerProcess<PerHeapKind<Heap>>::get()->at(kind).externalDecommit(object, size);
 }
 
 void enableMiniMode()
 {
-    PerProcess<Scavenger>::get()->enableMiniMode();
+    if (!DebugHeap::tryGet())
+        PerProcess<Scavenger>::get()->enableMiniMode();
 }
 
 } } // namespace bmalloc::api