Title: [276266] trunk/Source/bmalloc
Revision: 276266
Author: [email protected]
Date: 2021-04-19 11:52:25 -0700 (Mon, 19 Apr 2021)

Log Message

[bmalloc] Enable Adaptive Scavenger for Mac
https://bugs.webkit.org/show_bug.cgi?id=224706

Reviewed by Filip Pizlo.

Enabled the adaptive scavenger code paths for macOS.
The partial scavenging paths were originally kept for macOS because of a
regression on power tests.  To alleviate that power regression, this patch
splits out the adaptive scavenger parameters with macOS-specific values.

The parameters are:
  The multiplier used to compute the next scavenging wait time from the
    time spent on the prior scavenge.
  Minimum wait time between scavenges.
  Maximum wait time between scavenges.

The values in the current code are:
  Wait time multiplier: 150
  Minimum wait time: 100ms
  Maximum wait time: 10,000ms (10 seconds)

The proposed values for macOS, determined through empirical testing, are:
  Wait time multiplier: 300
  Minimum wait time: 750ms
  Maximum wait time: 20,000ms (20 seconds)
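
For context, the adaptive scavenger computes its next wait time by scaling the
time spent in the prior scavenge by the multiplier, then clamping the result to
the minimum/maximum bounds. A minimal standalone sketch of that computation,
using the new macOS values (the constant names mirror those added to
Scavenger.h in the diff below):

    #include <algorithm>
    #include <chrono>
    #include <cstdio>

    // Sketch of the wait-time computation in Scavenger::threadRunLoop,
    // with the macOS parameter values introduced by this patch.
    static constexpr unsigned newWaitMultiplier = 300;
    static constexpr std::chrono::milliseconds minWaitTime { 750 };
    static constexpr std::chrono::milliseconds maxWaitTime { 20000 };

    std::chrono::milliseconds nextWaitTime(std::chrono::duration<double> timeSpentScavenging)
    {
        timeSpentScavenging *= newWaitMultiplier;
        auto newWaitTime = std::chrono::duration_cast<std::chrono::milliseconds>(timeSpentScavenging);
        return std::min(std::max(newWaitTime, minWaitTime), maxWaitTime);
    }

    int main()
    {
        // A 5ms scavenge yields 5 * 300 = 1500ms until the next pass.
        printf("%lldms\n", static_cast<long long>(nextWaitTime(std::chrono::milliseconds(5)).count()));
    }

With these values, any scavenge shorter than 2.5ms clamps to the 750ms floor,
and anything longer than roughly 67ms clamps to the 20 second ceiling.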
        
When tested on various Mac variants, this change:
 * Provides a 3-5% reduction in memory use on RAMification.
 * Is neutral on JetStream2.
 * Is neutral to a slight regression on Speedometer2, though there is some
   variability in those results.

Since macOS was the only platform still using the partial scavenging code
paths, those paths have been deleted.
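
With partial scavenging gone, the adaptive path instead tracks whether each
free page or large range was used since the last scavenge: a "hot" entry is
skipped once, its flag cleared and a deferred decommit counted, so the
scavenger re-runs soon rather than decommitting memory still in active use.
A simplified sketch of that loop, assuming a stand-in FreePage type (the real
logic is in the Heap::scavenge hunk below):

    #include <cstddef>

    // Stand-in type for illustration; the real code walks SmallPages and
    // LargeRanges carrying the same flags.
    struct FreePage {
        bool hasPhysicalPages;
        bool usedSinceLastScavenge;
    };

    // Counts the pages deferred this pass. Per the Scavenger.cpp diff, the
    // caller sets the scavenger state to RunSoon when deferredDecommits is
    // nonzero, so skipped pages are revisited on the next pass.
    template<typename DecommitFunc>
    void scavengePages(FreePage* pages, std::size_t count, std::size_t& deferredDecommits, DecommitFunc decommit)
    {
        for (std::size_t i = 0; i < count; ++i) {
            FreePage& page = pages[i];
            if (!page.hasPhysicalPages)
                continue;
            if (page.usedSinceLastScavenge) {
                // Hot page: skip it this pass, but remember to come back.
                page.usedSinceLastScavenge = false;
                deferredDecommits++;
                continue;
            }
            decommit(page);
        }
    }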

* bmalloc/BPlatform.h:
* bmalloc/Heap.cpp:
(bmalloc::Heap::scavenge):
(bmalloc::Heap::allocateSmallChunk):
(bmalloc::Heap::allocateSmallPage):
(bmalloc::Heap::allocateLarge):
(bmalloc::Heap::scavengeToHighWatermark): Deleted.
* bmalloc/Heap.h:
* bmalloc/IsoDirectory.h:
* bmalloc/IsoDirectoryInlines.h:
(bmalloc::passedNumPages>::takeFirstEligible):
(bmalloc::passedNumPages>::scavenge):
(bmalloc::passedNumPages>::scavengeToHighWatermark): Deleted.
* bmalloc/IsoHeapImpl.h:
* bmalloc/IsoHeapImplInlines.h:
(bmalloc::IsoHeapImpl<Config>::scavengeToHighWatermark): Deleted.
* bmalloc/LargeMap.cpp:
(bmalloc::LargeMap::add):
* bmalloc/LargeRange.h:
(bmalloc::LargeRange::LargeRange):
(bmalloc::LargeRange::setUsedSinceLastScavenge):
(bmalloc::merge):
(): Deleted.
* bmalloc/Scavenger.cpp:
(bmalloc::Scavenger::Scavenger):
(bmalloc::Scavenger::scavenge):
(bmalloc::Scavenger::threadRunLoop):
(bmalloc::Scavenger::timeSinceLastPartialScavenge): Deleted.
(bmalloc::Scavenger::partialScavenge): Deleted.
* bmalloc/Scavenger.h:
* bmalloc/SmallPage.h:
(bmalloc::SmallPage::setUsedSinceLastScavenge):

Modified Paths

Diff

Modified: trunk/Source/bmalloc/ChangeLog (276265 => 276266)


--- trunk/Source/bmalloc/ChangeLog	2021-04-19 18:21:54 UTC (rev 276265)
+++ trunk/Source/bmalloc/ChangeLog	2021-04-19 18:52:25 UTC (rev 276266)
@@ -1,3 +1,73 @@
+2021-04-19  Michael Saboff  <[email protected]>
+
+        [bmalloc] Enable Adaptive Scavenger for Mac
+        https://bugs.webkit.org/show_bug.cgi?id=224706
+
+        Reviewed by Filip Pizlo.
+
+        Enabled the adaptive scavenger code paths for macOS.
+        The partial scavenging paths were originally kept for macOS because of a
+        regression on power tests.  To alleviate that power regression, this patch
+        splits out the adaptive scavenger parameters with macOS-specific values.
+
+        The parameters are:
+          The multiplier used to compute the next scavenging wait time from the
+            time spent on the prior scavenge.
+          Minimum wait time between scavenges.
+          Maximum wait time between scavenges.
+
+        The values in the current code are:
+          Wait time multiplier: 150
+          Minimum wait time: 100ms
+          Maximum wait time: 10,000ms (10 seconds)
+
+        The proposed values for macOS, determined through empirical testing, are:
+          Wait time multiplier: 300
+          Minimum wait time: 750ms
+          Maximum wait time: 20,000ms (20 seconds)
+        
+        When tested on various Mac variants, this change:
+         * Provides a 3-5% reduction in memory use on RAMification.
+         * Is neutral on JetStream2.
+         * Is neutral to a slight regression on Speedometer2, though there is some
+           variability in those results.
+
+        Since macOS was the only platform still using the partial scavenging code
+        paths, those paths have been deleted.
+
+        * bmalloc/BPlatform.h:
+        * bmalloc/Heap.cpp:
+        (bmalloc::Heap::scavenge):
+        (bmalloc::Heap::allocateSmallChunk):
+        (bmalloc::Heap::allocateSmallPage):
+        (bmalloc::Heap::allocateLarge):
+        (bmalloc::Heap::scavengeToHighWatermark): Deleted.
+        * bmalloc/Heap.h:
+        * bmalloc/IsoDirectory.h:
+        * bmalloc/IsoDirectoryInlines.h:
+        (bmalloc::passedNumPages>::takeFirstEligible):
+        (bmalloc::passedNumPages>::scavenge):
+        (bmalloc::passedNumPages>::scavengeToHighWatermark): Deleted.
+        * bmalloc/IsoHeapImpl.h:
+        * bmalloc/IsoHeapImplInlines.h:
+        (bmalloc::IsoHeapImpl<Config>::scavengeToHighWatermark): Deleted.
+        * bmalloc/LargeMap.cpp:
+        (bmalloc::LargeMap::add):
+        * bmalloc/LargeRange.h:
+        (bmalloc::LargeRange::LargeRange):
+        (bmalloc::LargeRange::setUsedSinceLastScavenge):
+        (bmalloc::merge):
+        (): Deleted.
+        * bmalloc/Scavenger.cpp:
+        (bmalloc::Scavenger::Scavenger):
+        (bmalloc::Scavenger::scavenge):
+        (bmalloc::Scavenger::threadRunLoop):
+        (bmalloc::Scavenger::timeSinceLastPartialScavenge): Deleted.
+        (bmalloc::Scavenger::partialScavenge): Deleted.
+        * bmalloc/Scavenger.h:
+        * bmalloc/SmallPage.h:
+        (bmalloc::SmallPage::setUsedSinceLastScavenge):
+
 2021-04-19  Kimmo Kinnunen  <[email protected]>
 
         Enable -Wthread-safety, add attributes to custom lock classes, and provide macros to declare guards

Modified: trunk/Source/bmalloc/bmalloc/BPlatform.h (276265 => 276266)


--- trunk/Source/bmalloc/bmalloc/BPlatform.h	2021-04-19 18:21:54 UTC (rev 276265)
+++ trunk/Source/bmalloc/bmalloc/BPlatform.h	2021-04-19 18:52:25 UTC (rev 276266)
@@ -309,12 +309,6 @@
 /* This is used for debugging when hacking on how bmalloc calculates its physical footprint. */
 #define ENABLE_PHYSICAL_PAGE_MAP 0
 
-#if BPLATFORM(MAC)
-#define BUSE_PARTIAL_SCAVENGE 1
-#else
-#define BUSE_PARTIAL_SCAVENGE 0
-#endif
-
 #if !defined(BUSE_PRECOMPUTED_CONSTANTS_VMPAGE4K)
 #define BUSE_PRECOMPUTED_CONSTANTS_VMPAGE4K 1
 #endif

Modified: trunk/Source/bmalloc/bmalloc/Heap.cpp (276265 => 276266)


--- trunk/Source/bmalloc/bmalloc/Heap.cpp	2021-04-19 18:21:54 UTC (rev 276265)
+++ trunk/Source/bmalloc/bmalloc/Heap.cpp	2021-04-19 18:52:25 UTC (rev 276266)
@@ -119,11 +119,7 @@
 #endif
 }
 
-#if BUSE(PARTIAL_SCAVENGE)
-void Heap::scavenge(UniqueLockHolder& lock, BulkDecommit& decommitter)
-#else
 void Heap::scavenge(UniqueLockHolder& lock, BulkDecommit& decommitter, size_t& deferredDecommits)
-#endif
 {
     for (auto& list : m_freePages) {
         for (auto* chunk : list) {
@@ -130,13 +126,11 @@
             for (auto* page : chunk->freePages()) {
                 if (!page->hasPhysicalPages())
                     continue;
-#if !BUSE(PARTIAL_SCAVENGE)
                 if (page->usedSinceLastScavenge()) {
                     page->clearUsedSinceLastScavenge();
                     deferredDecommits++;
                     continue;
                 }
-#endif
 
                 size_t pageSize = bmalloc::pageSize(&list - &m_freePages[0]);
                 size_t decommitSize = physicalPageSizeSloppy(page->begin()->begin(), pageSize);
@@ -157,37 +151,15 @@
     }
 
     for (LargeRange& range : m_largeFree) {
-#if BUSE(PARTIAL_SCAVENGE)
-        m_highWatermark = std::min(m_highWatermark, static_cast<void*>(range.begin()));
-#else
         if (range.usedSinceLastScavenge()) {
             range.clearUsedSinceLastScavenge();
             deferredDecommits++;
             continue;
         }
-#endif
         decommitLargeRange(lock, range, decommitter);
     }
-
-#if BUSE(PARTIAL_SCAVENGE)
-    m_freeableMemory = 0;
-#endif
 }
 
-#if BUSE(PARTIAL_SCAVENGE)
-void Heap::scavengeToHighWatermark(UniqueLockHolder& lock, BulkDecommit& decommitter)
-{
-    void* newHighWaterMark = nullptr;
-    for (LargeRange& range : m_largeFree) {
-        if (range.begin() <= m_highWatermark)
-            newHighWaterMark = std::min(newHighWaterMark, static_cast<void*>(range.begin()));
-        else
-            decommitLargeRange(lock, range, decommitter);
-    }
-    m_highWatermark = newHighWaterMark;
-}
-#endif
-
 void Heap::deallocateLineCache(UniqueLockHolder&, LineCache& lineCache)
 {
     for (auto& list : lineCache) {
@@ -221,9 +193,7 @@
         size_t accountedInFreeable = 0;
         forEachPage(chunk, pageSize, [&](SmallPage* page) {
             page->setHasPhysicalPages(true);
-#if !BUSE(PARTIAL_SCAVENGE)
             page->setUsedSinceLastScavenge();
-#endif
             page->setHasFreeLines(lock, true);
             chunk->freePages().push(page);
             accountedInFreeable += pageSize;
@@ -314,9 +284,7 @@
             m_physicalPageMap.commit(page->begin()->begin(), pageSize);
 #endif
         }
-#if !BUSE(PARTIAL_SCAVENGE)
         page->setUsedSinceLastScavenge();
-#endif
 
         return page;
     }();
@@ -590,9 +558,6 @@
     m_freeableMemory -= range.totalPhysicalSize();
 
     void* result = splitAndAllocate(lock, range, alignment, size).begin();
-#if BUSE(PARTIAL_SCAVENGE)
-    m_highWatermark = std::max(m_highWatermark, result);
-#endif
     ASSERT_OR_RETURN_ON_FAILURE(result);
     return result;
 

Modified: trunk/Source/bmalloc/bmalloc/Heap.h (276265 => 276266)


--- trunk/Source/bmalloc/bmalloc/Heap.h	2021-04-19 18:21:54 UTC (rev 276265)
+++ trunk/Source/bmalloc/bmalloc/Heap.h	2021-04-19 18:52:25 UTC (rev 276266)
@@ -74,12 +74,7 @@
     size_t largeSize(UniqueLockHolder&, void*);
     void shrinkLarge(UniqueLockHolder&, const Range&, size_t);
 
-#if BUSE(PARTIAL_SCAVENGE)
-    void scavengeToHighWatermark(UniqueLockHolder&, BulkDecommit&);
-    void scavenge(UniqueLockHolder&, BulkDecommit&);
-#else
     void scavenge(UniqueLockHolder&, BulkDecommit&, size_t& deferredDecommits);
-#endif
     void scavenge(UniqueLockHolder&, BulkDecommit&, size_t& freed, size_t goal);
 
     size_t freeableMemory(UniqueLockHolder&);
@@ -147,10 +142,6 @@
 #if ENABLE_PHYSICAL_PAGE_MAP 
     PhysicalPageMap m_physicalPageMap;
 #endif
-    
-#if BUSE(PARTIAL_SCAVENGE)
-    void* m_highWatermark { nullptr };
-#endif
 };
 
 inline void Heap::allocateSmallBumpRanges(

Modified: trunk/Source/bmalloc/bmalloc/IsoDirectory.h (276265 => 276266)


--- trunk/Source/bmalloc/bmalloc/IsoDirectory.h	2021-04-19 18:21:54 UTC (rev 276265)
+++ trunk/Source/bmalloc/bmalloc/IsoDirectory.h	2021-04-19 18:52:25 UTC (rev 276266)
@@ -76,9 +76,6 @@
     // Iterate over all empty and committed pages, and put them into the vector. This also records the
     // pages as being decommitted. It's the caller's job to do the actual decommitting.
     void scavenge(const LockHolder&, Vector<DeferredDecommit>&);
-#if BUSE(PARTIAL_SCAVENGE)
-    void scavengeToHighWatermark(const LockHolder&, Vector<DeferredDecommit>&);
-#endif
 
     template<typename Func>
     void forEachCommittedPage(const LockHolder&, const Func&);
@@ -93,9 +90,6 @@
     Bits<numPages> m_empty;
     Bits<numPages> m_committed;
     unsigned m_firstEligibleOrDecommitted { 0 };
-#if BUSE(PARTIAL_SCAVENGE)
-    unsigned m_highWatermark { 0 };
-#endif
 };
 
 } // namespace bmalloc

Modified: trunk/Source/bmalloc/bmalloc/IsoDirectoryInlines.h (276265 => 276266)


--- trunk/Source/bmalloc/bmalloc/IsoDirectoryInlines.h	2021-04-19 18:21:54 UTC (rev 276265)
+++ trunk/Source/bmalloc/bmalloc/IsoDirectoryInlines.h	2021-04-19 18:52:25 UTC (rev 276266)
@@ -50,10 +50,6 @@
     if (pageIndex >= numPages)
         return EligibilityKind::Full;
 
-#if BUSE(PARTIAL_SCAVENGE)
-    m_highWatermark = std::max(pageIndex, m_highWatermark);
-#endif
-
     Scavenger& scavenger = *Scavenger::get();
     scavenger.didStartGrowing();
     
@@ -146,25 +142,9 @@
         [&] (size_t index) {
             scavengePage(locker, index, decommits);
         });
-#if BUSE(PARTIAL_SCAVENGE)
-    m_highWatermark = 0;
-#endif
 }
 
-#if BUSE(PARTIAL_SCAVENGE)
 template<typename Config, unsigned passedNumPages>
-void IsoDirectory<Config, passedNumPages>::scavengeToHighWatermark(const LockHolder& locker, Vector<DeferredDecommit>& decommits)
-{
-    (m_empty & m_committed).forEachSetBit(
-        [&] (size_t index) {
-            if (index > m_highWatermark)
-                scavengePage(locker, index, decommits);
-        });
-    m_highWatermark = 0;
-}
-#endif
-
-template<typename Config, unsigned passedNumPages>
 template<typename Func>
 void IsoDirectory<Config, passedNumPages>::forEachCommittedPage(const LockHolder&, const Func& func)
 {

Modified: trunk/Source/bmalloc/bmalloc/IsoHeapImpl.h (276265 => 276266)


--- trunk/Source/bmalloc/bmalloc/IsoHeapImpl.h	2021-04-19 18:21:54 UTC (rev 276265)
+++ trunk/Source/bmalloc/bmalloc/IsoHeapImpl.h	2021-04-19 18:52:25 UTC (rev 276266)
@@ -49,9 +49,6 @@
     virtual ~IsoHeapImplBase();
     
     virtual void scavenge(Vector<DeferredDecommit>&) = 0;
-#if BUSE(PARTIAL_SCAVENGE)
-    virtual void scavengeToHighWatermark(Vector<DeferredDecommit>&) = 0;
-#endif
     
     void scavengeNow();
     static void finishScavenging(Vector<DeferredDecommit>&);
@@ -112,9 +109,6 @@
     void didBecomeEligibleOrDecommited(const LockHolder&, IsoDirectory<Config, IsoDirectoryPage<Config>::numPages>*);
     
     void scavenge(Vector<DeferredDecommit>&) override;
-#if BUSE(PARTIAL_SCAVENGE)
-    void scavengeToHighWatermark(Vector<DeferredDecommit>&) override;
-#endif
 
     unsigned allocatorOffset();
     unsigned deallocatorOffset();

Modified: trunk/Source/bmalloc/bmalloc/IsoHeapImplInlines.h (276265 => 276266)


--- trunk/Source/bmalloc/bmalloc/IsoHeapImplInlines.h	2021-04-19 18:21:54 UTC (rev 276265)
+++ trunk/Source/bmalloc/bmalloc/IsoHeapImplInlines.h	2021-04-19 18:52:25 UTC (rev 276266)
@@ -121,21 +121,6 @@
     m_directoryHighWatermark = 0;
 }
 
-#if BUSE(PARTIAL_SCAVENGE)
-template<typename Config>
-void IsoHeapImpl<Config>::scavengeToHighWatermark(Vector<DeferredDecommit>& decommits)
-{
-    LockHolder locker(this->lock);
-    if (!m_directoryHighWatermark)
-        m_inlineDirectory.scavengeToHighWatermark(locker, decommits);
-    for (IsoDirectoryPage<Config>* page = m_headDirectory.get(); page; page = page->next) {
-        if (page->index() >= m_directoryHighWatermark)
-            page->payload.scavengeToHighWatermark(locker, decommits);
-    }
-    m_directoryHighWatermark = 0;
-}
-#endif
-
 inline size_t IsoHeapImplBase::freeableMemory()
 {
     return m_freeableMemory;

Modified: trunk/Source/bmalloc/bmalloc/LargeMap.cpp (276265 => 276266)


--- trunk/Source/bmalloc/bmalloc/LargeMap.cpp	2021-04-19 18:21:54 UTC (rev 276265)
+++ trunk/Source/bmalloc/bmalloc/LargeMap.cpp	2021-04-19 18:52:25 UTC (rev 276266)
@@ -76,9 +76,7 @@
         merged = merge(merged, m_free.pop(i--));
     }
 
-#if !BUSE(PARTIAL_SCAVENGE)
     merged.setUsedSinceLastScavenge();
-#endif
     m_free.push(merged);
 }
 

Modified: trunk/Source/bmalloc/bmalloc/LargeRange.h (276265 => 276266)


--- trunk/Source/bmalloc/bmalloc/LargeRange.h	2021-04-19 18:21:54 UTC (rev 276265)
+++ trunk/Source/bmalloc/bmalloc/LargeRange.h	2021-04-19 18:52:25 UTC (rev 276266)
@@ -37,10 +37,8 @@
         : Range()
         , m_startPhysicalSize(0)
         , m_totalPhysicalSize(0)
-#if !BUSE(PARTIAL_SCAVENGE)
         , m_isEligible(true)
         , m_usedSinceLastScavenge(false)
-#endif
     {
     }
 
@@ -48,25 +46,13 @@
         : Range(other)
         , m_startPhysicalSize(startPhysicalSize)
         , m_totalPhysicalSize(totalPhysicalSize)
-#if !BUSE(PARTIAL_SCAVENGE)
         , m_isEligible(true)
         , m_usedSinceLastScavenge(false)
-#endif
     {
         BASSERT(this->size() >= this->totalPhysicalSize());
         BASSERT(this->totalPhysicalSize() >= this->startPhysicalSize());
     }
 
-#if BUSE(PARTIAL_SCAVENGE)
-    LargeRange(void* begin, size_t size, size_t startPhysicalSize, size_t totalPhysicalSize)
-        : Range(begin, size)
-        , m_startPhysicalSize(startPhysicalSize)
-        , m_totalPhysicalSize(totalPhysicalSize)
-    {
-        BASSERT(this->size() >= this->totalPhysicalSize());
-        BASSERT(this->totalPhysicalSize() >= this->startPhysicalSize());
-    }
-#else
     LargeRange(void* begin, size_t size, size_t startPhysicalSize, size_t totalPhysicalSize, bool usedSinceLastScavenge = false)
         : Range(begin, size)
         , m_startPhysicalSize(startPhysicalSize)
@@ -77,7 +63,6 @@
         BASSERT(this->size() >= this->totalPhysicalSize());
         BASSERT(this->totalPhysicalSize() >= this->startPhysicalSize());
     }
-#endif
 
     // Returns a lower bound on physical size at the start of the range. Ranges that
     // span non-physical fragments use this number to remember the physical size of
@@ -104,11 +89,9 @@
     void setEligible(bool eligible) { m_isEligible = eligible; }
     bool isEligibile() const { return m_isEligible; }
 
-#if !BUSE(PARTIAL_SCAVENGE)
     bool usedSinceLastScavenge() const { return m_usedSinceLastScavenge; }
     void clearUsedSinceLastScavenge() { m_usedSinceLastScavenge = false; }
     void setUsedSinceLastScavenge() { m_usedSinceLastScavenge = true; }
-#endif
 
     bool operator<(const void* other) const { return begin() < other; }
     bool operator<(const LargeRange& other) const { return begin() < other.begin(); }
@@ -116,12 +99,8 @@
 private:
     size_t m_startPhysicalSize;
     size_t m_totalPhysicalSize;
-#if BUSE(PARTIAL_SCAVENGE)
-    bool m_isEligible { true };
-#else
     unsigned m_isEligible: 1;
     unsigned m_usedSinceLastScavenge: 1;
-#endif
 };
 
 inline bool canMerge(const LargeRange& a, const LargeRange& b)
@@ -144,9 +123,7 @@
 inline LargeRange merge(const LargeRange& a, const LargeRange& b)
 {
     const LargeRange& left = std::min(a, b);
-#if !BUSE(PARTIAL_SCAVENGE)
     bool mergedUsedSinceLastScavenge = a.usedSinceLastScavenge() || b.usedSinceLastScavenge();
-#endif
     if (left.size() == left.startPhysicalSize()) {
         return LargeRange(
             left.begin(),
@@ -153,9 +130,7 @@
             a.size() + b.size(),
             a.startPhysicalSize() + b.startPhysicalSize(),
             a.totalPhysicalSize() + b.totalPhysicalSize()
-#if !BUSE(PARTIAL_SCAVENGE)
             , mergedUsedSinceLastScavenge
-#endif
         );
         
     }
@@ -165,9 +140,7 @@
         a.size() + b.size(),
         left.startPhysicalSize(),
         a.totalPhysicalSize() + b.totalPhysicalSize()
-#if !BUSE(PARTIAL_SCAVENGE)
         , mergedUsedSinceLastScavenge
-#endif
     );
 }
 

Modified: trunk/Source/bmalloc/bmalloc/Scavenger.cpp (276265 => 276266)


--- trunk/Source/bmalloc/bmalloc/Scavenger.cpp	2021-04-19 18:21:54 UTC (rev 276265)
+++ trunk/Source/bmalloc/bmalloc/Scavenger.cpp	2021-04-19 18:52:25 UTC (rev 276266)
@@ -85,11 +85,7 @@
     dispatch_resume(m_pressureHandlerDispatchSource);
     dispatch_release(queue);
 #endif
-#if BUSE(PARTIAL_SCAVENGE)
-    m_waitTime = std::chrono::milliseconds(m_isInMiniMode ? 200 : 2000);
-#else
     m_waitTime = std::chrono::milliseconds(10);
-#endif
 
     m_thread = std::thread(&threadEntryPoint, this);
 }
@@ -187,14 +183,6 @@
     return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - m_lastFullScavengeTime);
 }
 
-#if BUSE(PARTIAL_SCAVENGE)
-std::chrono::milliseconds Scavenger::timeSinceLastPartialScavenge()
-{
-    UniqueLockHolder lock(mutex());
-    return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - m_lastPartialScavengeTime);
-}
-#endif
-
 void Scavenger::enableMiniMode()
 {
     m_isInMiniMode = true; // We just store to this racily. The scavenger thread will eventually pick up the right value.
@@ -220,25 +208,17 @@
 
         {
             PrintTime printTime("\nfull scavenge under lock time");
-#if !BUSE(PARTIAL_SCAVENGE)
             size_t deferredDecommits = 0;
-#endif
             UniqueLockHolder lock(Heap::mutex());
             for (unsigned i = numHeaps; i--;) {
                 if (!isActiveHeapKind(static_cast<HeapKind>(i)))
                     continue;
-#if BUSE(PARTIAL_SCAVENGE)
-                PerProcess<PerHeapKind<Heap>>::get()->at(i).scavenge(lock, decommitter);
-#else
                 PerProcess<PerHeapKind<Heap>>::get()->at(i).scavenge(lock, decommitter, deferredDecommits);
-#endif
             }
             decommitter.processEager();
 
-#if !BUSE(PARTIAL_SCAVENGE)
             if (deferredDecommits)
                 m_state = State::RunSoon;
-#endif
         }
 
         {
@@ -279,78 +259,6 @@
     }
 }
 
-#if BUSE(PARTIAL_SCAVENGE)
-void Scavenger::partialScavenge()
-{
-    if (!m_isEnabled)
-        return;
-
-    UniqueLockHolder lock(m_scavengingMutex);
-
-    if (verbose) {
-        fprintf(stderr, "--------------------------------\n");
-        fprintf(stderr, "--before partial scavenging--\n");
-        dumpStats();
-    }
-
-    {
-        BulkDecommit decommitter;
-        {
-            PrintTime printTime("\npartialScavenge under lock time");
-            UniqueLockHolder lock(Heap::mutex());
-            for (unsigned i = numHeaps; i--;) {
-                if (!isActiveHeapKind(static_cast<HeapKind>(i)))
-                    continue;
-                Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(i);
-                size_t freeableMemory = heap.freeableMemory(lock);
-                if (freeableMemory < 4 * MB)
-                    continue;
-                heap.scavengeToHighWatermark(lock, decommitter);
-            }
-
-            decommitter.processEager();
-        }
-
-        {
-            PrintTime printTime("partialScavenge lazy decommit time");
-            decommitter.processLazy();
-        }
-
-        {
-            PrintTime printTime("partialScavenge mark all as eligible time");
-            LockHolder lock(Heap::mutex());
-            for (unsigned i = numHeaps; i--;) {
-                if (!isActiveHeapKind(static_cast<HeapKind>(i)))
-                    continue;
-                Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(i);
-                heap.markAllLargeAsEligibile(lock);
-            }
-        }
-    }
-
-    {
-        RELEASE_BASSERT(!m_deferredDecommits.size());
-        AllIsoHeaps::get()->forEach(
-            [&] (IsoHeapImplBase& heap) {
-                heap.scavengeToHighWatermark(m_deferredDecommits);
-            });
-        IsoHeapImplBase::finishScavenging(m_deferredDecommits);
-        m_deferredDecommits.shrink(0);
-    }
-
-    if (verbose) {
-        fprintf(stderr, "--after partial scavenging--\n");
-        dumpStats();
-        fprintf(stderr, "--------------------------------\n");
-    }
-
-    {
-        UniqueLockHolder lock(mutex());
-        m_lastPartialScavengeTime = std::chrono::steady_clock::now();
-    }
-}
-#endif
-
 size_t Scavenger::freeableMemory()
 {
     size_t result = 0;
@@ -432,69 +340,6 @@
             fprintf(stderr, "--------------------------------\n");
         }
 
-#if BUSE(PARTIAL_SCAVENGE)
-        enum class ScavengeMode {
-            None,
-            Partial,
-            Full
-        };
-
-        size_t freeableMemory = this->freeableMemory();
-
-        ScavengeMode scavengeMode = [&] {
-            auto timeSinceLastFullScavenge = this->timeSinceLastFullScavenge();
-            auto timeSinceLastPartialScavenge = this->timeSinceLastPartialScavenge();
-            auto timeSinceLastScavenge = std::min(timeSinceLastPartialScavenge, timeSinceLastFullScavenge);
-
-            if (isUnderMemoryPressure() && freeableMemory > 1 * MB && timeSinceLastScavenge > std::chrono::milliseconds(5))
-                return ScavengeMode::Full;
-
-            if (!m_isProbablyGrowing) {
-                if (timeSinceLastFullScavenge < std::chrono::milliseconds(1000) && !m_isInMiniMode)
-                    return ScavengeMode::Partial;
-                return ScavengeMode::Full;
-            }
-
-            if (m_isInMiniMode) {
-                if (timeSinceLastFullScavenge < std::chrono::milliseconds(200))
-                    return ScavengeMode::Partial;
-                return ScavengeMode::Full;
-            }
-
-#if BCPU(X86_64)
-            auto partialScavengeInterval = std::chrono::milliseconds(12000);
-#else
-            auto partialScavengeInterval = std::chrono::milliseconds(8000);
-#endif
-            if (timeSinceLastScavenge < partialScavengeInterval) {
-                // Rate limit partial scavenges.
-                return ScavengeMode::None;
-            }
-            if (freeableMemory < 25 * MB)
-                return ScavengeMode::None;
-            if (5 * freeableMemory < footprint())
-                return ScavengeMode::None;
-            return ScavengeMode::Partial;
-        }();
-
-        m_isProbablyGrowing = false;
-
-        switch (scavengeMode) {
-        case ScavengeMode::None: {
-            runSoon();
-            break;
-        }
-        case ScavengeMode::Partial: {
-            partialScavenge();
-            runSoon();
-            break;
-        }
-        case ScavengeMode::Full: {
-            scavenge();
-            break;
-        }
-        }
-#else
         std::chrono::steady_clock::time_point start { std::chrono::steady_clock::now() };
         
         scavenge();
@@ -509,14 +354,13 @@
         // FIXME: We need to investigate mini-mode's adjustment.
         // https://bugs.webkit.org/show_bug.cgi?id=203987
         if (!m_isInMiniMode) {
-            timeSpentScavenging *= 150;
+            timeSpentScavenging *= s_newWaitMultiplier;
             std::chrono::milliseconds newWaitTime = std::chrono::duration_cast<std::chrono::milliseconds>(timeSpentScavenging);
-            m_waitTime = std::min(std::max(newWaitTime, std::chrono::milliseconds(100)), std::chrono::milliseconds(10000));
+            m_waitTime = std::min(std::max(newWaitTime, std::chrono::milliseconds(s_minWaitTimeMilliseconds)), std::chrono::milliseconds(s_maxWaitTimeMilliseconds));
         }
 
         if (verbose)
             fprintf(stderr, "new wait time %lldms\n", static_cast<long long int>(m_waitTime.count()));
-#endif
     }
 }
 

Modified: trunk/Source/bmalloc/bmalloc/Scavenger.h (276265 => 276266)


--- trunk/Source/bmalloc/bmalloc/Scavenger.h	2021-04-19 18:21:54 UTC (rev 276265)
+++ trunk/Source/bmalloc/bmalloc/Scavenger.h	2021-04-19 18:52:25 UTC (rev 276266)
@@ -92,10 +92,6 @@
     void setThreadName(const char*);
 
     std::chrono::milliseconds timeSinceLastFullScavenge();
-#if BUSE(PARTIAL_SCAVENGE)
-    std::chrono::milliseconds timeSinceLastPartialScavenge();
-    void partialScavenge();
-#endif
 
     std::atomic<State> m_state { State::Sleep };
     size_t m_scavengerBytes { 0 };
@@ -108,9 +104,6 @@
 
     std::thread m_thread;
     std::chrono::steady_clock::time_point m_lastFullScavengeTime { std::chrono::steady_clock::now() };
-#if BUSE(PARTIAL_SCAVENGE)
-    std::chrono::steady_clock::time_point m_lastPartialScavengeTime { std::chrono::steady_clock::now() };
-#endif
 
 #if BOS(DARWIN)
     dispatch_source_t m_pressureHandlerDispatchSource;
@@ -117,6 +110,16 @@
     qos_class_t m_requestedScavengerThreadQOSClass { QOS_CLASS_USER_INITIATED };
 #endif
     
+#if BPLATFORM(MAC)
+    const unsigned s_newWaitMultiplier = 300;
+    const unsigned s_minWaitTimeMilliseconds = 750;
+    const unsigned s_maxWaitTimeMilliseconds = 20000;
+#else
+    const unsigned s_newWaitMultiplier = 150;
+    const unsigned s_minWaitTimeMilliseconds = 100;
+    const unsigned s_maxWaitTimeMilliseconds = 10000;
+#endif
+
     Vector<DeferredDecommit> m_deferredDecommits;
     bool m_isEnabled { true };
 };

Modified: trunk/Source/bmalloc/bmalloc/SmallPage.h (276265 => 276266)


--- trunk/Source/bmalloc/bmalloc/SmallPage.h	2021-04-19 18:21:54 UTC (rev 276265)
+++ trunk/Source/bmalloc/bmalloc/SmallPage.h	2021-04-19 18:52:25 UTC (rev 276266)
@@ -51,11 +51,9 @@
     bool hasPhysicalPages() { return m_hasPhysicalPages; }
     void setHasPhysicalPages(bool hasPhysicalPages) { m_hasPhysicalPages = hasPhysicalPages; }
 
-#if !BUSE(PARTIAL_SCAVENGE)
     bool usedSinceLastScavenge() { return m_usedSinceLastScavenge; }
     void clearUsedSinceLastScavenge() { m_usedSinceLastScavenge = false; }
     void setUsedSinceLastScavenge() { m_usedSinceLastScavenge = true; }
-#endif
 
     SmallLine* begin();
 
@@ -65,9 +63,7 @@
 private:
     unsigned char m_hasFreeLines: 1;
     unsigned char m_hasPhysicalPages: 1;
-#if !BUSE(PARTIAL_SCAVENGE)
     unsigned char m_usedSinceLastScavenge: 1;
-#endif
     unsigned char m_refCount: 7;
     unsigned char m_sizeClass;
     unsigned char m_slide;