Title: [217456] trunk/Source/bmalloc
Revision: 217456
Author: [email protected]
Date: 2017-05-25 16:11:24 -0700 (Thu, 25 May 2017)

Log Message

bmalloc: scavenger runs too much on JetStream
https://bugs.webkit.org/show_bug.cgi?id=172373

Reviewed by Geoffrey Garen.

Instruments says that JetStream on macOS spends about 3% of its time in
madvise.

In <https://bugs.webkit.org/show_bug.cgi?id=160098>, Ben saw some
evidence that madvise was the reason that switching to bmalloc for
DFG::Node allocations was a slowdown the first time around.

In <https://bugs.webkit.org/show_bug.cgi?id=172124>, Michael saw that
scavenging policy can affect JetStream.

Intuitively, it seems wrong for the heap to idly shrink during hardcore
benchmarking.

The strategy here is to back off in response to any heap growth event,
and to wait 2s instead of 0.5s for heap growth to take place -- but we
scavenge immediately in response to critical memory pressure, to avoid
jetsam.
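
Concretely, the back-off works like this (a condensed sketch of the new
Heap::concurrentScavenge from the Heap.cpp diff below):

    // If the heap grew since the last wakeup and we are not under memory
    // pressure, skip this scavenge: clear the growth bit and re-arm the
    // 2s timer instead of calling madvise.
    if (m_isGrowing && !isUnderMemoryPressure()) {
        m_isGrowing = false;
        m_scavenger.runSoon();
        return;
    }

    scavenge(lock);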

One hole in this strategy is that a workload with a perfectly
unfragmented heap that allocates and deallocates ~16kB every 2s will
never shrink its heap, since each allocation re-marks the heap as
growing and re-arms the back-off. This doesn't seem to be a problem in
practice.

This looks like a 2% - 4% speedup on JetStream on Mac Pro and MacBook Air.

* bmalloc/AsyncTask.h:
(bmalloc::AsyncTask::willRun):
(bmalloc::AsyncTask::willRunSoon):
(bmalloc::Function>::AsyncTask):
(bmalloc::Function>::run):
(bmalloc::Function>::runSoon):
(bmalloc::Function>::threadRunLoop):
(bmalloc::Function>::runSlowCase): Deleted. Added a "run soon" state
so that execution delay is modeled directly instead of implicitly
through sleep events. This enables the Heap to issue a "run now" event
at any moment in response to memory pressure. (The new run loop is
sketched after this file list.)

* bmalloc/Heap.cpp:
(bmalloc::Heap::Heap): Don't call into our own API -- that's a layering
violation.

(bmalloc::Heap::updateMemoryInUseParameters): No need for
m_scavengeSleepDuration anymore.

(bmalloc::Heap::concurrentScavenge): Added a back-off policy when the
heap is growing.
(bmalloc::Heap::scavenge):

(bmalloc::Heap::scavengeSmallPages):
(bmalloc::Heap::scavengeLargeObjects): Don't try to give up in the middle
of a scavenge event. Our new back-off policy supplants that design. Also,
it's easier to profile and understand scavenging behavior if it always
runs to completion once started.

(bmalloc::Heap::scheduleScavenger):
(bmalloc::Heap::scheduleScavengerIfUnderMemoryPressure): Added a
synchronous amortized check for memory pressure. This check has the
benefit that it runs immediately during high rates of heap activity,
so we can detect memory pressure right away and wake the scavenger
instead of waiting for the scavenger to wake up. (The check is sketched
after this file list.)

(bmalloc::Heap::allocateSmallPage):
(bmalloc::Heap::deallocateSmallLine):
(bmalloc::Heap::splitAndAllocate):
(bmalloc::Heap::tryAllocateLarge):
(bmalloc::Heap::shrinkLarge):
(bmalloc::Heap::deallocateLarge):
* bmalloc/Heap.h:
(bmalloc::Heap::isUnderMemoryPressure):
* bmalloc/Sizes.h:
* bmalloc/VMHeap.h:
(bmalloc::VMHeap::deallocateSmallPage):
* bmalloc/bmalloc.h:
(bmalloc::api::scavenge): Updated for API changes above.
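
For reference, the new AsyncTask run loop (excerpted from the
AsyncTask.h diff below) ratchets through the three states like this:

    while (1) {
        // Sleep indefinitely until another thread changes our state.
        if (m_state == State::Sleep) {
            std::unique_lock<Mutex> lock(m_conditionMutex);
            m_condition.wait(lock, [&]() { return m_state != State::Sleep; });
        }

        // In RunSoon, wait out the 2s delay (asyncTaskSleepDuration) --
        // unless run() upgrades us to Run first, cutting the wait short.
        if (m_state == State::RunSoon) {
            std::unique_lock<Mutex> lock(m_conditionMutex);
            m_condition.wait_for(lock, asyncTaskSleepDuration, [&]() { return m_state != State::RunSoon; });
        }

        m_state = State::Sleep;
        (m_object.*m_function)();
    }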

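And the amortized memory pressure check (from the Heap.cpp diff below)
pays for a footprint query only once per 16MB of allocation, so hot
paths can afford to call it synchronously:

    void Heap::scheduleScavengerIfUnderMemoryPressure(size_t bytes)
    {
        m_scavengerBytes += bytes;
        // Amortize: only check once per scavengerBytesPerMemoryPressureCheck
        // (16MB, from Sizes.h) of allocation.
        if (m_scavengerBytes < scavengerBytesPerMemoryPressureCheck)
            return;

        m_scavengerBytes = 0;

        if (m_scavenger.willRun())
            return;

        if (!isUnderMemoryPressure())
            return;

        // Under pressure: wake the scavenger now instead of waiting 2s.
        m_isGrowing = false;
        m_scavenger.run();
    }
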
Modified Paths

trunk/Source/bmalloc/ChangeLog
trunk/Source/bmalloc/bmalloc/AsyncTask.h
trunk/Source/bmalloc/bmalloc/Heap.cpp
trunk/Source/bmalloc/bmalloc/Heap.h
trunk/Source/bmalloc/bmalloc/Sizes.h
trunk/Source/bmalloc/bmalloc/VMHeap.h
trunk/Source/bmalloc/bmalloc/bmalloc.h

Diff

Modified: trunk/Source/bmalloc/ChangeLog (217455 => 217456)


--- trunk/Source/bmalloc/ChangeLog	2017-05-25 22:33:28 UTC (rev 217455)
+++ trunk/Source/bmalloc/ChangeLog	2017-05-25 23:11:24 UTC (rev 217456)
@@ -1,3 +1,84 @@
+2017-05-25  Geoffrey Garen  <[email protected]> and Michael Saboff  <[email protected]>
+
+        bmalloc: scavenger runs too much on JetStream
+        https://bugs.webkit.org/show_bug.cgi?id=172373
+
+        Reviewed by Geoffrey Garen.
+
+        Instruments says that JetStream on macOS spends about 3% of its time in
+        madvise.
+
+        In <https://bugs.webkit.org/show_bug.cgi?id=160098>, Ben saw some
+        evidence that madvise was the reason that switching to bmalloc for
+        DFG::Node allocations was a slowdown the first time around.
+
+        In <https://bugs.webkit.org/show_bug.cgi?id=172124>, Michael saw that
+        scavenging policy can affect JetStream.
+
+        Intuitively, it seems wrong for the heap to idly shrink during hardcore
+        benchmarking.
+
+        The strategy here is to back off in response to any heap growth event,
+        and to wait 2s instead of 0.5s for heap growth to take place -- but we
+        scavenge immediately in response to critical memory pressure, to avoid
+        jetsam.
+
+        One hole in this strategy is that a workload with a perfectly
+        unfragmented heap that allocates and deallocates ~16kB every 2s will
+        never shrink its heap. This doesn't seem to be a problem in practice.
+
+        This looks like a 2% - 4% speedup on JetStream on Mac Pro and MacBook Air.
+
+        * bmalloc/AsyncTask.h:
+        (bmalloc::AsyncTask::willRun):
+        (bmalloc::AsyncTask::willRunSoon):
+        (bmalloc::Function>::AsyncTask):
+        (bmalloc::Function>::run):
+        (bmalloc::Function>::runSoon):
+        (bmalloc::Function>::threadRunLoop):
+        (bmalloc::Function>::runSlowCase): Deleted. Added a "run soon" state
+        so that execution delay is modeled directly instead of implicitly
+        through sleep events. This enables the Heap to issue a "run now" event
+        at any moment in response to memory pressure.
+
+        * bmalloc/Heap.cpp:
+        (bmalloc::Heap::Heap): Don't call into our own API -- that's a layering
+        violation.
+
+        (bmalloc::Heap::updateMemoryInUseParameters): No need for
+        m_scavengeSleepDuration anymore.
+
+        (bmalloc::Heap::concurrentScavenge): Added a back-off policy when the
+        heap is growing.
+        (bmalloc::Heap::scavenge):
+
+        (bmalloc::Heap::scavengeSmallPages):
+        (bmalloc::Heap::scavengeLargeObjects): Don't try to give up in the middle
+        of a scavenge event. Our new back-off policy supplants that design. Also,
+        it's easier to profile and understand scavenging behavior if it always
+        runs to completion once started.
+
+        (bmalloc::Heap::scheduleScavenger):
+        (bmalloc::Heap::scheduleScavengerIfUnderMemoryPressure): Added a
+        synchronous amortized check for memory pressure. This check has the
+        benefit that it runs immediately during high rates of heap activity,
+        so we can detect memory pressure right away and wake the scavenger
+        instead of waiting for the scavenger to wake up.
+
+        (bmalloc::Heap::allocateSmallPage):
+        (bmalloc::Heap::deallocateSmallLine):
+        (bmalloc::Heap::splitAndAllocate):
+        (bmalloc::Heap::tryAllocateLarge):
+        (bmalloc::Heap::shrinkLarge):
+        (bmalloc::Heap::deallocateLarge):
+        * bmalloc/Heap.h:
+        (bmalloc::Heap::isUnderMemoryPressure):
+        * bmalloc/Sizes.h:
+        * bmalloc/VMHeap.h:
+        (bmalloc::VMHeap::deallocateSmallPage):
+        * bmalloc/bmalloc.h:
+        (bmalloc::api::scavenge): Updated for API changes above.
+
 2017-05-17  Michael Saboff  <[email protected]>
 
         [iOS] The Garbage Collector shouldn't rely on the bmalloc scavenger for up to date memory footprint info (172186)

Modified: trunk/Source/bmalloc/bmalloc/AsyncTask.h (217455 => 217456)


--- trunk/Source/bmalloc/bmalloc/AsyncTask.h	2017-05-25 22:33:28 UTC (rev 217455)
+++ trunk/Source/bmalloc/bmalloc/AsyncTask.h	2017-05-25 23:11:24 UTC (rev 217456)
@@ -29,6 +29,7 @@
 #include "BAssert.h"
 #include "Inline.h"
 #include "Mutex.h"
+#include "Sizes.h"
 #include <atomic>
 #include <condition_variable>
 #include <thread>
@@ -40,14 +41,19 @@
 public:
     AsyncTask(Object&, const Function&);
     ~AsyncTask();
-
+    
+    bool willRun() { return m_state == State::Run; }
     void run();
-
+    
+    bool willRunSoon() { return m_state > State::Sleep; }
+    void runSoon();
+    
 private:
-    enum State { Sleeping, Running, RunRequested };
-
+    enum class State { Sleep, Run, RunSoon };
+    
     void runSlowCase();
-
+    void runSoonSlowCase();
+    
     static void threadEntryPoint(AsyncTask*);
     void threadRunLoop();
 
@@ -64,7 +70,7 @@
 
 template<typename Object, typename Function>
 AsyncTask<Object, Function>::AsyncTask(Object& object, const Function& function)
-    : m_state(Running)
+    : m_state(State::Sleep)
     , m_condition()
     , m_thread(std::thread(&AsyncTask::threadEntryPoint, this))
     , m_object(object)
@@ -81,21 +87,19 @@
 }
 
 template<typename Object, typename Function>
-inline void AsyncTask<Object, Function>::run()
+void AsyncTask<Object, Function>::run()
 {
-    if (m_state == RunRequested)
-        return;
-    runSlowCase();
+    m_state = State::Run;
+    
+    std::lock_guard<Mutex> lock(m_conditionMutex);
+    m_condition.notify_all();
 }
-
+    
 template<typename Object, typename Function>
-NO_INLINE void AsyncTask<Object, Function>::runSlowCase()
+void AsyncTask<Object, Function>::runSoon()
 {
-    State oldState = m_state.exchange(RunRequested);
-    if (oldState == RunRequested || oldState == Running)
-        return;
-
-    BASSERT(oldState == Sleeping);
+    m_state = State::RunSoon;
+    
     std::lock_guard<Mutex> lock(m_conditionMutex);
     m_condition.notify_all();
 }
@@ -115,20 +119,23 @@
 {
     // This loop ratchets downward from most active to least active state. While
     // we ratchet downward, any other thread may reset our state.
-
+    
     // We require any state change while we are sleeping to signal to our
     // condition variable and wake us up.
-
+    
     while (1) {
-        State expectedState = RunRequested;
-        if (m_state.compare_exchange_weak(expectedState, Running))
-            (m_object.*m_function)();
-
-        expectedState = Running;
-        if (m_state.compare_exchange_weak(expectedState, Sleeping)) {
+        if (m_state == State::Sleep) {
             std::unique_lock<Mutex> lock(m_conditionMutex);
-            m_condition.wait(lock, [&]() { return m_state != Sleeping; });
+            m_condition.wait(lock, [&]() { return m_state != State::Sleep; });
         }
+        
+        if (m_state == State::RunSoon) {
+            std::unique_lock<Mutex> lock(m_conditionMutex);
+            m_condition.wait_for(lock, asyncTaskSleepDuration, [&]() { return m_state != State::RunSoon; });
+        }
+        
+        m_state = State::Sleep;
+        (m_object.*m_function)();
     }
 }
 

Modified: trunk/Source/bmalloc/bmalloc/Heap.cpp (217455 => 217456)


--- trunk/Source/bmalloc/bmalloc/Heap.cpp	2017-05-25 22:33:28 UTC (rev 217455)
+++ trunk/Source/bmalloc/bmalloc/Heap.cpp	2017-05-25 23:11:24 UTC (rev 217456)
@@ -68,7 +68,8 @@
     auto queue = dispatch_queue_create("WebKit Malloc Memory Pressure Handler", DISPATCH_QUEUE_SERIAL);
     m_pressureHandlerDispatchSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_MEMORYPRESSURE, 0, DISPATCH_MEMORYPRESSURE_CRITICAL, queue);
     dispatch_source_set_event_handler(m_pressureHandlerDispatchSource, ^{
-        api::scavenge();
+        std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
+        scavenge(lock);
     });
     dispatch_resume(m_pressureHandlerDispatchSource);
     dispatch_release(queue);
@@ -144,75 +145,53 @@
 
     double percentInUse = static_cast<double>(m_memoryFootprint) / static_cast<double>(m_maxAvailableMemory);
     m_percentAvailableMemoryInUse = std::min(percentInUse, 1.0);
-
-    double percentFree = 1.0 - m_percentAvailableMemoryInUse;
-    double sleepInMS = 1200.0 * percentFree * percentFree - 100.0 * percentFree + 2.0;
-    sleepInMS = std::max(std::min(sleepInMS, static_cast<double>(maxScavengeSleepDuration.count())), 2.0);
-
-    m_scavengeSleepDuration = std::chrono::milliseconds(static_cast<long long>(sleepInMS));
 }
 #endif
 
 void Heap::concurrentScavenge()
 {
+    std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
+
 #if BOS(DARWIN)
     pthread_set_qos_class_self_np(m_requestedScavengerThreadQOSClass, 0);
 #endif
 
-    std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
-
-    scavenge(lock, Async);
-
-#if BPLATFORM(IOS)
-    updateMemoryInUseParameters();
-#endif
+    if (m_isGrowing && !isUnderMemoryPressure()) {
+        m_isGrowing = false;
+        m_scavenger.runSoon();
+        return;
+    }
+    
+    scavenge(lock);
 }
 
-void Heap::scavenge(std::unique_lock<StaticMutex>& lock, ScavengeMode scavengeMode)
+void Heap::scavenge(std::unique_lock<StaticMutex>& lock)
 {
-    m_isAllocatingPages.fill(false);
-    m_isAllocatingLargePages = false;
-
-    if (scavengeMode == Async)
-        sleep(lock, m_scavengeSleepDuration);
-
-    scavengeSmallPages(lock, scavengeMode);
-    scavengeLargeObjects(lock, scavengeMode);
+    scavengeSmallPages(lock);
+    scavengeLargeObjects(lock);
 }
-
-void Heap::scavengeSmallPages(std::unique_lock<StaticMutex>& lock, ScavengeMode scavengeMode)
+    
+void Heap::scavengeSmallPages(std::unique_lock<StaticMutex>& lock)
 {
     for (size_t pageClass = 0; pageClass < pageClassCount; pageClass++) {
         auto& smallPages = m_smallPages[pageClass];
 
         while (!smallPages.isEmpty()) {
-            if (m_isAllocatingPages[pageClass]) {
-                m_scavenger.run();
-                break;
-            }
-
             SmallPage* page = smallPages.pop();
-            m_vmHeap.deallocateSmallPage(lock, pageClass, page, scavengeMode);
+            m_vmHeap.deallocateSmallPage(lock, pageClass, page);
         }
     }
 }
 
-void Heap::scavengeLargeObjects(std::unique_lock<StaticMutex>& lock, ScavengeMode scavengeMode)
+void Heap::scavengeLargeObjects(std::unique_lock<StaticMutex>& lock)
 {
     auto& ranges = m_largeFree.ranges();
     for (size_t i = ranges.size(); i-- > 0; i = std::min(i, ranges.size())) {
-        if (m_isAllocatingLargePages) {
-            m_scavenger.run();
-            break;
-        }
-
         auto range = ranges.pop(i);
-
-        if (scavengeMode == Async)
-            lock.unlock();
+        
+        lock.unlock();
         vmDeallocatePhysicalPagesSloppy(range.begin(), range.size());
-        if (scavengeMode == Async)
-            lock.lock();
+        lock.lock();
 
         range.setPhysicalSize(0);
         ranges.push(range);
@@ -219,18 +198,49 @@
     }
 }
 
+void Heap::scheduleScavengerIfUnderMemoryPressure(size_t bytes)
+{
+    m_scavengerBytes += bytes;
+    if (m_scavengerBytes < scavengerBytesPerMemoryPressureCheck)
+        return;
+
+    m_scavengerBytes = 0;
+
+    if (m_scavenger.willRun())
+        return;
+
+    if (!isUnderMemoryPressure())
+        return;
+
+    m_isGrowing = false;
+    m_scavenger.run();
+}
+
+void Heap::scheduleScavenger(size_t bytes)
+{
+    scheduleScavengerIfUnderMemoryPressure(bytes);
+
+    if (m_scavenger.willRunSoon())
+        return;
+
+    m_isGrowing = false;
+    m_scavenger.runSoon();
+}
+
 SmallPage* Heap::allocateSmallPage(std::lock_guard<StaticMutex>& lock, size_t sizeClass)
 {
     if (!m_smallPagesWithFreeLines[sizeClass].isEmpty())
         return m_smallPagesWithFreeLines[sizeClass].popFront();
 
+    m_isGrowing = true;
+    
     SmallPage* page = [&]() {
         size_t pageClass = m_pageClasses[sizeClass];
         if (!m_smallPages[pageClass].isEmpty())
             return m_smallPages[pageClass].pop();
 
-        m_isAllocatingPages[pageClass] = true;
-
+        scheduleScavengerIfUnderMemoryPressure(pageSize(pageClass));
+        
         SmallPage* page = m_vmHeap.allocateSmallPage(lock, pageClass);
         m_objectTypes.set(Chunk::get(page), ObjectType::Small);
         return page;
@@ -259,8 +269,8 @@
 
     m_smallPagesWithFreeLines[sizeClass].remove(page);
     m_smallPages[pageClass].push(page);
-
-    m_scavenger.run();
+    
+    scheduleScavenger(pageSize(pageClass));
 }
 
 void Heap::allocateSmallBumpRangesByMetadata(
@@ -398,8 +408,8 @@
     }
     
     if (range.physicalSize() < range.size()) {
-        m_isAllocatingLargePages = true;
-
+        scheduleScavengerIfUnderMemoryPressure(range.size());
+        
         vmAllocatePhysicalPagesSloppy(range.begin() + range.physicalSize(), range.size() - range.physicalSize());
         range.setPhysicalSize(range.size());
     }
@@ -420,6 +430,8 @@
 {
     BASSERT(isPowerOfTwo(alignment));
 
+    m_isGrowing = true;
+    
     size_t roundedSize = size ? roundUpToMultipleOf(largeAlignment, size) : largeAlignment;
     if (roundedSize < size) // Check for overflow
         return nullptr;
@@ -468,7 +480,7 @@
     LargeRange range = LargeRange(object, size);
     splitAndAllocate(range, alignment, newSize);
 
-    m_scavenger.run();
+    scheduleScavenger(size);
 }
 
 void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, void* object)
@@ -476,7 +488,7 @@
     size_t size = m_largeAllocated.remove(object);
     m_largeFree.add(LargeRange(object, size, size));
     
-    m_scavenger.run();
+    scheduleScavenger(size);
 }
 
 } // namespace bmalloc

Modified: trunk/Source/bmalloc/bmalloc/Heap.h (217455 => 217456)


--- trunk/Source/bmalloc/bmalloc/Heap.h	2017-05-25 22:33:28 UTC (rev 217455)
+++ trunk/Source/bmalloc/bmalloc/Heap.h	2017-05-25 23:11:24 UTC (rev 217456)
@@ -70,13 +70,12 @@
     size_t largeSize(std::lock_guard<StaticMutex>&, void*);
     void shrinkLarge(std::lock_guard<StaticMutex>&, const Range&, size_t);
 
-    void scavenge(std::unique_lock<StaticMutex>&, ScavengeMode);
+    void scavenge(std::unique_lock<StaticMutex>&);
 
-#if BPLATFORM(IOS)
     size_t memoryFootprint();
     double percentAvailableMemoryInUse();
-#endif
-
+    bool isUnderMemoryPressure();
+    
 #if BOS(DARWIN)
     void setScavengerThreadQOSClass(qos_class_t overrideClass) { m_requestedScavengerThreadQOSClass = overrideClass; }
 #endif
@@ -110,10 +109,13 @@
 
     LargeRange splitAndAllocate(LargeRange&, size_t alignment, size_t);
 
+    void scheduleScavenger(size_t);
+    void scheduleScavengerIfUnderMemoryPressure(size_t);
+    
     void concurrentScavenge();
-    void scavengeSmallPages(std::unique_lock<StaticMutex>&, ScavengeMode);
-    void scavengeLargeObjects(std::unique_lock<StaticMutex>&, ScavengeMode);
-
+    void scavengeSmallPages(std::unique_lock<StaticMutex>&);
+    void scavengeLargeObjects(std::unique_lock<StaticMutex>&);
+    
 #if BPLATFORM(IOS)
     void updateMemoryInUseParameters();
 #endif
@@ -130,16 +132,14 @@
 
     Map<Chunk*, ObjectType, ChunkHash> m_objectTypes;
 
-    std::array<bool, pageClassCount> m_isAllocatingPages;
-    bool m_isAllocatingLargePages;
-
+    size_t m_scavengerBytes { 0 };
+    bool m_isGrowing { false };
+    
     AsyncTask<Heap, decltype(&Heap::concurrentScavenge)> m_scavenger;
 
     Environment m_environment;
     DebugHeap* m_debugHeap;
 
-    std::chrono::milliseconds m_scavengeSleepDuration = { maxScavengeSleepDuration };
-
 #if BPLATFORM(IOS)
     size_t m_maxAvailableMemory;
     size_t m_memoryFootprint;
@@ -170,7 +170,16 @@
     deallocateSmallLine(lock, object);
 }
 
+inline bool Heap::isUnderMemoryPressure()
+{
 #if BPLATFORM(IOS)
+    return percentAvailableMemoryInUse() > memoryPressureThreshold;
+#else
+    return false;
+#endif
+}
+    
+#if BPLATFORM(IOS)
 inline size_t Heap::memoryFootprint()
 {
     updateMemoryInUseParameters();

Modified: trunk/Source/bmalloc/bmalloc/Sizes.h (217455 => 217456)


--- trunk/Source/bmalloc/bmalloc/Sizes.h	2017-05-25 22:33:28 UTC (rev 217455)
+++ trunk/Source/bmalloc/bmalloc/Sizes.h	2017-05-25 23:11:24 UTC (rev 217456)
@@ -68,8 +68,11 @@
     static const size_t deallocatorLogCapacity = 512;
     static const size_t bumpRangeCacheCapacity = 3;
     
-    static const std::chrono::milliseconds maxScavengeSleepDuration = std::chrono::milliseconds(512);
-
+    static const size_t scavengerBytesPerMemoryPressureCheck = 16 * MB;
+    static const double memoryPressureThreshold = 0.75;
+    
+    static const std::chrono::milliseconds asyncTaskSleepDuration = std::chrono::milliseconds(2000);
+    
     static const size_t maskSizeClassCount = maskSizeClassMax / alignment;
 
     inline constexpr size_t maskSizeClass(size_t size)

Modified: trunk/Source/bmalloc/bmalloc/VMHeap.h (217455 => 217456)


--- trunk/Source/bmalloc/bmalloc/VMHeap.h	2017-05-25 22:33:28 UTC (rev 217455)
+++ trunk/Source/bmalloc/bmalloc/VMHeap.h	2017-05-25 23:11:24 UTC (rev 217456)
@@ -46,7 +46,7 @@
 class VMHeap {
 public:
     SmallPage* allocateSmallPage(std::lock_guard<StaticMutex>&, size_t);
-    void deallocateSmallPage(std::unique_lock<StaticMutex>&, size_t, SmallPage*, ScavengeMode);
+    void deallocateSmallPage(std::unique_lock<StaticMutex>&, size_t, SmallPage*);
 
     LargeRange tryAllocateLargeChunk(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
     
@@ -70,13 +70,11 @@
     return page;
 }
 
-inline void VMHeap::deallocateSmallPage(std::unique_lock<StaticMutex>& lock, size_t pageClass, SmallPage* page, ScavengeMode scavengeMode)
+inline void VMHeap::deallocateSmallPage(std::unique_lock<StaticMutex>& lock, size_t pageClass, SmallPage* page)
 {
-    if (scavengeMode == Async)
-        lock.unlock();
+    lock.unlock();
     vmDeallocatePhysicalPagesSloppy(page->begin()->begin(), pageSize(pageClass));
-    if (scavengeMode == Async)
-        lock.lock();
+    lock.lock();
     
     m_smallPages[pageClass].push(page);
 }

Modified: trunk/Source/bmalloc/bmalloc/bmalloc.h (217455 => 217456)


--- trunk/Source/bmalloc/bmalloc/bmalloc.h	2017-05-25 22:33:28 UTC (rev 217455)
+++ trunk/Source/bmalloc/bmalloc/bmalloc.h	2017-05-25 23:11:24 UTC (rev 217456)
@@ -77,7 +77,7 @@
     scavengeThisThread();
 
     std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
-    PerProcess<Heap>::get()->scavenge(lock, Sync);
+    PerProcess<Heap>::get()->scavenge(lock);
 }
 
 inline bool isEnabled()