Title: [282850] trunk/Source/bmalloc
Revision
282850
Author
[email protected]
Date
2021-09-21 17:00:07 -0700 (Tue, 21 Sep 2021)

Log Message

[bmalloc] freeableMemory and footprint of Heap are completely broken
https://bugs.webkit.org/show_bug.cgi?id=230245

Reviewed by Geoffrey Garen.

This was introduced in r279922. That commit changed the physical size of a newly allocated range
from zero to the size of the range, which threw these numbers off. The change itself is the correct
fix, because the range does have physical pages attached; it simply activated a bug that had been
there for a long time.

I've added a correction that adjusts both numbers for newly allocated regions. I also added an
assertion that checks those values for overflow, and an option to log each change to the console.

Fortunately, these numbers are only used for debugging: the Scavenger dumps them to stderr in its
verbose mode. No practical use cases are affected by this bug.

Here is an example of the footprint logging before the fix:

        >>> footprint: 18446744073709535232 (-16384) scavenge
        footprint: 18446744073709518848 (-16384) scavenge
        footprint: 18446744073709502464 (-16384) scavenge
        footprint: 18446744073709486080 (-16384) scavenge
        footprint: 18446744073709469696 (-16384) scavenge
        footprint: 18446744073709453312 (-16384) scavenge
        ...

It begins with a negative number, which wraps around on an unsigned type. And here is the same logging with the fix:

        footprint: 1048576 (1048576) allocateLarge
        footprint: 2097152 (1048576) allocateLarge
        footprint: 3145728 (1048576) allocateLarge
        footprint: 4194304 (1048576) allocateLarge
        footprint: 5242880 (1048576) allocateLarge
        footprint: 6291456 (1048576) allocateLarge
        >>> footprint: 6275072 (-16384) scavenge
        footprint: 6258688 (-16384) scavenge
        footprint: 6242304 (-16384) scavenge
        footprint: 6225920 (-16384) scavenge
        footprint: 6209536 (-16384) scavenge
        footprint: 6193152 (-16384) scavenge
        ...

* bmalloc/Heap.cpp:
(bmalloc::Heap::adjustStat):
(bmalloc::Heap::logStat):
(bmalloc::Heap::adjustFreeableMemory):
(bmalloc::Heap::adjustFootprint):
(bmalloc::Heap::decommitLargeRange):
(bmalloc::Heap::scavenge):
(bmalloc::Heap::allocateSmallChunk):
(bmalloc::Heap::allocateSmallPage):
(bmalloc::Heap::deallocateSmallLine):
(bmalloc::Heap::splitAndAllocate):
(bmalloc::Heap::allocateLarge):
(bmalloc::Heap::deallocateLarge):
(bmalloc::Heap::externalCommit):
(bmalloc::Heap::externalDecommit):
* bmalloc/Heap.h:

Modified Paths

trunk/Source/bmalloc/ChangeLog
trunk/Source/bmalloc/bmalloc/Heap.cpp
trunk/Source/bmalloc/bmalloc/Heap.h

Diff

Modified: trunk/Source/bmalloc/ChangeLog (282849 => 282850)


--- trunk/Source/bmalloc/ChangeLog	2021-09-21 23:35:41 UTC (rev 282849)
+++ trunk/Source/bmalloc/ChangeLog	2021-09-22 00:00:07 UTC (rev 282850)
@@ -1,3 +1,64 @@
+2021-09-21  Basuke Suzuki  <[email protected]>
+
+        [bmalloc] freeableMemory and footprint of Heap are completely broken
+        https://bugs.webkit.org/show_bug.cgi?id=230245
+
+        Reviewed by Geoffrey Garen.
+
+        This was introduced in r279922. That commit changed the physical size of a newly allocated
+        range from zero to the size of the range, which threw these numbers off. The change itself
+        is the correct fix, because the range does have physical pages attached; it simply
+        activated a bug that had been there for a long time.
+
+        I've added a correction that adjusts both numbers for newly allocated regions. I also added an
+        assertion that checks those values for overflow, and an option to log each change to the console.
+
+        Fortunately, these numbers are only used for debugging: the Scavenger dumps them to stderr
+        in its verbose mode. No practical use cases are affected by this bug.
+
+        Here is an example of the footprint logging before the fix:
+
+                >>> footprint: 18446744073709535232 (-16384) scavenge
+                footprint: 18446744073709518848 (-16384) scavenge
+                footprint: 18446744073709502464 (-16384) scavenge
+                footprint: 18446744073709486080 (-16384) scavenge
+                footprint: 18446744073709469696 (-16384) scavenge
+                footprint: 18446744073709453312 (-16384) scavenge
+                ...
+
+        It begins with a negative number, which wraps around on an unsigned type. And here is the same logging with the fix:
+
+                footprint: 1048576 (1048576) allocateLarge
+                footprint: 2097152 (1048576) allocateLarge
+                footprint: 3145728 (1048576) allocateLarge
+                footprint: 4194304 (1048576) allocateLarge
+                footprint: 5242880 (1048576) allocateLarge
+                footprint: 6291456 (1048576) allocateLarge
+                >>> footprint: 6275072 (-16384) scavenge
+                footprint: 6258688 (-16384) scavenge
+                footprint: 6242304 (-16384) scavenge
+                footprint: 6225920 (-16384) scavenge
+                footprint: 6209536 (-16384) scavenge
+                footprint: 6193152 (-16384) scavenge
+                ...
+
+        * bmalloc/Heap.cpp:
+        (bmalloc::Heap::adjustStat):
+        (bmalloc::Heap::logStat):
+        (bmalloc::Heap::adjustFreeableMemory):
+        (bmalloc::Heap::adjustFootprint):
+        (bmalloc::Heap::decommitLargeRange):
+        (bmalloc::Heap::scavenge):
+        (bmalloc::Heap::allocateSmallChunk):
+        (bmalloc::Heap::allocateSmallPage):
+        (bmalloc::Heap::deallocateSmallLine):
+        (bmalloc::Heap::splitAndAllocate):
+        (bmalloc::Heap::allocateLarge):
+        (bmalloc::Heap::deallocateLarge):
+        (bmalloc::Heap::externalCommit):
+        (bmalloc::Heap::externalDecommit):
+        * bmalloc/Heap.h:
+
 2021-09-16  Filip Pizlo  <[email protected]>
 
         Stub out the footprint() API when libpas is in use

Modified: trunk/Source/bmalloc/bmalloc/Heap.cpp (282849 => 282850)


--- trunk/Source/bmalloc/bmalloc/Heap.cpp	2021-09-21 23:35:41 UTC (rev 282849)
+++ trunk/Source/bmalloc/bmalloc/Heap.cpp	2021-09-22 00:00:07 UTC (rev 282850)
@@ -99,6 +99,38 @@
     return m_footprint;
 }
 
+BINLINE void Heap::adjustStat(size_t& value, ssize_t amount)
+{
+    auto result = value + amount;
+    BASSERT((amount >= 0 && result >= value) || (amount < 0 && result < value));
+    value = result;
+}
+
+BINLINE void Heap::logStat(size_t value, ssize_t amount, const char* label, const char* note)
+{
+    fprintf(stderr, "%s: %lu (%ld) %s\n", label, value, amount, note);
+}
+
+BINLINE void Heap::adjustFreeableMemory(UniqueLockHolder&, ssize_t amount, const char* note)
+{
+    constexpr bool verbose = false;
+
+    adjustStat(m_freeableMemory, amount);
+
+    if constexpr (verbose)
+        logStat(m_freeableMemory, amount, "freeableMemory", note);
+}
+
+BINLINE void Heap::adjustFootprint(UniqueLockHolder&, ssize_t amount, const char* note)
+{
+    constexpr bool verbose = false;
+
+    adjustStat(m_footprint, amount);
+
+    if constexpr (verbose)
+        logStat(m_footprint, amount, "footprint", note);
+}
+
 void Heap::markAllLargeAsEligibile(const LockHolder&)
 {
     m_largeFree.markAllAsEligibile();
@@ -106,12 +138,12 @@
     m_condition.notify_all();
 }
 
-void Heap::decommitLargeRange(UniqueLockHolder&, LargeRange& range, BulkDecommit& decommitter)
+void Heap::decommitLargeRange(UniqueLockHolder& lock, LargeRange& range, BulkDecommit& decommitter)
 {
     BASSERT(range.hasPhysicalPages());
 
-    m_footprint -= range.totalPhysicalSize();
-    m_freeableMemory -= range.totalPhysicalSize();
+    adjustFootprint(lock, -range.totalPhysicalSize(), "decommitLargeRange");
+    adjustFreeableMemory(lock, -range.totalPhysicalSize(), "decommitLargeRange");
     decommitter.addLazy(range.begin(), range.physicalEnd() - range.begin());
     m_hasPendingDecommits = true;
     range.setStartPhysicalSize(0);
@@ -139,8 +171,8 @@
 
                 size_t pageSize = bmalloc::pageSize(&list - &m_freePages[0]);
                 size_t decommitSize = physicalPageSizeSloppy(page->begin()->begin(), pageSize);
-                m_freeableMemory -= decommitSize;
-                m_footprint -= decommitSize;
+                adjustFootprint(lock, -decommitSize, "scavenge");
+                adjustFreeableMemory(lock, -decommitSize, "scavenge");
                 decommitter.addEager(page->begin()->begin(), pageSize);
                 page->setHasPhysicalPages(false);
 #if ENABLE_PHYSICAL_PAGE_MAP
@@ -205,7 +237,7 @@
             chunk->freePages().push(page);
         });
 
-        m_freeableMemory += chunkSize;
+        adjustFreeableMemory(lock, chunkSize, "allocateSmallChunk");
 
         m_scavenger->schedule(0);
 
@@ -275,10 +307,10 @@
         size_t pageSize = bmalloc::pageSize(pageClass);
         size_t physicalSize = physicalPageSizeSloppy(page->begin()->begin(), pageSize);
         if (page->hasPhysicalPages())
-            m_freeableMemory -= physicalSize;
+            adjustFreeableMemory(lock, -physicalSize, "allocateSmallPage");
         else {
             m_scavenger->scheduleIfUnderMemoryPressure(pageSize);
-            m_footprint += physicalSize;
+            adjustFootprint(lock, physicalSize, "allocateSmallPage");
             vmAllocatePhysicalPagesSloppy(page->begin()->begin(), pageSize);
             page->setHasPhysicalPages(true);
 #if ENABLE_PHYSICAL_PAGE_MAP
@@ -314,7 +346,7 @@
 
     size_t pageClass = m_constants.pageClass(page->sizeClass());
 
-    m_freeableMemory += physicalPageSizeSloppy(page->begin()->begin(), pageSize(pageClass));
+    adjustFreeableMemory(lock, physicalPageSizeSloppy(page->begin()->begin(), pageSize(pageClass)), "deallocateSmallLine");
 
     List<SmallPage>::remove(page); // 'page' may be in any thread's line cache.
     
@@ -490,7 +522,7 @@
     
     if (range.startPhysicalSize() < range.size()) {
         m_scavenger->scheduleIfUnderMemoryPressure(range.size());
-        m_footprint += range.size() - range.totalPhysicalSize();
+        adjustFootprint(lock, range.size() - range.totalPhysicalSize(), "splitAndAllocate");
         vmAllocatePhysicalPagesSloppy(range.begin() + range.startPhysicalSize(), range.size() - range.startPhysicalSize());
         range.setStartPhysicalSize(range.size());
         range.setTotalPhysicalSize(range.size());
@@ -501,12 +533,12 @@
     }
     
     if (prev) {
-        m_freeableMemory += prev.totalPhysicalSize();
+        adjustFreeableMemory(lock, prev.totalPhysicalSize(), "splitAndAllocate.prev");
         m_largeFree.add(prev);
     }
 
     if (next) {
-        m_freeableMemory += next.totalPhysicalSize();
+        adjustFreeableMemory(lock, next.totalPhysicalSize(), "splitAndAllocate.next");
         m_largeFree.add(next);
     }
 
@@ -552,11 +584,13 @@
         ASSERT_OR_RETURN_ON_FAILURE(range);
         
         m_largeFree.add(range);
+        adjustFreeableMemory(lock, range.totalPhysicalSize(), "allocateLarge");
+        adjustFootprint(lock, range.totalPhysicalSize(), "allocateLarge");
+
         range = m_largeFree.remove(alignment, size);
     }
+    adjustFreeableMemory(lock, -range.totalPhysicalSize(), "allocateLarge.reuse");
 
-    m_freeableMemory -= range.totalPhysicalSize();
-
     void* result = splitAndAllocate(lock, range, alignment, size).begin();
     ASSERT_OR_RETURN_ON_FAILURE(result);
     return result;
@@ -605,11 +639,11 @@
     m_scavenger->schedule(size);
 }
 
-void Heap::deallocateLarge(UniqueLockHolder&, void* object)
+void Heap::deallocateLarge(UniqueLockHolder& lock, void* object)
 {
     size_t size = m_largeAllocated.remove(object);
     m_largeFree.add(LargeRange(object, size, size, size, static_cast<char*>(object) + size));
-    m_freeableMemory += size;
+    adjustFreeableMemory(lock, size, "deallocateLarge");
     m_scavenger->schedule(size);
 }
 
@@ -619,11 +653,11 @@
     externalCommit(lock, ptr, size);
 }
 
-void Heap::externalCommit(UniqueLockHolder&, void* ptr, size_t size)
+void Heap::externalCommit(UniqueLockHolder& lock, void* ptr, size_t size)
 {
     BUNUSED_PARAM(ptr);
 
-    m_footprint += size;
+    adjustFootprint(lock, size, "externalCommit");
 #if ENABLE_PHYSICAL_PAGE_MAP 
     m_physicalPageMap.commit(ptr, size);
 #endif
@@ -635,11 +669,11 @@
     externalDecommit(lock, ptr, size);
 }
 
-void Heap::externalDecommit(UniqueLockHolder&, void* ptr, size_t size)
+void Heap::externalDecommit(UniqueLockHolder& lock, void* ptr, size_t size)
 {
     BUNUSED_PARAM(ptr);
 
-    m_footprint -= size;
+    adjustFootprint(lock, -size, "externalDecommit");
 #if ENABLE_PHYSICAL_PAGE_MAP 
     m_physicalPageMap.decommit(ptr, size);
 #endif

Modified: trunk/Source/bmalloc/bmalloc/Heap.h (282849 => 282850)


--- trunk/Source/bmalloc/bmalloc/Heap.h	2021-09-21 23:35:41 UTC (rev 282849)
+++ trunk/Source/bmalloc/bmalloc/Heap.h	2021-09-22 00:00:07 UTC (rev 282850)
@@ -117,6 +117,11 @@
     LargeRange tryAllocateLargeChunk(size_t alignment, size_t);
     LargeRange splitAndAllocate(UniqueLockHolder&, LargeRange&, size_t alignment, size_t);
 
+    inline void adjustFootprint(UniqueLockHolder&, ssize_t, const char* note);
+    inline void adjustFreeableMemory(UniqueLockHolder&, ssize_t, const char* note);
+    inline void adjustStat(size_t& value, ssize_t);
+    inline void logStat(size_t value, ssize_t amount, const char* label, const char* note);
+
     HeapKind m_kind;
     HeapConstants& m_constants;
 
_______________________________________________
webkit-changes mailing list
[email protected]
https://lists.webkit.org/mailman/listinfo/webkit-changes
