Title: [198675] trunk/Source/bmalloc
Revision: 198675
Author: [email protected]
Date: 2016-03-25 11:07:31 -0700 (Fri, 25 Mar 2016)

Log Message

bmalloc: small and large objects should share memory
https://bugs.webkit.org/show_bug.cgi?id=155866

Reviewed by Andreas Kling.

This patch cuts our VM footprint in half. (VM footprint usually doesn't
matter, but on iOS there's an artificial VM limit around 700MB, and if
you hit it you jetsam / crash.)

It's also a step toward honoring the hardware page size at runtime,
which will reduce memory usage on iOS.

This patch is a small improvement in peak memory usage because it allows
small and large objects to recycle each other's memory. The tradeoff is
that we require more metadata, which causes more memory usage after
shrinking down from peak memory usage. In the end, we have some memory
wins and some losses, and a small win in the mean on our standard memory
benchmarks.

* bmalloc.xcodeproj/project.pbxproj: Removed SuperChunk.

* bmalloc/Allocator.cpp:
(bmalloc::Allocator::reallocate): Adopt a new Heap API for shrinking
large objects because it's a little more complicated than it used to be.

Don't check for equality in the XLarge case because we don't do it in
other cases, and it's unlikely that we'll be called for no reason.

* bmalloc/BumpAllocator.h:
(bmalloc::BumpAllocator::allocate): Don't ASSERT isSmall because that's
an old concept from when small and large objects were in distinct memory
regions.

* bmalloc/Deallocator.cpp:
(bmalloc::Deallocator::deallocateSlowCase): Large objects are not
segregated anymore.

(bmalloc::Deallocator::deallocateLarge): Deleted.

* bmalloc/Deallocator.h:
(bmalloc::Deallocator::deallocateFastCase): Don't ASSERT isSmall(). See
above.

* bmalloc/Heap.cpp:
(bmalloc::Heap::scavenge):
(bmalloc::Heap::scavengeSmallPage):
(bmalloc::Heap::scavengeSmallPages): New helpers for returning cached
small pages to the large object heap.

(bmalloc::Heap::allocateSmallPage): Allocate small pages from the large
object heap. This is how we accomplish sharing.
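The sharing mechanism described here can be sketched with a toy model. This is illustrative only: `Page`, `Heap`, and the container-based free lists below are simplified stand-ins for the real bmalloc structures shown in the diff; the key idea is that a page carries an object-type tag, so a free large object can be retagged and reused as a small object page.

```cpp
#include <cassert>
#include <vector>

// Toy model (not bmalloc's real API): each page carries an object type tag,
// so memory can move between the small and large heaps by retagging.
enum class ObjectType : unsigned char { Small, Large };

struct Page {
    ObjectType type = ObjectType::Large;
};

struct Heap {
    std::vector<Page*> freeLargePages;   // free memory in the large object heap
    std::vector<Page*> cachedSmallPages; // small pages cached for reuse

    // Allocate a small page, falling back to the large object heap.
    // This fallback is what lets small and large objects share memory.
    // (Assumes a free large page is available; the real code would
    // allocate a new chunk from the VM heap instead.)
    Page* allocateSmallPage() {
        if (!cachedSmallPages.empty()) {
            Page* page = cachedSmallPages.back();
            cachedSmallPages.pop_back();
            return page;
        }
        Page* page = freeLargePages.back();
        freeLargePages.pop_back();
        page->type = ObjectType::Small; // retag: large memory becomes small
        return page;
    }

    // Scavenging runs the transformation in reverse: a cached small page
    // is retagged as large and returned to the large object heap.
    void scavengeSmallPage() {
        Page* page = cachedSmallPages.back();
        cachedSmallPages.pop_back();
        page->type = ObjectType::Large;
        freeLargePages.push_back(page);
    }
};
```

Because both directions exist, a burst of small allocations followed by large ones (or vice versa) can recycle the same physical pages.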

(bmalloc::Heap::deallocateSmallLine): Handle large objects since we can
encounter them on this code path now.

(bmalloc::Heap::splitAndAllocate): Fixed a bug where we would sometimes
not split even though we could.

Allocating a large object also requires ref'ing its small line so that
we can alias memory between small and large objects.
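The split fix mentioned above is a boundary condition in the remainder check. A toy version (with an illustrative `largeMin`; the real check lives in Heap::splitAndAllocate) shows the case that previously went unsplit:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative constant, not bmalloc's actual value.
constexpr size_t largeMin = 64;

// With ">", a remainder exactly equal to largeMin was not split off even
// though a largeMin-sized free object is valid; ">=" recovers that memory.
inline bool shouldSplit(size_t objectSize, size_t requestedSize) {
    return objectSize - requestedSize >= largeMin; // was ">" before this patch
}
```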

(bmalloc::Heap::allocateLarge): Return cached small pages before
allocating a large object that would fit in a cached small page. This
allows some large allocations to reuse small object memory.

(bmalloc::Heap::shrinkLarge): New helper.

(bmalloc::Heap::deallocateLarge): Deleted.

* bmalloc/Heap.h:

* bmalloc/LargeChunk.h:
(bmalloc::LargeChunk::pageBegin):
(bmalloc::LargeChunk::pageEnd):
(bmalloc::LargeChunk::lines):
(bmalloc::LargeChunk::pages):
(bmalloc::LargeChunk::begin):
(bmalloc::LargeChunk::end):
(bmalloc::LargeChunk::LargeChunk):
(bmalloc::LargeChunk::get):
(bmalloc::LargeChunk::endTag):
(bmalloc::LargeChunk::offset):
(bmalloc::LargeChunk::object):
(bmalloc::LargeChunk::page):
(bmalloc::LargeChunk::line):
(bmalloc::SmallLine::begin):
(bmalloc::SmallLine::end):
(bmalloc::SmallPage::begin):
(bmalloc::SmallPage::end):
(bmalloc::Object::Object):
(bmalloc::Object::begin):
(bmalloc::Object::pageBegin):
(bmalloc::Object::line):
(bmalloc::Object::page): I merged all the SmallChunk metadata and code
into LargeChunk. Now we use a single class to track both small and large
metadata, so we can share memory between small and large objects.

I'm going to rename this class to Chunk in a follow-up patch.

* bmalloc/Object.h:
(bmalloc::Object::chunk): Updated for LargeChunk transition.

* bmalloc/ObjectType.cpp:
(bmalloc::objectType):
* bmalloc/ObjectType.h:
(bmalloc::isXLarge):
(bmalloc::isSmall): Deleted. The difference between small and large
objects is now stored in metadata and is not a property of their
virtual address range.
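The new dispatch can be sketched as follows. All constants and names here are illustrative assumptions, not bmalloc's actual values: XLarge objects are still recognized by their address (in the real code, via an alignment test against xLargeMask, which nullptr also passes), while Small vs. Large is read from per-page metadata (in the real code, via Object(object).page()->objectType()).

```cpp
#include <cassert>
#include <cstdint>

enum class ObjectType : unsigned char { Small, Large, XLarge };

// Toy value: pretend XLarge allocations are exactly chunk-aligned, while
// small/large objects never are because chunk metadata occupies the start
// of every chunk.
constexpr uintptr_t xLargeAlignment = 2 * 1024 * 1024;

struct PageMetadata {
    ObjectType type; // Small vs. Large now lives here, per page
};

inline bool isXLarge(uintptr_t object) {
    return !(object & (xLargeAlignment - 1)); // note: nullptr (0) qualifies
}

inline ObjectType objectType(uintptr_t object, const PageMetadata& page) {
    if (isXLarge(object))
        return ObjectType::XLarge;
    return page.type; // metadata lookup replaces the old address-mask test
}
```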

* bmalloc/SegregatedFreeList.h: Added one more entry because a large chunk
now covers the full address range that used to be a super chunk.

* bmalloc/Sizes.h: Removed bit masking helpers because we don't use
address masks to distinguish small vs large object type anymore.

* bmalloc/SmallChunk.h: Removed.

* bmalloc/SmallPage.h:
(bmalloc::SmallPage::SmallPage): Store object type per page because any
given page can be used for large objects or small objects.

* bmalloc/SuperChunk.h: Removed.

* bmalloc/VMHeap.cpp:
(bmalloc::VMHeap::VMHeap):
(bmalloc::VMHeap::allocateLargeChunk):
(bmalloc::VMHeap::allocateSmallChunk): Deleted.
(bmalloc::VMHeap::allocateSuperChunk): Deleted.
* bmalloc/VMHeap.h:
(bmalloc::VMHeap::allocateLargeObject):
(bmalloc::VMHeap::deallocateLargeObject):
(bmalloc::VMHeap::allocateSmallPage): Deleted.
(bmalloc::VMHeap::deallocateSmallPage): Deleted. Removed super chunk and
small chunk support.

* bmalloc/Zone.cpp:
(bmalloc::enumerator):
* bmalloc/Zone.h:
(bmalloc::Zone::largeChunks):
(bmalloc::Zone::addLargeChunk):
(bmalloc::Zone::superChunks): Deleted.
(bmalloc::Zone::addSuperChunk): Deleted. Removed super chunk and
small chunk support.

Diff

Modified: trunk/Source/bmalloc/ChangeLog (198674 => 198675)


--- trunk/Source/bmalloc/ChangeLog	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/ChangeLog	2016-03-25 18:07:31 UTC (rev 198675)
@@ -1,3 +1,150 @@
+2016-03-24  Geoffrey Garen  <[email protected]>
+
+        bmalloc: small and large objects should share memory
+        https://bugs.webkit.org/show_bug.cgi?id=155866
+
+        Reviewed by Andreas Kling.
+
+        This patch cuts our VM footprint in half. (VM footprint usually doesn't
+        matter, but on iOS there's an artificial VM limit around 700MB, and if
+        you hit it you jetsam / crash.)
+
+        It's also a step toward honoring the hardware page size at runtime,
+        which will reduce memory usage on iOS.
+
+        This patch is a small improvement in peak memory usage because it allows
+        small and large objects to recycle each other's memory. The tradeoff is
+        that we require more metadata, which causes more memory usage after
+        shrinking down from peak memory usage. In the end, we have some memory
+        wins and some losses, and a small win in the mean on our standard memory
+        benchmarks.
+
+        * bmalloc.xcodeproj/project.pbxproj: Removed SuperChunk.
+
+        * bmalloc/Allocator.cpp:
+        (bmalloc::Allocator::reallocate): Adopt a new Heap API for shrinking
+        large objects because it's a little more complicated than it used to be.
+
+        Don't check for equality in the XLarge case because we don't do it in
+        other cases, and it's unlikely that we'll be called for no reason.
+
+        * bmalloc/BumpAllocator.h:
+        (bmalloc::BumpAllocator::allocate): Don't ASSERT isSmall because that's
+        an old concept from when small and large objects were in distinct memory
+        regions.
+
+        * bmalloc/Deallocator.cpp:
+        (bmalloc::Deallocator::deallocateSlowCase): Large objects are not
+        segregated anymore.
+
+        (bmalloc::Deallocator::deallocateLarge): Deleted.
+
+        * bmalloc/Deallocator.h:
+        (bmalloc::Deallocator::deallocateFastCase): Don't ASSERT isSmall(). See
+        above.
+
+        * bmalloc/Heap.cpp:
+        (bmalloc::Heap::scavenge):
+        (bmalloc::Heap::scavengeSmallPage):
+        (bmalloc::Heap::scavengeSmallPages): New helpers for returning cached
+        small pages to the large object heap.
+
+        (bmalloc::Heap::allocateSmallPage): Allocate small pages from the large
+        object heap. This is how we accomplish sharing.
+
+        (bmalloc::Heap::deallocateSmallLine): Handle large objects since we can
+        encounter them on this code path now.
+
+        (bmalloc::Heap::splitAndAllocate): Fixed a bug where we would sometimes
+        not split even though we could.
+
+        Allocating a large object also requires ref'ing its small line so that
+        we can alias memory between small and large objects.
+
+        (bmalloc::Heap::allocateLarge): Return cached small pages before
+        allocating a large object that would fit in a cached small page. This
+        allows some large allocations to reuse small object memory.
+
+        (bmalloc::Heap::shrinkLarge): New helper.
+
+        (bmalloc::Heap::deallocateLarge): Deleted.
+
+        * bmalloc/Heap.h:
+
+        * bmalloc/LargeChunk.h:
+        (bmalloc::LargeChunk::pageBegin):
+        (bmalloc::LargeChunk::pageEnd):
+        (bmalloc::LargeChunk::lines):
+        (bmalloc::LargeChunk::pages):
+        (bmalloc::LargeChunk::begin):
+        (bmalloc::LargeChunk::end):
+        (bmalloc::LargeChunk::LargeChunk):
+        (bmalloc::LargeChunk::get):
+        (bmalloc::LargeChunk::endTag):
+        (bmalloc::LargeChunk::offset):
+        (bmalloc::LargeChunk::object):
+        (bmalloc::LargeChunk::page):
+        (bmalloc::LargeChunk::line):
+        (bmalloc::SmallLine::begin):
+        (bmalloc::SmallLine::end):
+        (bmalloc::SmallPage::begin):
+        (bmalloc::SmallPage::end):
+        (bmalloc::Object::Object):
+        (bmalloc::Object::begin):
+        (bmalloc::Object::pageBegin):
+        (bmalloc::Object::line):
+        (bmalloc::Object::page): I merged all the SmallChunk metadata and code
+        into LargeChunk. Now we use a single class to track both small and large
+        metadata, so we can share memory between small and large objects.
+
+        I'm going to rename this class to Chunk in a follow-up patch.
+
+        * bmalloc/Object.h:
+        (bmalloc::Object::chunk): Updated for LargeChunk transition.
+
+        * bmalloc/ObjectType.cpp:
+        (bmalloc::objectType):
+        * bmalloc/ObjectType.h:
+        (bmalloc::isXLarge):
+        (bmalloc::isSmall): Deleted. The difference between small and large
+        objects is now stored in metadata and is not a property of their
+        virtual address range.
+
+        * bmalloc/SegregatedFreeList.h: One more entry because we cover all of
+        what used to be the super chunk in a large chunk now.
+
+        * bmalloc/Sizes.h: Removed bit masking helpers because we don't use
+        address masks to distinguish small vs large object type anymore.
+
+        * bmalloc/SmallChunk.h: Removed.
+
+        * bmalloc/SmallPage.h:
+        (bmalloc::SmallPage::SmallPage): Store object type per page because any
+        given page can be used for large objects or small objects.
+
+        * bmalloc/SuperChunk.h: Removed.
+
+        * bmalloc/VMHeap.cpp:
+        (bmalloc::VMHeap::VMHeap):
+        (bmalloc::VMHeap::allocateLargeChunk):
+        (bmalloc::VMHeap::allocateSmallChunk): Deleted.
+        (bmalloc::VMHeap::allocateSuperChunk): Deleted.
+        * bmalloc/VMHeap.h:
+        (bmalloc::VMHeap::allocateLargeObject):
+        (bmalloc::VMHeap::deallocateLargeObject):
+        (bmalloc::VMHeap::allocateSmallPage): Deleted.
+        (bmalloc::VMHeap::deallocateSmallPage): Deleted. Removed super chunk and
+        small chunk support.
+
+        * bmalloc/Zone.cpp:
+        (bmalloc::enumerator):
+        * bmalloc/Zone.h:
+        (bmalloc::Zone::largeChunks):
+        (bmalloc::Zone::addLargeChunk):
+        (bmalloc::Zone::superChunks): Deleted.
+        (bmalloc::Zone::addSuperChunk): Deleted. Removed super chunk and
+        small chunk support.
+
 2016-03-23  Geoffrey Garen  <[email protected]>
 
         bmalloc: Added an Object helper class

Modified: trunk/Source/bmalloc/bmalloc/Allocator.cpp (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/Allocator.cpp	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/Allocator.cpp	2016-03-25 18:07:31 UTC (rev 198675)
@@ -114,40 +114,33 @@
 
     size_t oldSize = 0;
     switch (objectType(object)) {
-    case Small: {
+    case ObjectType::Small: {
         size_t sizeClass = Object(object).page()->sizeClass();
         oldSize = objectSize(sizeClass);
         break;
     }
-    case Large: {
-        std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
+    case ObjectType::Large: {
         LargeObject largeObject(object);
         oldSize = largeObject.size();
 
         if (newSize < oldSize && newSize > smallMax) {
-            newSize = roundUpToMultipleOf<largeAlignment>(newSize);
             if (oldSize - newSize >= largeMin) {
-                std::pair<LargeObject, LargeObject> split = largeObject.split(newSize);
-                
-                lock.unlock();
-                m_deallocator.deallocate(split.second.begin());
-                lock.lock();
+                std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
+                newSize = roundUpToMultipleOf<largeAlignment>(newSize);
+                PerProcess<Heap>::getFastCase()->shrinkLarge(lock, largeObject, newSize);
+                return object;
             }
-            return object;
         }
         break;
     }
-    case XLarge: {
-        BASSERT(objectType(nullptr) == XLarge);
+    case ObjectType::XLarge: {
+        BASSERT(objectType(nullptr) == ObjectType::XLarge);
         if (!object)
             break;
 
         std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
         oldSize = PerProcess<Heap>::getFastCase()->xLargeSize(lock, object);
 
-        if (newSize == oldSize)
-            return object;
-
         if (newSize < oldSize && newSize > largeMax) {
             PerProcess<Heap>::getFastCase()->shrinkXLarge(lock, Range(object, oldSize), newSize);
             return object;

Modified: trunk/Source/bmalloc/bmalloc/BumpAllocator.h (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/BumpAllocator.h	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/BumpAllocator.h	2016-03-25 18:07:31 UTC (rev 198675)
@@ -76,7 +76,6 @@
     --m_remaining;
     char* result = m_ptr;
     m_ptr += m_size;
-    BASSERT(isSmall(result));
     return result;
 }
 

Modified: trunk/Source/bmalloc/bmalloc/Deallocator.cpp (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/Deallocator.cpp	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/Deallocator.cpp	2016-03-25 18:07:31 UTC (rev 198675)
@@ -30,7 +30,6 @@
 #include "Inline.h"
 #include "Object.h"
 #include "PerProcess.h"
-#include "SmallChunk.h"
 #include <algorithm>
 #include <cstdlib>
 #include <sys/mman.h>
@@ -60,12 +59,6 @@
         processObjectLog();
 }
 
-void Deallocator::deallocateLarge(void* object)
-{
-    std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-    PerProcess<Heap>::getFastCase()->deallocateLarge(lock, object);
-}
-
 void Deallocator::deallocateXLarge(void* object)
 {
     std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
@@ -97,20 +90,15 @@
         return;
     }
 
-    BASSERT(objectType(nullptr) == XLarge);
     if (!object)
         return;
 
-    if (isSmall(object)) {
-        processObjectLog();
-        m_objectLog.push(object);
-        return;
-    }
+    if (isXLarge(object))
+        return deallocateXLarge(object);
 
-    if (!isXLarge(object))
-        return deallocateLarge(object);
-    
-    return deallocateXLarge(object);
+    BASSERT(m_objectLog.size() == m_objectLog.capacity());
+    processObjectLog();
+    m_objectLog.push(object);
 }
 
 } // namespace bmalloc

Modified: trunk/Source/bmalloc/bmalloc/Deallocator.h (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/Deallocator.h	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/Deallocator.h	2016-03-25 18:07:31 UTC (rev 198675)
@@ -51,7 +51,6 @@
     bool deallocateFastCase(void*);
     void deallocateSlowCase(void*);
 
-    void deallocateLarge(void*);
     void deallocateXLarge(void*);
 
     FixedVector<void*, deallocatorLogCapacity> m_objectLog;
@@ -60,11 +59,10 @@
 
 inline bool Deallocator::deallocateFastCase(void* object)
 {
-    if (!isSmall(object))
+    BASSERT(isXLarge(nullptr));
+    if (isXLarge(object))
         return false;
 
-    BASSERT(object);
-
     if (m_objectLog.size() == m_objectLog.capacity())
         return false;
 

Modified: trunk/Source/bmalloc/bmalloc/Heap.cpp (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/Heap.cpp	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/Heap.cpp	2016-03-25 18:07:31 UTC (rev 198675)
@@ -28,7 +28,6 @@
 #include "LargeChunk.h"
 #include "LargeObject.h"
 #include "PerProcess.h"
-#include "SmallChunk.h"
 #include "SmallLine.h"
 #include "SmallPage.h"
 #include <thread>
@@ -84,21 +83,36 @@
 {
     waitUntilFalse(lock, sleepDuration, m_isAllocatingPages);
 
-    scavengeSmallPages(lock, sleepDuration);
+    lock.unlock();
+    {
+        std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
+        scavengeSmallPages(lock);
+    }
+    lock.lock();
+
     scavengeLargeObjects(lock, sleepDuration);
     scavengeXLargeObjects(lock, sleepDuration);
 
     sleep(lock, sleepDuration);
 }
 
-void Heap::scavengeSmallPages(std::unique_lock<StaticMutex>& lock, std::chrono::milliseconds sleepDuration)
+void Heap::scavengeSmallPage(std::lock_guard<StaticMutex>& lock)
 {
-    while (!m_smallPages.isEmpty()) {
-        m_vmHeap.deallocateSmallPage(lock, m_smallPages.pop());
-        waitUntilFalse(lock, sleepDuration, m_isAllocatingPages);
-    }
+    SmallPage* page = m_smallPages.pop();
+
+    // Transform small object page back into a large object.
+    page->setObjectType(ObjectType::Large);
+
+    LargeObject largeObject(page->begin()->begin());
+    deallocateLarge(lock, largeObject);
 }
 
+void Heap::scavengeSmallPages(std::lock_guard<StaticMutex>& lock)
+{
+    while (!m_smallPages.isEmpty())
+        scavengeSmallPage(lock);
+}
+
 void Heap::scavengeLargeObjects(std::unique_lock<StaticMutex>& lock, std::chrono::milliseconds sleepDuration)
 {
     while (LargeObject largeObject = m_largeObjects.takeGreedy()) {
@@ -179,26 +193,34 @@
 {
     if (!m_smallPagesWithFreeLines[sizeClass].isEmpty())
         return m_smallPagesWithFreeLines[sizeClass].popFront();
+    
+    if (!m_smallPages.isEmpty()) {
+        SmallPage* page = m_smallPages.pop();
+        page->setSizeClass(sizeClass);
+        return page;
+    }
 
-    SmallPage* page = [this, &lock]() {
-        if (!m_smallPages.isEmpty())
-            return m_smallPages.pop();
+    size_t unalignedSize = largeMin + vmPageSize - largeAlignment + vmPageSize;
+    LargeObject largeObject = allocateLarge(lock, vmPageSize, vmPageSize, unalignedSize);
 
-        m_isAllocatingPages = true;
-        SmallPage* page = m_vmHeap.allocateSmallPage(lock);
-        return page;
-    }();
+    // Transform our large object into a small object page. We deref here
+    // because our small objects will keep their own refcounts on the line.
+    Object object(largeObject.begin());
+    object.line()->deref(lock);
+    object.page()->setObjectType(ObjectType::Small);
 
-    page->setSizeClass(sizeClass);
-    return page;
+    object.page()->setSizeClass(sizeClass);
+    return object.page();
 }
 
 void Heap::deallocateSmallLine(std::lock_guard<StaticMutex>& lock, Object object)
 {
     BASSERT(!object.line()->refCount(lock));
     SmallPage* page = object.page();
+    if (page->objectType() == ObjectType::Large)
+        return deallocateLarge(lock, LargeObject(object.begin()));
+
     page->deref(lock);
-
     if (!page->hasFreeLines(lock)) {
         page->setHasFreeLines(lock, true);
         m_smallPagesWithFreeLines[page->sizeClass()].push(page);
@@ -215,19 +237,22 @@
     m_scavenger.run();
 }
 
-inline LargeObject& Heap::splitAndAllocate(LargeObject& largeObject, size_t size)
+inline LargeObject& Heap::splitAndAllocate(std::lock_guard<StaticMutex>& lock, LargeObject& largeObject, size_t size)
 {
     BASSERT(largeObject.isFree());
 
     LargeObject nextLargeObject;
 
-    if (largeObject.size() - size > largeMin) {
+    if (largeObject.size() - size >= largeMin) {
         std::pair<LargeObject, LargeObject> split = largeObject.split(size);
         largeObject = split.first;
         nextLargeObject = split.second;
     }
 
     largeObject.setFree(false);
+    Object object(largeObject.begin());
+    object.line()->ref(lock);
+    BASSERT(object.page()->objectType() == ObjectType::Large);
 
     if (nextLargeObject) {
         BASSERT(!nextLargeObject.nextCanMerge());
@@ -237,7 +262,7 @@
     return largeObject;
 }
 
-inline LargeObject& Heap::splitAndAllocate(LargeObject& largeObject, size_t alignment, size_t size)
+inline LargeObject& Heap::splitAndAllocate(std::lock_guard<StaticMutex>& lock, LargeObject& largeObject, size_t alignment, size_t size)
 {
     LargeObject prevLargeObject;
     LargeObject nextLargeObject;
@@ -252,13 +277,16 @@
 
     BASSERT(largeObject.isFree());
 
-    if (largeObject.size() - size > largeMin) {
+    if (largeObject.size() - size >= largeMin) {
         std::pair<LargeObject, LargeObject> split = largeObject.split(size);
         largeObject = split.first;
         nextLargeObject = split.second;
     }
 
     largeObject.setFree(false);
+    Object object(largeObject.begin());
+    object.line()->ref(lock);
+    BASSERT(object.page()->objectType() == ObjectType::Large);
 
     if (prevLargeObject) {
         LargeObject merged = prevLargeObject.merge();
@@ -278,6 +306,9 @@
     BASSERT(size <= largeMax);
     BASSERT(size >= largeMin);
     BASSERT(size == roundUpToMultipleOf<largeAlignment>(size));
+    
+    if (size <= vmPageSize)
+        scavengeSmallPages(lock);
 
     LargeObject largeObject = m_largeObjects.take(size);
     if (!largeObject)
@@ -290,7 +321,7 @@
         largeObject.setVMState(VMState::Physical);
     }
 
-    largeObject = splitAndAllocate(largeObject, size);
+    largeObject = splitAndAllocate(lock, largeObject, size);
 
     return largeObject.begin();
 }
@@ -307,6 +338,9 @@
     BASSERT(alignment >= largeAlignment);
     BASSERT(isPowerOfTwo(alignment));
 
+    if (size <= vmPageSize)
+        scavengeSmallPages(lock);
+
     LargeObject largeObject = m_largeObjects.take(alignment, size, unalignedSize);
     if (!largeObject)
         largeObject = m_vmHeap.allocateLargeObject(lock, alignment, size, unalignedSize);
@@ -318,14 +352,21 @@
         largeObject.setVMState(VMState::Physical);
     }
 
-    largeObject = splitAndAllocate(largeObject, alignment, size);
+    largeObject = splitAndAllocate(lock, largeObject, alignment, size);
 
     return largeObject.begin();
 }
 
+void Heap::shrinkLarge(std::lock_guard<StaticMutex>& lock, LargeObject& largeObject, size_t newSize)
+{
+    std::pair<LargeObject, LargeObject> split = largeObject.split(newSize);
+    deallocateLarge(lock, split.second);
+}
+
 void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, const LargeObject& largeObject)
 {
     BASSERT(!largeObject.isFree());
+    BASSERT(Object(largeObject.begin()).page()->objectType() == ObjectType::Large);
     largeObject.setFree(true);
     
     LargeObject merged = largeObject.merge();
@@ -333,12 +374,6 @@
     m_scavenger.run();
 }
 
-void Heap::deallocateLarge(std::lock_guard<StaticMutex>& lock, void* object)
-{
-    LargeObject largeObject(object);
-    deallocateLarge(lock, largeObject);
-}
-
 void* Heap::allocateXLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size)
 {
     void* result = tryAllocateXLarge(lock, alignment, size);

Modified: trunk/Source/bmalloc/bmalloc/Heap.h (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/Heap.h	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/Heap.h	2016-03-25 18:07:31 UTC (rev 198675)
@@ -33,7 +33,6 @@
 #include "Mutex.h"
 #include "Object.h"
 #include "SegregatedFreeList.h"
-#include "SmallChunk.h"
 #include "SmallLine.h"
 #include "SmallPage.h"
 #include "VMHeap.h"
@@ -59,7 +58,7 @@
 
     void* allocateLarge(std::lock_guard<StaticMutex>&, size_t);
     void* allocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t, size_t unalignedSize);
-    void deallocateLarge(std::lock_guard<StaticMutex>&, void*);
+    void shrinkLarge(std::lock_guard<StaticMutex>&, LargeObject&, size_t);
 
     void* allocateXLarge(std::lock_guard<StaticMutex>&, size_t);
     void* allocateXLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
@@ -80,8 +79,8 @@
     void deallocateSmallLine(std::lock_guard<StaticMutex>&, Object);
     void deallocateLarge(std::lock_guard<StaticMutex>&, const LargeObject&);
 
-    LargeObject& splitAndAllocate(LargeObject&, size_t);
-    LargeObject& splitAndAllocate(LargeObject&, size_t, size_t);
+    LargeObject& splitAndAllocate(std::lock_guard<StaticMutex>&, LargeObject&, size_t);
+    LargeObject& splitAndAllocate(std::lock_guard<StaticMutex>&, LargeObject&, size_t, size_t);
     void mergeLarge(BeginTag*&, EndTag*&, Range&);
     void mergeLargeLeft(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
     void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
@@ -89,7 +88,8 @@
     XLargeRange splitAndAllocate(XLargeRange&, size_t alignment, size_t);
 
     void concurrentScavenge();
-    void scavengeSmallPages(std::unique_lock<StaticMutex>&, std::chrono::milliseconds);
+    void scavengeSmallPage(std::lock_guard<StaticMutex>&);
+    void scavengeSmallPages(std::lock_guard<StaticMutex>&);
     void scavengeLargeObjects(std::unique_lock<StaticMutex>&, std::chrono::milliseconds);
     void scavengeXLargeObjects(std::unique_lock<StaticMutex>&, std::chrono::milliseconds);
 

Modified: trunk/Source/bmalloc/bmalloc/LargeChunk.h (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/LargeChunk.h	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/LargeChunk.h	2016-03-25 18:07:31 UTC (rev 198675)
@@ -28,8 +28,11 @@
 
 #include "BeginTag.h"
 #include "EndTag.h"
+#include "Object.h"
 #include "ObjectType.h"
 #include "Sizes.h"
+#include "SmallLine.h"
+#include "SmallPage.h"
 #include "VMAllocate.h"
 #include <array>
 
@@ -37,12 +40,25 @@
 
 class LargeChunk {
 public:
-    LargeChunk();
     static LargeChunk* get(void*);
 
     static BeginTag* beginTag(void*);
     static EndTag* endTag(void*, size_t);
 
+    LargeChunk(std::lock_guard<StaticMutex>&);
+
+    size_t offset(void*);
+
+    void* object(size_t offset);
+    SmallPage* page(size_t offset);
+    SmallLine* line(size_t offset);
+
+    SmallPage* pageBegin() { return Object(m_memory).page(); }
+    SmallPage* pageEnd() { return m_pages.end(); }
+    
+    SmallLine* lines() { return m_lines.begin(); }
+    SmallPage* pages() { return m_pages.begin(); }
+
     char* begin() { return m_memory; }
     char* end() { return reinterpret_cast<char*>(this) + largeChunkSize; }
 
@@ -64,17 +80,21 @@
     //
     // We use the X's for boundary tags and the O's for edge sentinels.
 
+    std::array<SmallLine, largeChunkSize / smallLineSize> m_lines;
+    std::array<SmallPage, largeChunkSize / vmPageSize> m_pages;
     std::array<BoundaryTag, boundaryTagCount> m_boundaryTags;
-    char m_memory[] __attribute__((aligned(largeAlignment+0)));
+    char m_memory[] __attribute__((aligned(2 * smallMax + 0)));
 };
 
-static_assert(largeChunkMetadataSize == sizeof(LargeChunk), "Our largeChunkMetadataSize math in Sizes.h is wrong");
-static_assert(largeChunkMetadataSize + largeObjectMax == largeChunkSize, "largeObjectMax is too small or too big");
+static_assert(sizeof(LargeChunk) + largeMax <= largeChunkSize, "largeMax is too big");
+static_assert(
+    sizeof(LargeChunk) % vmPageSize + 2 * smallMax <= vmPageSize,
+    "the first page of object memory in a small chunk must be able to allocate smallMax");
 
-inline LargeChunk::LargeChunk()
+inline LargeChunk::LargeChunk(std::lock_guard<StaticMutex>& lock)
 {
     Range range(begin(), end() - begin());
-    BASSERT(range.size() == largeObjectMax);
+    BASSERT(range.size() <= largeObjectMax);
 
     BeginTag* beginTag = LargeChunk::beginTag(range.begin());
     beginTag->setRange(range);
@@ -97,11 +117,17 @@
     BASSERT(rightSentinel >= m_boundaryTags.begin());
     BASSERT(rightSentinel < m_boundaryTags.end());
     rightSentinel->initSentinel();
+
+    // Track the memory used for metadata by allocating imaginary objects.
+    for (char* it = reinterpret_cast<char*>(this); it < m_memory; it += smallLineSize) {
+        Object object(it);
+        object.line()->ref(lock);
+        object.page()->ref(lock);
+    }
 }
 
 inline LargeChunk* LargeChunk::get(void* object)
 {
-    BASSERT(!isSmall(object));
     return static_cast<LargeChunk*>(mask(object, largeChunkMask));
 }
 
@@ -114,8 +140,6 @@
 
 inline EndTag* LargeChunk::endTag(void* object, size_t size)
 {
-    BASSERT(!isSmall(object));
-
     LargeChunk* chunk = get(object);
     char* end = static_cast<char*>(object) + size;
 
@@ -127,6 +151,89 @@
     return static_cast<EndTag*>(&chunk->m_boundaryTags[boundaryTagNumber]);
 }
 
+inline size_t LargeChunk::offset(void* object)
+{
+    BASSERT(object >= this);
+    BASSERT(object < reinterpret_cast<char*>(this) + largeChunkSize);
+    return static_cast<char*>(object) - reinterpret_cast<char*>(this);
+}
+
+inline void* LargeChunk::object(size_t offset)
+{
+    return reinterpret_cast<char*>(this) + offset;
+}
+
+inline SmallPage* LargeChunk::page(size_t offset)
+{
+    size_t pageNumber = offset / vmPageSize;
+    return &m_pages[pageNumber];
+}
+
+inline SmallLine* LargeChunk::line(size_t offset)
+{
+    size_t lineNumber = offset / smallLineSize;
+    return &m_lines[lineNumber];
+}
+
+inline char* SmallLine::begin()
+{
+    LargeChunk* chunk = LargeChunk::get(this);
+    size_t lineNumber = this - chunk->lines();
+    size_t offset = lineNumber * smallLineSize;
+    return &reinterpret_cast<char*>(chunk)[offset];
+}
+
+inline char* SmallLine::end()
+{
+    return begin() + smallLineSize;
+}
+
+inline SmallLine* SmallPage::begin()
+{
+    LargeChunk* chunk = LargeChunk::get(this);
+    size_t pageNumber = this - chunk->pages();
+    size_t lineNumber = pageNumber * smallLineCount;
+    return &chunk->lines()[lineNumber];
+}
+
+inline SmallLine* SmallPage::end()
+{
+    return begin() + smallLineCount;
+}
+
+inline Object::Object(void* object)
+    : m_chunk(LargeChunk::get(object))
+    , m_offset(m_chunk->offset(object))
+{
+}
+
+inline Object::Object(LargeChunk* chunk, void* object)
+    : m_chunk(chunk)
+    , m_offset(m_chunk->offset(object))
+{
+    BASSERT(chunk == LargeChunk::get(object));
+}
+
+inline void* Object::begin()
+{
+    return m_chunk->object(m_offset);
+}
+
+inline void* Object::pageBegin()
+{
+    return m_chunk->object(roundDownToMultipleOf(vmPageSize, m_offset));
+}
+
+inline SmallLine* Object::line()
+{
+    return m_chunk->line(m_offset);
+}
+
+inline SmallPage* Object::page()
+{
+    return m_chunk->page(m_offset);
+}
+
 }; // namespace bmalloc
 
 #endif // LargeChunk

Modified: trunk/Source/bmalloc/bmalloc/Object.h (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/Object.h	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/Object.h	2016-03-25 18:07:31 UTC (rev 198675)
@@ -28,22 +28,24 @@
 
 namespace bmalloc {
 
-class SmallChunk;
+class LargeChunk;
 class SmallLine;
 class SmallPage;
 
 class Object {
 public:
     Object(void*);
-    Object(SmallChunk*, void*);
+    Object(LargeChunk*, void*);
     
-    SmallChunk* chunk() { return m_chunk; }
+    LargeChunk* chunk() { return m_chunk; }
+    void* begin();
+    void* pageBegin();
 
     SmallLine* line();
     SmallPage* page();
 
 private:
-    SmallChunk* m_chunk;
+    LargeChunk* m_chunk;
     size_t m_offset;
 };
 

Modified: trunk/Source/bmalloc/bmalloc/ObjectType.cpp (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/ObjectType.cpp	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/ObjectType.cpp	2016-03-25 18:07:31 UTC (rev 198675)
@@ -23,20 +23,19 @@
  * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
  */
 
-#include "LargeChunk.h"
 #include "ObjectType.h"
 
+#include "LargeChunk.h"
+#include "Object.h"
+
 namespace bmalloc {
 
 ObjectType objectType(void* object)
 {
-    if (isSmall(object))
-        return Small;
+    if (isXLarge(object))
+        return ObjectType::XLarge;
     
-    if (!isXLarge(object))
-        return Large;
-    
-    return XLarge;
+    return Object(object).page()->objectType();
 }
 
 } // namespace bmalloc

Modified: trunk/Source/bmalloc/bmalloc/ObjectType.h (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/ObjectType.h	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/ObjectType.h	2016-03-25 18:07:31 UTC (rev 198675)
@@ -31,15 +31,10 @@
 
 namespace bmalloc {
 
-enum ObjectType { Small, Large, XLarge };
+enum class ObjectType : unsigned char { Small, Large, XLarge };
 
 ObjectType objectType(void*);
 
-inline bool isSmall(void* object)
-{
-    return test(object, smallMask);
-}
-
 inline bool isXLarge(void* object)
 {
     return !test(object, ~xLargeMask);

Modified: trunk/Source/bmalloc/bmalloc/SegregatedFreeList.h (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/SegregatedFreeList.h	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/SegregatedFreeList.h	2016-03-25 18:07:31 UTC (rev 198675)
@@ -61,7 +61,7 @@
     FreeList& select(size_t);
 
     VMState::HasPhysical m_hasPhysical;
-    std::array<FreeList, 15> m_freeLists;
+    std::array<FreeList, 16> m_freeLists;
 };
 
 } // namespace bmalloc

Modified: trunk/Source/bmalloc/bmalloc/Sizes.h (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/Sizes.h	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/Sizes.h	2016-03-25 18:07:31 UTC (rev 198675)
@@ -51,42 +51,28 @@
 #else
     static const size_t vmPageSize = 4 * kB;
 #endif
-    static const size_t vmPageMask = ~(vmPageSize - 1);
     
-    static const size_t superChunkSize = 2 * MB;
-    static const size_t superChunkMask = ~(superChunkSize - 1);
-
-    static const size_t smallChunkSize = superChunkSize / 2;
-    static const size_t smallChunkOffset = superChunkSize / 2;
-    static const size_t smallChunkMask = ~(smallChunkSize - 1ul);
-
     static const size_t smallLineSize = 256;
     static const size_t smallLineCount = vmPageSize / smallLineSize;
 
     static const size_t smallMax = 1 * kB;
     static const size_t maskSizeClassMax = 512;
 
-    static const size_t largeChunkSize = superChunkSize / 2;
-    static const size_t largeChunkOffset = 0;
+    static const size_t largeChunkSize = 2 * MB;
     static const size_t largeChunkMask = ~(largeChunkSize - 1ul);
 
     static const size_t largeAlignment = 64;
     static const size_t largeMin = smallMax;
-    static const size_t largeChunkMetadataSize = 4 * kB; // sizeof(LargeChunk)
-    static const size_t largeObjectMax = largeChunkSize - largeChunkMetadataSize;
+    static const size_t largeObjectMax = largeChunkSize;
     static const size_t largeMax = largeObjectMax / 2;
 
-    static const size_t xLargeAlignment = superChunkSize;
+    static const size_t xLargeAlignment = largeChunkSize;
     static const size_t xLargeMask = ~(xLargeAlignment - 1);
     static const size_t xLargeMax = std::numeric_limits<size_t>::max() - xLargeAlignment; // Make sure that rounding up to xLargeAlignment does not overflow.
 
     static const size_t freeListSearchDepth = 16;
     static const size_t freeListGrowFactor = 2;
 
-    static const uintptr_t typeMask = (superChunkSize - 1) & ~((superChunkSize / 2) - 1); // 2 taggable chunks
-    static const uintptr_t largeMask = typeMask & (superChunkSize + largeChunkOffset);
-    static const uintptr_t smallMask = typeMask & (superChunkSize + smallChunkOffset);
-
     static const size_t deallocatorLogCapacity = 256;
     static const size_t bumpRangeCacheCapacity = 3;
     

Deleted: trunk/Source/bmalloc/bmalloc/SmallChunk.h (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/SmallChunk.h	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/SmallChunk.h	2016-03-25 18:07:31 UTC (rev 198675)
@@ -1,157 +0,0 @@
-/*
- * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef SmallChunk_h
-#define SmallChunk_h
-
-#include "Object.h"
-#include "Sizes.h"
-#include "SmallLine.h"
-#include "SmallPage.h"
-#include "VMAllocate.h"
-
-namespace bmalloc {
-
-class SmallChunk {
-public:
-    SmallChunk(std::lock_guard<StaticMutex>&);
-
-    static SmallChunk* get(void*);
-    size_t offset(void*);
-
-    void* object(size_t offset);
-    SmallPage* page(size_t offset);
-    SmallLine* line(size_t offset);
-
-    SmallPage* begin() { return Object(m_memory).page(); }
-    SmallPage* end() { return m_pages.end(); }
-    
-    SmallLine* lines() { return m_lines.begin(); }
-    SmallPage* pages() { return m_pages.begin(); }
-    
-private:
-    std::array<SmallLine, smallChunkSize / smallLineSize> m_lines;
-    std::array<SmallPage, smallChunkSize / vmPageSize> m_pages;
-    char m_memory[] __attribute__((aligned(2 * smallMax + 0)));
-};
-
-static_assert(!(vmPageSize % smallLineSize), "vmPageSize must be an even multiple of line size");
-static_assert(!(smallChunkSize % smallLineSize), "chunk size must be an even multiple of line size");
-static_assert(
-    sizeof(SmallChunk) % vmPageSize + 2 * smallMax <= vmPageSize,
-    "the first page of object memory in a small chunk must be able to allocate smallMax");
-
-inline SmallChunk::SmallChunk(std::lock_guard<StaticMutex>& lock)
-{
-    // Track the memory used for metadata by allocating imaginary objects.
-    for (char* it = reinterpret_cast<char*>(this); it < m_memory; it += smallLineSize) {
-        Object object(it);
-        object.line()->ref(lock);
-        object.page()->ref(lock);
-    }
-}
-
-inline SmallChunk* SmallChunk::get(void* object)
-{
-    BASSERT(isSmall(object));
-    return static_cast<SmallChunk*>(mask(object, smallChunkMask));
-}
-
-inline size_t SmallChunk::offset(void* object)
-{
-    BASSERT(object >= this);
-    BASSERT(object < reinterpret_cast<char*>(this) + smallChunkSize);
-    return static_cast<char*>(object) - reinterpret_cast<char*>(this);
-}
-
-inline void* SmallChunk::object(size_t offset)
-{
-    return reinterpret_cast<char*>(this) + offset;
-}
-
-inline SmallPage* SmallChunk::page(size_t offset)
-{
-    size_t pageNumber = offset / vmPageSize;
-    return &m_pages[pageNumber];
-}
-
-inline SmallLine* SmallChunk::line(size_t offset)
-{
-    size_t lineNumber = offset / smallLineSize;
-    return &m_lines[lineNumber];
-}
-
-inline char* SmallLine::begin()
-{
-    SmallChunk* chunk = SmallChunk::get(this);
-    size_t lineNumber = this - chunk->lines();
-    size_t offset = lineNumber * smallLineSize;
-    return &reinterpret_cast<char*>(chunk)[offset];
-}
-
-inline char* SmallLine::end()
-{
-    return begin() + smallLineSize;
-}
-
-inline SmallLine* SmallPage::begin()
-{
-    SmallChunk* chunk = SmallChunk::get(this);
-    size_t pageNumber = this - chunk->pages();
-    size_t lineNumber = pageNumber * smallLineCount;
-    return &chunk->lines()[lineNumber];
-}
-
-inline SmallLine* SmallPage::end()
-{
-    return begin() + smallLineCount;
-}
-
-inline Object::Object(void* object)
-    : m_chunk(SmallChunk::get(object))
-    , m_offset(m_chunk->offset(object))
-{
-}
-
-inline Object::Object(SmallChunk* chunk, void* object)
-    : m_chunk(chunk)
-    , m_offset(m_chunk->offset(object))
-{
-    BASSERT(chunk == SmallChunk::get(object));
-}
-
-inline SmallLine* Object::line()
-{
-    return m_chunk->line(m_offset);
-}
-
-inline SmallPage* Object::page()
-{
-    return m_chunk->page(m_offset);
-}
-
-}; // namespace bmalloc
-
-#endif // Chunk

Modified: trunk/Source/bmalloc/bmalloc/SmallPage.h (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/SmallPage.h	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/SmallPage.h	2016-03-25 18:07:31 UTC (rev 198675)
@@ -38,6 +38,7 @@
 public:
     SmallPage()
         : m_hasFreeLines(true)
+        , m_objectType(ObjectType::Large)
     {
     }
 
@@ -48,6 +49,9 @@
     size_t sizeClass() { return m_sizeClass; }
     void setSizeClass(size_t sizeClass) { m_sizeClass = sizeClass; }
     
+    ObjectType objectType() const { return m_objectType; }
+    void setObjectType(ObjectType objectType) { m_objectType = objectType; }
+
     bool hasFreeLines(std::lock_guard<StaticMutex>&) const { return m_hasFreeLines; }
     void setHasFreeLines(std::lock_guard<StaticMutex>&, bool hasFreeLines) { m_hasFreeLines = hasFreeLines; }
     
@@ -58,6 +62,7 @@
     unsigned char m_hasFreeLines: 1;
     unsigned char m_refCount: 7;
     unsigned char m_sizeClass;
+    ObjectType m_objectType;
 
 static_assert(
     sizeClassCount <= std::numeric_limits<decltype(m_sizeClass)>::max(),

Deleted: trunk/Source/bmalloc/bmalloc/SuperChunk.h (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/SuperChunk.h	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/SuperChunk.h	2016-03-25 18:07:31 UTC (rev 198675)
@@ -1,61 +0,0 @@
-/*
- * Copyright (C) 2015 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef SuperChunk_h
-#define SuperChunk_h
-
-#include "LargeChunk.h"
-#include "SmallChunk.h"
-
-namespace bmalloc {
-
-class SuperChunk {
-public:
-    SuperChunk();
-
-    void* smallChunk();
-    void* largeChunk();
-};
-
-inline SuperChunk::SuperChunk()
-{
-    BASSERT(!test(this, ~superChunkMask));
-    BASSERT(!test(smallChunk(), ~smallChunkMask));
-    BASSERT(!test(largeChunk(), ~largeChunkMask));
-}
-
-inline void* SuperChunk::smallChunk()
-{
-    return reinterpret_cast<char*>(this) + smallChunkOffset;
-}
-
-inline void* SuperChunk::largeChunk()
-{
-    return reinterpret_cast<char*>(this) + largeChunkOffset;
-}
-
-} // namespace bmalloc
-
-#endif // SuperChunk_h

Modified: trunk/Source/bmalloc/bmalloc/VMHeap.cpp (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/VMHeap.cpp	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/VMHeap.cpp	2016-03-25 18:07:31 UTC (rev 198675)
@@ -25,7 +25,6 @@
 
 #include "LargeObject.h"
 #include "PerProcess.h"
-#include "SuperChunk.h"
 #include "VMHeap.h"
 #include <thread>
 
@@ -36,36 +35,16 @@
 {
 }
 
-void VMHeap::allocateSmallChunk(std::lock_guard<StaticMutex>& lock)
-{
-    if (!m_smallChunks.size())
-        allocateSuperChunk(lock);
-
-    // We initialize chunks lazily to avoid dirtying their metadata pages.
-    SmallChunk* smallChunk = new (m_smallChunks.pop()->smallChunk()) SmallChunk(lock);
-    for (auto* it = smallChunk->begin(); it < smallChunk->end(); ++it)
-        m_smallPages.push(it);
-}
-
 LargeObject VMHeap::allocateLargeChunk(std::lock_guard<StaticMutex>& lock)
 {
-    if (!m_largeChunks.size())
-        allocateSuperChunk(lock);
+    LargeChunk* largeChunk =
+        new (vmAllocate(largeChunkSize, largeChunkSize)) LargeChunk(lock);
 
-    // We initialize chunks lazily to avoid dirtying their metadata pages.
-    LargeChunk* largeChunk = new (m_largeChunks.pop()->largeChunk()) LargeChunk;
-    return LargeObject(largeChunk->begin());
-}
-
-void VMHeap::allocateSuperChunk(std::lock_guard<StaticMutex>&)
-{
-    SuperChunk* superChunk =
-        new (vmAllocate(superChunkSize, superChunkSize)) SuperChunk;
-    m_smallChunks.push(superChunk);
-    m_largeChunks.push(superChunk);
 #if BOS(DARWIN)
-    m_zone.addSuperChunk(superChunk);
+    m_zone.addLargeChunk(largeChunk);
 #endif
+
+    return LargeObject(largeChunk->begin());
 }
 
 } // namespace bmalloc

Modified: trunk/Source/bmalloc/bmalloc/VMHeap.h (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/VMHeap.h	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/VMHeap.h	2016-03-25 18:07:31 UTC (rev 198675)
@@ -32,7 +32,6 @@
 #include "LargeObject.h"
 #include "Range.h"
 #include "SegregatedFreeList.h"
-#include "SmallChunk.h"
 #include "VMState.h"
 #include "Vector.h"
 #if BOS(DARWIN)
@@ -44,45 +43,26 @@
 class BeginTag;
 class EndTag;
 class Heap;
-class SuperChunk;
 
 class VMHeap {
 public:
     VMHeap();
 
-    SmallPage* allocateSmallPage(std::lock_guard<StaticMutex>&);
     LargeObject allocateLargeObject(std::lock_guard<StaticMutex>&, size_t);
     LargeObject allocateLargeObject(std::lock_guard<StaticMutex>&, size_t, size_t, size_t);
 
-    void deallocateSmallPage(std::unique_lock<StaticMutex>&, SmallPage*);
     void deallocateLargeObject(std::unique_lock<StaticMutex>&, LargeObject);
     
 private:
-    void allocateSmallChunk(std::lock_guard<StaticMutex>&);
     LargeObject allocateLargeChunk(std::lock_guard<StaticMutex>&);
-    void allocateSuperChunk(std::lock_guard<StaticMutex>&);
 
-    List<SmallPage> m_smallPages;
     SegregatedFreeList m_largeObjects;
 
-    Vector<SuperChunk*> m_smallChunks;
-    Vector<SuperChunk*> m_largeChunks;
-
 #if BOS(DARWIN)
     Zone m_zone;
 #endif
 };
 
-inline SmallPage* VMHeap::allocateSmallPage(std::lock_guard<StaticMutex>& lock)
-{
-    if (m_smallPages.isEmpty())
-        allocateSmallChunk(lock);
-
-    SmallPage* page = m_smallPages.pop();
-    vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
-    return page;
-}
-
 inline LargeObject VMHeap::allocateLargeObject(std::lock_guard<StaticMutex>& lock, size_t size)
 {
     if (LargeObject largeObject = m_largeObjects.take(size))
@@ -101,15 +81,6 @@
     return allocateLargeChunk(lock);
 }
 
-inline void VMHeap::deallocateSmallPage(std::unique_lock<StaticMutex>& lock, SmallPage* page)
-{
-    lock.unlock();
-    vmDeallocatePhysicalPages(page->begin()->begin(), vmPageSize);
-    lock.lock();
-    
-    m_smallPages.push(page);
-}
-
 inline void VMHeap::deallocateLargeObject(std::unique_lock<StaticMutex>& lock, LargeObject largeObject)
 {
     // Multiple threads might scavenge concurrently, meaning that new merging opportunities

Modified: trunk/Source/bmalloc/bmalloc/Zone.cpp (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/Zone.cpp	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/Zone.cpp	2016-03-25 18:07:31 UTC (rev 198675)
@@ -88,8 +88,8 @@
 static kern_return_t enumerator(task_t task, void* context, unsigned type_mask, vm_address_t zone_address, memory_reader_t reader, vm_range_recorder_t recorder)
 {
     Zone remoteZone(task, reader, zone_address);
-    for (auto* superChunk : remoteZone.superChunks()) {
-        vm_range_t range = { reinterpret_cast<vm_address_t>(superChunk), superChunkSize };
+    for (auto* largeChunk : remoteZone.largeChunks()) {
+        vm_range_t range = { reinterpret_cast<vm_address_t>(largeChunk), largeChunkSize };
 
         if ((type_mask & MALLOC_PTR_REGION_RANGE_TYPE))
             (*recorder)(task, context, MALLOC_PTR_REGION_RANGE_TYPE, &range, 1);

Modified: trunk/Source/bmalloc/bmalloc/Zone.h (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc/Zone.h	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc/Zone.h	2016-03-25 18:07:31 UTC (rev 198675)
@@ -31,7 +31,7 @@
 
 namespace bmalloc {
 
-class SuperChunk;
+class LargeChunk;
 
 class Zone : public malloc_zone_t {
 public:
@@ -41,30 +41,30 @@
     Zone();
     Zone(task_t, memory_reader_t, vm_address_t);
 
-    void addSuperChunk(SuperChunk*);
-    FixedVector<SuperChunk*, capacity>& superChunks() { return m_superChunks; }
+    void addLargeChunk(LargeChunk*);
+    FixedVector<LargeChunk*, capacity>& largeChunks() { return m_largeChunks; }
     
 private:
     // This vector has two purposes:
-    //     (1) It stores the list of SuperChunks so that we can enumerate
-    //         each SuperChunk and request that it be scanned if reachable.
-    //     (2) It roots a pointer to each SuperChunk in a global non-malloc
-    //         VM region, making each SuperChunk appear reachable, and therefore
+    //     (1) It stores the list of LargeChunks so that we can enumerate
+    //         each LargeChunk and request that it be scanned if reachable.
+    //     (2) It roots a pointer to each LargeChunk in a global non-malloc
+    //         VM region, making each LargeChunk appear reachable, and therefore
     //         ensuring that the leaks tool will scan it. (The leaks tool
     //         conservatively scans all writeable VM regions that are not malloc
     //         regions, and then scans malloc regions using the introspection API.)
     // This prevents the leaks tool from reporting false positive leaks for
     // objects pointed to from bmalloc memory -- though it also prevents the
     // leaks tool from finding any leaks in bmalloc memory.
-    FixedVector<SuperChunk*, capacity> m_superChunks;
+    FixedVector<LargeChunk*, capacity> m_largeChunks;
 };
 
-inline void Zone::addSuperChunk(SuperChunk* superChunk)
+inline void Zone::addLargeChunk(LargeChunk* largeChunk)
 {
-    if (m_superChunks.size() == m_superChunks.capacity())
+    if (m_largeChunks.size() == m_largeChunks.capacity())
         return;
     
-    m_superChunks.push(superChunk);
+    m_largeChunks.push(largeChunk);
 }
 
 } // namespace bmalloc

Modified: trunk/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj (198674 => 198675)


--- trunk/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj	2016-03-25 17:37:48 UTC (rev 198674)
+++ trunk/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj	2016-03-25 18:07:31 UTC (rev 198675)
@@ -17,7 +17,6 @@
 		143CB81D19022BC900B16A45 /* StaticMutex.h in Headers */ = {isa = PBXBuildFile; fileRef = 143CB81B19022BC900B16A45 /* StaticMutex.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		143EF9AF1A9FABF6004F5C77 /* FreeList.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 143EF9AD1A9FABF6004F5C77 /* FreeList.cpp */; };
 		143EF9B01A9FABF6004F5C77 /* FreeList.h in Headers */ = {isa = PBXBuildFile; fileRef = 143EF9AE1A9FABF6004F5C77 /* FreeList.h */; settings = {ATTRIBUTES = (Private, ); }; };
-		1440AFC91A95142400837FAA /* SuperChunk.h in Headers */ = {isa = PBXBuildFile; fileRef = 1440AFC81A95142400837FAA /* SuperChunk.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		1440AFCB1A95261100837FAA /* Zone.h in Headers */ = {isa = PBXBuildFile; fileRef = 1440AFCA1A95261100837FAA /* Zone.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		1440AFCD1A9527AF00837FAA /* Zone.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1440AFCC1A9527AF00837FAA /* Zone.cpp */; };
 		1448C30018F3754600502839 /* mbmalloc.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1448C2FF18F3754300502839 /* mbmalloc.cpp */; };
@@ -42,7 +41,6 @@
 		14DD789918F48D4A00950702 /* Cache.h in Headers */ = {isa = PBXBuildFile; fileRef = 144469E517A46BFE00F9EA1D /* Cache.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		14DD789A18F48D4A00950702 /* Deallocator.h in Headers */ = {isa = PBXBuildFile; fileRef = 145F685A179DC90200D65598 /* Deallocator.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		14DD789C18F48D4A00950702 /* BumpAllocator.h in Headers */ = {isa = PBXBuildFile; fileRef = 1413E462189DE1CD00546D68 /* BumpAllocator.h */; settings = {ATTRIBUTES = (Private, ); }; };
-		14DD78BB18F48D6B00950702 /* SmallChunk.h in Headers */ = {isa = PBXBuildFile; fileRef = 147AAA8C18CD36A7002201E4 /* SmallChunk.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		14DD78BC18F48D6B00950702 /* SmallLine.h in Headers */ = {isa = PBXBuildFile; fileRef = 1452478618BC757C00F80098 /* SmallLine.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		14DD78BD18F48D6B00950702 /* SmallPage.h in Headers */ = {isa = PBXBuildFile; fileRef = 143E29ED18CAE90500FE8A0F /* SmallPage.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		14DD78C518F48D7500950702 /* Algorithm.h in Headers */ = {isa = PBXBuildFile; fileRef = 1421A87718EE462A00B4DD68 /* Algorithm.h */; settings = {ATTRIBUTES = (Private, ); }; };
@@ -95,7 +93,6 @@
 		143E29ED18CAE90500FE8A0F /* SmallPage.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = SmallPage.h; path = bmalloc/SmallPage.h; sourceTree = "<group>"; };
 		143EF9AD1A9FABF6004F5C77 /* FreeList.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FreeList.cpp; path = bmalloc/FreeList.cpp; sourceTree = "<group>"; };
 		143EF9AE1A9FABF6004F5C77 /* FreeList.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FreeList.h; path = bmalloc/FreeList.h; sourceTree = "<group>"; };
-		1440AFC81A95142400837FAA /* SuperChunk.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = SuperChunk.h; path = bmalloc/SuperChunk.h; sourceTree = "<group>"; };
 		1440AFCA1A95261100837FAA /* Zone.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Zone.h; path = bmalloc/Zone.h; sourceTree = "<group>"; };
 		1440AFCC1A9527AF00837FAA /* Zone.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = Zone.cpp; path = bmalloc/Zone.cpp; sourceTree = "<group>"; };
 		144469E417A46BFE00F9EA1D /* Cache.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; lineEnding = 0; name = Cache.cpp; path = bmalloc/Cache.cpp; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.cpp; };
@@ -123,7 +120,6 @@
 		1479E21217A1A255006D4E9D /* Vector.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = Vector.h; path = bmalloc/Vector.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
 		1479E21417A1A63E006D4E9D /* VMAllocate.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = VMAllocate.h; path = bmalloc/VMAllocate.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
 		147AAA8818CD17CE002201E4 /* LargeChunk.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LargeChunk.h; path = bmalloc/LargeChunk.h; sourceTree = "<group>"; };
-		147AAA8C18CD36A7002201E4 /* SmallChunk.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = SmallChunk.h; path = bmalloc/SmallChunk.h; sourceTree = "<group>"; };
 		1485655E18A43AF900ED6942 /* BoundaryTag.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BoundaryTag.h; path = bmalloc/BoundaryTag.h; sourceTree = "<group>"; };
 		1485656018A43DBA00ED6942 /* ObjectType.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = ObjectType.h; path = bmalloc/ObjectType.h; sourceTree = "<group>"; };
 		14895D8F1A3A319C0006235D /* Environment.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = Environment.cpp; path = bmalloc/Environment.cpp; sourceTree = "<group>"; };
@@ -208,7 +204,6 @@
 		147AAA9A18CE5FD3002201E4 /* heap: small */ = {
 			isa = PBXGroup;
 			children = (
-				147AAA8C18CD36A7002201E4 /* SmallChunk.h */,
 				1452478618BC757C00F80098 /* SmallLine.h */,
 				143E29ED18CAE90500FE8A0F /* SmallPage.h */,
 			);
@@ -270,7 +265,6 @@
 				14105E8318E14374003A106E /* ObjectType.cpp */,
 				1485656018A43DBA00ED6942 /* ObjectType.h */,
 				145F6874179DF84100D65598 /* Sizes.h */,
-				1440AFC81A95142400837FAA /* SuperChunk.h */,
 				144F7BFB18BFC517003537F3 /* VMHeap.cpp */,
 				144F7BFC18BFC517003537F3 /* VMHeap.h */,
 				1440AFCA1A95261100837FAA /* Zone.h */,
@@ -339,11 +333,9 @@
 				14DD78C718F48D7500950702 /* BAssert.h in Headers */,
 				14DD78D018F48D7500950702 /* VMAllocate.h in Headers */,
 				14EB79EA1C7C1BC4005E834F /* XLargeRange.h in Headers */,
-				1440AFC91A95142400837FAA /* SuperChunk.h in Headers */,
 				143EF9B01A9FABF6004F5C77 /* FreeList.h in Headers */,
 				14DD78CE18F48D7500950702 /* Syscall.h in Headers */,
 				14DD78C618F48D7500950702 /* AsyncTask.h in Headers */,
-				14DD78BB18F48D6B00950702 /* SmallChunk.h in Headers */,
 				14DD78C918F48D7500950702 /* Inline.h in Headers */,
 				14895D921A3A319C0006235D /* Environment.h in Headers */,
 				1400274A18F89C2300115C97 /* VMHeap.h in Headers */,