Diff
Modified: trunk/Source/bmalloc/ChangeLog (217810 => 217811)
--- trunk/Source/bmalloc/ChangeLog 2017-06-06 02:03:29 UTC (rev 217810)
+++ trunk/Source/bmalloc/ChangeLog 2017-06-06 02:21:11 UTC (rev 217811)
@@ -1,3 +1,135 @@
+2017-06-02 Geoffrey Garen <[email protected]>
+
+ bmalloc: Small and large objects should share memory
+ https://bugs.webkit.org/show_bug.cgi?id=172880
+ <rdar://problem/31494732>
+
+ Reviewed by Sam Weinig.
+
+ This reduces our high water mark memory usage on JetStream on macOS
+ by 10%-20%. It also has the nice side effect that we can free small
+ object metadata after returning from a high water mark.
+
+ No change in throughput.
+
+ Our old algorithm allocated small object chunks and large objects in
+ segregated virtual memory and never recycled addresses between them.
+ This provided a slight security benefit because we could apply guard
+ pages between the segregated ranges and we would never reuse the same
+ virtual address for object and metadata memory.
+
+ Our new algorithm allocates small object chunks from the large object
+ allocator. This naturally recycles memory between small chunks and large
+ objects, and between small chunks of different page classes. This allows
+ us to shift memory between allocation types as a program moves between
+ different phases of allocation, and to delete small object chunk metadata
+ when a program shrinks back from a high water mark.
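
A minimal standalone sketch of the sharing idea, for illustration only (FreePool, its first-fit policy, and std::malloc standing in for vmAllocate are assumptions, and alignment, decommit, and locking are ignored): one pool of free ranges backs both large objects and small object chunks, so freeing either kind of allocation can satisfy the other.

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    struct Range { char* begin; size_t size; };

    class FreePool {
    public:
        void* allocate(size_t size)
        {
            // Reuse any free range that is big enough (first fit, for brevity).
            for (size_t i = 0; i < m_ranges.size(); ++i) {
                if (m_ranges[i].size < size)
                    continue;
                char* reused = m_ranges[i].begin;
                m_ranges[i].begin += size;
                m_ranges[i].size -= size;
                if (!m_ranges[i].size)
                    m_ranges.erase(m_ranges.begin() + static_cast<std::ptrdiff_t>(i));
                return reused;
            }
            // Otherwise grow the heap (stand-in for vmAllocate(chunkSize, chunkSize)).
            return std::malloc(size);
        }

        void deallocate(void* pointer, size_t size)
        {
            m_ranges.push_back({ static_cast<char*>(pointer), size });
        }

    private:
        std::vector<Range> m_ranges;
    };

    int main()
    {
        constexpr size_t chunkSize = 1 * 1024 * 1024;
        FreePool pool;

        void* large = pool.allocate(4 * chunkSize);  // a large object is allocated...
        pool.deallocate(large, 4 * chunkSize);       // ...and freed...

        void* chunk = pool.allocate(chunkSize);      // ...and then backs a small object
        pool.deallocate(chunk, chunkSize);           // chunk, and vice versa once the
        return 0;                                    // chunk becomes empty again.
    }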
+
+ Two intuitions I had about memory use turned out to be backwards in
+ this context:
+
+ (1) I thought that this optimization would work because it allowed you to
+ allocate and free a 4MB object and then reuse that large allocation to
+ service small allocations. In practice, the common benefit seems to be
+ the opposite: After you allocate and free many small objects, you can
+ stitch them together to allocate a large object without growing the heap.
+
+ (2) I thought that it would be more memory-efficient to allocate
+ fine-grained pages from the large object allocator. In practice, giving
+ the large object allocator too many arbitrarily sized ranges to manage
+ leads to fragmentation. Meanwhile, segregated fit is a powerful memory
+ optimization. So, it's best to return small object memory to the large
+ object allocator only when a whole small object chunk is free.
+
+ * bmalloc/Chunk.h:
+ (bmalloc::Chunk::ref):
+ (bmalloc::Chunk::deref):
+ (bmalloc::Chunk::refCount):
+ (bmalloc::Chunk::freePages): We keep a free list per chunk and refcount
+ each chunk so we can notice when a chunk becomes empty, and return it
+ to the large allocator.
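
A simplified sketch of that lifecycle with stand-in types (ChunkStub and SmallPageStub are hypothetical; the real code also tracks free lines, the chunk cache, and locking): every page handed out refs its chunk, every page returned derefs it, and a refcount of zero means the whole chunk is free again.

    #include <cassert>
    #include <vector>

    struct SmallPageStub { };

    struct ChunkStub {
        void ref() { ++m_refCount; }
        void deref() { assert(m_refCount); --m_refCount; }
        unsigned refCount() const { return m_refCount; }
        std::vector<SmallPageStub*>& freePages() { return m_freePages; }

    private:
        unsigned m_refCount { 0 };
        std::vector<SmallPageStub*> m_freePages;
    };

    SmallPageStub* allocateSmallPage(ChunkStub& chunk)
    {
        assert(!chunk.freePages().empty());
        chunk.ref();                                // a page in use keeps the chunk alive
        SmallPageStub* page = chunk.freePages().back();
        chunk.freePages().pop_back();
        return page;
    }

    void deallocateSmallPage(ChunkStub& chunk, SmallPageStub* page)
    {
        chunk.freePages().push_back(page);
        chunk.deref();
        if (!chunk.refCount()) {
            // Every page is free again, so the whole chunk can go back to the large
            // allocator (Heap::deallocateSmallChunk in the diff below).
        }
    }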
+
+ (bmalloc::forEachPage): A new helper function for iterating the pages
+ in a Chunk.
+
+ (bmalloc::Chunk::Chunk): Use forEachPage instead of manual iteration.
+ Use { } initialization because we don't get zero-initialized by the OS
+ anymore.
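
Roughly what forEachPage computes, shown over raw offsets instead of bmalloc's Object type (forEachPageOffset and roundUp are stand-ins; roundUp mirrors roundUpToMultipleOfNonPowerOfTwo because a page class's page size need not be a power of two):

    #include <cstddef>

    // Stand-in for bmalloc's roundUpToMultipleOfNonPowerOfTwo(divisor, value).
    inline size_t roundUp(size_t divisor, size_t value)
    {
        return ((value + divisor - 1) / divisor) * divisor;
    }

    template<typename Function>
    void forEachPageOffset(size_t chunkSize, size_t pageSize, size_t metadataBytes, Function function)
    {
        // The Chunk's own metadata (its line and page arrays) sits at the front of the
        // chunk; rounding it up to the page size keeps every visited slot page-aligned,
        // which the real code relies on for aligned allocation and for decommitting.
        size_t metadataSize = roundUp(pageSize, metadataBytes);

        for (size_t offset = metadataSize; offset + pageSize <= chunkSize; offset += pageSize)
            function(offset);   // one page-sized slot per call, like function(it.page())
    }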
+
+ * bmalloc/Heap.cpp:
+ (bmalloc::Heap::Heap):
+ (bmalloc::Heap::concurrentScavenge):
+ (bmalloc::Heap::scavenge): Don't bother unlocking while scavenging. I
+ wasn't able to show it to be a consistent speedup. A more promising
+ approach, if we find a motivating example, is for the scavenger to give
+ up and return early if any other client is waiting on the lock.
+
+ (bmalloc::Heap::allocateSmallChunk): New helper function for allocating
+ a small chunk. It allocates through the large allocator to facilitate
+ sharing. We still allocate a chunk at a time instead of a page at a time.
+ Surprisingly, more precise page-at-a-time allocation is worse for memory
+ use because of fragmentation. Segregated fit is a powerful optimization.
+
+ (bmalloc::Heap::deallocateSmallChunk): New helper function for deallocating
+ a small chunk.
+
+ (bmalloc::Heap::allocateSmallPage): Updated for new APIs.
+
+ (bmalloc::Heap::deallocateSmallLine): Updated for new APIs. Note that
+ we cache one free chunk per page class. This avoids churn in the large
+ allocator when you free(malloc(X)).
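
The corresponding lines from Heap::deallocateSmallLine in the diff below, annotated: a newly empty chunk is parked in the per-page-class cache instead of being handed straight back to the large allocator, so a tight free(malloc(X)) loop reuses the cached chunk rather than churning the large free list; only when a second chunk empties does the older cached one actually get released.

    if (!chunk->refCount()) {
        // Every page in this chunk is free again; take it off the free-page list
        // before caching or releasing it.
        m_freePages[pageClass].remove(chunk);

        // The cache holds at most one free chunk per page class, so releasing the
        // previously cached chunk (if any) bounds how much empty memory we retain.
        if (!m_chunkCache[pageClass].isEmpty())
            deallocateSmallChunk(m_chunkCache[pageClass].pop(), pageClass);

        m_chunkCache[pageClass].push(chunk);
    }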
+
+ (bmalloc::Heap::allocateSmallBumpRangesByMetadata):
+ (bmalloc::Heap::allocateSmallBumpRangesByObject):
+ (bmalloc::Heap::tryAllocateLarge):
+ (bmalloc::Heap::scavengeSmallPages): Deleted.
+ (bmalloc::Heap::scavengeLargeObjects): Deleted.
+ * bmalloc/Heap.h:
+
+ * bmalloc/LargeMap.h:
+ (bmalloc::LargeMap::begin):
+ (bmalloc::LargeMap::end): Added iteration helpers for scavenging.
+
+ * bmalloc/LargeRange.h:
+ (bmalloc::LargeRange::physicalSize): Added a comment about a detail
+ that confused me while writing this patch.
+
+ * bmalloc/List.h:
+ (bmalloc::List::iterator::operator*):
+ (bmalloc::List::iterator::operator->):
+ (bmalloc::List::iterator::operator!=):
+ (bmalloc::List::iterator::operator++):
+ (bmalloc::List::begin):
+ (bmalloc::List::end):
+ (bmalloc::List::pushFront):
+ (bmalloc::List::remove):
+ (bmalloc::ListNode::ListNode): Deleted. Added iteration helpers for
+ scavenging. Changed the default state of a Node to null pointers instead
+ of self pointers to distinguish the null node from the empty node for
+ easier debugging.
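
A small illustration of that default, assuming bmalloc/List.h is included and namespace bmalloc is in scope, with a hypothetical Node type: a node that is not on any list has null links, while the list's root stays self-linked.

    struct Node : ListNode<Node> { };

    void example()
    {
        List<Node> list;
        Node node;          // freshly constructed: prev == next == nullptr, i.e. "on no list"
        list.push(&node);   // linked in through the list's self-linked root
        list.remove(&node); // remove() resets prev/next to nullptr rather than to &node,
                            // so a debugger shows at a glance whether a node is on a list
    }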
+
+ * bmalloc/Sizes.h: Changed the chunk size to 1MB to increase the chances
+ of a chunk becoming free and recyclable.
+
+ * bmalloc/SmallPage.h:
+ (bmalloc::SmallPage::hasPhysicalPages):
+ (bmalloc::SmallPage::setHasPhysicalPages): Track physical state by page
+ instead of implicitly by which list a page is in. It's simpler not
+ to have to move chunks and pages between physical vs virtual lists.
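
For reference, the scavenger loop from Heap::scavenge in the diff below, annotated to show how this bit is consumed: a free page that still holds physical memory is decommitted and flagged, and Heap::allocateSmallPage recommits a flagged page lazily when it is next used.

    for (auto& list : m_freePages) {
        for (auto* chunk : list) {
            for (auto* page : chunk->freePages()) {
                if (!page->hasPhysicalPages())
                    continue;   // already decommitted by an earlier scavenge

                // &list - &m_freePages[0] recovers the page class, and with it the page size.
                vmDeallocatePhysicalPagesSloppy(page->begin()->begin(), pageSize(&list - &m_freePages[0]));
                page->setHasPhysicalPages(false);
            }
        }
    }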
+
+ (bmalloc::SmallPage::SmallPage): Deleted.
+
+ * bmalloc/VMHeap.cpp:
+ (bmalloc::VMHeap::tryAllocateLargeChunk):
+ (bmalloc::VMHeap::allocateSmallChunk): Deleted.
+ * bmalloc/VMHeap.h:
+ (bmalloc::VMHeap::allocateSmallPage): Deleted.
+ (bmalloc::VMHeap::deallocateSmallPage): Deleted. Small chunk allocation
+ just forwards to the large allocator now.
+
+ * bmalloc/bmalloc.h:
+ (bmalloc::api::scavenge):
+
2017-05-28 Dan Bernstein <[email protected]>
[Xcode] ALWAYS_SEARCH_USER_PATHS is set to YES
Modified: trunk/Source/bmalloc/bmalloc/Chunk.h (217810 => 217811)
--- trunk/Source/bmalloc/bmalloc/Chunk.h 2017-06-06 02:03:29 UTC (rev 217810)
+++ trunk/Source/bmalloc/bmalloc/Chunk.h 2017-06-06 02:21:11 UTC (rev 217811)
@@ -35,11 +35,15 @@
namespace bmalloc {
-class Chunk {
+class Chunk : public ListNode<Chunk> {
public:
static Chunk* get(void*);
- Chunk(std::lock_guard<StaticMutex>&);
+ Chunk(size_t pageSize);
+
+ void ref() { ++m_refCount; }
+ void deref() { BASSERT(m_refCount); --m_refCount; }
+ unsigned refCount() { return m_refCount; }
size_t offset(void*);
@@ -50,10 +54,15 @@
char* bytes() { return reinterpret_cast<char*>(this); }
SmallLine* lines() { return &m_lines[0]; }
SmallPage* pages() { return &m_pages[0]; }
+
+ List<SmallPage>& freePages() { return m_freePages; }
private:
- std::array<SmallLine, chunkSize / smallLineSize> m_lines;
- std::array<SmallPage, chunkSize / smallPageSize> m_pages;
+ size_t m_refCount { };
+ List<SmallPage> m_freePages { };
+
+ std::array<SmallLine, chunkSize / smallLineSize> m_lines { };
+ std::array<SmallPage, chunkSize / smallPageSize> m_pages { };
};
struct ChunkHash {
@@ -64,10 +73,28 @@
}
};
-inline Chunk::Chunk(std::lock_guard<StaticMutex>&)
+template<typename Function> void forEachPage(Chunk* chunk, size_t pageSize, Function function)
{
+ // We align to at least the page size so we can service aligned allocations
+ // at equal and smaller powers of two, and also so we can vmDeallocatePhysicalPages().
+ size_t metadataSize = roundUpToMultipleOfNonPowerOfTwo(pageSize, sizeof(Chunk));
+
+ Object begin(chunk, metadataSize);
+ Object end(chunk, chunkSize);
+
+ for (auto it = begin; it + pageSize <= end; it = it + pageSize)
+ function(it.page());
}
+inline Chunk::Chunk(size_t pageSize)
+{
+ size_t smallPageCount = pageSize / smallPageSize;
+ forEachPage(this, pageSize, [&](SmallPage* page) {
+ for (size_t i = 0; i < smallPageCount; ++i)
+ page[i].setSlide(i);
+ });
+}
+
inline Chunk* Chunk::get(void* address)
{
return static_cast<Chunk*>(mask(address, chunkMask));
Modified: trunk/Source/bmalloc/bmalloc/Heap.cpp (217810 => 217811)
--- trunk/Source/bmalloc/bmalloc/Heap.cpp 2017-06-06 02:03:29 UTC (rev 217810)
+++ trunk/Source/bmalloc/bmalloc/Heap.cpp 2017-06-06 02:21:11 UTC (rev 217811)
@@ -68,7 +68,7 @@
auto queue = dispatch_queue_create("WebKit Malloc Memory Pressure Handler", DISPATCH_QUEUE_SERIAL);
m_pressureHandlerDispatchSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_MEMORYPRESSURE, 0, DISPATCH_MEMORYPRESSURE_CRITICAL, queue);
dispatch_source_set_event_handler(m_pressureHandlerDispatchSource, ^{
- std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
+ std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
scavenge(lock);
});
dispatch_resume(m_pressureHandlerDispatchSource);
@@ -150,7 +150,7 @@
void Heap::concurrentScavenge()
{
- std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
+ std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
#if BOS(DARWIN)
pthread_set_qos_class_self_np(m_requestedScavengerThreadQOSClass, 0);
@@ -165,36 +165,30 @@
scavenge(lock);
}
-void Heap::scavenge(std::unique_lock<StaticMutex>& lock)
+void Heap::scavenge(std::lock_guard<StaticMutex>&)
{
- scavengeSmallPages(lock);
- scavengeLargeObjects(lock);
-}
-
-void Heap::scavengeSmallPages(std::unique_lock<StaticMutex>& lock)
-{
- for (size_t pageClass = 0; pageClass < pageClassCount; pageClass++) {
- auto& smallPages = m_smallPages[pageClass];
+ for (auto& list : m_freePages) {
+ for (auto* chunk : list) {
+ for (auto* page : chunk->freePages()) {
+ if (!page->hasPhysicalPages())
+ continue;
- while (!smallPages.isEmpty()) {
- SmallPage* page = smallPages.pop();
- m_vmHeap.deallocateSmallPage(lock, pageClass, page);
+ vmDeallocatePhysicalPagesSloppy(page->begin()->begin(), pageSize(&list - &m_freePages[0]));
+
+ page->setHasPhysicalPages(false);
+ }
}
}
-}
+
+ for (auto& list : m_chunkCache) {
+ while (!list.isEmpty())
+ deallocateSmallChunk(list.pop(), &list - &m_chunkCache[0]);
+ }
-void Heap::scavengeLargeObjects(std::unique_lock<StaticMutex>& lock)
-{
- auto& ranges = m_largeFree.ranges();
- for (size_t i = ranges.size(); i-- > 0; i = std::min(i, ranges.size())) {
- auto range = ranges.pop(i);
-
- lock.unlock();
+ for (auto& range : m_largeFree) {
vmDeallocatePhysicalPagesSloppy(range.begin(), range.size());
- lock.lock();
range.setPhysicalSize(0);
- ranges.push(range);
}
}
@@ -227,22 +221,78 @@
m_scavenger.runSoon();
}
+void Heap::allocateSmallChunk(std::lock_guard<StaticMutex>& lock, size_t pageClass)
+{
+ size_t pageSize = bmalloc::pageSize(pageClass);
+
+ Chunk* chunk = [&]() {
+ if (!m_chunkCache[pageClass].isEmpty())
+ return m_chunkCache[pageClass].pop();
+
+ void* memory = allocateLarge(lock, chunkSize, chunkSize);
+
+ Chunk* chunk = new (memory) Chunk(pageSize);
+
+ m_objectTypes.set(chunk, ObjectType::Small);
+
+ forEachPage(chunk, pageSize, [&](SmallPage* page) {
+ page->setHasPhysicalPages(true);
+ page->setHasFreeLines(lock, true);
+ chunk->freePages().push(page);
+ });
+
+ scheduleScavenger(0);
+
+ return chunk;
+ }();
+
+ m_freePages[pageClass].push(chunk);
+}
+
+void Heap::deallocateSmallChunk(Chunk* chunk, size_t pageClass)
+{
+ m_objectTypes.set(chunk, ObjectType::Large);
+
+ size_t size = m_largeAllocated.remove(chunk);
+
+ bool hasPhysicalPages = true;
+ forEachPage(chunk, pageSize(pageClass), [&](SmallPage* page) {
+ if (!page->hasPhysicalPages())
+ hasPhysicalPages = false;
+ });
+ size_t physicalSize = hasPhysicalPages ? size : 0;
+
+ m_largeFree.add(LargeRange(chunk, size, physicalSize));
+}
+
SmallPage* Heap::allocateSmallPage(std::lock_guard<StaticMutex>& lock, size_t sizeClass)
{
- if (!m_smallPagesWithFreeLines[sizeClass].isEmpty())
- return m_smallPagesWithFreeLines[sizeClass].popFront();
+ if (!m_freeLines[sizeClass].isEmpty())
+ return m_freeLines[sizeClass].popFront();
m_isGrowing = true;
SmallPage* page = [&]() {
size_t pageClass = m_pageClasses[sizeClass];
- if (!m_smallPages[pageClass].isEmpty())
- return m_smallPages[pageClass].pop();
+
+ if (m_freePages[pageClass].isEmpty())
+ allocateSmallChunk(lock, pageClass);
- scheduleScavengerIfUnderMemoryPressure(pageSize(pageClass));
-
- SmallPage* page = m_vmHeap.allocateSmallPage(lock, pageClass);
- m_objectTypes.set(Chunk::get(page), ObjectType::Small);
+ Chunk* chunk = m_freePages[pageClass].tail();
+
+ chunk->ref();
+
+ SmallPage* page = chunk->freePages().pop();
+ if (chunk->freePages().isEmpty())
+ m_freePages[pageClass].remove(chunk);
+
+ if (!page->hasPhysicalPages()) {
+ scheduleScavengerIfUnderMemoryPressure(pageSize(pageClass));
+
+ vmAllocatePhysicalPagesSloppy(page->begin()->begin(), pageSize(pageClass));
+ page->setHasPhysicalPages(true);
+ }
+
return page;
}();
@@ -258,7 +308,7 @@
if (!page->hasFreeLines(lock)) {
page->setHasFreeLines(lock, true);
- m_smallPagesWithFreeLines[page->sizeClass()].push(page);
+ m_freeLines[page->sizeClass()].push(page);
}
if (page->refCount(lock))
@@ -267,9 +317,24 @@
size_t sizeClass = page->sizeClass();
size_t pageClass = m_pageClasses[sizeClass];
- m_smallPagesWithFreeLines[sizeClass].remove(page);
- m_smallPages[pageClass].push(page);
+ m_freeLines[sizeClass].remove(page);
+ Chunk* chunk = Chunk::get(page);
+ if (chunk->freePages().isEmpty())
+ m_freePages[pageClass].push(chunk);
+ chunk->freePages().push(page);
+
+ chunk->deref();
+
+ if (!chunk->refCount()) {
+ m_freePages[pageClass].remove(chunk);
+
+ if (!m_chunkCache[pageClass].isEmpty())
+ deallocateSmallChunk(m_chunkCache[pageClass].pop(), pageClass);
+
+ m_chunkCache[pageClass].push(chunk);
+ }
+
scheduleScavenger(pageSize(pageClass));
}
@@ -321,7 +386,7 @@
// In a fragmented page, some free ranges might not fit in the cache.
if (rangeCache.size() == rangeCache.capacity()) {
- m_smallPagesWithFreeLines[sizeClass].push(page);
+ m_freeLines[sizeClass].push(page);
BASSERT(allocator.canAllocate());
return;
}
@@ -375,7 +440,7 @@
// In a fragmented page, some free ranges might not fit in the cache.
if (rangeCache.size() == rangeCache.capacity()) {
- m_smallPagesWithFreeLines[sizeClass].push(page);
+ m_freeLines[sizeClass].push(page);
BASSERT(allocator.canAllocate());
return;
}
@@ -426,7 +491,7 @@
return range;
}
-void* Heap::tryAllocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size)
+void* Heap::tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t size)
{
BASSERT(isPowerOfTwo(alignment));
@@ -444,11 +509,12 @@
LargeRange range = m_largeFree.remove(alignment, size);
if (!range) {
- range = m_vmHeap.tryAllocateLargeChunk(lock, alignment, size);
+ range = m_vmHeap.tryAllocateLargeChunk(alignment, size);
if (!range)
return nullptr;
m_largeFree.add(range);
+
range = m_largeFree.remove(alignment, size);
}
Modified: trunk/Source/bmalloc/bmalloc/Heap.h (217810 => 217811)
--- trunk/Source/bmalloc/bmalloc/Heap.h 2017-06-06 02:03:29 UTC (rev 217810)
+++ trunk/Source/bmalloc/bmalloc/Heap.h 2017-06-06 02:21:11 UTC (rev 217811)
@@ -70,7 +70,7 @@
size_t largeSize(std::lock_guard<StaticMutex>&, void*);
void shrinkLarge(std::lock_guard<StaticMutex>&, const Range&, size_t);
- void scavenge(std::unique_lock<StaticMutex>&);
+ void scavenge(std::lock_guard<StaticMutex>&);
size_t memoryFootprint();
double percentAvailableMemoryInUse();
@@ -100,9 +100,11 @@
size_t sizeClass, BumpAllocator&, BumpRangeCache&);
SmallPage* allocateSmallPage(std::lock_guard<StaticMutex>&, size_t sizeClass);
-
void deallocateSmallLine(std::lock_guard<StaticMutex>&, Object);
+ void allocateSmallChunk(std::lock_guard<StaticMutex>&, size_t pageClass);
+ void deallocateSmallChunk(Chunk*, size_t pageClass);
+
void mergeLarge(BeginTag*&, EndTag*&, Range&);
void mergeLargeLeft(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
@@ -113,8 +115,6 @@
void scheduleScavengerIfUnderMemoryPressure(size_t);
void concurrentScavenge();
- void scavengeSmallPages(std::unique_lock<StaticMutex>&);
- void scavengeLargeObjects(std::unique_lock<StaticMutex>&);
#if BPLATFORM(IOS)
void updateMemoryInUseParameters();
@@ -124,8 +124,9 @@
Vector<LineMetadata> m_smallLineMetadata;
std::array<size_t, sizeClassCount> m_pageClasses;
- std::array<List<SmallPage>, sizeClassCount> m_smallPagesWithFreeLines;
- std::array<List<SmallPage>, pageClassCount> m_smallPages;
+ std::array<List<SmallPage>, sizeClassCount> m_freeLines;
+ std::array<List<Chunk>, pageClassCount> m_freePages;
+ std::array<List<Chunk>, pageClassCount> m_chunkCache;
Map<void*, size_t, LargeObjectHash> m_largeAllocated;
LargeMap m_largeFree;
Modified: trunk/Source/bmalloc/bmalloc/LargeMap.h (217810 => 217811)
--- trunk/Source/bmalloc/bmalloc/LargeMap.h 2017-06-06 02:03:29 UTC (rev 217810)
+++ trunk/Source/bmalloc/bmalloc/LargeMap.h 2017-06-06 02:21:11 UTC (rev 217811)
@@ -34,6 +34,9 @@
class LargeMap {
public:
+ LargeRange* begin() { return m_free.begin(); }
+ LargeRange* end() { return m_free.end(); }
+
void add(const LargeRange&);
LargeRange remove(size_t alignment, size_t);
Vector<LargeRange>& ranges() { return m_free; }
Modified: trunk/Source/bmalloc/bmalloc/LargeRange.h (217810 => 217811)
--- trunk/Source/bmalloc/bmalloc/LargeRange.h 2017-06-06 02:03:29 UTC (rev 217810)
+++ trunk/Source/bmalloc/bmalloc/LargeRange.h 2017-06-06 02:21:11 UTC (rev 217811)
@@ -51,6 +51,8 @@
{
}
+ // Returns a lower bound on physical size. Ranges that span non-physical
+ // fragments only remember the physical size of the first fragment.
size_t physicalSize() const { return m_physicalSize; }
void setPhysicalSize(size_t physicalSize) { m_physicalSize = physicalSize; }
Modified: trunk/Source/bmalloc/bmalloc/List.h (217810 => 217811)
--- trunk/Source/bmalloc/bmalloc/List.h 2017-06-06 02:03:29 UTC (rev 217810)
+++ trunk/Source/bmalloc/bmalloc/List.h 2017-06-06 02:21:11 UTC (rev 217811)
@@ -30,24 +30,37 @@
template<typename T>
struct ListNode {
- ListNode()
- : prev(this)
- , next(this)
- {
- }
-
- ListNode<T>* prev;
- ListNode<T>* next;
+ ListNode<T>* prev { nullptr };
+ ListNode<T>* next { nullptr };
};
template<typename T>
class List {
static_assert(std::is_trivially_destructible<T>::value, "List must have a trivial destructor.");
+
+ struct iterator {
+ T* operator*() { return static_cast<T*>(m_node); }
+ T* operator->() { return static_cast<T*>(m_node); }
+
+ bool operator!=(const iterator& other) { return m_node != other.m_node; }
+
+ iterator& operator++()
+ {
+ m_node = m_node->next;
+ return *this;
+ }
+
+ ListNode<T>* m_node;
+ };
+
public:
bool isEmpty() { return m_root.next == &m_root; }
T* head() { return static_cast<T*>(m_root.next); }
T* tail() { return static_cast<T*>(m_root.prev); }
+
+ iterator begin() { return iterator { m_root.next }; }
+ iterator end() { return iterator { &m_root }; }
void push(T* node)
{
@@ -55,6 +68,12 @@
insertAfter(it, node);
}
+ void pushFront(T* node)
+ {
+ ListNode<T>* it = &m_root;
+ insertAfter(it, node);
+ }
+
T* pop()
{
ListNode<T>* result = tail();
@@ -89,12 +108,12 @@
next->prev = prev;
prev->next = next;
- node->prev = node;
- node->next = node;
+ node->prev = nullptr;
+ node->next = nullptr;
}
private:
- ListNode<T> m_root;
+ ListNode<T> m_root { &m_root, &m_root };
};
} // namespace bmalloc
Modified: trunk/Source/bmalloc/bmalloc/Sizes.h (217810 => 217811)
--- trunk/Source/bmalloc/bmalloc/Sizes.h 2017-06-06 02:03:29 UTC (rev 217810)
+++ trunk/Source/bmalloc/bmalloc/Sizes.h 2017-06-06 02:21:11 UTC (rev 217811)
@@ -46,7 +46,7 @@
static const size_t alignment = 8;
static const size_t alignmentMask = alignment - 1ul;
- static const size_t chunkSize = 2 * MB;
+ static const size_t chunkSize = 1 * MB;
static const size_t chunkMask = ~(chunkSize - 1ul);
static const size_t smallLineSize = 256;
Modified: trunk/Source/bmalloc/bmalloc/SmallPage.h (217810 => 217811)
--- trunk/Source/bmalloc/bmalloc/SmallPage.h 2017-06-06 02:03:29 UTC (rev 217810)
+++ trunk/Source/bmalloc/bmalloc/SmallPage.h 2017-06-06 02:21:11 UTC (rev 217811)
@@ -36,11 +36,6 @@
class SmallPage : public ListNode<SmallPage> {
public:
- SmallPage()
- : m_hasFreeLines(true)
- {
- }
-
void ref(std::lock_guard<StaticMutex>&);
bool deref(std::lock_guard<StaticMutex>&);
unsigned refCount(std::lock_guard<StaticMutex>&) { return m_refCount; }
@@ -51,6 +46,9 @@
bool hasFreeLines(std::lock_guard<StaticMutex>&) const { return m_hasFreeLines; }
void setHasFreeLines(std::lock_guard<StaticMutex>&, bool hasFreeLines) { m_hasFreeLines = hasFreeLines; }
+ bool hasPhysicalPages() { return m_hasPhysicalPages; }
+ void setHasPhysicalPages(bool hasPhysicalPages) { m_hasPhysicalPages = hasPhysicalPages; }
+
SmallLine* begin();
unsigned char slide() const { return m_slide; }
@@ -58,6 +56,7 @@
private:
unsigned char m_hasFreeLines: 1;
+ unsigned char m_hasPhysicalPages: 1;
unsigned char m_refCount: 7;
unsigned char m_sizeClass;
unsigned char m_slide;
Modified: trunk/Source/bmalloc/bmalloc/VMHeap.cpp (217810 => 217811)
--- trunk/Source/bmalloc/bmalloc/VMHeap.cpp 2017-06-06 02:03:29 UTC (rev 217810)
+++ trunk/Source/bmalloc/bmalloc/VMHeap.cpp 2017-06-06 02:21:11 UTC (rev 217811)
@@ -29,7 +29,7 @@
namespace bmalloc {
-LargeRange VMHeap::tryAllocateLargeChunk(std::lock_guard<StaticMutex>&, size_t alignment, size_t size)
+LargeRange VMHeap::tryAllocateLargeChunk(size_t alignment, size_t size)
{
// We allocate VM in aligned multiples to increase the chances that
// the OS will provide contiguous ranges that we can merge.
@@ -56,46 +56,4 @@
return LargeRange(chunk->bytes(), size, 0);
}
-void VMHeap::allocateSmallChunk(std::lock_guard<StaticMutex>& lock, size_t pageClass)
-{
- size_t pageSize = bmalloc::pageSize(pageClass);
- size_t smallPageCount = pageSize / smallPageSize;
-
- void* memory = vmAllocate(chunkSize, chunkSize);
- Chunk* chunk = static_cast<Chunk*>(memory);
-
- // We align to our page size in order to honor OS APIs and in order to
- // guarantee that we can service aligned allocation requests at equal
- // and smaller powers of two.
- size_t vmPageSize = roundUpToMultipleOf(bmalloc::vmPageSize(), pageSize);
- size_t metadataSize = roundUpToMultipleOfNonPowerOfTwo(vmPageSize, sizeof(Chunk));
-
- Object begin(chunk, metadataSize);
- Object end(chunk, chunkSize);
-
- // Establish guard pages before writing to Chunk memory to work around
- // an edge case in the Darwin VM system (<rdar://problem/25910098>).
- vmRevokePermissions(begin.address(), vmPageSize);
- vmRevokePermissions(end.address() - vmPageSize, vmPageSize);
-
- begin = begin + vmPageSize;
- end = end - vmPageSize;
- BASSERT(begin <= end && end.offset() - begin.offset() >= pageSize);
-
- new (chunk) Chunk(lock);
-
-#if BOS(DARWIN)
- m_zone.addRange(Range(begin.address(), end.address() - begin.address()));
-#endif
-
- for (Object it = begin; it + pageSize <= end; it = it + pageSize) {
- SmallPage* page = it.page();
-
- for (size_t i = 0; i < smallPageCount; ++i)
- page[i].setSlide(i);
-
- m_smallPages[pageClass].push(page);
- }
-}
-
} // namespace bmalloc
Modified: trunk/Source/bmalloc/bmalloc/VMHeap.h (217810 => 217811)
--- trunk/Source/bmalloc/bmalloc/VMHeap.h 2017-06-06 02:03:29 UTC (rev 217810)
+++ trunk/Source/bmalloc/bmalloc/VMHeap.h 2017-06-06 02:21:11 UTC (rev 217811)
@@ -45,40 +45,14 @@
class VMHeap {
public:
- SmallPage* allocateSmallPage(std::lock_guard<StaticMutex>&, size_t);
- void deallocateSmallPage(std::unique_lock<StaticMutex>&, size_t, SmallPage*);
-
- LargeRange tryAllocateLargeChunk(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
+ LargeRange tryAllocateLargeChunk(size_t alignment, size_t);
private:
- void allocateSmallChunk(std::lock_guard<StaticMutex>&, size_t);
-
- std::array<List<SmallPage>, pageClassCount> m_smallPages;
-
#if BOS(DARWIN)
Zone m_zone;
#endif
};
-inline SmallPage* VMHeap::allocateSmallPage(std::lock_guard<StaticMutex>& lock, size_t pageClass)
-{
- if (m_smallPages[pageClass].isEmpty())
- allocateSmallChunk(lock, pageClass);
-
- SmallPage* page = m_smallPages[pageClass].pop();
- vmAllocatePhysicalPagesSloppy(page->begin()->begin(), pageSize(pageClass));
- return page;
-}
-
-inline void VMHeap::deallocateSmallPage(std::unique_lock<StaticMutex>& lock, size_t pageClass, SmallPage* page)
-{
- lock.unlock();
- vmDeallocatePhysicalPagesSloppy(page->begin()->begin(), pageSize(pageClass));
- lock.lock();
-
- m_smallPages[pageClass].push(page);
-}
-
} // namespace bmalloc
#endif // VMHeap_h
Modified: trunk/Source/bmalloc/bmalloc/bmalloc.h (217810 => 217811)
--- trunk/Source/bmalloc/bmalloc/bmalloc.h 2017-06-06 02:03:29 UTC (rev 217810)
+++ trunk/Source/bmalloc/bmalloc/bmalloc.h 2017-06-06 02:21:11 UTC (rev 217811)
@@ -76,7 +76,7 @@
{
scavengeThisThread();
- std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
+ std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
PerProcess<Heap>::get()->scavenge(lock);
}