Title: [122677] trunk/Source/JavaScriptCore
Revision: 122677
Author: [email protected]
Date: 2012-07-14 21:02:16 -0700 (Sat, 14 Jul 2012)

Log Message

Rationalize and optimize storage allocation
https://bugs.webkit.org/show_bug.cgi?id=91303

Reviewed by Oliver Hunt.

This implements a backwards bump allocator for copied space storage
allocation, shown in pseudo-code below:
        
    pointer bump(size) {
        pointer tmp = allocator->remaining;
        tmp -= size;
        if (tmp < 0)
            fail;
        allocator->remaining = tmp;
        return allocator->payloadEnd - tmp - size;
    }

The advantage of this allocator is that it:
        
- Only requires one comparison in the common case where size is known to
  not be huge, and this comparison can be done by checking the sign bit
  of the subtraction.
        
- Can be implemented even when only one register is available. This
  register is reused for both temporary storage during allocation and
  for the result.
        
- Preserves the behavior that memory in a block is filled in from lowest
  address to highest address, which allows for a cheap reallocation fast
  path.
        
- Is resilient against the block used for allocation being the last one
  in virtual memory, which would otherwise risk overflow in the bump
  pointer, despite only doing one branch.
        
In order to implement this allocator using the smallest possible chunk
of code, I refactored the copied space code so that all of the allocation
logic is in CopiedAllocator, and all of the state is in either
CopiedBlock or CopiedAllocator. This should make changing the allocation
fast path easier in the future.
        
In order to do this, I needed to add some new assembler support,
particularly for various forms of add(address, register) and negPtr().
        
This is performance neutral. The purpose of this change is to facilitate
further inlining of storage allocation without having to reserve
additional registers or emit too much code.

* assembler/MacroAssembler.h:
(JSC::MacroAssembler::addPtr):
(MacroAssembler):
(JSC::MacroAssembler::negPtr):
* assembler/MacroAssemblerARMv7.h:
(MacroAssemblerARMv7):
(JSC::MacroAssemblerARMv7::add32):
* assembler/MacroAssemblerX86.h:
(JSC::MacroAssemblerX86::add32):
(MacroAssemblerX86):
* assembler/MacroAssemblerX86_64.h:
(MacroAssemblerX86_64):
(JSC::MacroAssemblerX86_64::addPtr):
(JSC::MacroAssemblerX86_64::negPtr):
* assembler/X86Assembler.h:
(X86Assembler):
(JSC::X86Assembler::addl_mr):
(JSC::X86Assembler::addq_mr):
(JSC::X86Assembler::negq_r):
* heap/CopiedAllocator.h:
(CopiedAllocator):
(JSC::CopiedAllocator::isValid):
(JSC::CopiedAllocator::CopiedAllocator):
(JSC::CopiedAllocator::tryAllocate):
(JSC):
(JSC::CopiedAllocator::tryReallocate):
(JSC::CopiedAllocator::forceAllocate):
(JSC::CopiedAllocator::resetCurrentBlock):
(JSC::CopiedAllocator::setCurrentBlock):
(JSC::CopiedAllocator::currentCapacity):
* heap/CopiedBlock.h:
(CopiedBlock):
(JSC::CopiedBlock::create):
(JSC::CopiedBlock::zeroFillWilderness):
(JSC::CopiedBlock::CopiedBlock):
(JSC::CopiedBlock::payloadEnd):
(JSC):
(JSC::CopiedBlock::payloadCapacity):
(JSC::CopiedBlock::data):
(JSC::CopiedBlock::dataEnd):
(JSC::CopiedBlock::dataSize):
(JSC::CopiedBlock::wilderness):
(JSC::CopiedBlock::wildernessEnd):
(JSC::CopiedBlock::wildernessSize):
(JSC::CopiedBlock::size):
* heap/CopiedSpace.cpp:
(JSC::CopiedSpace::tryAllocateSlowCase):
(JSC::CopiedSpace::tryAllocateOversize):
(JSC::CopiedSpace::tryReallocate):
(JSC::CopiedSpace::doneFillingBlock):
(JSC::CopiedSpace::doneCopying):
* heap/CopiedSpace.h:
(CopiedSpace):
* heap/CopiedSpaceInlineMethods.h:
(JSC::CopiedSpace::startedCopying):
(JSC::CopiedSpace::allocateBlockForCopyingPhase):
(JSC::CopiedSpace::allocateBlock):
(JSC::CopiedSpace::tryAllocate):
(JSC):
* heap/MarkStack.cpp:
(JSC::SlotVisitor::startCopying):
(JSC::SlotVisitor::allocateNewSpace):
(JSC::SlotVisitor::doneCopying):
* heap/SlotVisitor.h:
(JSC::SlotVisitor::SlotVisitor):
* jit/JIT.h:
* jit/JITInlineMethods.h:
(JSC::JIT::emitAllocateBasicStorage):
(JSC::JIT::emitAllocateJSArray):

Modified Paths

Diff

Modified: trunk/Source/JavaScriptCore/ChangeLog (122676 => 122677)


--- trunk/Source/JavaScriptCore/ChangeLog	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/ChangeLog	2012-07-15 04:02:16 UTC (rev 122677)
@@ -1,3 +1,123 @@
+2012-07-14  Filip Pizlo  <[email protected]>
+
+        Rationalize and optimize storage allocation
+        https://bugs.webkit.org/show_bug.cgi?id=91303
+
+        Reviewed by Oliver Hunt.
+
+        This implements a backwards bump allocator for copied space storage
+        allocation, shown in pseudo-code below:
+        
+            pointer bump(size) {
+                pointer tmp = allocator->remaining;
+                tmp -= size;
+                if (tmp < 0)
+                    fail;
+                allocator->remaining = tmp;
+                return allocator->payloadEnd - tmp - size;
+            }
+
+        The advantage of this allocator is that it:
+        
+        - Only requires one comparison in the common case where size is known to
+          not be huge, and this comparison can be done by checking the sign bit
+          of the subtraction.
+        
+        - Can be implemented even when only one register is available. This
+          register is reused for both temporary storage during allocation and
+          for the result.
+        
+        - Preserves the behavior that memory in a block is filled in from lowest
+          address to highest address, which allows for a cheap reallocation fast
+          path.
+        
+        - Is resilient against the block used for allocation being the last one
+          in virtual memory, thereby otherwise leading to the risk of overflow
+          in the bump pointer, despite only doing one branch.
+        
+        In order to implement this allocator using the smallest possible chunk
+        of code, I refactored the copied space code so that all of the allocation
+        logic is in CopiedAllocator, and all of the state is in either
+        CopiedBlock or CopiedAllocator. This should make changing the allocation
+        fast path easier in the future.
+        
+        In order to do this, I needed to add some new assembler support,
+        particularly for various forms of add(address, register) and negPtr().
+        
+        This is performance neutral. The purpose of this change is to facilitate
+        further inlining of storage allocation without having to reserve
+        additional registers or emit too much code.
+
+        * assembler/MacroAssembler.h:
+        (JSC::MacroAssembler::addPtr):
+        (MacroAssembler):
+        (JSC::MacroAssembler::negPtr):
+        * assembler/MacroAssemblerARMv7.h:
+        (MacroAssemblerARMv7):
+        (JSC::MacroAssemblerARMv7::add32):
+        * assembler/MacroAssemblerX86.h:
+        (JSC::MacroAssemblerX86::add32):
+        (MacroAssemblerX86):
+        * assembler/MacroAssemblerX86_64.h:
+        (MacroAssemblerX86_64):
+        (JSC::MacroAssemblerX86_64::addPtr):
+        (JSC::MacroAssemblerX86_64::negPtr):
+        * assembler/X86Assembler.h:
+        (X86Assembler):
+        (JSC::X86Assembler::addl_mr):
+        (JSC::X86Assembler::addq_mr):
+        (JSC::X86Assembler::negq_r):
+        * heap/CopiedAllocator.h:
+        (CopiedAllocator):
+        (JSC::CopiedAllocator::isValid):
+        (JSC::CopiedAllocator::CopiedAllocator):
+        (JSC::CopiedAllocator::tryAllocate):
+        (JSC):
+        (JSC::CopiedAllocator::tryReallocate):
+        (JSC::CopiedAllocator::forceAllocate):
+        (JSC::CopiedAllocator::resetCurrentBlock):
+        (JSC::CopiedAllocator::setCurrentBlock):
+        (JSC::CopiedAllocator::currentCapacity):
+        * heap/CopiedBlock.h:
+        (CopiedBlock):
+        (JSC::CopiedBlock::create):
+        (JSC::CopiedBlock::zeroFillWilderness):
+        (JSC::CopiedBlock::CopiedBlock):
+        (JSC::CopiedBlock::payloadEnd):
+        (JSC):
+        (JSC::CopiedBlock::payloadCapacity):
+        (JSC::CopiedBlock::data):
+        (JSC::CopiedBlock::dataEnd):
+        (JSC::CopiedBlock::dataSize):
+        (JSC::CopiedBlock::wilderness):
+        (JSC::CopiedBlock::wildernessEnd):
+        (JSC::CopiedBlock::wildernessSize):
+        (JSC::CopiedBlock::size):
+        * heap/CopiedSpace.cpp:
+        (JSC::CopiedSpace::tryAllocateSlowCase):
+        (JSC::CopiedSpace::tryAllocateOversize):
+        (JSC::CopiedSpace::tryReallocate):
+        (JSC::CopiedSpace::doneFillingBlock):
+        (JSC::CopiedSpace::doneCopying):
+        * heap/CopiedSpace.h:
+        (CopiedSpace):
+        * heap/CopiedSpaceInlineMethods.h:
+        (JSC::CopiedSpace::startedCopying):
+        (JSC::CopiedSpace::allocateBlockForCopyingPhase):
+        (JSC::CopiedSpace::allocateBlock):
+        (JSC::CopiedSpace::tryAllocate):
+        (JSC):
+        * heap/MarkStack.cpp:
+        (JSC::SlotVisitor::startCopying):
+        (JSC::SlotVisitor::allocateNewSpace):
+        (JSC::SlotVisitor::doneCopying):
+        * heap/SlotVisitor.h:
+        (JSC::SlotVisitor::SlotVisitor):
+        * jit/JIT.h:
+        * jit/JITInlineMethods.h:
+        (JSC::JIT::emitAllocateBasicStorage):
+        (JSC::JIT::emitAllocateJSArray):
+
 2012-07-13  Mark Lam  <[email protected]>
 
         OfflineASM Pretty printing and commenting enhancements.

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssembler.h (122676 => 122677)


--- trunk/Source/JavaScriptCore/assembler/MacroAssembler.h	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssembler.h	2012-07-15 04:02:16 UTC (rev 122677)
@@ -280,6 +280,16 @@
     // On 32-bit platforms (i.e. x86), these methods directly map onto their 32-bit equivalents.
     // FIXME: should this use a test for 32-bitness instead of this specific exception?
 #if !CPU(X86_64)
+    void addPtr(Address src, RegisterID dest)
+    {
+        add32(src, dest);
+    }
+
+    void addPtr(AbsoluteAddress src, RegisterID dest)
+    {
+        add32(src, dest);
+    }
+
     void addPtr(RegisterID src, RegisterID dest)
     {
         add32(src, dest);
@@ -314,6 +324,11 @@
     {
         and32(imm, srcDest);
     }
+    
+    void negPtr(RegisterID dest)
+    {
+        neg32(dest);
+    }
 
     void orPtr(RegisterID src, RegisterID dest)
     {

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h (122676 => 122677)


--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h	2012-07-15 04:02:16 UTC (rev 122677)
@@ -157,6 +157,12 @@
     {
         add32(imm, dest, dest);
     }
+    
+    void add32(AbsoluteAddress src, RegisterID dest)
+    {
+        load32(src.m_ptr, dataTempRegister);
+        add32(dataTempRegister, dest);
+    }
 
     void add32(TrustedImm32 imm, RegisterID src, RegisterID dest)
     {

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86.h (122676 => 122677)


--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86.h	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86.h	2012-07-15 04:02:16 UTC (rev 122677)
@@ -63,6 +63,11 @@
         m_assembler.addl_im(imm.m_value, address.m_ptr);
     }
     
+    void add32(AbsoluteAddress address, RegisterID dest)
+    {
+        m_assembler.addl_mr(address.m_ptr, dest);
+    }
+    
     void add64(TrustedImm32 imm, AbsoluteAddress address)
     {
         m_assembler.addl_im(imm.m_value, address.m_ptr);
@@ -78,7 +83,7 @@
     {
         m_assembler.orl_im(imm.m_value, address.m_ptr);
     }
-
+    
     void sub32(TrustedImm32 imm, AbsoluteAddress address)
     {
         m_assembler.subl_im(imm.m_value, address.m_ptr);

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h (122676 => 122677)


--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h	2012-07-15 04:02:16 UTC (rev 122677)
@@ -63,6 +63,12 @@
         and32(imm, Address(scratchRegister));
     }
     
+    void add32(AbsoluteAddress address, RegisterID dest)
+    {
+        move(TrustedImmPtr(address.m_ptr), scratchRegister);
+        add32(Address(scratchRegister), dest);
+    }
+    
     void or32(TrustedImm32 imm, AbsoluteAddress address)
     {
         move(TrustedImmPtr(address.m_ptr), scratchRegister);
@@ -140,7 +146,18 @@
     {
         m_assembler.addq_rr(src, dest);
     }
+    
+    void addPtr(Address src, RegisterID dest)
+    {
+        m_assembler.addq_mr(src.offset, src.base, dest);
+    }
 
+    void addPtr(AbsoluteAddress src, RegisterID dest)
+    {
+        move(TrustedImmPtr(src.m_ptr), scratchRegister);
+        addPtr(Address(scratchRegister), dest);
+    }
+
     void addPtr(TrustedImm32 imm, RegisterID srcDest)
     {
         m_assembler.addq_ir(imm.m_value, srcDest);
@@ -182,6 +199,11 @@
     {
         m_assembler.andq_ir(imm.m_value, srcDest);
     }
+    
+    void negPtr(RegisterID dest)
+    {
+        m_assembler.negq_r(dest);
+    }
 
     void orPtr(RegisterID src, RegisterID dest)
     {

Modified: trunk/Source/JavaScriptCore/assembler/X86Assembler.h (122676 => 122677)


--- trunk/Source/JavaScriptCore/assembler/X86Assembler.h	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/assembler/X86Assembler.h	2012-07-15 04:02:16 UTC (rev 122677)
@@ -304,6 +304,13 @@
     {
         m_formatter.oneByteOp(OP_ADD_GvEv, dst, base, offset);
     }
+    
+#if !CPU(X86_64)
+    void addl_mr(const void* addr, RegisterID dst)
+    {
+        m_formatter.oneByteOp(OP_ADD_GvEv, dst, addr);
+    }
+#endif
 
     void addl_rm(RegisterID src, int offset, RegisterID base)
     {
@@ -338,6 +345,11 @@
         m_formatter.oneByteOp64(OP_ADD_EvGv, src, dst);
     }
 
+    void addq_mr(int offset, RegisterID base, RegisterID dst)
+    {
+        m_formatter.oneByteOp64(OP_ADD_GvEv, dst, base, offset);
+    }
+
     void addq_ir(int imm, RegisterID dst)
     {
         if (CAN_SIGN_EXTEND_8_32(imm)) {
@@ -443,6 +455,13 @@
         m_formatter.oneByteOp(OP_GROUP3_Ev, GROUP3_OP_NEG, dst);
     }
 
+#if CPU(X86_64)
+    void negq_r(RegisterID dst)
+    {
+        m_formatter.oneByteOp64(OP_GROUP3_Ev, GROUP3_OP_NEG, dst);
+    }
+#endif
+
     void negl_m(int offset, RegisterID base)
     {
         m_formatter.oneByteOp(OP_GROUP3_Ev, GROUP3_OP_NEG, base, offset);

Modified: trunk/Source/JavaScriptCore/heap/CopiedAllocator.h (122676 => 122677)


--- trunk/Source/JavaScriptCore/heap/CopiedAllocator.h	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/heap/CopiedAllocator.h	2012-07-15 04:02:16 UTC (rev 122677)
@@ -27,6 +27,8 @@
 #define CopiedAllocator_h
 
 #include "CopiedBlock.h"
+#include <wtf/CheckedBoolean.h>
+#include <wtf/DataLog.h>
 
 namespace JSC {
 
@@ -34,65 +36,109 @@
     friend class JIT;
 public:
     CopiedAllocator();
-    void* allocate(size_t);
-    bool fitsInCurrentBlock(size_t);
-    bool wasLastAllocation(void*, size_t);
-    void startedCopying();
-    void resetCurrentBlock(CopiedBlock*);
+    
+    CheckedBoolean tryAllocate(size_t bytes, void** outPtr);
+    CheckedBoolean tryReallocate(void *oldPtr, size_t oldBytes, size_t newBytes);
+    void* forceAllocate(size_t bytes);
+    CopiedBlock* resetCurrentBlock();
+    void setCurrentBlock(CopiedBlock*);
     size_t currentCapacity();
+    
+    bool isValid() { return !!m_currentBlock; }
 
 private:
     CopiedBlock* currentBlock() { return m_currentBlock; }
 
-    char* m_currentOffset;
+    size_t m_currentRemaining;
+    char* m_currentPayloadEnd;
     CopiedBlock* m_currentBlock; 
 };
 
 inline CopiedAllocator::CopiedAllocator()
-    : m_currentOffset(0)
+    : m_currentRemaining(0)
+    , m_currentPayloadEnd(0)
     , m_currentBlock(0)
 {
 }
 
-inline void* CopiedAllocator::allocate(size_t bytes)
+inline CheckedBoolean CopiedAllocator::tryAllocate(size_t bytes, void** outPtr)
 {
-    ASSERT(m_currentOffset);
     ASSERT(is8ByteAligned(reinterpret_cast<void*>(bytes)));
-    ASSERT(fitsInCurrentBlock(bytes));
-    void* ptr = static_cast<void*>(m_currentOffset);
-    m_currentOffset += bytes;
-    ASSERT(is8ByteAligned(ptr));
-    return ptr;
+    
+    // This code is written in a gratuitously low-level manner, in order to
+    // serve as a kind of template for what the JIT would do. Note that the
+    // way it's written it ought to only require one register, which doubles
+    // as the result, provided that the compiler does a minimal amount of
+    // control flow simplification and the bytes argument is a constant.
+    
+    size_t currentRemaining = m_currentRemaining;
+    if (bytes > currentRemaining)
+        return false;
+    currentRemaining -= bytes;
+    m_currentRemaining = currentRemaining;
+    *outPtr = m_currentPayloadEnd - currentRemaining - bytes;
+
+    ASSERT(is8ByteAligned(*outPtr));
+
+    return true;
 }
 
-inline bool CopiedAllocator::fitsInCurrentBlock(size_t bytes)
+inline CheckedBoolean CopiedAllocator::tryReallocate(
+    void* oldPtr, size_t oldBytes, size_t newBytes)
 {
-    return m_currentOffset + bytes < reinterpret_cast<char*>(m_currentBlock) + HeapBlock::s_blockSize && m_currentOffset + bytes > m_currentOffset;
+    ASSERT(is8ByteAligned(oldPtr));
+    ASSERT(is8ByteAligned(reinterpret_cast<void*>(oldBytes)));
+    ASSERT(is8ByteAligned(reinterpret_cast<void*>(newBytes)));
+    
+    ASSERT(newBytes > oldBytes);
+    
+    size_t additionalBytes = newBytes - oldBytes;
+    
+    size_t currentRemaining = m_currentRemaining;
+    if (m_currentPayloadEnd - currentRemaining - oldBytes != static_cast<char*>(oldPtr))
+        return false;
+    
+    if (additionalBytes > currentRemaining)
+        return false;
+    
+    m_currentRemaining = currentRemaining - additionalBytes;
+    
+    return true;
 }
 
-inline bool CopiedAllocator::wasLastAllocation(void* ptr, size_t size)
+inline void* CopiedAllocator::forceAllocate(size_t bytes)
 {
-    return static_cast<char*>(ptr) + size == m_currentOffset && ptr > m_currentBlock && ptr < reinterpret_cast<char*>(m_currentBlock) + HeapBlock::s_blockSize;
+    void* result = 0; // Needed because compilers don't realize this will always be assigned.
+    CheckedBoolean didSucceed = tryAllocate(bytes, &result);
+    ASSERT(didSucceed);
+    return result;
 }
 
-inline void CopiedAllocator::startedCopying()
+inline CopiedBlock* CopiedAllocator::resetCurrentBlock()
 {
-    if (m_currentBlock)
-        m_currentBlock->m_offset = static_cast<void*>(m_currentOffset);
-    m_currentOffset = 0;
-    m_currentBlock = 0;
+    CopiedBlock* result = m_currentBlock;
+    if (result) {
+        result->m_remaining = m_currentRemaining;
+        m_currentBlock = 0;
+        m_currentRemaining = 0;
+        m_currentPayloadEnd = 0;
+    }
+    return result;
 }
 
-inline void CopiedAllocator::resetCurrentBlock(CopiedBlock* newBlock)
+inline void CopiedAllocator::setCurrentBlock(CopiedBlock* newBlock)
 {
-    if (m_currentBlock)
-        m_currentBlock->m_offset = static_cast<void*>(m_currentOffset);
+    ASSERT(!m_currentBlock);
     m_currentBlock = newBlock;
-    m_currentOffset = static_cast<char*>(newBlock->m_offset);
+    ASSERT(newBlock);
+    m_currentRemaining = newBlock->m_remaining;
+    m_currentPayloadEnd = newBlock->payloadEnd();
 }
 
 inline size_t CopiedAllocator::currentCapacity()
 {
+    if (!m_currentBlock)
+        return 0;
     return m_currentBlock->capacity();
 }
 

Modified: trunk/Source/JavaScriptCore/heap/CopiedBlock.h (122676 => 122677)


--- trunk/Source/JavaScriptCore/heap/CopiedBlock.h	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/heap/CopiedBlock.h	2012-07-15 04:02:16 UTC (rev 122677)
@@ -42,15 +42,30 @@
     static CopiedBlock* createNoZeroFill(const PageAllocationAligned&);
     static PageAllocationAligned destroy(CopiedBlock*);
 
+    // The payload is the region of the block that is usable for allocations.
     char* payload();
+    char* payloadEnd();
+    size_t payloadCapacity();
+    
+    // The data is the region of the block that has been used for allocations.
+    char* data();
+    char* dataEnd();
+    size_t dataSize();
+    
+    // The wilderness is the region of the block that is usable for allocations
+    // but has not been so used.
+    char* wilderness();
+    char* wildernessEnd();
+    size_t wildernessSize();
+    
     size_t size();
     size_t capacity();
 
 private:
     CopiedBlock(const PageAllocationAligned&);
-    void zeroFillToEnd(); // Can be called at any time to zero-fill to the end of the block.
+    void zeroFillWilderness(); // Can be called at any time to zero-fill to the end of the block.
 
-    void* m_offset;
+    size_t m_remaining;
     uintptr_t m_isPinned;
 };
 
@@ -62,19 +77,18 @@
 inline CopiedBlock* CopiedBlock::create(const PageAllocationAligned& allocation)
 {
     CopiedBlock* block = createNoZeroFill(allocation);
-    block->zeroFillToEnd();
+    block->zeroFillWilderness();
     return block;
 }
 
-inline void CopiedBlock::zeroFillToEnd()
+inline void CopiedBlock::zeroFillWilderness()
 {
 #if USE(JSVALUE64)
-    char* offset = static_cast<char*>(m_offset);
-    memset(static_cast<void*>(offset), 0, static_cast<size_t>((reinterpret_cast<char*>(this) + m_allocation.size()) - offset));
+    memset(wilderness(), 0, wildernessSize());
 #else
     JSValue emptyValue;
-    JSValue* limit = reinterpret_cast_ptr<JSValue*>(reinterpret_cast<char*>(this) + m_allocation.size());
-    for (JSValue* currentValue = reinterpret_cast<JSValue*>(m_offset); currentValue < limit; currentValue++)
+    JSValue* limit = reinterpret_cast_ptr<JSValue*>(wildernessEnd());
+    for (JSValue* currentValue = reinterpret_cast<JSValue*>(wilderness()); currentValue < limit; currentValue++)
         *currentValue = emptyValue;
 #endif
 }
@@ -90,10 +104,10 @@
 
 inline CopiedBlock::CopiedBlock(const PageAllocationAligned& allocation)
     : HeapBlock(allocation)
-    , m_offset(payload())
+    , m_remaining(payloadCapacity())
     , m_isPinned(false)
 {
-    ASSERT(is8ByteAligned(static_cast<void*>(m_offset)));
+    ASSERT(is8ByteAligned(reinterpret_cast<void*>(m_remaining)));
 }
 
 inline char* CopiedBlock::payload()
@@ -101,9 +115,49 @@
     return reinterpret_cast<char*>(this) + ((sizeof(CopiedBlock) + 7) & ~7);
 }
 
+inline char* CopiedBlock::payloadEnd()
+{
+    return reinterpret_cast<char*>(this) + m_allocation.size();
+}
+
+inline size_t CopiedBlock::payloadCapacity()
+{
+    return payloadEnd() - payload();
+}
+
+inline char* CopiedBlock::data()
+{
+    return payload();
+}
+
+inline char* CopiedBlock::dataEnd()
+{
+    return payloadEnd() - m_remaining;
+}
+
+inline size_t CopiedBlock::dataSize()
+{
+    return dataEnd() - data();
+}
+
+inline char* CopiedBlock::wilderness()
+{
+    return dataEnd();
+}
+
+inline char* CopiedBlock::wildernessEnd()
+{
+    return payloadEnd();
+}
+
+inline size_t CopiedBlock::wildernessSize()
+{
+    return wildernessEnd() - wilderness();
+}
+
 inline size_t CopiedBlock::size()
 {
-    return static_cast<size_t>(static_cast<char*>(m_offset) - payload());
+    return dataSize();
 }
 
 inline size_t CopiedBlock::capacity()

Modified: trunk/Source/JavaScriptCore/heap/CopiedSpace.cpp (122676 => 122677)


--- trunk/Source/JavaScriptCore/heap/CopiedSpace.cpp	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/heap/CopiedSpace.cpp	2012-07-15 04:02:16 UTC (rev 122677)
@@ -71,8 +71,7 @@
 
     allocateBlock();
 
-    *outPtr = m_allocator.allocate(bytes);
-    ASSERT(*outPtr);
+    *outPtr = m_allocator.forceAllocate(bytes);
     return true;
 }
 
@@ -93,7 +92,10 @@
     m_blockFilter.add(reinterpret_cast<Bits>(block));
     m_blockSet.add(block);
     
-    *outPtr = allocateFromBlock(block, bytes);
+    CopiedAllocator allocator;
+    allocator.setCurrentBlock(block);
+    *outPtr = allocator.forceAllocate(bytes);
+    allocator.resetCurrentBlock();
 
     m_heap->didAllocate(blockSize);
 
@@ -107,18 +109,13 @@
     
     void* oldPtr = *ptr;
     ASSERT(!m_heap->globalData()->isInitializingObject());
-
+    
     if (isOversize(oldSize) || isOversize(newSize))
         return tryReallocateOversize(ptr, oldSize, newSize);
+    
+    if (m_allocator.tryReallocate(oldPtr, oldSize, newSize))
+        return true;
 
-    if (m_allocator.wasLastAllocation(oldPtr, oldSize)) {
-        size_t delta = newSize - oldSize;
-        if (m_allocator.fitsInCurrentBlock(delta)) {
-            (void)m_allocator.allocate(delta);
-            return true;
-        }
-    }
-
     void* result = 0;
     if (!tryAllocate(newSize, &result)) {
         *ptr = 0;
@@ -157,16 +154,17 @@
 
 void CopiedSpace::doneFillingBlock(CopiedBlock* block)
 {
-    ASSERT(block);
-    ASSERT(block->m_offset < reinterpret_cast<char*>(block) + HeapBlock::s_blockSize);
     ASSERT(m_inCopyingPhase);
+    
+    if (!block)
+        return;
 
-    if (block->m_offset == block->payload()) {
+    if (!block->dataSize()) {
         recycleBlock(block);
         return;
     }
 
-    block->zeroFillToEnd();
+    block->zeroFillWilderness();
 
     {
         SpinLockHolder locker(&m_toSpaceLock);
@@ -226,7 +224,7 @@
     if (!m_toSpace->head())
         allocateBlock();
     else
-        m_allocator.resetCurrentBlock(static_cast<CopiedBlock*>(m_toSpace->head()));
+        m_allocator.setCurrentBlock(static_cast<CopiedBlock*>(m_toSpace->head()));
 }
 
 size_t CopiedSpace::size()

Modified: trunk/Source/JavaScriptCore/heap/CopiedSpace.h (122676 => 122677)


--- trunk/Source/JavaScriptCore/heap/CopiedSpace.h	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/heap/CopiedSpace.h	2012-07-15 04:02:16 UTC (rev 122677)
@@ -77,9 +77,7 @@
     static CopiedBlock* blockFor(void*);
 
 private:
-    static void* allocateFromBlock(CopiedBlock*, size_t);
     static bool isOversize(size_t);
-    static bool fitsInBlock(CopiedBlock*, size_t);
     static CopiedBlock* oversizeBlockFor(void* ptr);
 
     CheckedBoolean tryAllocateSlowCase(size_t, void**);

Modified: trunk/Source/JavaScriptCore/heap/CopiedSpaceInlineMethods.h (122676 => 122677)


--- trunk/Source/JavaScriptCore/heap/CopiedSpaceInlineMethods.h	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/heap/CopiedSpaceInlineMethods.h	2012-07-15 04:02:16 UTC (rev 122677)
@@ -64,7 +64,7 @@
     m_toSpace = temp;
 
     m_blockFilter.reset();
-    m_allocator.startedCopying();
+    m_allocator.resetCurrentBlock();
 
     ASSERT(!m_inCopyingPhase);
     ASSERT(!m_numberOfLoanedBlocks);
@@ -94,7 +94,7 @@
         m_numberOfLoanedBlocks++;
     }
 
-    ASSERT(block->m_offset == block->payload());
+    ASSERT(!block->dataSize());
     return block;
 }
 
@@ -103,45 +103,27 @@
     if (m_heap->shouldCollect())
         m_heap->collect(Heap::DoNotSweep);
 
+    m_allocator.resetCurrentBlock();
+    
     CopiedBlock* block = CopiedBlock::create(m_heap->blockAllocator().allocate());
         
     m_toSpace->push(block);
     m_blockFilter.add(reinterpret_cast<Bits>(block));
     m_blockSet.add(block);
-    m_allocator.resetCurrentBlock(block);
+    m_allocator.setCurrentBlock(block);
 }
 
-inline bool CopiedSpace::fitsInBlock(CopiedBlock* block, size_t bytes)
-{
-    return static_cast<char*>(block->m_offset) + bytes < reinterpret_cast<char*>(block) + block->capacity() && static_cast<char*>(block->m_offset) + bytes > block->m_offset;
-}
-
 inline CheckedBoolean CopiedSpace::tryAllocate(size_t bytes, void** outPtr)
 {
     ASSERT(!m_heap->globalData()->isInitializingObject());
 
-    if (isOversize(bytes) || !m_allocator.fitsInCurrentBlock(bytes))
+    if (isOversize(bytes) || !m_allocator.tryAllocate(bytes, outPtr))
         return tryAllocateSlowCase(bytes, outPtr);
     
-    *outPtr = m_allocator.allocate(bytes);
     ASSERT(*outPtr);
     return true;
 }
 
-inline void* CopiedSpace::allocateFromBlock(CopiedBlock* block, size_t bytes)
-{
-    ASSERT(fitsInBlock(block, bytes));
-    ASSERT(is8ByteAligned(block->m_offset));
-    
-    void* ptr = block->m_offset;
-    ASSERT(block->m_offset >= block->payload() && block->m_offset < reinterpret_cast<char*>(block) + block->capacity());
-    block->m_offset = static_cast<void*>((static_cast<char*>(ptr) + bytes));
-    ASSERT(block->m_offset >= block->payload() && block->m_offset < reinterpret_cast<char*>(block) + block->capacity());
-
-    ASSERT(is8ByteAligned(ptr));
-    return ptr;
-}
-
 inline bool CopiedSpace::isOversize(size_t bytes)
 {
     return bytes > s_maxAllocationSize;

Modified: trunk/Source/JavaScriptCore/heap/MarkStack.cpp (122676 => 122677)


--- trunk/Source/JavaScriptCore/heap/MarkStack.cpp	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/heap/MarkStack.cpp	2012-07-15 04:02:16 UTC (rev 122677)
@@ -515,9 +515,8 @@
 
 void SlotVisitor::startCopying()
 {
-    ASSERT(!m_copyBlock);
-    m_copyBlock = m_shared.m_copiedSpace->allocateBlockForCopyingPhase();
-}    
+    ASSERT(!m_copiedAllocator.isValid());
+}
 
 void* SlotVisitor::allocateNewSpace(void* ptr, size_t bytes)
 {
@@ -528,18 +527,17 @@
 
     if (m_shared.m_copiedSpace->isPinned(ptr))
         return 0;
+    
+    void* result = 0; // Compilers don't realize that this will be assigned.
+    if (m_copiedAllocator.tryAllocate(bytes, &result))
+        return result;
+    
+    m_shared.m_copiedSpace->doneFillingBlock(m_copiedAllocator.resetCurrentBlock());
+    m_copiedAllocator.setCurrentBlock(m_shared.m_copiedSpace->allocateBlockForCopyingPhase());
 
-    // The only time it's possible to have a null copy block is if we have just started copying.
-    if (!m_copyBlock)
-        startCopying();
-
-    if (!CopiedSpace::fitsInBlock(m_copyBlock, bytes)) {
-        // We don't need to lock across these two calls because the master thread won't 
-        // call doneCopying() because this thread is considered active.
-        m_shared.m_copiedSpace->doneFillingBlock(m_copyBlock);
-        m_copyBlock = m_shared.m_copiedSpace->allocateBlockForCopyingPhase();
-    }
-    return CopiedSpace::allocateFromBlock(m_copyBlock, bytes);
+    CheckedBoolean didSucceed = m_copiedAllocator.tryAllocate(bytes, &result);
+    ASSERT(didSucceed);
+    return result;
 }
 
 ALWAYS_INLINE bool JSString::tryHashConstLock()
@@ -639,12 +637,10 @@
     
 void SlotVisitor::doneCopying()
 {
-    if (!m_copyBlock)
+    if (!m_copiedAllocator.isValid())
         return;
 
-    m_shared.m_copiedSpace->doneFillingBlock(m_copyBlock);
-
-    m_copyBlock = 0;
+    m_shared.m_copiedSpace->doneFillingBlock(m_copiedAllocator.resetCurrentBlock());
 }
 
 void SlotVisitor::harvestWeakReferences()

Modified: trunk/Source/JavaScriptCore/heap/SlotVisitor.h (122676 => 122677)


--- trunk/Source/JavaScriptCore/heap/SlotVisitor.h	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/heap/SlotVisitor.h	2012-07-15 04:02:16 UTC (rev 122677)
@@ -70,12 +70,11 @@
 
     void donateKnownParallel();
 
-    CopiedBlock* m_copyBlock;
+    CopiedAllocator m_copiedAllocator;
 };
 
 inline SlotVisitor::SlotVisitor(MarkStackThreadSharedData& shared)
     : MarkStack(shared)
-    , m_copyBlock(0)
 {
 }
 

Modified: trunk/Source/JavaScriptCore/jit/JIT.h (122676 => 122677)


--- trunk/Source/JavaScriptCore/jit/JIT.h	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/jit/JIT.h	2012-07-15 04:02:16 UTC (rev 122677)
@@ -435,7 +435,7 @@
         void emitWriteBarrier(JSCell* owner, RegisterID value, RegisterID scratch, WriteBarrierMode, WriteBarrierUseKind);
 
         template<typename ClassType, bool destructor, typename StructureType> void emitAllocateBasicJSObject(StructureType, RegisterID result, RegisterID storagePtr);
-        void emitAllocateBasicStorage(size_t, RegisterID result, RegisterID storagePtr);
+        void emitAllocateBasicStorage(size_t, RegisterID result);
         template<typename T> void emitAllocateJSFinalObject(T structure, RegisterID result, RegisterID storagePtr);
         void emitAllocateJSArray(unsigned valuesRegister, unsigned length, RegisterID cellResult, RegisterID storageResult, RegisterID storagePtr);
         

Modified: trunk/Source/JavaScriptCore/jit/JITInlineMethods.h (122676 => 122677)


--- trunk/Source/JavaScriptCore/jit/JITInlineMethods.h	2012-07-15 02:19:00 UTC (rev 122676)
+++ trunk/Source/JavaScriptCore/jit/JITInlineMethods.h	2012-07-15 04:02:16 UTC (rev 122677)
@@ -437,25 +437,16 @@
     emitAllocateBasicJSObject<JSFinalObject, false, T>(structure, result, scratch);
 }
 
-inline void JIT::emitAllocateBasicStorage(size_t size, RegisterID result, RegisterID storagePtr)
+inline void JIT::emitAllocateBasicStorage(size_t size, RegisterID result)
 {
     CopiedAllocator* allocator = &m_globalData->heap.storageAllocator();
 
-    // FIXME: We need to check for wrap-around.
-    // Check to make sure that the allocation will fit in the current block.
-    loadPtr(&allocator->m_currentOffset, result);
-    addPtr(TrustedImm32(size), result);
-    loadPtr(&allocator->m_currentBlock, storagePtr);
-    addPtr(TrustedImm32(HeapBlock::s_blockSize), storagePtr);
-    addSlowCase(branchPtr(AboveOrEqual, result, storagePtr));
-
-    // Load the original offset.
-    loadPtr(&allocator->m_currentOffset, result);
-
-    // Bump the pointer forward.
-    move(result, storagePtr);
-    addPtr(TrustedImm32(size), storagePtr);
-    storePtr(storagePtr, &allocator->m_currentOffset);
+    loadPtr(&allocator->m_currentRemaining, result);
+    addSlowCase(branchSubPtr(Signed, TrustedImm32(size), result));
+    storePtr(result, &allocator->m_currentRemaining);
+    negPtr(result);
+    addPtr(AbsoluteAddress(&allocator->m_currentPayloadEnd), result);
+    subPtr(TrustedImm32(size), result);
 }
 
 inline void JIT::emitAllocateJSArray(unsigned valuesRegister, unsigned length, RegisterID cellResult, RegisterID storageResult, RegisterID storagePtr)
@@ -465,7 +456,7 @@
 
     // We allocate the backing store first to ensure that garbage collection 
     // doesn't happen during JSArray initialization.
-    emitAllocateBasicStorage(initialStorage, storageResult, storagePtr);
+    emitAllocateBasicStorage(initialStorage, storageResult);
 
     // Allocate the cell for the array.
     emitAllocateBasicJSObject<JSArray, false>(TrustedImmPtr(m_codeBlock->globalObject()->arrayStructure()), cellResult, storagePtr);