Title: [227717] trunk/Source/JavaScriptCore
Revision: 227717
Author: fpi...@apple.com
Date: 2018-01-27 18:23:25 -0800 (Sat, 27 Jan 2018)

Log Message

MarkedBlock should have a footer instead of a header
https://bugs.webkit.org/show_bug.cgi?id=182217

Reviewed by JF Bastien.
        
This moves the MarkedBlock's meta-data from the header to the footer. This doesn't really
change anything except for some compile-time constants, so it should not affect performance.
        
This change is to help protect against Spectre attacks on structure checks, which allow for
small-offset out-of-bounds access. By putting the meta-data at the end of the block, small
OOBs will only get to other objects in the same block or the block footer. The block footer
is not super interesting. So, if we combine this with the TLC change (r227617), this means we
can use blocks as the mechanism of achieving distance between objects from different origins.
We just need to avoid ever putting objects from different origins in the same block. That's
what bug 181636 is about.
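
In terms of layout, the footer now occupies the tail of the 16KB block. A
minimal sketch of the arithmetic, assuming a hypothetical footer size (the
real value is sizeof(MarkedBlock::Footer)):

    constexpr size_t blockSize = 16 * 1024;
    constexpr size_t atomSize = 16;
    constexpr size_t footerSize = 168; // assumption; really sizeof(MarkedBlock::Footer)
    constexpr size_t endAtom = (blockSize - footerSize) / atomSize; // first atom past the payload
    constexpr size_t offsetOfFooter = endAtom * atomSize;           // where the footer starts
    // Objects occupy atoms [0, endAtom); a small OOB past an object stays in
    // this block's payload or its footer, never in a different block.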
        
* heap/BlockDirectory.cpp:
(JSC::blockHeaderSize): Deleted.
(JSC::BlockDirectory::blockSizeForBytes): Deleted.
* heap/BlockDirectory.h:
* heap/HeapUtil.h:
(JSC::HeapUtil::findGCObjectPointersForMarking):
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::MarkedBlock):
(JSC::MarkedBlock::~MarkedBlock):
(JSC::MarkedBlock::Footer::Footer):
(JSC::MarkedBlock::Footer::~Footer):
(JSC::MarkedBlock::Handle::stopAllocating):
(JSC::MarkedBlock::Handle::lastChanceToFinalize):
(JSC::MarkedBlock::Handle::resumeAllocating):
(JSC::MarkedBlock::aboutToMarkSlow):
(JSC::MarkedBlock::resetMarks):
(JSC::MarkedBlock::assertMarksNotStale):
(JSC::MarkedBlock::Handle::didConsumeFreeList):
(JSC::MarkedBlock::markCount):
(JSC::MarkedBlock::clearHasAnyMarked):
(JSC::MarkedBlock::Handle::didAddToDirectory):
(JSC::MarkedBlock::Handle::didRemoveFromDirectory):
(JSC::MarkedBlock::Handle::sweep):
* heap/MarkedBlock.h:
(JSC::MarkedBlock::markingVersion const):
(JSC::MarkedBlock::lock):
(JSC::MarkedBlock::subspace const):
(JSC::MarkedBlock::footer):
(JSC::MarkedBlock::footer const):
(JSC::MarkedBlock::handle):
(JSC::MarkedBlock::handle const):
(JSC::MarkedBlock::Handle::blockFooter):
(JSC::MarkedBlock::isAtomAligned):
(JSC::MarkedBlock::Handle::cellAlign):
(JSC::MarkedBlock::blockFor):
(JSC::MarkedBlock::vm const):
(JSC::MarkedBlock::weakSet):
(JSC::MarkedBlock::cellSize):
(JSC::MarkedBlock::attributes const):
(JSC::MarkedBlock::atomNumber):
(JSC::MarkedBlock::areMarksStale):
(JSC::MarkedBlock::aboutToMark):
(JSC::MarkedBlock::isMarkedRaw):
(JSC::MarkedBlock::isMarked):
(JSC::MarkedBlock::testAndSetMarked):
(JSC::MarkedBlock::marks const):
(JSC::MarkedBlock::isAtom):
(JSC::MarkedBlock::Handle::forEachCell):
(JSC::MarkedBlock::hasAnyMarked const):
(JSC::MarkedBlock::noteMarked):
(WTF::MarkedBlockHash::hash):
(JSC::MarkedBlock::firstAtom): Deleted.
* heap/MarkedBlockInlines.h:
(JSC::MarkedBlock::marksConveyLivenessDuringMarking):
(JSC::MarkedBlock::Handle::isLive):
(JSC::MarkedBlock::Handle::specializedSweep):
(JSC::MarkedBlock::Handle::forEachLiveCell):
(JSC::MarkedBlock::Handle::forEachDeadCell):
(JSC::MarkedBlock::Handle::forEachMarkedCell):
* heap/MarkedSpace.cpp:
* heap/MarkedSpace.h:
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:

Diff

Modified: trunk/Source/JavaScriptCore/ChangeLog (227716 => 227717)


--- trunk/Source/JavaScriptCore/ChangeLog	2018-01-27 18:14:06 UTC (rev 227716)
+++ trunk/Source/JavaScriptCore/ChangeLog	2018-01-28 02:23:25 UTC (rev 227717)
@@ -1,3 +1,86 @@
+2018-01-27  Filip Pizlo  <fpi...@apple.com>
+
+        MarkedBlock should have a footer instead of a header
+        https://bugs.webkit.org/show_bug.cgi?id=182217
+
+        Reviewed by JF Bastien.
+        
+        This moves the MarkedBlock's meta-data from the header to the footer. This doesn't really
+        change anything except for some compile-time constants, so it should not affect performance.
+        
+        This change is to help protect against Spectre attacks on structure checks, which allow for
+        small-offset out-of-bounds access. By putting the meta-data at the end of the block, small
+        OOBs will only get to other objects in the same block or the block footer. The block footer
+        is not super interesting. So, if we combine this with the TLC change (r227617), this means we
+        can use blocks as the mechanism of achieving distance between objects from different origins.
+        We just need to avoid ever putting objects from different origins in the same block. That's
+        what bug 181636 is about.
+        
+        * heap/BlockDirectory.cpp:
+        (JSC::blockHeaderSize): Deleted.
+        (JSC::BlockDirectory::blockSizeForBytes): Deleted.
+        * heap/BlockDirectory.h:
+        * heap/HeapUtil.h:
+        (JSC::HeapUtil::findGCObjectPointersForMarking):
+        * heap/MarkedBlock.cpp:
+        (JSC::MarkedBlock::MarkedBlock):
+        (JSC::MarkedBlock::~MarkedBlock):
+        (JSC::MarkedBlock::Footer::Footer):
+        (JSC::MarkedBlock::Footer::~Footer):
+        (JSC::MarkedBlock::Handle::stopAllocating):
+        (JSC::MarkedBlock::Handle::lastChanceToFinalize):
+        (JSC::MarkedBlock::Handle::resumeAllocating):
+        (JSC::MarkedBlock::aboutToMarkSlow):
+        (JSC::MarkedBlock::resetMarks):
+        (JSC::MarkedBlock::assertMarksNotStale):
+        (JSC::MarkedBlock::Handle::didConsumeFreeList):
+        (JSC::MarkedBlock::markCount):
+        (JSC::MarkedBlock::clearHasAnyMarked):
+        (JSC::MarkedBlock::Handle::didAddToDirectory):
+        (JSC::MarkedBlock::Handle::didRemoveFromDirectory):
+        (JSC::MarkedBlock::Handle::sweep):
+        * heap/MarkedBlock.h:
+        (JSC::MarkedBlock::markingVersion const):
+        (JSC::MarkedBlock::lock):
+        (JSC::MarkedBlock::subspace const):
+        (JSC::MarkedBlock::footer):
+        (JSC::MarkedBlock::footer const):
+        (JSC::MarkedBlock::handle):
+        (JSC::MarkedBlock::handle const):
+        (JSC::MarkedBlock::Handle::blockFooter):
+        (JSC::MarkedBlock::isAtomAligned):
+        (JSC::MarkedBlock::Handle::cellAlign):
+        (JSC::MarkedBlock::blockFor):
+        (JSC::MarkedBlock::vm const):
+        (JSC::MarkedBlock::weakSet):
+        (JSC::MarkedBlock::cellSize):
+        (JSC::MarkedBlock::attributes const):
+        (JSC::MarkedBlock::atomNumber):
+        (JSC::MarkedBlock::areMarksStale):
+        (JSC::MarkedBlock::aboutToMark):
+        (JSC::MarkedBlock::isMarkedRaw):
+        (JSC::MarkedBlock::isMarked):
+        (JSC::MarkedBlock::testAndSetMarked):
+        (JSC::MarkedBlock::marks const):
+        (JSC::MarkedBlock::isAtom):
+        (JSC::MarkedBlock::Handle::forEachCell):
+        (JSC::MarkedBlock::hasAnyMarked const):
+        (JSC::MarkedBlock::noteMarked):
+        (WTF::MarkedBlockHash::hash):
+        (JSC::MarkedBlock::firstAtom): Deleted.
+        * heap/MarkedBlockInlines.h:
+        (JSC::MarkedBlock::marksConveyLivenessDuringMarking):
+        (JSC::MarkedBlock::Handle::isLive):
+        (JSC::MarkedBlock::Handle::specializedSweep):
+        (JSC::MarkedBlock::Handle::forEachLiveCell):
+        (JSC::MarkedBlock::Handle::forEachDeadCell):
+        (JSC::MarkedBlock::Handle::forEachMarkedCell):
+        * heap/MarkedSpace.cpp:
+        * heap/MarkedSpace.h:
+        * llint/LowLevelInterpreter.asm:
+        * llint/LowLevelInterpreter32_64.asm:
+        * llint/LowLevelInterpreter64.asm:
+
 2018-01-27  Yusuke Suzuki  <utatane....@gmail.com>
 
         DFG strength reduction fails to convert NumberToStringWithValidRadixConstant for 0 to constant '0'

Modified: trunk/Source/JavaScriptCore/heap/BlockDirectory.cpp (227716 => 227717)


--- trunk/Source/JavaScriptCore/heap/BlockDirectory.cpp	2018-01-27 18:14:06 UTC (rev 227716)
+++ trunk/Source/JavaScriptCore/heap/BlockDirectory.cpp	2018-01-28 02:23:25 UTC (rev 227717)
@@ -90,19 +90,6 @@
     return m_blocks[m_allocationCursor];
 }
 
-static size_t blockHeaderSize()
-{
-    return WTF::roundUpToMultipleOf<MarkedBlock::atomSize>(sizeof(MarkedBlock));
-}
-
-size_t BlockDirectory::blockSizeForBytes(size_t bytes)
-{
-    size_t minBlockSize = MarkedBlock::blockSize;
-    size_t minAllocationSize = blockHeaderSize() + WTF::roundUpToMultipleOf<MarkedBlock::atomSize>(bytes);
-    minAllocationSize = WTF::roundUpToMultipleOf(WTF::pageSize(), minAllocationSize);
-    return std::max(minBlockSize, minAllocationSize);
-}
-
 MarkedBlock::Handle* BlockDirectory::tryAllocateBlock()
 {
     SuperSamplerScope superSamplerScope(false);

Modified: trunk/Source/JavaScriptCore/heap/BlockDirectory.h (227716 => 227717)


--- trunk/Source/JavaScriptCore/heap/BlockDirectory.h	2018-01-27 18:14:06 UTC (rev 227716)
+++ trunk/Source/JavaScriptCore/heap/BlockDirectory.h	2018-01-28 02:23:25 UTC (rev 227717)
@@ -113,8 +113,6 @@
 
     bool isPagedOut(double deadline);
     
-    static size_t blockSizeForBytes(size_t);
-    
     Lock& bitvectorLock() { return m_bitvectorLock; }
 
 #define BLOCK_DIRECTORY_BIT_ACCESSORS(lowerBitName, capitalBitName)     \

Modified: trunk/Source/JavaScriptCore/heap/HeapUtil.h (227716 => 227717)


--- trunk/Source/JavaScriptCore/heap/HeapUtil.h	2018-01-27 18:14:06 UTC (rev 227716)
+++ trunk/Source/JavaScriptCore/heap/HeapUtil.h	2018-01-28 02:23:25 UTC (rev 227717)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -124,7 +124,7 @@
     
         // Also, a butterfly could point at the end of an object plus sizeof(IndexingHeader). In that
         // case, this is pointing to the object to the right of the one we should be marking.
-        if (candidate->atomNumber(alignedPointer) > MarkedBlock::firstAtom()
+        if (candidate->atomNumber(alignedPointer) > 0
             && pointer <= alignedPointer + sizeof(IndexingHeader))
             tryPointer(alignedPointer - candidate->cellSize());
     }
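
Since there is no longer a header to skip, atom numbering starts at the first
object cell, and the old firstAtom() guard reduces to a plain > 0 check. A
self-contained sketch of the numbering, assuming the constants from
MarkedBlock.h:

    #include <cstddef>
    #include <cstdint>

    constexpr uintptr_t blockMask = ~static_cast<uintptr_t>(16 * 1024 - 1);
    constexpr size_t atomSize = 16;

    // The 16-byte index of p within its (16KB-aligned) block. Before this
    // change, indexes below firstAtom() landed in the header metadata.
    size_t atomNumber(const void* p)
    {
        uintptr_t bits = reinterpret_cast<uintptr_t>(p);
        return (bits - (bits & blockMask)) / atomSize;
    }
    // atomNumber(p) == 0 now means "first cell in the block", so "is there a
    // cell to the left of p?" is simply atomNumber(p) > 0.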

Modified: trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp (227716 => 227717)


--- trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp	2018-01-27 18:14:06 UTC (rev 227716)
+++ trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp	2018-01-28 02:23:25 UTC (rev 227717)
@@ -86,14 +86,28 @@
 }
 
 MarkedBlock::MarkedBlock(VM& vm, Handle& handle)
+{
+    new (&footer()) Footer(vm, handle);
+    if (false)
+        dataLog(RawPointer(this), ": Allocated.\n");
+}
+
+MarkedBlock::~MarkedBlock()
+{
+    footer().~Footer();
+}
+
+MarkedBlock::Footer::Footer(VM& vm, Handle& handle)
     : m_handle(handle)
     , m_vm(&vm)
     , m_markingVersion(MarkedSpace::nullVersion)
 {
-    if (false)
-        dataLog(RawPointer(this), ": Allocated.\n");
 }
 
+MarkedBlock::Footer::~Footer()
+{
+}
+
 void MarkedBlock::Handle::unsweepWithNoNewlyAllocated()
 {
     RELEASE_ASSERT(m_isFreeListed);
@@ -108,7 +122,7 @@
 
 void MarkedBlock::Handle::stopAllocating(const FreeList& freeList)
 {
-    auto locker = holdLock(block().m_lock);
+    auto locker = holdLock(blockFooter().m_lock);
     
     if (false)
         dataLog(RawPointer(this), ": MarkedBlock::Handle::stopAllocating!\n");
@@ -155,9 +169,9 @@
 {
     directory()->setIsAllocated(NoLockingNecessary, this, false);
     directory()->setIsDestructible(NoLockingNecessary, this, true);
-    m_block->m_marks.clearAll();
-    m_block->clearHasAnyMarked();
-    m_block->m_markingVersion = heap()->objectSpace().markingVersion();
+    blockFooter().m_marks.clearAll();
+    block().clearHasAnyMarked();
+    blockFooter().m_markingVersion = heap()->objectSpace().markingVersion();
     m_weakSet.lastChanceToFinalize();
     m_newlyAllocated.clearAll();
     m_newlyAllocatedVersion = heap()->objectSpace().newlyAllocatedVersion();
@@ -167,7 +181,7 @@
 void MarkedBlock::Handle::resumeAllocating(FreeList& freeList)
 {
     {
-        auto locker = holdLock(block().m_lock);
+        auto locker = holdLock(blockFooter().m_lock);
         
         if (false)
             dataLog(RawPointer(this), ": MarkedBlock::Handle::resumeAllocating!\n");
@@ -200,7 +214,7 @@
 void MarkedBlock::aboutToMarkSlow(HeapVersion markingVersion)
 {
     ASSERT(vm()->heap.objectSpace().isMarking());
-    auto locker = holdLock(m_lock);
+    auto locker = holdLock(footer().m_lock);
     
     if (!areMarksStale(markingVersion))
         return;
@@ -217,7 +231,7 @@
         // date version! If it does, then we want to leave the newlyAllocated alone, since that
         // means that we had allocated in this previously empty block but did not fill it up, so
         // we created a newlyAllocated.
-        m_marks.clearAll();
+        footer().m_marks.clearAll();
     } else {
         if (false)
             dataLog(RawPointer(this), ": Doing things.\n");
@@ -230,16 +244,16 @@
             // cannot be lastChanceToFinalize. So it must be stopAllocating. That means that we just
             // computed the newlyAllocated bits just before the start of an increment. When we are in that
             // mode, it seems as if newlyAllocated should subsume marks.
-            ASSERT(handle().m_newlyAllocated.subsumes(m_marks));
-            m_marks.clearAll();
+            ASSERT(handle().m_newlyAllocated.subsumes(footer().m_marks));
+            footer().m_marks.clearAll();
         } else {
-            handle().m_newlyAllocated.setAndClear(m_marks);
+            handle().m_newlyAllocated.setAndClear(footer().m_marks);
             handle().m_newlyAllocatedVersion = newlyAllocatedVersion;
         }
     }
     clearHasAnyMarked();
     WTF::storeStoreFence();
-    m_markingVersion = markingVersion;
+    footer().m_markingVersion = markingVersion;
     
     // This means we're the first ones to mark any object in this block.
     directory->setIsMarkingNotEmpty(holdLock(directory->bitvectorLock()), &handle(), true);
@@ -260,14 +274,14 @@
     // version is null, aboutToMarkSlow() will assume that the marks were not stale as of before
     // beginMarking(). Hence the need to whip the marks into shape.
     if (areMarksStale())
-        m_marks.clearAll();
-    m_markingVersion = MarkedSpace::nullVersion;
+        footer().m_marks.clearAll();
+    footer().m_markingVersion = MarkedSpace::nullVersion;
 }
 
 #if !ASSERT_DISABLED
 void MarkedBlock::assertMarksNotStale()
 {
-    ASSERT(m_markingVersion == vm()->heap.objectSpace().markingVersion());
+    ASSERT(footer().m_markingVersion == vm()->heap.objectSpace().markingVersion());
 }
 #endif // !ASSERT_DISABLED
 
@@ -288,7 +302,7 @@
 
 void MarkedBlock::Handle::didConsumeFreeList()
 {
-    auto locker = holdLock(block().m_lock);
+    auto locker = holdLock(blockFooter().m_lock);
     if (false)
         dataLog(RawPointer(this), ": MarkedBlock::Handle::didConsumeFreeList!\n");
     ASSERT(isFreeListed());
@@ -298,12 +312,12 @@
 
 size_t MarkedBlock::markCount()
 {
-    return areMarksStale() ? 0 : m_marks.count();
+    return areMarksStale() ? 0 : footer().m_marks.count();
 }
 
 void MarkedBlock::clearHasAnyMarked()
 {
-    m_biasedMarkCount = m_markCountBias;
+    footer().m_biasedMarkCount = footer().m_markCountBias;
 }
 
 void MarkedBlock::noteMarkedSlow()
@@ -329,11 +343,11 @@
     
     m_index = index;
     m_directory = directory;
-    m_block->m_subspace = directory->subspace();
+    blockFooter().m_subspace = directory->subspace();
     
     size_t cellSize = directory->cellSize();
     m_atomsPerCell = (cellSize + atomSize - 1) / atomSize;
-    m_endAtom = atomsPerBlock - m_atomsPerCell + 1;
+    m_endAtom = endAtom - m_atomsPerCell + 1;
     
     m_attributes = directory->attributes();
 
@@ -347,7 +361,7 @@
     RELEASE_ASSERT(markCountBias < 0);
     
     // This means we haven't marked anything yet.
-    block().m_biasedMarkCount = block().m_markCountBias = static_cast<int16_t>(markCountBias);
+    blockFooter().m_biasedMarkCount = blockFooter().m_markCountBias = static_cast<int16_t>(markCountBias);
 }
 
 void MarkedBlock::Handle::didRemoveFromDirectory()
@@ -357,7 +371,7 @@
     
     m_index = std::numeric_limits<size_t>::max();
     m_directory = nullptr;
-    m_block->m_subspace = nullptr;
+    blockFooter().m_subspace = nullptr;
 }
 
 #if !ASSERT_DISABLED
@@ -410,7 +424,7 @@
     }
     
     if (space()->isMarking())
-        block().m_lock.lock();
+        blockFooter().m_lock.lock();
     
     subspace()->didBeginSweepingToFreeList(this);
     

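The constructor and destructor above are the standard placement-new pattern:
the block's memory is allocated elsewhere, and the Footer object is
constructed into, and destroyed out of, its tail. A runnable sketch with
stand-in fields (not JSC's allocator):

    #include <cstddef>
    #include <cstdlib>
    #include <new>

    struct Footer { int marks[4]; Footer() : marks{} {} ~Footer() {} }; // stand-in fields

    constexpr size_t blockSize = 16 * 1024;
    constexpr size_t atomSize = 16;
    constexpr size_t endAtom = (blockSize - sizeof(Footer)) / atomSize;

    int main()
    {
        // A block-aligned 16KB region, as the block allocator would hand out.
        void* block = std::aligned_alloc(blockSize, blockSize);
        // Construct the footer in place at the tail of the block...
        Footer* footer = new (static_cast<char*>(block) + endAtom * atomSize) Footer;
        // ...and destroy it explicitly before the memory is recycled.
        footer->~Footer();
        std::free(block);
    }
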
Modified: trunk/Source/JavaScriptCore/heap/MarkedBlock.h (227716 => 227717)


--- trunk/Source/JavaScriptCore/heap/MarkedBlock.h	2018-01-27 18:14:06 UTC (rev 227716)
+++ trunk/Source/JavaScriptCore/heap/MarkedBlock.h	2018-01-28 02:23:25 UTC (rev 227717)
@@ -43,7 +43,6 @@
 class SlotVisitor;
 class Subspace;
 
-typedef uintptr_t Bits;
 typedef uint32_t HeapVersion;
 
 // A marked block is a page-aligned container for heap-allocated objects.
@@ -60,16 +59,18 @@
     friend struct VerifyMarked;
 
 public:
+    class Footer;
     class Handle;
 private:
+    friend class Footer;
     friend class Handle;
 public:
-    static const size_t atomSize = 16; // bytes
-    static const size_t blockSize = 16 * KB;
-    static const size_t blockMask = ~(blockSize - 1); // blockSize must be a power of two.
+    static constexpr size_t atomSize = 16; // bytes
+    static constexpr size_t blockSize = 16 * KB;
+    static constexpr size_t blockMask = ~(blockSize - 1); // blockSize must be a power of two.
 
-    static const size_t atomsPerBlock = blockSize / atomSize;
-
+    static constexpr size_t atomsPerBlock = blockSize / atomSize;
+    
     static_assert(!(MarkedBlock::atomSize & (MarkedBlock::atomSize - 1)), "MarkedBlock::atomSize must be a power of two.");
     static_assert(!(MarkedBlock::blockSize & (MarkedBlock::blockSize - 1)), "MarkedBlock::blockSize must be a power of two.");
     
@@ -103,6 +104,7 @@
         ~Handle();
             
         MarkedBlock& block();
+        MarkedBlock::Footer& blockFooter();
             
         void* cellAlign(void*);
             
@@ -244,10 +246,71 @@
             
         MarkedBlock* m_block { nullptr };
     };
+
+private:    
+    static constexpr size_t atomAlignmentMask = atomSize - 1;
+
+    typedef char Atom[atomSize];
+
+public:
+    class Footer {
+    public:
+        Footer(VM&, Handle&);
+        ~Footer();
         
+    private:
+        friend class LLIntOffsetsExtractor;
+        friend class MarkedBlock;
+        
+        Handle& m_handle;
+        VM* m_vm;
+        Subspace* m_subspace;
+
+        CountingLock m_lock;
+    
+        // The actual mark count can be computed by doing: m_biasedMarkCount - m_markCountBias. Note
+        // that this count is racy. It will accurately detect whether or not exactly zero things were
+        // marked, but if N things got marked, then this may report anything in the range [1, N] (or
+        // before unbiased, it would be [1 + m_markCountBias, N + m_markCountBias].)
+        int16_t m_biasedMarkCount;
+    
+        // We bias the mark count so that if m_biasedMarkCount >= 0 then the block should be retired.
+        // We go to all this trouble to make marking a bit faster: this way, marking knows when to
+        // retire a block using a js/jns on m_biasedMarkCount.
+        //
+        // For example, if a block has room for 100 objects and retirement happens whenever 90% are
+        // live, then m_markCountBias will be -90. This way, when marking begins, this will cause us to
+        // set m_biasedMarkCount to -90 as well, since:
+        //
+        //     m_biasedMarkCount = actualMarkCount + m_markCountBias.
+        //
+        // Marking an object will increment m_biasedMarkCount. Once 90 objects get marked, we will have
+        // m_biasedMarkCount = 0, which will trigger retirement. In other words, we want to set
+        // m_markCountBias like so:
+        //
+        //     m_markCountBias = -(minMarkedBlockUtilization * cellsPerBlock)
+        //
+        // All of this also means that you can detect if any objects are marked by doing:
+        //
+        //     m_biasedMarkCount != m_markCountBias
+        int16_t m_markCountBias;
+
+        HeapVersion m_markingVersion;
+
+        Bitmap<atomsPerBlock> m_marks;
+    };
+        
+private:    
+    Footer& footer();
+    const Footer& footer() const;
+
+public:
+    static constexpr size_t endAtom = (blockSize - sizeof(Footer)) / atomSize;
+
     static MarkedBlock::Handle* tryCreate(Heap&, AlignedMemoryAllocator*);
         
     Handle& handle();
+    const Handle& handle() const;
         
     VM* vm() const;
     inline Heap* heap() const;
@@ -255,7 +318,6 @@
 
     static bool isAtomAligned(const void*);
     static MarkedBlock* blockFor(const void*);
-    static size_t firstAtom();
     size_t atomNumber(const void*);
         
     size_t markCount();
@@ -295,20 +357,19 @@
     void resetMarks();
     
     bool isMarkedRaw(const void* p);
-    HeapVersion markingVersion() const { return m_markingVersion; }
+    HeapVersion markingVersion() const { return footer().m_markingVersion; }
     
     const Bitmap<atomsPerBlock>& marks() const;
     
-    CountingLock& lock() { return m_lock; }
+    CountingLock& lock() { return footer().m_lock; }
     
-    Subspace* subspace() const { return m_subspace; }
+    Subspace* subspace() const { return footer().m_subspace; }
+    
+    static constexpr size_t offsetOfFooter = endAtom * atomSize;
 
 private:
-    static const size_t atomAlignmentMask = atomSize - 1;
-
-    typedef char Atom[atomSize];
-
     MarkedBlock(VM&, Handle&);
+    ~MarkedBlock();
     Atom* atoms();
         
     JS_EXPORT_PRIVATE void aboutToMarkSlow(HeapVersion markingVersion);
@@ -318,58 +379,36 @@
     
     inline bool marksConveyLivenessDuringMarking(HeapVersion markingVersion);
     inline bool marksConveyLivenessDuringMarking(HeapVersion myMarkingVersion, HeapVersion markingVersion);
-        
-    Handle& m_handle;
-    VM* m_vm;
-    Subspace* m_subspace;
+};
 
-    CountingLock m_lock;
-    
-    // The actual mark count can be computed by doing: m_biasedMarkCount - m_markCountBias. Note
-    // that this count is racy. It will accurately detect whether or not exactly zero things were
-    // marked, but if N things got marked, then this may report anything in the range [1, N] (or
-    // before unbiased, it would be [1 + m_markCountBias, N + m_markCountBias].)
-    int16_t m_biasedMarkCount;
-    
-    // We bias the mark count so that if m_biasedMarkCount >= 0 then the block should be retired.
-    // We go to all this trouble to make marking a bit faster: this way, marking knows when to
-    // retire a block using a js/jns on m_biasedMarkCount.
-    //
-    // For example, if a block has room for 100 objects and retirement happens whenever 90% are
-    // live, then m_markCountBias will be -90. This way, when marking begins, this will cause us to
-    // set m_biasedMarkCount to -90 as well, since:
-    //
-    //     m_biasedMarkCount = actualMarkCount + m_markCountBias.
-    //
-    // Marking an object will increment m_biasedMarkCount. Once 90 objects get marked, we will have
-    // m_biasedMarkCount = 0, which will trigger retirement. In other words, we want to set
-    // m_markCountBias like so:
-    //
-    //     m_markCountBias = -(minMarkedBlockUtilization * cellsPerBlock)
-    //
-    // All of this also means that you can detect if any objects are marked by doing:
-    //
-    //     m_biasedMarkCount != m_markCountBias
-    int16_t m_markCountBias;
+inline MarkedBlock::Footer& MarkedBlock::footer()
+{
+    return *bitwise_cast<MarkedBlock::Footer*>(atoms() + endAtom);
+}
 
-    HeapVersion m_markingVersion;
+inline const MarkedBlock::Footer& MarkedBlock::footer() const
+{
+    return const_cast<MarkedBlock*>(this)->footer();
+}
 
-    Bitmap<atomsPerBlock> m_marks;
-};
-
 inline MarkedBlock::Handle& MarkedBlock::handle()
 {
-    return m_handle;
+    return footer().m_handle;
 }
 
+inline const MarkedBlock::Handle& MarkedBlock::handle() const
+{
+    return const_cast<MarkedBlock*>(this)->handle();
+}
+
 inline MarkedBlock& MarkedBlock::Handle::block()
 {
     return *m_block;
 }
 
-inline size_t MarkedBlock::firstAtom()
+inline MarkedBlock::Footer& MarkedBlock::Handle::blockFooter()
 {
-    return WTF::roundUpToMultipleOf<atomSize>(sizeof(MarkedBlock)) / atomSize;
+    return block().footer();
 }
 
 inline MarkedBlock::Atom* MarkedBlock::atoms()
@@ -379,13 +418,13 @@
 
 inline bool MarkedBlock::isAtomAligned(const void* p)
 {
-    return !(reinterpret_cast<Bits>(p) & atomAlignmentMask);
+    return !(reinterpret_cast<uintptr_t>(p) & atomAlignmentMask);
 }
 
 inline void* MarkedBlock::Handle::cellAlign(void* p)
 {
-    Bits base = reinterpret_cast<Bits>(block().atoms() + firstAtom());
-    Bits bits = reinterpret_cast<Bits>(p);
+    uintptr_t base = reinterpret_cast<uintptr_t>(block().atoms());
+    uintptr_t bits = reinterpret_cast<uintptr_t>(p);
     bits -= base;
     bits -= bits % cellSize();
     bits += base;
@@ -394,7 +433,7 @@
 
 inline MarkedBlock* MarkedBlock::blockFor(const void* p)
 {
-    return reinterpret_cast<MarkedBlock*>(reinterpret_cast<Bits>(p) & blockMask);
+    return reinterpret_cast<MarkedBlock*>(reinterpret_cast<uintptr_t>(p) & blockMask);
 }
 
 inline BlockDirectory* MarkedBlock::Handle::directory() const
@@ -419,7 +458,7 @@
 
 inline VM* MarkedBlock::vm() const
 {
-    return m_vm;
+    return footer().m_vm;
 }
 
 inline WeakSet& MarkedBlock::Handle::weakSet()
@@ -429,7 +468,7 @@
 
 inline WeakSet& MarkedBlock::weakSet()
 {
-    return m_handle.weakSet();
+    return handle().weakSet();
 }
 
 inline void MarkedBlock::Handle::shrink()
@@ -454,7 +493,7 @@
 
 inline size_t MarkedBlock::cellSize()
 {
-    return m_handle.cellSize();
+    return handle().cellSize();
 }
 
 inline const CellAttributes& MarkedBlock::Handle::attributes() const
@@ -464,7 +503,7 @@
 
 inline const CellAttributes& MarkedBlock::attributes() const
 {
-    return m_handle.attributes();
+    return handle().attributes();
 }
 
 inline bool MarkedBlock::Handle::needsDestruction() const
@@ -494,17 +533,17 @@
 
 inline size_t MarkedBlock::atomNumber(const void* p)
 {
-    return (reinterpret_cast<Bits>(p) - reinterpret_cast<Bits>(this)) / atomSize;
+    return (reinterpret_cast<uintptr_t>(p) - reinterpret_cast<uintptr_t>(this)) / atomSize;
 }
 
 inline bool MarkedBlock::areMarksStale(HeapVersion markingVersion)
 {
-    return markingVersion != m_markingVersion;
+    return markingVersion != footer().m_markingVersion;
 }
 
 inline Dependency MarkedBlock::aboutToMark(HeapVersion markingVersion)
 {
-    HeapVersion version = m_markingVersion;
+    HeapVersion version = footer().m_markingVersion;
     if (UNLIKELY(version != markingVersion))
         aboutToMarkSlow(markingVersion);
     return Dependency::fence(version);
@@ -517,32 +556,32 @@
 
 inline bool MarkedBlock::isMarkedRaw(const void* p)
 {
-    return m_marks.get(atomNumber(p));
+    return footer().m_marks.get(atomNumber(p));
 }
 
 inline bool MarkedBlock::isMarked(HeapVersion markingVersion, const void* p)
 {
-    HeapVersion version = m_markingVersion;
+    HeapVersion version = footer().m_markingVersion;
     if (UNLIKELY(version != markingVersion))
         return false;
-    return m_marks.get(atomNumber(p), Dependency::fence(version));
+    return footer().m_marks.get(atomNumber(p), Dependency::fence(version));
 }
 
 inline bool MarkedBlock::isMarked(const void* p, Dependency dependency)
 {
     assertMarksNotStale();
-    return m_marks.get(atomNumber(p), dependency);
+    return footer().m_marks.get(atomNumber(p), dependency);
 }
 
 inline bool MarkedBlock::testAndSetMarked(const void* p, Dependency dependency)
 {
     assertMarksNotStale();
-    return m_marks.concurrentTestAndSet(atomNumber(p), dependency);
+    return footer().m_marks.concurrentTestAndSet(atomNumber(p), dependency);
 }
 
 inline const Bitmap<MarkedBlock::atomsPerBlock>& MarkedBlock::marks() const
 {
-    return m_marks;
+    return footer().m_marks;
 }
 
 inline bool MarkedBlock::Handle::isNewlyAllocated(const void* p)
@@ -569,13 +608,10 @@
 {
     ASSERT(MarkedBlock::isAtomAligned(p));
     size_t atomNumber = this->atomNumber(p);
-    size_t firstAtom = MarkedBlock::firstAtom();
-    if (atomNumber < firstAtom) // Filters pointers into MarkedBlock metadata.
+    if (atomNumber % handle().m_atomsPerCell) // Filters pointers into cell middles.
         return false;
-    if ((atomNumber - firstAtom) % m_handle.m_atomsPerCell) // Filters pointers into cell middles.
+    if (atomNumber >= handle().m_endAtom) // Filters pointers into invalid cells out of the range.
         return false;
-    if (atomNumber >= m_handle.m_endAtom) // Filters pointers into invalid cells out of the range.
-        return false;
     return true;
 }
 
@@ -583,7 +619,7 @@
 inline IterationStatus MarkedBlock::Handle::forEachCell(const Functor& functor)
 {
     HeapCell::Kind kind = m_attributes.cellKind;
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+    for (size_t i = 0; i < m_endAtom; i += m_atomsPerCell) {
         HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
         if (functor(cell, kind) == IterationStatus::Done)
             return IterationStatus::Done;
@@ -593,15 +629,15 @@
 
 inline bool MarkedBlock::hasAnyMarked() const
 {
-    return m_biasedMarkCount != m_markCountBias;
+    return footer().m_biasedMarkCount != footer().m_markCountBias;
 }
 
 inline void MarkedBlock::noteMarked()
 {
     // This is racy by design. We don't want to pay the price of an atomic increment!
-    int16_t biasedMarkCount = m_biasedMarkCount;
+    int16_t biasedMarkCount = footer().m_biasedMarkCount;
     ++biasedMarkCount;
-    m_biasedMarkCount = biasedMarkCount;
+    footer().m_biasedMarkCount = biasedMarkCount;
     if (UNLIKELY(!biasedMarkCount))
         noteMarkedSlow();
 }
@@ -616,7 +652,7 @@
         // Aligned VM regions tend to be monotonically increasing integers,
         // which is a great hash function, but we have to remove the low bits,
         // since they're always zero, which is a terrible hash function!
-        return reinterpret_cast<JSC::Bits>(key) / JSC::MarkedBlock::blockSize;
+        return reinterpret_cast<uintptr_t>(key) / JSC::MarkedBlock::blockSize;
     }
 };
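
The m_markCountBias scheme above is easiest to see with concrete numbers. A
minimal sketch, assuming the 100-cell / 90%-utilization example from the
comment:

    #include <cassert>
    #include <cstdint>

    int main()
    {
        int16_t markCountBias = -90;             // -(minMarkedBlockUtilization * cellsPerBlock)
        int16_t biasedMarkCount = markCountBias; // clearHasAnyMarked(): nothing marked yet
        for (int marked = 0; marked < 90; ++marked)
            ++biasedMarkCount;                   // noteMarked(), once per marked object
        assert(biasedMarkCount == 0);            // reaching zero is the cheap retirement signal
        // And "any objects marked?" is just: biasedMarkCount != markCountBias.
    }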
 

Modified: trunk/Source/JavaScriptCore/heap/MarkedBlockInlines.h (227716 => 227717)


--- trunk/Source/JavaScriptCore/heap/MarkedBlockInlines.h	2018-01-27 18:14:06 UTC (rev 227716)
+++ trunk/Source/JavaScriptCore/heap/MarkedBlockInlines.h	2018-01-28 02:23:25 UTC (rev 227717)
@@ -67,7 +67,7 @@
 
 inline bool MarkedBlock::marksConveyLivenessDuringMarking(HeapVersion markingVersion)
 {
-    return marksConveyLivenessDuringMarking(m_markingVersion, markingVersion);
+    return marksConveyLivenessDuringMarking(footer().m_markingVersion, markingVersion);
 }
 
 inline bool MarkedBlock::marksConveyLivenessDuringMarking(HeapVersion myMarkingVersion, HeapVersion markingVersion)
@@ -138,8 +138,9 @@
     // impact on perf - around 2% on splay if you get it wrong.
 
     MarkedBlock& block = this->block();
+    MarkedBlock::Footer& footer = block.footer();
     
-    auto count = block.m_lock.tryOptimisticFencelessRead();
+    auto count = footer.m_lock.tryOptimisticFencelessRead();
     if (count.value) {
         Dependency fenceBefore = Dependency::fence(count.input);
         MarkedBlock::Handle* fencedThis = fenceBefore.consume(this);
@@ -149,25 +150,26 @@
         HeapVersion myNewlyAllocatedVersion = fencedThis->m_newlyAllocatedVersion;
         if (myNewlyAllocatedVersion == newlyAllocatedVersion) {
             bool result = fencedThis->isNewlyAllocated(cell);
-            if (block.m_lock.fencelessValidate(count.value, Dependency::fence(result)))
+            if (footer.m_lock.fencelessValidate(count.value, Dependency::fence(result)))
                 return result;
         } else {
             MarkedBlock& fencedBlock = *fenceBefore.consume(&block);
+            MarkedBlock::Footer& fencedFooter = fencedBlock.footer();
             
-            HeapVersion myMarkingVersion = fencedBlock.m_markingVersion;
+            HeapVersion myMarkingVersion = fencedFooter.m_markingVersion;
             if (myMarkingVersion != markingVersion
                 && (!isMarking || !fencedBlock.marksConveyLivenessDuringMarking(myMarkingVersion, markingVersion))) {
-                if (block.m_lock.fencelessValidate(count.value, Dependency::fence(myMarkingVersion)))
+                if (footer.m_lock.fencelessValidate(count.value, Dependency::fence(myMarkingVersion)))
                     return false;
             } else {
-                bool result = fencedBlock.m_marks.get(block.atomNumber(cell));
-                if (block.m_lock.fencelessValidate(count.value, Dependency::fence(result)))
+                bool result = fencedFooter.m_marks.get(block.atomNumber(cell));
+                if (footer.m_lock.fencelessValidate(count.value, Dependency::fence(result)))
                     return result;
             }
         }
     }
     
-    auto locker = holdLock(block.m_lock);
+    auto locker = holdLock(footer.m_lock);
 
     ASSERT(!isFreeListed());
     
@@ -182,7 +184,7 @@
             return false;
     }
     
-    return block.m_marks.get(block.atomNumber(cell));
+    return footer.m_marks.get(block.atomNumber(cell));
 }
 
 inline bool MarkedBlock::Handle::isLiveCell(HeapVersion markingVersion, HeapVersion newlyAllocatedVersion, bool isMarking, const void* p)
@@ -240,6 +242,7 @@
     SuperSamplerScope superSamplerScope(false);
 
     MarkedBlock& block = this->block();
+    MarkedBlock::Footer& footer = block.footer();
     
     if (false)
         dataLog(RawPointer(this), "/", RawPointer(&block), ": MarkedBlock::Handle::specializedSweep!\n");
@@ -262,12 +265,12 @@
         && newlyAllocatedMode == DoesNotHaveNewlyAllocated) {
         
         // This is an incredibly powerful assertion that checks the sanity of our block bits.
-        if (marksMode == MarksNotStale && !block.m_marks.isEmpty()) {
+        if (marksMode == MarksNotStale && !footer.m_marks.isEmpty()) {
             WTF::dataFile().atomically(
                 [&] (PrintStream& out) {
                     out.print("Block ", RawPointer(&block), ": marks not empty!\n");
-                    out.print("Block lock is held: ", block.m_lock.isHeld(), "\n");
-                    out.print("Marking version of block: ", block.m_markingVersion, "\n");
+                    out.print("Block lock is held: ", footer.m_lock.isHeld(), "\n");
+                    out.print("Marking version of block: ", footer.m_markingVersion, "\n");
                     out.print("Marking version of heap: ", space()->markingVersion(), "\n");
                     UNREACHABLE_FOR_PLATFORM();
                 });
@@ -276,12 +279,12 @@
         char* startOfLastCell = static_cast<char*>(cellAlign(block.atoms() + m_endAtom - 1));
         char* payloadEnd = startOfLastCell + cellSize;
         RELEASE_ASSERT(payloadEnd - MarkedBlock::blockSize <= bitwise_cast<char*>(&block));
-        char* payloadBegin = bitwise_cast<char*>(block.atoms() + firstAtom());
+        char* payloadBegin = bitwise_cast<char*>(block.atoms());
         
         if (sweepMode == SweepToFreeList)
             setIsFreeListed();
         if (space()->isMarking())
-            block.m_lock.unlock();
+            footer.m_lock.unlock();
         if (destructionMode != BlockHasNoDestructors) {
             for (char* cell = payloadBegin; cell < payloadEnd; cell += cellSize)
                 destroy(cell);
@@ -320,9 +323,9 @@
             ++count;
         }
     };
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+    for (size_t i = 0; i < m_endAtom; i += m_atomsPerCell) {
         if (emptyMode == NotEmpty
-            && ((marksMode == MarksNotStale && block.m_marks.get(i))
+            && ((marksMode == MarksNotStale && footer.m_marks.get(i))
                 || (newlyAllocatedMode == HasNewlyAllocated && m_newlyAllocated.get(i)))) {
             isEmpty = false;
             continue;
@@ -340,7 +343,7 @@
         m_newlyAllocatedVersion = MarkedSpace::nullVersion;
     
     if (space()->isMarking())
-        block.m_lock.unlock();
+        footer.m_lock.unlock();
     
     if (destructionMode == BlockHasDestructorsAndCollectorIsRunning) {
         for (size_t i : deadCells)
@@ -492,7 +495,7 @@
     // https://bugs.webkit.org/show_bug.cgi?id=180315
     
     HeapCell::Kind kind = m_attributes.cellKind;
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+    for (size_t i = 0; i < m_endAtom; i += m_atomsPerCell) {
         HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
         if (!isLive(cell))
             continue;
@@ -507,7 +510,7 @@
 inline IterationStatus MarkedBlock::Handle::forEachDeadCell(const Functor& functor)
 {
     HeapCell::Kind kind = m_attributes.cellKind;
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+    for (size_t i = 0; i < m_endAtom; i += m_atomsPerCell) {
         HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
         if (isLive(cell))
             continue;
@@ -527,8 +530,8 @@
     WTF::loadLoadFence();
     if (areMarksStale)
         return IterationStatus::Continue;
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
-        if (!block.m_marks.get(i))
+    for (size_t i = 0; i < m_endAtom; i += m_atomsPerCell) {
+        if (!block.footer().m_marks.get(i))
             continue;
 
         HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
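
The fast path of isLive() above uses CountingLock's optimistic fenceless
read, validating afterwards and falling back to holdLock on failure. As a
rough analogy only (plain std::atomic here; the real CountingLock validates
with data-dependency fences via Dependency::fence):

    #include <atomic>

    std::atomic<unsigned> version{0}; // even = no writer, as in a seqlock
    int guardedState = 0;

    bool tryOptimisticRead(int& out)
    {
        unsigned before = version.load(std::memory_order_acquire);
        if (before & 1)
            return false;               // writer active: caller falls back to locking
        int speculative = guardedState; // read without taking the lock
        std::atomic_thread_fence(std::memory_order_acquire);
        if (version.load(std::memory_order_relaxed) != before)
            return false;               // a writer intervened: discard the result
        out = speculative;              // validated: no lock was ever taken
        return true;
    }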

Modified: trunk/Source/JavaScriptCore/heap/MarkedSpace.cpp (227716 => 227717)


--- trunk/Source/JavaScriptCore/heap/MarkedSpace.cpp	2018-01-27 18:14:06 UTC (rev 227716)
+++ trunk/Source/JavaScriptCore/heap/MarkedSpace.cpp	2018-01-28 02:23:25 UTC (rev 227717)
@@ -135,7 +135,6 @@
             // FIXME: All of these things should have IsoSubspaces.
             // https://bugs.webkit.org/show_bug.cgi?id=179876
             add(sizeof(UnlinkedFunctionCodeBlock));
-            add(sizeof(FunctionCodeBlock));
             add(sizeof(JSString));
             add(sizeof(JSFunction));
 

Modified: trunk/Source/JavaScriptCore/heap/MarkedSpace.h (227716 => 227717)


--- trunk/Source/JavaScriptCore/heap/MarkedSpace.h	2018-01-27 18:14:06 UTC (rev 227716)
+++ trunk/Source/JavaScriptCore/heap/MarkedSpace.h	2018-01-28 02:23:25 UTC (rev 227717)
@@ -50,25 +50,25 @@
     WTF_MAKE_NONCOPYABLE(MarkedSpace);
 public:
     // sizeStep is really a synonym for atomSize; it's no accident that they are the same.
-    static const size_t sizeStep = MarkedBlock::atomSize;
+    static constexpr size_t sizeStep = MarkedBlock::atomSize;
     
     // Sizes up to this amount get a size class for each size step.
-    static const size_t preciseCutoff = 80;
+    static constexpr size_t preciseCutoff = 80;
     
-    // The amount of available payload in a block is the block's size minus the header. But the
+    // The amount of available payload in a block is the block's size minus the footer. But the
     // header size might not be atom size aligned, so we round down the result accordingly.
-    static const size_t blockPayload = (MarkedBlock::blockSize - sizeof(MarkedBlock)) & ~(MarkedBlock::atomSize - 1);
+    static constexpr size_t blockPayload = (MarkedBlock::blockSize - sizeof(MarkedBlock::Footer)) & ~(MarkedBlock::atomSize - 1);
     
     // The largest cell we're willing to allocate in a MarkedBlock the "normal way" (i.e. using size
     // classes, rather than a large allocation) is half the size of the payload, rounded down. This
     // ensures that we only use the size class approach if it means being able to pack two things
     // into one block.
-    static const size_t largeCutoff = (blockPayload / 2) & ~(sizeStep - 1);
+    static constexpr size_t largeCutoff = (blockPayload / 2) & ~(sizeStep - 1);
 
-    static const size_t numSizeClasses = largeCutoff / sizeStep;
+    static constexpr size_t numSizeClasses = largeCutoff / sizeStep;
     
-    static const HeapVersion nullVersion = 0; // The version of freshly allocated blocks.
-    static const HeapVersion initialVersion = 2; // The version that the heap starts out with. Set to make sure that nextVersion(nullVersion) != initialVersion.
+    static constexpr HeapVersion nullVersion = 0; // The version of freshly allocated blocks.
+    static constexpr HeapVersion initialVersion = 2; // The version that the heap starts out with. Set to make sure that nextVersion(nullVersion) != initialVersion.
     
     static HeapVersion nextVersion(HeapVersion version)
     {
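
For concreteness, the constants above work out as follows, assuming a
hypothetical 168-byte footer (the real value is sizeof(MarkedBlock::Footer)
for the build):

    #include <cstddef>

    constexpr size_t blockSize = 16 * 1024;
    constexpr size_t atomSize = 16;                                             // == sizeStep
    constexpr size_t footerSize = 168;                                          // assumption
    constexpr size_t blockPayload = (blockSize - footerSize) & ~(atomSize - 1); // 16208
    constexpr size_t largeCutoff = (blockPayload / 2) & ~(atomSize - 1);        // 8096
    constexpr size_t numSizeClasses = largeCutoff / atomSize;                   // 506
    static_assert(2 * largeCutoff <= blockPayload, "two max-size cells fit per block");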

Modified: trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm (227716 => 227717)


--- trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm	2018-01-27 18:14:06 UTC (rev 227716)
+++ trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm	2018-01-28 02:23:25 UTC (rev 227717)
@@ -436,6 +436,7 @@
 
 const MarkedBlockSize = constexpr MarkedBlock::blockSize
 const MarkedBlockMask = ~(MarkedBlockSize - 1)
+const MarkedBlockFooterOffset = constexpr MarkedBlock::offsetOfFooter
 
 const BlackThreshold = constexpr blackThreshold
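
The new constant lets the LLInt recover the VM from any callee cell: mask the
pointer down to its block base, then load m_vm out of the footer. That is what
every asm change in the two interpreter files below computes; in C++ terms
(offsets hypothetical, the real ones come from LLIntOffsetsExtractor):

    #include <cstdint>

    constexpr uintptr_t blockSize = 16 * 1024;
    constexpr uintptr_t blockMask = ~(blockSize - 1); // MarkedBlockMask
    constexpr uintptr_t offsetOfFooter = 16208;       // MarkedBlockFooterOffset; assumption
    constexpr uintptr_t offsetOfVM = 8;               // Footer::m_vm offset; assumption

    // andp MarkedBlockMask, t3
    // loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
    void* vmFromCallee(void* callee)
    {
        uintptr_t blockBase = reinterpret_cast<uintptr_t>(callee) & blockMask;
        return *reinterpret_cast<void**>(blockBase + offsetOfFooter + offsetOfVM);
    }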
 

Modified: trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm (227716 => 227717)


--- trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm	2018-01-27 18:14:06 UTC (rev 227716)
+++ trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm	2018-01-28 02:23:25 UTC (rev 227717)
@@ -307,7 +307,7 @@
 _handleUncaughtException:
     loadp Callee + PayloadOffset[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(t3, t0)
     loadp VM::callFrameForCatch[t3], cfr
     storep 0, VM::callFrameForCatch[t3]
@@ -634,7 +634,7 @@
 macro branchIfException(label)
     loadp Callee + PayloadOffset[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     btiz VM::m_exception[t3], .noException
     jmp label
 .noException:
@@ -2000,7 +2000,7 @@
     # and have set VM::targetInterpreterPCForThrow.
     loadp Callee + PayloadOffset[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(t3, t0)
     loadp VM::callFrameForCatch[t3], cfr
     storep 0, VM::callFrameForCatch[t3]
@@ -2015,7 +2015,7 @@
 .isCatchableException:
     loadp Callee + PayloadOffset[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
 
     loadi VM::m_exception[t3], t0
     storei 0, VM::m_exception[t3]
@@ -2053,7 +2053,7 @@
     # This essentially emulates the JIT's throwing protocol.
     loadp Callee[cfr], t1
     andp MarkedBlockMask, t1
-    loadp MarkedBlock::m_vm[t1], t1
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
     copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(t1, t2)
     jmp VM::targetMachinePCForThrow[t1]
 
@@ -2072,7 +2072,7 @@
     if X86 or X86_WIN
         subp 8, sp # align stack pointer
         andp MarkedBlockMask, t1
-        loadp MarkedBlock::m_vm[t1], t3
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t3
         storep cfr, VM::topCallFrame[t3]
         move cfr, a0  # a0 = ecx
         storep a0, [sp]
@@ -2082,7 +2082,7 @@
         call executableOffsetToFunction[t1]
         loadp Callee + PayloadOffset[cfr], t3
         andp MarkedBlockMask, t3
-        loadp MarkedBlock::m_vm[t3], t3
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
         addp 8, sp
     elsif ARM or ARMv7 or ARMv7_TRADITIONAL or C_LOOP or MIPS
         if MIPS
@@ -2095,7 +2095,7 @@
         end
         # t1 already contains the Callee.
         andp MarkedBlockMask, t1
-        loadp MarkedBlock::m_vm[t1], t1
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
         storep cfr, VM::topCallFrame[t1]
         move cfr, a0
         loadi Callee + PayloadOffset[cfr], t1
@@ -2108,7 +2108,7 @@
         end
         loadp Callee + PayloadOffset[cfr], t3
         andp MarkedBlockMask, t3
-        loadp MarkedBlock::m_vm[t3], t3
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
         if MIPS
             addp 24, sp
         else
@@ -2140,7 +2140,7 @@
     if X86 or X86_WIN
         subp 8, sp # align stack pointer
         andp MarkedBlockMask, t1
-        loadp MarkedBlock::m_vm[t1], t3
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t3
         storep cfr, VM::topCallFrame[t3]
         move cfr, a0  # a0 = ecx
         storep a0, [sp]
@@ -2149,13 +2149,13 @@
         call offsetOfFunction[t1]
         loadp Callee + PayloadOffset[cfr], t3
         andp MarkedBlockMask, t3
-        loadp MarkedBlock::m_vm[t3], t3
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
         addp 8, sp
     elsif ARM or ARMv7 or ARMv7_TRADITIONAL or C_LOOP or MIPS
         subp 8, sp # align stack pointer
         # t1 already contains the Callee.
         andp MarkedBlockMask, t1
-        loadp MarkedBlock::m_vm[t1], t1
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
         storep cfr, VM::topCallFrame[t1]
         move cfr, a0
         loadi Callee + PayloadOffset[cfr], t1
@@ -2167,7 +2167,7 @@
         end
         loadp Callee + PayloadOffset[cfr], t3
         andp MarkedBlockMask, t3
-        loadp MarkedBlock::m_vm[t3], t3
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
         addp 8, sp
     else
         error

Modified: trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm (227716 => 227717)


--- trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm	2018-01-27 18:14:06 UTC (rev 227716)
+++ trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm	2018-01-28 02:23:25 UTC (rev 227717)
@@ -280,7 +280,7 @@
 _handleUncaughtException:
     loadp Callee[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(t3, t0)
     loadp VM::callFrameForCatch[t3], cfr
     storep 0, VM::callFrameForCatch[t3]
@@ -561,7 +561,7 @@
 macro branchIfException(label)
     loadp Callee[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     btqz VM::m_exception[t3], .noException
     jmp label
 .noException:
@@ -2002,7 +2002,7 @@
     # and have set VM::targetInterpreterPCForThrow.
     loadp Callee[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(t3, t0)
     loadp VM::callFrameForCatch[t3], cfr
     storep 0, VM::callFrameForCatch[t3]
@@ -2022,7 +2022,7 @@
 .isCatchableException:
     loadp Callee[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
 
     loadq VM::m_exception[t3], t0
     storeq 0, VM::m_exception[t3]
@@ -2052,7 +2052,7 @@
 _llint_throw_from_slow_path_trampoline:
     loadp Callee[cfr], t1
     andp MarkedBlockMask, t1
-    loadp MarkedBlock::m_vm[t1], t1
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
     copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(t1, t2)
 
     callSlowPath(_llint_slow_path_handle_exception)
@@ -2062,7 +2062,7 @@
     # This essentially emulates the JIT's throwing protocol.
     loadp Callee[cfr], t1
     andp MarkedBlockMask, t1
-    loadp MarkedBlock::m_vm[t1], t1
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
     jmp VM::targetMachinePCForThrow[t1]
 
 
@@ -2077,7 +2077,7 @@
     storep 0, CodeBlock[cfr]
     loadp Callee[cfr], t0
     andp MarkedBlockMask, t0, t1
-    loadp MarkedBlock::m_vm[t1], t1
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
     storep cfr, VM::topCallFrame[t1]
     if ARM64 or C_LOOP
         storep lr, ReturnPC[cfr]
@@ -2104,7 +2104,7 @@
 
     loadp Callee[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
 
     btqnz VM::m_exception[t3], .handleException
 
@@ -2121,7 +2121,7 @@
     storep 0, CodeBlock[cfr]
     loadp Callee[cfr], t0
     andp MarkedBlockMask, t0, t1
-    loadp MarkedBlock::m_vm[t1], t1
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
     storep cfr, VM::topCallFrame[t1]
     if ARM64 or C_LOOP
         storep lr, ReturnPC[cfr]
@@ -2147,7 +2147,7 @@
 
     loadp Callee[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
 
     btqnz VM::m_exception[t3], .handleException
 