Title: [225831] trunk/Source
Revision
225831
Author
fpi...@apple.com
Date
2017-12-12 18:35:54 -0800 (Tue, 12 Dec 2017)

Log Message

It should be possible to flag a cell for unconditional finalization
https://bugs.webkit.org/show_bug.cgi?id=180636

Reviewed by Saam Barati.
        
Source/_javascript_Core:

UnconditionalFinalizers were annoying - you had to allocate them and you had to manage a
global linked list - but they had some nice properties:
        
- You only did the hardest work (creating the UnconditionalFinalizer) on the first GC in which
  you both survived and needed it.
    -> Just needing it wasn't enough.
    -> Just surviving wasn't enough.
        
The new API based on IsoSubspaces meant that just surviving was enough to cause unconditional
finalizer logic to be invoked. I think that's not great. InferredType got around this by
making InferredStructure a cell, but this was a gross hack. For one, it meant that
InferredStructure would survive during the GC in which its finalizer obviated the need for its
existence. It's not really an idiom I want us to repeat because it sounds like the sort of
thing that turns out to be subtly broken.
        
We really need to have a way of indicating when you have entered into the state that requires
your unconditional finalizer to be invoked. Basically, we want to be able to track the set of
objects that need unconditional finalizers. Only the subset of that set that overlaps with the
set of marked objects needs to be accurate. The easiest way to do this is a hierarchy of
bitvectors: one to say which MarkedBlocks have objects that have unconditional finalizers, and
another level to say which atoms within a MarkedBlock have unconditional finalizers.
        
This change introduces IsoCellSet, which couples itself to the MarkedAllocator of some
IsoSubspace to allow maintaining a set of objects (well, cells - you could do this with
auxiliaries) that belong to that IsoSubspace. You get undefined behavior if you call
add/remove/contains on an object that isn't in that IsoSubspace. For objects in that subspace,
you can use add/remove/contains and forEachMarkedCell. The cost of each IsoCellSet is at worst
about a 0.8% size increase for every object in the subspace that the set is attached to. So,
it makes sense to have at most a handful per subspace. This change only needs one per subspace,
but you could imagine more if we do this for WeakReferenceHarvester.
        
To absolutely minimize the possibility that this incurs costs, the add/remove/contains
functions can be used from any thread so long as forEachMarkedCell isn't running. This means
that InferredType only needs to add itself to the set during visitChildren. Thus, an object must
both survive and need finalization for the hardest work to take place. The work of adding does involve
a gnarly load chain that ends in a CAS: load the block handle from the block, load the index, load
the segment, load the bitvector, load the bit -> if not set, then CAS. That's five dependent loads!
However, it's perfect for running in parallel since the only write operations are to widely
dispersed cache lines that contain the bits underlying the set.
        
The best part is how forEachMarkedCell works. That skips blocks that don't have any objects
that need unconditional finalizers, and only touches the memory of marked objects that have
the unconditional finalizer bit set. It will walk those objects in roughly address order. I
previously found that this speeds up walking over a lot of objects when I made similar changes
for DOM GC (calling visitAdditionalChildren via forEachMarkedCell rather than by walking a
HashSet).
        
This change makes InferredStructure a malloc object again, but now it's in an IsoHeap.
        
My expectation for this change is that it's perf-neutral. Long-term, it gives us a path
forward for eliminating UnconditionalFinalizer and WeakReferenceHarvester while using
IsoSubspace in more places.

* _javascript_Core.xcodeproj/project.pbxproj:
* Sources.txt:
* heap/AtomIndices.h: Added.
(JSC::AtomIndices::AtomIndices):
* heap/Heap.cpp:
(JSC::Heap::finalizeUnconditionalFinalizers):
* heap/Heap.h:
* heap/IsoCellSet.cpp: Added.
(JSC::IsoCellSet::IsoCellSet):
(JSC::IsoCellSet::~IsoCellSet):
(JSC::IsoCellSet::addSlow):
(JSC::IsoCellSet::didResizeBits):
(JSC::IsoCellSet::didRemoveBlock):
(JSC::IsoCellSet::sweepToFreeList):
* heap/IsoCellSet.h: Added.
* heap/IsoCellSetInlines.h: Added.
(JSC::IsoCellSet::add):
(JSC::IsoCellSet::remove):
(JSC::IsoCellSet::contains const):
(JSC::IsoCellSet::forEachMarkedCell):
* heap/IsoSubspace.cpp:
(JSC::IsoSubspace::didResizeBits):
(JSC::IsoSubspace::didRemoveBlock):
(JSC::IsoSubspace::didBeginSweepingToFreeList):
* heap/IsoSubspace.h:
* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::addBlock):
(JSC::MarkedAllocator::removeBlock):
* heap/MarkedAllocator.h:
* heap/MarkedAllocatorInlines.h:
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::Handle::sweep):
(JSC::MarkedBlock::Handle::isEmpty): Deleted.
* heap/MarkedBlock.h:
(JSC::MarkedBlock::marks const):
(JSC::MarkedBlock::Handle::newlyAllocated const):
* heap/MarkedBlockInlines.h:
(JSC::MarkedBlock::Handle::isAllocated):
(JSC::MarkedBlock::Handle::isEmpty):
(JSC::MarkedBlock::Handle::emptyMode):
(JSC::MarkedBlock::Handle::forEachMarkedCell):
* heap/Subspace.cpp:
(JSC::Subspace::didResizeBits):
(JSC::Subspace::didRemoveBlock):
(JSC::Subspace::didBeginSweepingToFreeList):
* heap/Subspace.h:
* heap/SubspaceInlines.h:
(JSC::Subspace::forEachMarkedCell):
* runtime/InferredStructure.cpp:
(JSC::InferredStructure::InferredStructure):
(JSC::InferredStructure::create): Deleted.
(JSC::InferredStructure::destroy): Deleted.
(JSC::InferredStructure::createStructure): Deleted.
(JSC::InferredStructure::visitChildren): Deleted.
(JSC::InferredStructure::finalizeUnconditionally): Deleted.
(JSC::InferredStructure::finishCreation): Deleted.
* runtime/InferredStructure.h:
* runtime/InferredStructureWatchpoint.cpp:
(JSC::InferredStructureWatchpoint::fireInternal):
* runtime/InferredType.cpp:
(JSC::InferredType::visitChildren):
(JSC::InferredType::willStoreValueSlow):
(JSC::InferredType::makeTopSlow):
(JSC::InferredType::set):
(JSC::InferredType::removeStructure):
(JSC::InferredType::finalizeUnconditionally):
* runtime/InferredType.h:
* runtime/VM.cpp:
(JSC::VM::VM):
* runtime/VM.h:

Source/WTF:

This adds ConcurrentVector, which is like SegmentedVector, but wastes some space to allow
resizing to proceed concurrently with access. It's not possible to resize concurrently with
another resize, concurrent reads/writes aren't protected from racing if they access the same
element, and who knows what you'll get if you iterate up to size() while someone else append()s. The
key insight is to stash all prior copies of the spine, so that nobody crashes trying to access
a stale spine.
        
I'm going to want to do the same thing for FastBitVector, by creating a segmented WordOwner
class. That would require repeating the dance of having a spine that can resize while stashing
old versions. So, the spine resizing logic is abstracted behind ConcurrentBuffer. You could
use that as a kind of "concurrent vector" for immutable data. That's how ConcurrentVector uses
it: it's an immutable array of segment pointers.

* WTF.xcodeproj/project.pbxproj:
* wtf/ConcurrentBuffer.h: Added.
(WTF::ConcurrentBuffer::ConcurrentBuffer):
(WTF::ConcurrentBuffer::~ConcurrentBuffer):
(WTF::ConcurrentBuffer::growExact):
(WTF::ConcurrentBuffer::grow):
(WTF::ConcurrentBuffer::array const):
(WTF::ConcurrentBuffer::operator[]):
(WTF::ConcurrentBuffer::operator[] const):
(WTF::ConcurrentBuffer::createArray):
* wtf/ConcurrentVector.h: Added.
(WTF::ConcurrentVectorIterator::~ConcurrentVectorIterator):
(WTF::ConcurrentVectorIterator::operator* const):
(WTF::ConcurrentVectorIterator::operator-> const):
(WTF::ConcurrentVectorIterator::operator++):
(WTF::ConcurrentVectorIterator::operator== const):
(WTF::ConcurrentVectorIterator::operator!= const):
(WTF::ConcurrentVectorIterator::operator=):
(WTF::ConcurrentVectorIterator::ConcurrentVectorIterator):
(WTF::ConcurrentVector::~ConcurrentVector):
(WTF::ConcurrentVector::size const):
(WTF::ConcurrentVector::isEmpty const):
(WTF::ConcurrentVector::at):
(WTF::ConcurrentVector::at const):
(WTF::ConcurrentVector::operator[]):
(WTF::ConcurrentVector::operator[] const):
(WTF::ConcurrentVector::first):
(WTF::ConcurrentVector::first const):
(WTF::ConcurrentVector::last):
(WTF::ConcurrentVector::last const):
(WTF::ConcurrentVector::takeLast):
(WTF::ConcurrentVector::append):
(WTF::ConcurrentVector::alloc):
(WTF::ConcurrentVector::removeLast):
(WTF::ConcurrentVector::grow):
(WTF::ConcurrentVector::begin):
(WTF::ConcurrentVector::end):
(WTF::ConcurrentVector::segmentExistsFor):
(WTF::ConcurrentVector::segmentFor):
(WTF::ConcurrentVector::subscriptFor):
(WTF::ConcurrentVector::ensureSegmentsFor):
(WTF::ConcurrentVector::ensureSegment):
(WTF::ConcurrentVector::allocateSegment):

Diff

Modified: trunk/Source/_javascript_Core/ChangeLog (225830 => 225831)


--- trunk/Source/_javascript_Core/ChangeLog	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/_javascript_Core/ChangeLog	2017-12-13 02:35:54 UTC (rev 225831)
@@ -1,3 +1,134 @@
+2017-12-11  Filip Pizlo  <fpi...@apple.com>
+
+        It should be possible to flag a cell for unconditional finalization
+        https://bugs.webkit.org/show_bug.cgi?id=180636
+
+        Reviewed by Saam Barati.
+        
+        UnconditionalFinalizers were annoying - you had to allocate them and you had to manage a
+        global linked list - but they had some nice properties:
+        
+        - You only did the hardest work (creating the UnconditionalFinalizer) on first GC where you
+          survived and needed it.
+            -> Just needing it wasn't enough.
+            -> Just surviving wasn't enough.
+        
+        The new API based on IsoSubspaces meant that just surviving was enough to cause unconditional
+        finalizer logic to be invoked. I think that's not great. InferredType got around this by
+        making InferredStructure a cell, but this was a gross hack. For one, it meant that
+        InferredStructure would survive during the GC in which its finalizer obviated the need for its
+        existence. It's not really an idiom I want us to repeat because it sounds like the sort of
+        thing that turns out to be subtly broken.
+        
+        We really need to have a way of indicating when you have entered into the state that requires
+        your unconditional finalizer to be invoked. Basically, we want to be able to track the set of
+        objects that need unconditional finalizers. Only the subset of that set that overlaps with the
+        set of marked objects needs to be accurate. The easiest way to do this is a hierarchy of
+        bitvectors: one to say which MarkedBlocks have objects that have unconditional finalizers, and
+        another level to say which atoms within a MarkedBlock have unconditional finalizers.
+        
+        This change introduces IsoCellSet, which couples itself to the MarkedAllocator of some
+        IsoSubspace to allow maintaining a set of objects (well, cells - you could do this with
+        auxiliaries) that belong to that IsoSubspace. It'll have undefined behavior if you try to
+        add/remove/contains an object that isn't in that IsoSubspace. For objects in that subspace,
+        you can add/remove/contains and forEachMarkedCell. The cost of each IsoCellSet is at worst
+        about 0.8% increase in size to every object in the subspace that the set is attached to. So,
+        it makes sense to have a handful per subspace max. This change only needs one per subspace,
+        but you could imagine more if we do this for WeakReferenceHarvester.
+        
+        To absolutely minimize the possibility that this incurs costs, the add/remove/contains
+        functions can be used from any thread so long as forEachMarkedCell isn't running. This means
+        that InferredType only needs to add itself to the set during visitChildren. Thus, it needs to
+        both survive and need it for the hardest work to take place. The work of adding does involve
+        a gnarly load chain that ends in a CAS: load block handle from block, load index, load
+        segment, load bitvector, load bit -> if not set, then CAS. That's five dependent loads!
+        However, it's perfect for running in parallel since the only write operations are to widely
+        dispersed cache lines that contain the bits underlying the set.
+        
+        The best part is how forEachMarkedCell works. That skips blocks that don't have any objects
+        that need unconditional finalizers, and only touches the memory of marked objects that have
+        the unconditional finalizer bit set. It will walk those objects in roughly address order. I
+        previously found that this speeds up walking over a lot of objects when I made similar changes
+        for DOM GC (calling visitAdditionalChildren via forEachMarkedCell rather than by walking a
+        HashSet).
+        
+        This change makes InferredStructure be a malloc object again, but now it's in an IsoHeap.
+        
+        My expectation for this change is that it's perf-neutral. Long-term, it gives us a path
+        forward for eliminating UnconditionalFinalizer and WeakReferenceHarvester while using
+        IsoSubspace in more places.
+
+        * _javascript_Core.xcodeproj/project.pbxproj:
+        * Sources.txt:
+        * heap/AtomIndices.h: Added.
+        (JSC::AtomIndices::AtomIndices):
+        * heap/Heap.cpp:
+        (JSC::Heap::finalizeUnconditionalFinalizers):
+        * heap/Heap.h:
+        * heap/IsoCellSet.cpp: Added.
+        (JSC::IsoCellSet::IsoCellSet):
+        (JSC::IsoCellSet::~IsoCellSet):
+        (JSC::IsoCellSet::addSlow):
+        (JSC::IsoCellSet::didResizeBits):
+        (JSC::IsoCellSet::didRemoveBlock):
+        (JSC::IsoCellSet::sweepToFreeList):
+        * heap/IsoCellSet.h: Added.
+        * heap/IsoCellSetInlines.h: Added.
+        (JSC::IsoCellSet::add):
+        (JSC::IsoCellSet::remove):
+        (JSC::IsoCellSet::contains const):
+        (JSC::IsoCellSet::forEachMarkedCell):
+        * heap/IsoSubspace.cpp:
+        (JSC::IsoSubspace::didResizeBits):
+        (JSC::IsoSubspace::didRemoveBlock):
+        (JSC::IsoSubspace::didBeginSweepingToFreeList):
+        * heap/IsoSubspace.h:
+        * heap/MarkedAllocator.cpp:
+        (JSC::MarkedAllocator::addBlock):
+        (JSC::MarkedAllocator::removeBlock):
+        * heap/MarkedAllocator.h:
+        * heap/MarkedAllocatorInlines.h:
+        * heap/MarkedBlock.cpp:
+        (JSC::MarkedBlock::Handle::sweep):
+        (JSC::MarkedBlock::Handle::isEmpty): Deleted.
+        * heap/MarkedBlock.h:
+        (JSC::MarkedBlock::marks const):
+        (JSC::MarkedBlock::Handle::newlyAllocated const):
+        * heap/MarkedBlockInlines.h:
+        (JSC::MarkedBlock::Handle::isAllocated):
+        (JSC::MarkedBlock::Handle::isEmpty):
+        (JSC::MarkedBlock::Handle::emptyMode):
+        (JSC::MarkedBlock::Handle::forEachMarkedCell):
+        * heap/Subspace.cpp:
+        (JSC::Subspace::didResizeBits):
+        (JSC::Subspace::didRemoveBlock):
+        (JSC::Subspace::didBeginSweepingToFreeList):
+        * heap/Subspace.h:
+        * heap/SubspaceInlines.h:
+        (JSC::Subspace::forEachMarkedCell):
+        * runtime/InferredStructure.cpp:
+        (JSC::InferredStructure::InferredStructure):
+        (JSC::InferredStructure::create): Deleted.
+        (JSC::InferredStructure::destroy): Deleted.
+        (JSC::InferredStructure::createStructure): Deleted.
+        (JSC::InferredStructure::visitChildren): Deleted.
+        (JSC::InferredStructure::finalizeUnconditionally): Deleted.
+        (JSC::InferredStructure::finishCreation): Deleted.
+        * runtime/InferredStructure.h:
+        * runtime/InferredStructureWatchpoint.cpp:
+        (JSC::InferredStructureWatchpoint::fireInternal):
+        * runtime/InferredType.cpp:
+        (JSC::InferredType::visitChildren):
+        (JSC::InferredType::willStoreValueSlow):
+        (JSC::InferredType::makeTopSlow):
+        (JSC::InferredType::set):
+        (JSC::InferredType::removeStructure):
+        (JSC::InferredType::finalizeUnconditionally):
+        * runtime/InferredType.h:
+        * runtime/VM.cpp:
+        (JSC::VM::VM):
+        * runtime/VM.h:
+
 2017-12-12  Saam Barati  <sbar...@apple.com>
 
         ConstantFoldingPhase rule for GetMyArgumentByVal must check for negative indices

Modified: trunk/Source/_javascript_Core/_javascript_Core.xcodeproj/project.pbxproj (225830 => 225831)


--- trunk/Source/_javascript_Core/_javascript_Core.xcodeproj/project.pbxproj	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/_javascript_Core/_javascript_Core.xcodeproj/project.pbxproj	2017-12-13 02:35:54 UTC (rev 225831)
@@ -474,6 +474,10 @@
 		0FB17663196B8F9E0091052A /* DFGPureValue.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB1765F196B8F9E0091052A /* DFGPureValue.h */; };
 		0FB3878E1BFBC44D00E3AB1E /* AirBlockWorklist.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB3878B1BFBC44D00E3AB1E /* AirBlockWorklist.h */; };
 		0FB387901BFBC44D00E3AB1E /* AirOptimizeBlockOrder.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB3878D1BFBC44D00E3AB1E /* AirOptimizeBlockOrder.h */; };
+		0FB4677F1FDDA6E9003FCB09 /* AtomIndices.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB4677E1FDDA6E5003FCB09 /* AtomIndices.h */; };
+		0FB467801FDDA6F1003FCB09 /* IsoCellSet.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB4677D1FDDA6D9003FCB09 /* IsoCellSet.h */; settings = {ATTRIBUTES = (Private, ); }; };
+		0FB467811FDDA6F7003FCB09 /* IsoCellSetInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB4677B1FDDA6D8003FCB09 /* IsoCellSetInlines.h */; };
+		0FB467851FDFB454003FCB09 /* InferredTypeInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB467841FDFB452003FCB09 /* InferredTypeInlines.h */; };
 		0FB4767E1D99AEA9008EA6CB /* GCDeferralContext.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB4767C1D99AEA7008EA6CB /* GCDeferralContext.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		0FB4767F1D99AEAD008EA6CB /* GCDeferralContextInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB4767D1D99AEA7008EA6CB /* GCDeferralContextInlines.h */; };
 		0FB4FB741BC843140025CA5A /* FTLLazySlowPath.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB4FB711BC843140025CA5A /* FTLLazySlowPath.h */; };
@@ -2563,6 +2567,11 @@
 		0FB387911BFD31A100E3AB1E /* FTLCompile.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLCompile.cpp; path = ftl/FTLCompile.cpp; sourceTree = "<group>"; };
 		0FB415831D78F98200DF8D09 /* ArrayConventions.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ArrayConventions.cpp; sourceTree = "<group>"; };
 		0FB438A219270B1D00E1FBC9 /* StructureSet.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = StructureSet.cpp; sourceTree = "<group>"; };
+		0FB4677B1FDDA6D8003FCB09 /* IsoCellSetInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = IsoCellSetInlines.h; sourceTree = "<group>"; };
+		0FB4677C1FDDA6D9003FCB09 /* IsoCellSet.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = IsoCellSet.cpp; sourceTree = "<group>"; };
+		0FB4677D1FDDA6D9003FCB09 /* IsoCellSet.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = IsoCellSet.h; sourceTree = "<group>"; };
+		0FB4677E1FDDA6E5003FCB09 /* AtomIndices.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AtomIndices.h; sourceTree = "<group>"; };
+		0FB467841FDFB452003FCB09 /* InferredTypeInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = InferredTypeInlines.h; sourceTree = "<group>"; };
 		0FB4767C1D99AEA7008EA6CB /* GCDeferralContext.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GCDeferralContext.h; sourceTree = "<group>"; };
 		0FB4767D1D99AEA7008EA6CB /* GCDeferralContextInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GCDeferralContextInlines.h; sourceTree = "<group>"; };
 		0FB4B51016B3A964003F696B /* DFGMinifiedID.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGMinifiedID.h; path = dfg/DFGMinifiedID.h; sourceTree = "<group>"; };
@@ -5528,6 +5537,7 @@
 				0F9630351D4192C3005609D9 /* AllocatorAttributes.cpp */,
 				0F9630361D4192C3005609D9 /* AllocatorAttributes.h */,
 				0F30CB5D1FCE46B4004B5323 /* AllocatorForMode.h */,
+				0FB4677E1FDDA6E5003FCB09 /* AtomIndices.h */,
 				0FDE87F81DFD0C6D0064C390 /* CellContainer.cpp */,
 				0F070A421D543A89006E7232 /* CellContainer.h */,
 				0F070A431D543A89006E7232 /* CellContainerInlines.h */,
@@ -5614,6 +5624,9 @@
 				C25F8BCC157544A900245B71 /* IncrementalSweeper.h */,
 				0FDCE12F1FB11D9D006F3901 /* IsoAlignedMemoryAllocator.cpp */,
 				0FDCE1301FB11D9D006F3901 /* IsoAlignedMemoryAllocator.h */,
+				0FB4677C1FDDA6D9003FCB09 /* IsoCellSet.cpp */,
+				0FB4677D1FDDA6D9003FCB09 /* IsoCellSet.h */,
+				0FB4677B1FDDA6D8003FCB09 /* IsoCellSetInlines.h */,
 				0FDCE12C1FAFB4DE006F3901 /* IsoSubspace.cpp */,
 				0FDCE12B1FAFB4DE006F3901 /* IsoSubspace.h */,
 				0F766D2915A8CC34008F363E /* JITStubRoutineSet.cpp */,
@@ -6489,6 +6502,7 @@
 				0FBF92B71FD76FFC00AC28A8 /* InferredStructureWatchpoint.h */,
 				0F0A75201B94BFA900110660 /* InferredType.cpp */,
 				0F0A75211B94BFA900110660 /* InferredType.h */,
+				0FB467841FDFB452003FCB09 /* InferredTypeInlines.h */,
 				0FFC920F1B94D4DF0071DD66 /* InferredTypeTable.cpp */,
 				0FFC92101B94D4DF0071DD66 /* InferredTypeTable.h */,
 				0FF8BDE81AD4CF7100DFE884 /* InferredValue.cpp */,
@@ -8002,6 +8016,7 @@
 				0FEC85721BDACDC70080FF74 /* AirBasicBlock.h in Headers */,
 				0F2C63BC1E63440C00C13839 /* AirBlockInsertionSet.h in Headers */,
 				0FB3878E1BFBC44D00E3AB1E /* AirBlockWorklist.h in Headers */,
+				0FB467811FDDA6F7003FCB09 /* IsoCellSetInlines.h in Headers */,
 				0F79C7CA1E74C93B00EB34D1 /* AirBreakCriticalEdges.h in Headers */,
 				0F61832A1C45BF070072450B /* AirCCallingConvention.h in Headers */,
 				0FEC85741BDACDC70080FF74 /* AirCCallSpecial.h in Headers */,
@@ -8782,6 +8797,7 @@
 				70113D4C1A8DB093003848C4 /* IteratorOperations.h in Headers */,
 				70DC3E0A1B2DF2C700054299 /* IteratorPrototype.h in Headers */,
 				BC18C4130E16F5CD00B34460 /* _javascript_.h in Headers */,
+				0FB467851FDFB454003FCB09 /* InferredTypeInlines.h in Headers */,
 				A503FA1A188E0FB000110F14 /* _javascript_CallFrame.h in Headers */,
 				BC18C4140E16F5CD00B34460 /* _javascript_Core.h in Headers */,
 				BC18C4150E16F5CD00B34460 /* _javascript_CorePrefix.h in Headers */,
@@ -8978,6 +8994,7 @@
 				AD2FCBE51DB58DAD00B3E736 /* JSWebAssemblyInstance.h in Headers */,
 				ADE802991E08F1DE0058DE78 /* JSWebAssemblyLinkError.h in Headers */,
 				AD2FCBE71DB58DAD00B3E736 /* JSWebAssemblyMemory.h in Headers */,
+				0FB4677F1FDDA6E9003FCB09 /* AtomIndices.h in Headers */,
 				AD2FCC051DB58DAD00B3E736 /* JSWebAssemblyModule.h in Headers */,
 				AD2FCBE91DB58DAD00B3E736 /* JSWebAssemblyRuntimeError.h in Headers */,
 				AD2FCBEB1DB58DAD00B3E736 /* JSWebAssemblyTable.h in Headers */,
@@ -9267,6 +9284,7 @@
 				C20BA92D16BB1C1500B3AEA2 /* StructureRareDataInlines.h in Headers */,
 				0F9332A514CA7DDD0085F3C6 /* StructureSet.h in Headers */,
 				0F766D3915AE4A1F008F363E /* StructureStubClearingWatchpoint.h in Headers */,
+				0FB467801FDDA6F1003FCB09 /* IsoCellSet.h in Headers */,
 				BCCF0D080EF0AAB900413C8F /* StructureStubInfo.h in Headers */,
 				BC9041480EB9250900FE26FA /* StructureTransitionTable.h in Headers */,
 				0F7DF1371E2970E10095951B /* Subspace.h in Headers */,

Modified: trunk/Source/_javascript_Core/Sources.txt (225830 => 225831)


--- trunk/Source/_javascript_Core/Sources.txt	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/_javascript_Core/Sources.txt	2017-12-13 02:35:54 UTC (rev 225831)
@@ -495,6 +495,7 @@
 heap/HeapSnapshotBuilder.cpp
 heap/IncrementalSweeper.cpp
 heap/IsoAlignedMemoryAllocator.cpp
+heap/IsoCellSet.cpp
 heap/IsoSubspace.cpp
 heap/JITStubRoutineSet.cpp
 heap/LargeAllocation.cpp

Added: trunk/Source/_javascript_Core/heap/AtomIndices.h (0 => 225831)


--- trunk/Source/_javascript_Core/heap/AtomIndices.h	                        (rev 0)
+++ trunk/Source/_javascript_Core/heap/AtomIndices.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -0,0 +1,48 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "MarkedBlock.h"
+
+namespace JSC {
+
+struct AtomIndices {
+    AtomIndices() { }
+    
+    AtomIndices(HeapCell* cell)
+        : block(MarkedBlock::blockFor(cell))
+        , blockIndex(block->handle().index())
+        , atomNumber(block->atomNumber(cell))
+    {
+    }
+    
+    MarkedBlock* block;
+    size_t blockIndex;
+    size_t atomNumber;
+};
+
+} // namespace JSC
+

Modified: trunk/Source/_javascript_Core/heap/Heap.cpp (225830 => 225831)


--- trunk/Source/_javascript_Core/heap/Heap.cpp	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/_javascript_Core/heap/Heap.cpp	2017-12-13 02:35:54 UTC (rev 225831)
@@ -40,8 +40,9 @@
 #include "HeapSnapshot.h"
 #include "HeapVerifier.h"
 #include "IncrementalSweeper.h"
-#include "InferredStructure.h"
+#include "InferredTypeInlines.h"
 #include "Interpreter.h"
+#include "IsoCellSetInlines.h"
 #include "JITStubRoutineSet.h"
 #include "JITWorklist.h"
 #include "JSCInlines.h"
@@ -552,10 +553,10 @@
     }
 }
 
-template<typename CellType>
-void Heap::finalizeUnconditionalFinalizers(Subspace& subspace)
+template<typename CellType, typename CellSet>
+void Heap::finalizeUnconditionalFinalizers(CellSet& cellSet)
 {
-    subspace.forEachMarkedCell(
+    cellSet.forEachMarkedCell(
         [&] (HeapCell* cell, HeapCell::Kind) {
             static_cast<CellType*>(cell)->finalizeUnconditionally(*vm());
         });
@@ -563,7 +564,7 @@
 
 void Heap::finalizeUnconditionalFinalizers()
 {
-    finalizeUnconditionalFinalizers<InferredStructure>(vm()->inferredStructureSpace);
+    finalizeUnconditionalFinalizers<InferredType>(vm()->inferredTypesWithFinalizers);
     
     while (m_unconditionalFinalizers.hasNext()) {
         UnconditionalFinalizer* finalizer = m_unconditionalFinalizers.removeNext();

Modified: trunk/Source/_javascript_Core/heap/Heap.h (225830 => 225831)


--- trunk/Source/_javascript_Core/heap/Heap.h	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/_javascript_Core/heap/Heap.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -496,8 +496,8 @@
     void notifyIncrementalSweeper();
     void harvestWeakReferences();
 
-    template<typename CellType>
-    void finalizeUnconditionalFinalizers(Subspace&);
+    template<typename CellType, typename CellSet>
+    void finalizeUnconditionalFinalizers(CellSet&);
     
     void finalizeUnconditionalFinalizers();
     

Added: trunk/Source/_javascript_Core/heap/IsoCellSet.cpp (0 => 225831)


--- trunk/Source/_javascript_Core/heap/IsoCellSet.cpp	                        (rev 0)
+++ trunk/Source/_javascript_Core/heap/IsoCellSet.cpp	2017-12-13 02:35:54 UTC (rev 225831)
@@ -0,0 +1,114 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "IsoCellSet.h"
+
+#include "MarkedAllocatorInlines.h"
+#include "MarkedBlockInlines.h"
+
+namespace JSC {
+
+IsoCellSet::IsoCellSet(IsoSubspace& subspace)
+    : m_subspace(subspace)
+{
+    size_t size = subspace.m_allocator.m_blocks.size();
+    m_blocksWithBits.resize(size);
+    m_bits.grow(size);
+    subspace.m_cellSets.append(this);
+}
+
+IsoCellSet::~IsoCellSet()
+{
+    if (isOnList())
+        BasicRawSentinelNode<IsoCellSet>::remove();
+}
+
+NEVER_INLINE Bitmap<MarkedBlock::atomsPerBlock>* IsoCellSet::addSlow(size_t blockIndex)
+{
+    auto locker = holdLock(m_subspace.m_allocator.m_bitvectorLock);
+    auto& bitsPtrRef = m_bits[blockIndex];
+    auto* bits = bitsPtrRef.get();
+    if (!bits) {
+        bitsPtrRef = std::make_unique<Bitmap<MarkedBlock::atomsPerBlock>>();
+        bits = bitsPtrRef.get();
+        WTF::storeStoreFence();
+        m_blocksWithBits[blockIndex] = true;
+    }
+    return bits;
+}
+
+void IsoCellSet::didResizeBits(size_t newSize)
+{
+    m_blocksWithBits.resize(newSize);
+    m_bits.grow(newSize);
+}
+
+void IsoCellSet::didRemoveBlock(size_t blockIndex)
+{
+    {
+        auto locker = holdLock(m_subspace.m_allocator.m_bitvectorLock);
+        m_blocksWithBits[blockIndex] = false;
+    }
+    m_bits[blockIndex] = nullptr;
+}
+
+void IsoCellSet::sweepToFreeList(MarkedBlock::Handle* block)
+{
+    RELEASE_ASSERT(!block->isAllocated());
+    
+    if (!m_blocksWithBits[block->index()])
+        return;
+    
+    WTF::loadLoadFence();
+    
+    if (!m_bits[block->index()]) {
+        dataLog("FATAL: for block index ", block->index(), ":\n");
+        dataLog("Blocks with bits says: ", !!m_blocksWithBits[block->index()], "\n");
+        dataLog("Bits says: ", RawPointer(m_bits[block->index()].get()), "\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+    
+    if (block->hasAnyNewlyAllocated()) {
+        m_bits[block->index()]->concurrentFilter(block->newlyAllocated());
+        return;
+    }
+
+    if (block->isEmpty() || block->areMarksStale()) {
+        {
+            // Holding the bitvector lock happens to be enough because that's what we also hold in
+            // other places where we manipulate this bitvector.
+            auto locker = holdLock(m_subspace.m_allocator.m_bitvectorLock);
+            m_blocksWithBits[block->index()] = false;
+        }
+        m_bits[block->index()] = nullptr;
+        return;
+    }
+    
+    m_bits[block->index()]->concurrentFilter(block->block().marks());
+}
+
+} // namespace JSC
+
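The lazy allocation in `addSlow` pairs a `storeStoreFence` on the writer side with the `loadLoadFence` in `sweepToFreeList`: the per-block `Bitmap` pointer is published before the `m_blocksWithBits` flag is raised, so any reader that observes the flag may safely dereference the pointer. A minimal sketch of that publication idiom, using C++ release/acquire atomics in place of WTF's fences (sizes and names here are illustrative, not WebKit's):

```cpp
#include <atomic>
#include <bitset>
#include <cassert>
#include <cstddef>
#include <memory>
#include <mutex>
#include <vector>

// Toy analogue of IsoCellSet's lazily allocated per-block bitmaps.
// The release store publishes the bitmap pointer before the "block has
// bits" flag is set (storeStoreFence in the real code); readers that
// observe the flag acquire-load before touching the pointer (loadLoadFence).
struct LazyBlockBits {
    static constexpr size_t atomsPerBlock = 64; // assumption, not JSC's value

    explicit LazyBlockBits(size_t blocks)
        : m_blocksWithBits(blocks)
        , m_bits(blocks)
    {
    }

    std::bitset<atomsPerBlock>* ensureBits(size_t blockIndex)
    {
        std::lock_guard<std::mutex> locker(m_lock); // like m_bitvectorLock
        auto& slot = m_bits[blockIndex];
        if (!slot) {
            slot = std::make_unique<std::bitset<atomsPerBlock>>();
            // Publish the pointer before raising the flag.
            m_blocksWithBits[blockIndex].store(true, std::memory_order_release);
        }
        return slot.get();
    }

    std::bitset<atomsPerBlock>* bitsIfPresent(size_t blockIndex)
    {
        if (!m_blocksWithBits[blockIndex].load(std::memory_order_acquire))
            return nullptr;
        return m_bits[blockIndex].get(); // safe: ordered after the flag load
    }

    std::mutex m_lock;
    std::vector<std::atomic<bool>> m_blocksWithBits;
    std::vector<std::unique_ptr<std::bitset<atomsPerBlock>>> m_bits;
};
```

In the real code the flag lives in a `FastBitVector` guarded by `m_bitvectorLock`; a plain `std::atomic<bool>` per block expresses the same ordering requirement without the resizing machinery.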

Added: trunk/Source/JavaScriptCore/heap/IsoCellSet.h (0 => 225831)


--- trunk/Source/JavaScriptCore/heap/IsoCellSet.h	                        (rev 0)
+++ trunk/Source/JavaScriptCore/heap/IsoCellSet.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -0,0 +1,76 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include <wtf/Bitmap.h>
+#include <wtf/ConcurrentVector.h>
+#include <wtf/FastBitVector.h>
+
+namespace JSC {
+
+class HeapCell;
+class IsoSubspace;
+
+// Create a set of cells that are in an IsoSubspace. This allows concurrent O(1) set insertion and
+// removal. Each such set should be thought of as a 0.8% increase in object size for objects in that
+// IsoSubspace (it's like adding 1 bit every 16 bytes, or 1 bit every 128 bits).
+class IsoCellSet : public BasicRawSentinelNode<IsoCellSet> {
+public:
+    IsoCellSet(IsoSubspace& subspace);
+    ~IsoCellSet();
+    
+    bool add(HeapCell* cell); // Returns true if the cell was newly added.
+    
+    bool remove(HeapCell* cell); // Returns true if the cell was previously present and got removed.
+    
+    bool contains(HeapCell* cell) const;
+    
+    // This will have to do a combined search over whatever Subspace::forEachMarkedCell uses and
+    // our m_blocksWithBits.
+    template<typename Func>
+    void forEachMarkedCell(const Func&);
+    
+private:
+    friend class IsoSubspace;
+    
+    Bitmap<MarkedBlock::atomsPerBlock>* addSlow(size_t blockIndex);
+    
+    void didResizeBits(size_t newSize);
+    void didRemoveBlock(size_t blockIndex);
+    void sweepToFreeList(MarkedBlock::Handle*);
+    
+    IsoSubspace& m_subspace;
+    
+    // Idea: sweeping to free-list clears bits for those cells that were free-listed. The first time
+    // we add a cell in a block, that block gets a free-list. Unless we do something that obviously
+    // clears all bits for a block, we keep it set in blocksWithBits.
+    
+    FastBitVector m_blocksWithBits;
+    ConcurrentVector<std::unique_ptr<Bitmap<MarkedBlock::atomsPerBlock>>> m_bits;
+};
+
+} // namespace JSC
+
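The 0.8% figure in the class comment above is easy to check: MarkedBlock atoms are 16 bytes, and each set keeps one bit per atom, so the cost is 1 extra bit per 128 bits of payload, about 0.78%. A compile-time sanity check of that arithmetic (illustrative only):

```cpp
#include <cstddef>

// One bit of set membership per atom of the given size, expressed as a
// percentage of object size. With 16-byte atoms this is 1/128 = ~0.78%,
// which the comment above rounds to 0.8%.
constexpr double perSetOverheadPercent(size_t atomSizeInBytes)
{
    return 1.0 / (atomSizeInBytes * 8.0) * 100.0;
}

static_assert(perSetOverheadPercent(16) > 0.78 && perSetOverheadPercent(16) < 0.79,
    "1 bit per 16-byte atom is ~0.78%, i.e. the comment's 0.8%");
```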

Added: trunk/Source/JavaScriptCore/heap/IsoCellSetInlines.h (0 => 225831)


--- trunk/Source/JavaScriptCore/heap/IsoCellSetInlines.h	                        (rev 0)
+++ trunk/Source/JavaScriptCore/heap/IsoCellSetInlines.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -0,0 +1,82 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "AtomIndices.h"
+#include "IsoCellSet.h"
+#include "MarkedBlockInlines.h"
+
+namespace JSC {
+
+inline bool IsoCellSet::add(HeapCell* cell)
+{
+    AtomIndices atomIndices(cell);
+    auto& bitsPtrRef = m_bits[atomIndices.blockIndex];
+    auto* bits = bitsPtrRef.get();
+    if (UNLIKELY(!bits))
+        bits = addSlow(atomIndices.blockIndex);
+    return !bits->concurrentTestAndSet(atomIndices.atomNumber);
+}
+
+inline bool IsoCellSet::remove(HeapCell* cell)
+{
+    AtomIndices atomIndices(cell);
+    auto& bitsPtrRef = m_bits[atomIndices.blockIndex];
+    auto* bits = bitsPtrRef.get();
+    if (!bits)
+        return false;
+    return bits->concurrentTestAndClear(atomIndices.atomNumber);
+}
+
+inline bool IsoCellSet::contains(HeapCell* cell) const
+{
+    AtomIndices atomIndices(cell);
+    auto* bits = m_bits[atomIndices.blockIndex].get();
+    if (bits)
+        return bits->get(atomIndices.atomNumber);
+    return false;
+}
+
+template<typename Func>
+void IsoCellSet::forEachMarkedCell(const Func& func)
+{
+    MarkedAllocator& allocator = m_subspace.m_allocator;
+    (allocator.m_markingNotEmpty & m_blocksWithBits).forEachSetBit(
+        [&] (size_t blockIndex) {
+            MarkedBlock::Handle* block = allocator.m_blocks[blockIndex];
+
+            auto* bits = m_bits[blockIndex].get();
+            block->forEachMarkedCell(
+                [&] (size_t atomNumber, HeapCell* cell, HeapCell::Kind kind) -> IterationStatus {
+                    if (bits->get(atomNumber))
+                        func(cell, kind);
+                    return IterationStatus::Continue;
+                });
+        });
+}
+
+} // namespace JSC
+
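`add()` and `remove()` above get their return values from `concurrentTestAndSet`/`concurrentTestAndClear`, which report the bit's previous state; that is what makes "returns true if the cell was newly added" an atomic, race-free answer. A reduced sketch of the idiom on a single atomic word (the word size and names are illustrative; WTF::Bitmap spans a whole block):

```cpp
#include <atomic>
#include <cstdint>

// fetch_or / fetch_and return the word's previous value, so the caller
// learns atomically whether the bit was already set before this call.
struct AtomicBitWord {
    std::atomic<uint64_t> word { 0 };

    // Returns the bit's previous value (like concurrentTestAndSet).
    bool testAndSet(unsigned bit)
    {
        uint64_t mask = uint64_t(1) << bit;
        return word.fetch_or(mask, std::memory_order_relaxed) & mask;
    }

    // Returns the bit's previous value (like concurrentTestAndClear).
    bool testAndClear(unsigned bit)
    {
        uint64_t mask = uint64_t(1) << bit;
        return word.fetch_and(~mask, std::memory_order_relaxed) & mask;
    }
};
```

So `add` is `!testAndSet(atomNumber)`: two racing adders both end with the bit set, but exactly one of them observes the transition.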

Modified: trunk/Source/JavaScriptCore/heap/IsoSubspace.cpp (225830 => 225831)


--- trunk/Source/JavaScriptCore/heap/IsoSubspace.cpp	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/heap/IsoSubspace.cpp	2017-12-13 02:35:54 UTC (rev 225831)
@@ -67,5 +67,29 @@
     return result;
 }
 
+void IsoSubspace::didResizeBits(size_t blockIndex)
+{
+    m_cellSets.forEach(
+        [&] (IsoCellSet* set) {
+            set->didResizeBits(blockIndex);
+        });
+}
+
+void IsoSubspace::didRemoveBlock(size_t blockIndex)
+{
+    m_cellSets.forEach(
+        [&] (IsoCellSet* set) {
+            set->didRemoveBlock(blockIndex);
+        });
+}
+
+void IsoSubspace::didBeginSweepingToFreeList(MarkedBlock::Handle* block)
+{
+    m_cellSets.forEach(
+        [&] (IsoCellSet* set) {
+            set->sweepToFreeList(block);
+        });
+}
+
 } // namespace JSC
 

Modified: trunk/Source/JavaScriptCore/heap/IsoSubspace.h (225830 => 225831)


--- trunk/Source/JavaScriptCore/heap/IsoSubspace.h	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/heap/IsoSubspace.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -27,10 +27,12 @@
 
 #include "MarkedAllocator.h"
 #include "Subspace.h"
+#include <wtf/SinglyLinkedListWithTail.h>
 
 namespace JSC {
 
 class IsoAlignedMemoryAllocator;
+class IsoCellSet;
 
 class IsoSubspace : public Subspace {
 public:
@@ -46,9 +48,16 @@
     JS_EXPORT_PRIVATE void* allocateNonVirtual(size_t size, GCDeferralContext* deferralContext, AllocationFailureMode failureMode);
 
 private:
+    friend class IsoCellSet;
+    
+    void didResizeBits(size_t newSize) override;
+    void didRemoveBlock(size_t blockIndex) override;
+    void didBeginSweepingToFreeList(MarkedBlock::Handle*) override;
+    
     size_t m_size;
     MarkedAllocator m_allocator;
     std::unique_ptr<IsoAlignedMemoryAllocator> m_isoAlignedMemoryAllocator;
+    SentinelLinkedList<IsoCellSet, BasicRawSentinelNode<IsoCellSet>> m_cellSets;
 };
 
 inline MarkedAllocator* IsoSubspace::allocatorForNonVirtual(size_t size, AllocatorForMode)

Modified: trunk/Source/JavaScriptCore/heap/MarkedAllocator.cpp (225830 => 225831)


--- trunk/Source/JavaScriptCore/heap/MarkedAllocator.cpp	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/heap/MarkedAllocator.cpp	2017-12-13 02:35:54 UTC (rev 225831)
@@ -261,6 +261,7 @@
             ASSERT(m_blocks.capacity() > oldCapacity);
             
             LockHolder locker(m_bitvectorLock);
+            subspace()->didResizeBits(m_blocks.capacity());
             forEachBitVector(
                 locker,
                 [&] (FastBitVector& vector) {
@@ -290,7 +291,9 @@
 {
     ASSERT(block->allocator() == this);
     ASSERT(m_blocks[block->index()] == block);
-
+    
+    subspace()->didRemoveBlock(block->index());
+    
     m_blocks[block->index()] = nullptr;
     m_freeBlockIndices.append(block->index());
     

Modified: trunk/Source/JavaScriptCore/heap/MarkedAllocator.h (225830 => 225831)


--- trunk/Source/JavaScriptCore/heap/MarkedAllocator.h	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/heap/MarkedAllocator.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -38,6 +38,7 @@
 
 class GCDeferralContext;
 class Heap;
+class IsoCellSet;
 class MarkedSpace;
 class LLIntOffsetsExtractor;
 
@@ -162,6 +163,7 @@
     void dumpBits(PrintStream& = WTF::dataFile());
     
 private:
+    friend class IsoCellSet;
     friend class MarkedBlock;
     
     JS_EXPORT_PRIVATE void* allocateSlowCase(GCDeferralContext*, AllocationFailureMode failureMode);

Modified: trunk/Source/JavaScriptCore/heap/MarkedAllocatorInlines.h (225830 => 225831)


--- trunk/Source/JavaScriptCore/heap/MarkedAllocatorInlines.h	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/heap/MarkedAllocatorInlines.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -27,6 +27,7 @@
 
 #include "FreeListInlines.h"
 #include "MarkedAllocator.h"
+#include "VM.h"
 
 namespace JSC {
 

Modified: trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp (225830 => 225831)


--- trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp	2017-12-13 02:35:54 UTC (rev 225831)
@@ -301,11 +301,6 @@
     return areMarksStale() ? 0 : m_marks.count();
 }
 
-bool MarkedBlock::Handle::isEmpty()
-{
-    return m_allocator->isEmpty(NoLockingNecessary, this);
-}
-
 void MarkedBlock::clearHasAnyMarked()
 {
     m_biasedMarkCount = m_markCountBias;
@@ -403,11 +398,13 @@
         return;
 
     RELEASE_ASSERT(!m_isFreeListed);
-    RELEASE_ASSERT(!m_allocator->isAllocated(NoLockingNecessary, this));
+    RELEASE_ASSERT(!isAllocated());
     
     if (space()->isMarking())
         block().m_lock.lock();
     
+    subspace()->didBeginSweepingToFreeList(this);
+    
     if (needsDestruction) {
         subspace()->finishSweep(*this, freeList);
         return;

Modified: trunk/Source/JavaScriptCore/heap/MarkedBlock.h (225830 => 225831)


--- trunk/Source/JavaScriptCore/heap/MarkedBlock.h	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/heap/MarkedBlock.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -162,6 +162,8 @@
         size_t markCount();
         size_t size();
         
+        bool isAllocated();
+        
         bool isLive(HeapVersion markingVersion, HeapVersion newlyAllocatedVersion, bool isMarking, const HeapCell*);
         inline bool isLiveCell(HeapVersion markingVersion, HeapVersion newlyAllocatedVersion, bool isMarking, const void*);
 
@@ -173,6 +175,7 @@
         bool isNewlyAllocated(const void*);
         void setNewlyAllocated(const void*);
         void clearNewlyAllocated(const void*);
+        const Bitmap<atomsPerBlock>& newlyAllocated() const;
         
         HeapVersion newlyAllocatedVersion() const { return m_newlyAllocatedVersion; }
         
@@ -227,7 +230,7 @@
         size_t m_atomsPerCell { std::numeric_limits<size_t>::max() };
         size_t m_endAtom { std::numeric_limits<size_t>::max() }; // This is a fuzzy end. Always test for < m_endAtom.
             
-        WTF::Bitmap<atomsPerBlock> m_newlyAllocated;
+        Bitmap<atomsPerBlock> m_newlyAllocated;
             
         AllocatorAttributes m_attributes;
         bool m_isFreeListed { false };
@@ -294,6 +297,8 @@
     bool isMarkedRaw(const void* p);
     HeapVersion markingVersion() const { return m_markingVersion; }
     
+    const Bitmap<atomsPerBlock>& marks() const;
+    
     CountingLock& lock() { return m_lock; }
 
 private:
@@ -346,7 +351,7 @@
 
     HeapVersion m_markingVersion;
 
-    WTF::Bitmap<atomsPerBlock> m_marks;
+    Bitmap<atomsPerBlock> m_marks;
 };
 
 inline MarkedBlock::Handle& MarkedBlock::handle()
@@ -532,6 +537,11 @@
     return m_marks.concurrentTestAndSet(atomNumber(p), dependency);
 }
 
+inline const Bitmap<MarkedBlock::atomsPerBlock>& MarkedBlock::marks() const
+{
+    return m_marks;
+}
+
 inline bool MarkedBlock::Handle::isNewlyAllocated(const void* p)
 {
     return m_newlyAllocated.get(m_block->atomNumber(p));
@@ -547,6 +557,11 @@
     m_newlyAllocated.clear(m_block->atomNumber(p));
 }
 
+inline const Bitmap<MarkedBlock::atomsPerBlock>& MarkedBlock::Handle::newlyAllocated() const
+{
+    return m_newlyAllocated;
+}
+
 inline bool MarkedBlock::isAtom(const void* p)
 {
     ASSERT(MarkedBlock::isAtomAligned(p));

Modified: trunk/Source/JavaScriptCore/heap/MarkedBlockInlines.h (225830 => 225831)


--- trunk/Source/JavaScriptCore/heap/MarkedBlockInlines.h	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/heap/MarkedBlockInlines.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -91,6 +91,11 @@
         || MarkedSpace::nextVersion(myMarkingVersion) == markingVersion;
 }
 
+inline bool MarkedBlock::Handle::isAllocated()
+{
+    return m_allocator->isAllocated(NoLockingNecessary, this);
+}
+
 ALWAYS_INLINE bool MarkedBlock::Handle::isLive(HeapVersion markingVersion, HeapVersion newlyAllocatedVersion, bool isMarking, const HeapCell* cell)
 {
     if (allocator()->isAllocated(NoLockingNecessary, this))
@@ -434,6 +439,11 @@
     return BlockHasNoDestructors;
 }
 
+inline bool MarkedBlock::Handle::isEmpty()
+{
+    return m_allocator->isEmpty(NoLockingNecessary, this);
+}
+
 inline MarkedBlock::Handle::EmptyMode MarkedBlock::Handle::emptyMode()
 {
     // It's not obvious, but this is the only way to know if the block is empty. It's the only
@@ -441,7 +451,7 @@
     // - It's true when the block is freshly allocated.
     // - It's true if the block had been swept in the past, all destructors were called, and that
     //   sweep proved that the block is empty.
-    return m_allocator->isEmpty(NoLockingNecessary, this) ? IsEmpty : NotEmpty;
+    return isEmpty() ? IsEmpty : NotEmpty;
 }
 
 inline MarkedBlock::Handle::ScribbleMode MarkedBlock::Handle::scribbleMode()
@@ -512,11 +522,12 @@
     if (areMarksStale)
         return IterationStatus::Continue;
     for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
-        HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
-        if (!block.isMarkedRaw(cell))
+        if (!block.m_marks.get(i))
             continue;
 
-        if (functor(cell, kind) == IterationStatus::Done)
+        HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
+
+        if (functor(i, cell, kind) == IterationStatus::Done)
             return IterationStatus::Done;
     }
     return IterationStatus::Continue;
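`IsoCellSet::forEachMarkedCell` uses the atom index this hunk now passes through: it first intersects the allocator's `m_markingNotEmpty` vector with the set's `m_blocksWithBits`, then, within each surviving block, visits only atoms whose set bit and mark bit are both on. A simplified model of that two-level intersection, with `uint64_t` words standing in for `FastBitVector` and the per-block `Bitmap`s (all names here are illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Call func(i) for each set bit in a single word, low bit first.
inline void forEachSetBit(uint64_t word, const std::function<void(size_t)>& func)
{
    for (size_t i = 0; word; ++i, word >>= 1) {
        if (word & 1)
            func(i);
    }
}

// Two-level walk: block-level intersection selects candidate blocks,
// atom-level intersection selects marked members within each block.
// Returns blockIndex * 64 + atomNumber for every hit.
inline std::vector<size_t> markedMembers(
    uint64_t markingNotEmpty, uint64_t blocksWithBits,
    const std::vector<uint64_t>& setBitsPerBlock,
    const std::vector<uint64_t>& marksPerBlock)
{
    std::vector<size_t> result;
    forEachSetBit(markingNotEmpty & blocksWithBits, [&](size_t blockIndex) {
        forEachSetBit(setBitsPerBlock[blockIndex] & marksPerBlock[blockIndex],
            [&](size_t atomNumber) {
                result.push_back(blockIndex * 64 + atomNumber);
            });
    });
    return result;
}
```

Only the overlap of "in the set" and "marked" has to be accurate, which is exactly the invariant the change log relies on.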

Modified: trunk/Source/JavaScriptCore/heap/Subspace.cpp (225830 => 225831)


--- trunk/Source/JavaScriptCore/heap/Subspace.cpp	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/heap/Subspace.cpp	2017-12-13 02:35:54 UTC (rev 225831)
@@ -126,5 +126,17 @@
         });
 }
 
+void Subspace::didResizeBits(size_t)
+{
+}
+
+void Subspace::didRemoveBlock(size_t)
+{
+}
+
+void Subspace::didBeginSweepingToFreeList(MarkedBlock::Handle*)
+{
+}
+
 } // namespace JSC
 

Modified: trunk/Source/JavaScriptCore/heap/Subspace.h (225830 => 225831)


--- trunk/Source/JavaScriptCore/heap/Subspace.h	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/heap/Subspace.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -93,6 +93,10 @@
     
     Subspace* nextSubspaceInAlignedMemoryAllocator() const { return m_nextSubspaceInAlignedMemoryAllocator; }
     void setNextSubspaceInAlignedMemoryAllocator(Subspace* subspace) { m_nextSubspaceInAlignedMemoryAllocator = subspace; }
+    
+    virtual void didResizeBits(size_t newSize);
+    virtual void didRemoveBlock(size_t blockIndex);
+    virtual void didBeginSweepingToFreeList(MarkedBlock::Handle*);
 
 protected:
     void initialize(HeapCellType*, AlignedMemoryAllocator*);

Modified: trunk/Source/JavaScriptCore/heap/SubspaceInlines.h (225830 => 225831)


--- trunk/Source/JavaScriptCore/heap/SubspaceInlines.h	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/heap/SubspaceInlines.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -71,7 +71,7 @@
     forEachNotEmptyMarkedBlock(
         [&] (MarkedBlock::Handle* handle) {
             handle->forEachMarkedCell(
-                [&] (HeapCell* cell, HeapCell::Kind kind) -> IterationStatus { 
+                [&] (size_t, HeapCell* cell, HeapCell::Kind kind) -> IterationStatus {
                     func(cell, kind);
                     return IterationStatus::Continue;
                 });
@@ -99,7 +99,7 @@
         {
             while (MarkedBlock::Handle* handle = m_blockSource->run()) {
                 handle->forEachMarkedCell(
-                    [&] (HeapCell* cell, HeapCell::Kind kind) -> IterationStatus {
+                    [&] (size_t, HeapCell* cell, HeapCell::Kind kind) -> IterationStatus {
                         m_func(visitor, cell, kind);
                         return IterationStatus::Continue;
                     });

Modified: trunk/Source/JavaScriptCore/runtime/InferredStructure.cpp (225830 => 225831)


--- trunk/Source/JavaScriptCore/runtime/InferredStructure.cpp	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/runtime/InferredStructure.cpp	2017-12-13 02:35:54 UTC (rev 225831)
@@ -27,66 +27,19 @@
 #include "InferredStructure.h"
 
 #include "JSCInlines.h"
+#include <wtf/IsoMallocInlines.h>
 
 namespace JSC {
 
-const ClassInfo InferredStructure::s_info = { "InferredStructure", nullptr, nullptr, nullptr, CREATE_METHOD_TABLE(InferredStructure) };
+WTF_MAKE_ISO_ALLOCATED_IMPL(InferredStructure);
 
-InferredStructure* InferredStructure::create(VM& vm, GCDeferralContext& deferralContext, InferredType* parent, Structure* structure)
-{
-    InferredStructure* result = new (NotNull, allocateCell<InferredStructure>(vm.heap, &deferralContext)) InferredStructure(vm, parent, structure);
-    result->finishCreation(vm, structure);
-    return result;
-}
-
-void InferredStructure::destroy(JSCell* cell)
-{
-    InferredStructure* inferredStructure = static_cast<InferredStructure*>(cell);
-    inferredStructure->InferredStructure::~InferredStructure();
-}
-
-Structure* InferredStructure::createStructure(VM& vm, JSGlobalObject* globalObject, JSValue prototype)
-{
-    return Structure::create(vm, globalObject, prototype, TypeInfo(CellType, StructureFlags), info());
-}
-
-void InferredStructure::visitChildren(JSCell* cell, SlotVisitor& visitor)
-{
-    InferredStructure* inferredStructure = jsCast<InferredStructure*>(cell);
-    
-    visitor.append(inferredStructure->m_parent);
-}
-
-void InferredStructure::finalizeUnconditionally(VM&)
-{
-    ASSERT(Heap::isMarked(m_parent.get()));
-    
-    // Monotonicity ensures that we shouldn't see a new structure that is different from us, but we
-    // could have been nulled. We only rely on it being the null case only in debug.
-    if (this == m_parent->m_structure.get()) {
-        if (!Heap::isMarked(m_structure.get()))
-            m_parent->removeStructure();
-    } else
-        ASSERT(!m_parent->m_structure);
-    
-    // We'll get deleted on the next GC. That's a little weird, but not harmful, since it only happens
-    // when the InferredType that we were paired to would have survived. It just means a tiny missed
-    // opportunity for heap size reduction.
-}
-
 InferredStructure::InferredStructure(VM& vm, InferredType* parent, Structure* structure)
-    : Base(vm, vm.inferredStructureStructure.get())
-    , m_parent(vm, this, parent)
-    , m_structure(vm, this, structure)
+    : parent(parent)
+    , structure(vm, parent, structure)
 {
+    structure->addTransitionWatchpoint(&watchpoint);
 }
 
-void InferredStructure::finishCreation(VM& vm, Structure* structure)
-{
-    Base::finishCreation(vm);
-    structure->addTransitionWatchpoint(&m_watchpoint);
-}
-
 } // namespace JSC
 
 

Modified: trunk/Source/JavaScriptCore/runtime/InferredStructure.h (225830 => 225831)


--- trunk/Source/JavaScriptCore/runtime/InferredStructure.h	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/runtime/InferredStructure.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -26,51 +26,23 @@
 #pragma once
 
 #include "InferredStructureWatchpoint.h"
-#include "JSCell.h"
-#include "VM.h"
+#include "WriteBarrier.h"
+#include <wtf/IsoMalloc.h>
 
 namespace JSC {
 
 class InferredType;
+class Structure;
 
-class InferredStructure final : public JSCell {
+class InferredStructure final {
+    WTF_MAKE_ISO_ALLOCATED(InferredStructure);
 public:
-    typedef JSCell Base;
-    
-    template<typename CellType>
-    static IsoSubspace* subspaceFor(VM& vm)
-    {
-        return &vm.inferredStructureSpace;
-    }
-    
-    static InferredStructure* create(VM&, GCDeferralContext&, InferredType* parent, Structure*);
-    
-    static const bool needsDestruction = true;
-    static void destroy(JSCell*);
-    
-    static const unsigned StructureFlags = StructureIsImmortal | Base::StructureFlags;
+    InferredStructure(VM&, InferredType* parent, Structure*);
 
-    static Structure* createStructure(VM&, JSGlobalObject*, JSValue prototype);
+    InferredType* parent;
+    WriteBarrier<Structure> structure;
     
-    static void visitChildren(JSCell*, SlotVisitor&);
-
-    DECLARE_INFO;
-    
-    InferredType* parent() const { return m_parent.get(); }
-    Structure* structure() const { return m_structure.get(); }
-    
-    void finalizeUnconditionally(VM&);
-
-private:
-    InferredStructure(VM&, InferredType* parent, Structure*);
-    
-    void finishCreation(VM&, Structure*);
-    
-    WriteBarrier<InferredType> m_parent;
-    WriteBarrier<Structure> m_structure;
-    
-    friend class InferredStructureWatchpoint;
-    InferredStructureWatchpoint m_watchpoint;
+    InferredStructureWatchpoint watchpoint;
 };
 
 } // namespace JSC

Modified: trunk/Source/JavaScriptCore/runtime/InferredStructureWatchpoint.cpp (225830 => 225831)


--- trunk/Source/JavaScriptCore/runtime/InferredStructureWatchpoint.cpp	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/runtime/InferredStructureWatchpoint.cpp	2017-12-13 02:35:54 UTC (rev 225831)
@@ -34,12 +34,14 @@
 {
     InferredStructure* inferredStructure =
         bitwise_cast<InferredStructure*>(
-            bitwise_cast<char*>(this) - OBJECT_OFFSETOF(InferredStructure, m_watchpoint));
+            bitwise_cast<char*>(this) - OBJECT_OFFSETOF(InferredStructure, watchpoint));
 
-    if (!inferredStructure->isLive())
+    InferredType* inferredType = inferredStructure->parent;
+    
+    if (!inferredType->isLive())
         return;
     
-    inferredStructure->m_parent->removeStructure();
+    inferredType->removeStructure();
 }
 
 } // namespace JSC

Modified: trunk/Source/JavaScriptCore/runtime/InferredType.cpp (225830 => 225831)


--- trunk/Source/JavaScriptCore/runtime/InferredType.cpp	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/runtime/InferredType.cpp	2017-12-13 02:35:54 UTC (rev 225831)
@@ -26,7 +26,7 @@
 #include "config.h"
 #include "InferredType.h"
 
-#include "GCDeferralContextInlines.h"
+#include "IsoCellSetInlines.h"
 #include "JSCInlines.h"
 
 namespace JSC {
@@ -86,7 +86,8 @@
 void InferredType::visitChildren(JSCell* cell, SlotVisitor& visitor)
 {
     InferredType* inferredType = jsCast<InferredType*>(cell);
-    visitor.append(inferredType->m_structure);
+    if (inferredType->m_structure)
+        visitor.vm().inferredTypesWithFinalizers.add(inferredType);
 }
 
 InferredType::Kind InferredType::kindForFlags(PutByIdFlags flags)
@@ -399,7 +400,6 @@
     Descriptor myType;
     bool result;
     {
-        GCDeferralContext deferralContext(vm.heap);
         ConcurrentJSLocker locker(m_lock);
         oldType = descriptor(locker);
         myType = Descriptor::forValue(value);
@@ -408,7 +408,7 @@
         
         ASSERT(oldType != myType); // The type must have changed if we're on the slow path.
 
-        bool setResult = set(deferralContext, locker, vm, myType);
+        bool setResult = set(locker, vm, myType);
         result = kind(locker) != Top;
         if (!setResult)
             return result;
@@ -423,10 +423,9 @@
 {
     Descriptor oldType;
     {
-        GCDeferralContext deferralContext(vm.heap);
         ConcurrentJSLocker locker(m_lock);
         oldType = descriptor(locker);
-        if (!set(deferralContext, locker, vm, Top))
+        if (!set(locker, vm, Top))
             return;
     }
 
@@ -434,14 +433,8 @@
     m_watchpointSet.fireAll(vm, detail);
 }
 
-bool InferredType::set(GCDeferralContext& deferralContext, const ConcurrentJSLocker& locker, VM& vm, Descriptor newDescriptor)
+bool InferredType::set(const ConcurrentJSLocker& locker, VM& vm, Descriptor newDescriptor)
 {
-    // We will trigger write barriers while holding our lock. Currently, write barriers don't GC, but that
-    // could change. If it does, we don't want to deadlock. Note that we could have used
-    // GCSafeConcurrentJSLocker in the caller, but the caller is on a fast path so maybe that wouldn't be
-    // a good idea.
-    DeferGCForAWhile deferGC(vm.heap);
-    
     // Be defensive: if we're not really changing the type, then we don't have to do anything.
     if (descriptor(locker) == newDescriptor)
         return false;
@@ -469,19 +462,13 @@
         shouldFireWatchpointSet = true;
     }
 
-    // Remove the old InferredStructure object if we no longer need it.
     if (!newDescriptor.structure())
-        m_structure.clear();
-
-    // Add a new InferredStructure object if we need one now.
-    if (newDescriptor.structure()) {
-        if (m_structure) {
-            // We should agree on the structures if we get here.
-            ASSERT(newDescriptor.structure() == m_structure->structure());
-        } else {
-            auto* structure = InferredStructure::create(vm, deferralContext, this, newDescriptor.structure());
-            m_structure.set(vm, this, structure);
-        }
+        m_structure = nullptr;
+    else {
+        if (m_structure)
+            ASSERT(newDescriptor.structure() == m_structure->structure.get());
+        else
+            m_structure = std::make_unique<InferredStructure>(vm, this, newDescriptor.structure());
     }
 
     // Finally, set the descriptor kind.
@@ -503,13 +490,12 @@
     Descriptor oldDescriptor;
     Descriptor newDescriptor;
     {
-        GCDeferralContext deferralContext(vm.heap);
         ConcurrentJSLocker locker(m_lock);
         oldDescriptor = descriptor(locker);
         newDescriptor = oldDescriptor;
         newDescriptor.removeStructure();
         
-        if (!set(deferralContext, locker, vm, newDescriptor))
+        if (!set(locker, vm, newDescriptor))
             return;
     }
 

Modified: trunk/Source/JavaScriptCore/runtime/InferredType.h (225830 => 225831)


--- trunk/Source/JavaScriptCore/runtime/InferredType.h	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/runtime/InferredType.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -27,6 +27,7 @@
 
 #include "ConcurrentJSLock.h"
 #include "InferredStructure.h"
+#include "IsoCellSet.h"
 #include "JSCell.h"
 #include "PropertyName.h"
 #include "PutByIdFlags.h"
@@ -185,7 +186,7 @@
 
     Descriptor descriptorMainThread() const
     {
-        return Descriptor(m_kind, m_structure ? m_structure->structure() : nullptr);
+        return Descriptor(m_kind, m_structure ? m_structure->structure.get() : nullptr);
     }
     
     Descriptor descriptor(const ConcurrentJSLocker&) const
@@ -230,7 +231,7 @@
 
     void dump(PrintStream&) const;
     
-    void finalizeUnconditionally();
+    void finalizeUnconditionally(VM&);
 
 private:
     InferredType(VM&);
@@ -241,7 +242,7 @@
 
     // Helper for willStoreValueSlow() and makeTopSlow(). This returns true if we should fire the
     // watchpoint set.
-    bool set(GCDeferralContext&, const ConcurrentJSLocker&, VM&, Descriptor);
+    bool set(const ConcurrentJSLocker&, VM&, Descriptor);
     
     void removeStructure();
     
@@ -252,7 +253,9 @@
     
     Kind m_kind { Bottom };
 
-    WriteBarrier<InferredStructure> m_structure;
+    // FIXME: This should be Poisoned.
+    // https://bugs.webkit.org/show_bug.cgi?id=180715
+    std::unique_ptr<InferredStructure> m_structure;
 
     // NOTE: If this is being watched, we transform to Top because that implies that it wouldn't be
     // profitable to watch it again. Also, this set is initialized clear, and is never exposed to the DFG

Added: trunk/Source/JavaScriptCore/runtime/InferredTypeInlines.h (0 => 225831)


--- trunk/Source/JavaScriptCore/runtime/InferredTypeInlines.h	                        (rev 0)
+++ trunk/Source/JavaScriptCore/runtime/InferredTypeInlines.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -0,0 +1,47 @@
+/*
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "InferredType.h"
+
+namespace JSC {
+
+inline void InferredType::finalizeUnconditionally(VM& vm)
+{
+    ASSERT(Heap::isMarked(this));
+    
+    if (m_structure) {
+        if (Heap::isMarked(m_structure->structure.get()))
+            return;
+        
+        removeStructure();
+    }
+    
+    vm.inferredTypesWithFinalizers.remove(this);
+}
+
+} // namespace JSC
+

Modified: trunk/Source/JavaScriptCore/runtime/VM.cpp (225830 => 225831)


--- trunk/Source/JavaScriptCore/runtime/VM.cpp	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/runtime/VM.cpp	2017-12-13 02:35:54 UTC (rev 225831)
@@ -206,11 +206,11 @@
     , directEvalExecutableSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), DirectEvalExecutable)
     , functionExecutableSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), FunctionExecutable)
     , indirectEvalExecutableSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), IndirectEvalExecutable)
-    , inferredStructureSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), InferredStructure)
     , inferredTypeSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), InferredType)
     , moduleProgramExecutableSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), ModuleProgramExecutable)
     , nativeExecutableSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), NativeExecutable)
     , programExecutableSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), ProgramExecutable)
+    , inferredTypesWithFinalizers(inferredTypeSpace)
     , vmType(vmType)
     , clientData(0)
     , topEntryFrame(nullptr)
@@ -293,7 +293,6 @@
     unlinkedFunctionCodeBlockStructure.set(*this, UnlinkedFunctionCodeBlock::createStructure(*this, 0, jsNull()));
     unlinkedModuleProgramCodeBlockStructure.set(*this, UnlinkedModuleProgramCodeBlock::createStructure(*this, 0, jsNull()));
     propertyTableStructure.set(*this, PropertyTable::createStructure(*this, 0, jsNull()));
-    inferredStructureStructure.set(*this, InferredStructure::createStructure(*this, 0, jsNull()));
     inferredTypeStructure.set(*this, InferredType::createStructure(*this, 0, jsNull()));
     inferredTypeTableStructure.set(*this, InferredTypeTable::createStructure(*this, 0, jsNull()));
     inferredValueStructure.set(*this, InferredValue::createStructure(*this, 0, jsNull()));

Modified: trunk/Source/JavaScriptCore/runtime/VM.h (225830 => 225831)


--- trunk/Source/JavaScriptCore/runtime/VM.h	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/JavaScriptCore/runtime/VM.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -40,6 +40,7 @@
 #include "FunctionHasExecutedCache.h"
 #include "Heap.h"
 #include "Intrinsic.h"
+#include "IsoCellSet.h"
 #include "IsoSubspace.h"
 #include "JITThunks.h"
 #include "JSCJSValue.h"
@@ -340,11 +341,12 @@
     IsoSubspace directEvalExecutableSpace;
     IsoSubspace functionExecutableSpace;
     IsoSubspace indirectEvalExecutableSpace;
-    IsoSubspace inferredStructureSpace;
     IsoSubspace inferredTypeSpace;
     IsoSubspace moduleProgramExecutableSpace;
     IsoSubspace nativeExecutableSpace;
     IsoSubspace programExecutableSpace;
+    
+    IsoCellSet inferredTypesWithFinalizers;
 
     VMType vmType;
     ClientData* clientData;
@@ -392,7 +394,6 @@
     Strong<Structure> unlinkedFunctionCodeBlockStructure;
     Strong<Structure> unlinkedModuleProgramCodeBlockStructure;
     Strong<Structure> propertyTableStructure;
-    Strong<Structure> inferredStructureStructure;
     Strong<Structure> inferredTypeStructure;
     Strong<Structure> inferredTypeTableStructure;
     Strong<Structure> inferredValueStructure;

Modified: trunk/Source/WTF/ChangeLog (225830 => 225831)


--- trunk/Source/WTF/ChangeLog	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/WTF/ChangeLog	2017-12-13 02:35:54 UTC (rev 225831)
@@ -1,3 +1,67 @@
+2017-12-11  Filip Pizlo  <fpi...@apple.com>
+
+        It should be possible to flag a cell for unconditional finalization
+        https://bugs.webkit.org/show_bug.cgi?id=180636
+
+        Reviewed by Saam Barati.
+        
+        This adds ConcurrentVector, which is like SegmentedVector, but wastes some space to allow
+        resizing to proceed concurrently to access. It's not possible to resize concurrently to
+        resizing, concurrent read/writes aren't protected from racing if they access the same element,
+        and who knows what you'll get if you iterate up to size() while someone else append()s. The
+        key insight is to stash all prior copies of the spine, so that nobody crashes trying to access
+        a stale spine.
+        
+        I'm going to want to do the same thing for FastBitVector, by creating a segmented WordOwner
+        class. That would require repeating the dance of having a spine that can resize while stashing
+        old versions. So, the spine resizing logic is abstracted behind ConcurrentBuffer. You could
+        use that as a kind of "concurrent vector" for immutable data. That's how ConcurrentVector uses
+        it: it's an immutable array of segment pointers.
+
+        * WTF.xcodeproj/project.pbxproj:
+        * wtf/ConcurrentBuffer.h: Added.
+        (WTF::ConcurrentBuffer::ConcurrentBuffer):
+        (WTF::ConcurrentBuffer::~ConcurrentBuffer):
+        (WTF::ConcurrentBuffer::growExact):
+        (WTF::ConcurrentBuffer::grow):
+        (WTF::ConcurrentBuffer::array const):
+        (WTF::ConcurrentBuffer::operator[]):
+        (WTF::ConcurrentBuffer::operator[] const):
+        (WTF::ConcurrentBuffer::createArray):
+        * wtf/ConcurrentVector.h: Added.
+        (WTF::ConcurrentVectorIterator::~ConcurrentVectorIterator):
+        (WTF::ConcurrentVectorIterator::operator* const):
+        (WTF::ConcurrentVectorIterator::operator-> const):
+        (WTF::ConcurrentVectorIterator::operator++):
+        (WTF::ConcurrentVectorIterator::operator== const):
+        (WTF::ConcurrentVectorIterator::operator!= const):
+        (WTF::ConcurrentVectorIterator::operator=):
+        (WTF::ConcurrentVectorIterator::ConcurrentVectorIterator):
+        (WTF::ConcurrentVector::~ConcurrentVector):
+        (WTF::ConcurrentVector::size const):
+        (WTF::ConcurrentVector::isEmpty const):
+        (WTF::ConcurrentVector::at):
+        (WTF::ConcurrentVector::at const):
+        (WTF::ConcurrentVector::operator[]):
+        (WTF::ConcurrentVector::operator[] const):
+        (WTF::ConcurrentVector::first):
+        (WTF::ConcurrentVector::first const):
+        (WTF::ConcurrentVector::last):
+        (WTF::ConcurrentVector::last const):
+        (WTF::ConcurrentVector::takeLast):
+        (WTF::ConcurrentVector::append):
+        (WTF::ConcurrentVector::alloc):
+        (WTF::ConcurrentVector::removeLast):
+        (WTF::ConcurrentVector::grow):
+        (WTF::ConcurrentVector::begin):
+        (WTF::ConcurrentVector::end):
+        (WTF::ConcurrentVector::segmentExistsFor):
+        (WTF::ConcurrentVector::segmentFor):
+        (WTF::ConcurrentVector::subscriptFor):
+        (WTF::ConcurrentVector::ensureSegmentsFor):
+        (WTF::ConcurrentVector::ensureSegment):
+        (WTF::ConcurrentVector::allocateSegment):
+
 2017-12-12  JF Bastien  <jfbast...@apple.com>
 
         makeString: support more integral types

Modified: trunk/Source/WTF/WTF.xcodeproj/project.pbxproj (225830 => 225831)


--- trunk/Source/WTF/WTF.xcodeproj/project.pbxproj	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/WTF/WTF.xcodeproj/project.pbxproj	2017-12-13 02:35:54 UTC (rev 225831)
@@ -239,6 +239,8 @@
 		0FB14E18180FA218009B6B4D /* Bag.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Bag.h; sourceTree = "<group>"; };
 		0FB14E1A1810E1DA009B6B4D /* BagToHashMap.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = BagToHashMap.h; sourceTree = "<group>"; };
 		0FB317C31C488001007E395A /* SystemTracing.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SystemTracing.h; sourceTree = "<group>"; };
+		0FB467821FDE282B003FCB09 /* ConcurrentBuffer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ConcurrentBuffer.h; sourceTree = "<group>"; };
+		0FB467831FDE282C003FCB09 /* ConcurrentVector.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ConcurrentVector.h; sourceTree = "<group>"; };
 		0FC4488216FE9FE100844BE9 /* ProcessID.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProcessID.h; sourceTree = "<group>"; };
 		0FC4EDE51696149600F65041 /* CommaPrinter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CommaPrinter.h; sourceTree = "<group>"; };
 		0FD81AC4154FB22E00983E72 /* FastBitVector.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = FastBitVector.h; sourceTree = "<group>"; };
@@ -808,8 +810,10 @@
 				0F8F2B90172E00F0007DBDA5 /* CompilationThread.h */,
 				A8A47270151A825A004123FF /* Compiler.h */,
 				46BA9EAB1F4CD61E009A2BBC /* CompletionHandler.h */,
+				0FB467821FDE282B003FCB09 /* ConcurrentBuffer.h */,
 				0F30CB581FCDF133004B5323 /* ConcurrentPtrHashSet.cpp */,
 				0F30CB591FCDF133004B5323 /* ConcurrentPtrHashSet.h */,
+				0FB467831FDE282C003FCB09 /* ConcurrentVector.h */,
 				0FDB698D1B7C643A000C1078 /* Condition.h */,
 				0FFBCBFA1FD37E0F0072AAF0 /* CountingLock.h */,
 				0F8E85DA1FD485B000691889 /* CountingLock.cpp */,

Modified: trunk/Source/WTF/wtf/Bitmap.h (225830 => 225831)


--- trunk/Source/WTF/wtf/Bitmap.h	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/WTF/wtf/Bitmap.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -58,6 +58,8 @@
     void merge(const Bitmap&);
     void filter(const Bitmap&);
     void exclude(const Bitmap&);
+
+    void concurrentFilter(const Bitmap&);
     
     bool subsumes(const Bitmap&) const;
     
@@ -301,6 +303,26 @@
 }
 
 template<size_t bitmapSize, typename WordType>
+inline void Bitmap<bitmapSize, WordType>::concurrentFilter(const Bitmap& other)
+{
+    for (size_t i = 0; i < words; ++i) {
+        for (;;) {
+            WordType otherBits = other.bits[i];
+            if (!otherBits) {
+                bits[i] = 0;
+                break;
+            }
+            WordType oldBits = bits[i];
+            WordType filteredBits = oldBits & otherBits;
+            if (oldBits == filteredBits)
+                break;
+            if (atomicCompareExchangeWeakRelaxed(&bits[i], oldBits, filteredBits))
+                break;
+        }
+    }
+}
+
+template<size_t bitmapSize, typename WordType>
 inline bool Bitmap<bitmapSize, WordType>::subsumes(const Bitmap& other) const
 {
     for (size_t i = 0; i < words; ++i) {

Modified: trunk/Source/WTF/wtf/CMakeLists.txt (225830 => 225831)


--- trunk/Source/WTF/wtf/CMakeLists.txt	2017-12-13 01:57:45 UTC (rev 225830)
+++ trunk/Source/WTF/wtf/CMakeLists.txt	2017-12-13 02:35:54 UTC (rev 225831)
@@ -18,7 +18,9 @@
     ClockType.h
     CompilationThread.h
     Compiler.h
+    ConcurrentBuffer.h
     ConcurrentPtrHashSet.h
+    ConcurrentVector.h
     Condition.h
     CountingLock.h
     CrossThreadCopier.h

Added: trunk/Source/WTF/wtf/ConcurrentBuffer.h (0 => 225831)


--- trunk/Source/WTF/wtf/ConcurrentBuffer.h	                        (rev 0)
+++ trunk/Source/WTF/wtf/ConcurrentBuffer.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -0,0 +1,111 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include <wtf/Atomics.h>
+#include <wtf/FastMalloc.h>
+#include <wtf/HashFunctions.h>
+#include <wtf/Lock.h>
+#include <wtf/Noncopyable.h>
+#include <wtf/Vector.h>
+
+namespace WTF {
+
+// ConcurrentBuffer is suitable for when you plan to store immutable data and sometimes append to it.
+// It supports storing data that is not copy-constructible but is bit-copyable.
+template<typename T>
+class ConcurrentBuffer {
+    WTF_MAKE_NONCOPYABLE(ConcurrentBuffer);
+    WTF_MAKE_FAST_ALLOCATED;
+public:
+    
+    ConcurrentBuffer()
+    {
+    }
+    
+    ~ConcurrentBuffer()
+    {
+        if (Array* array = m_array) {
+            for (size_t i = 0; i < array->size; ++i)
+                array->data[i].~T();
+        }
+        for (Array* array : m_allArrays)
+            fastFree(array);
+    }
+
+    // Growing is not concurrent. This assumes you are holding some other lock before you do this.
+    void growExact(size_t newSize)
+    {
+        Array* array = m_array;
+        if (array && newSize <= array->size)
+            return;
+        Array* newArray = createArray(newSize);
+        // This allows us to do ConcurrentBuffer<std::unique_ptr<>>.
+        if (array)
+            memcpy(newArray->data, array->data, sizeof(T) * array->size);
+        for (size_t i = array ? array->size : 0; i < newSize; ++i)
+            new (newArray->data + i) T();
+        WTF::storeStoreFence();
+        m_array = newArray;
+        WTF::storeStoreFence();
+        m_allArrays.append(newArray);
+    }
+    
+    void grow(size_t newSize)
+    {
+        size_t size = m_array ? m_array->size : 0;
+        if (newSize <= size)
+            return;
+        growExact(std::max(newSize, size * 2));
+    }
+    
+    struct Array {
+        size_t size; // This is an immutable size.
+        T data[1];
+    };
+    
+    Array* array() const { return m_array; }
+    
+    T& operator[](size_t index) { return m_array->data[index]; }
+    const T& operator[](size_t index) const { return m_array->data[index]; }
+    
+private:
+    Array* createArray(size_t size)
+    {
+        Checked<size_t> objectSize = sizeof(T);
+        objectSize *= size;
+        objectSize += static_cast<size_t>(OBJECT_OFFSETOF(Array, data));
+        Array* result = static_cast<Array*>(fastMalloc(objectSize.unsafeGet()));
+        result->size = size;
+        return result;
+    }
+    
+    Array* m_array { nullptr };
+    Vector<Array*> m_allArrays;
+};
+
+} // namespace WTF
+

Added: trunk/Source/WTF/wtf/ConcurrentVector.h (0 => 225831)


--- trunk/Source/WTF/wtf/ConcurrentVector.h	                        (rev 0)
+++ trunk/Source/WTF/wtf/ConcurrentVector.h	2017-12-13 02:35:54 UTC (rev 225831)
@@ -0,0 +1,269 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1.  Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ * 2.  Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in the
+ *     documentation and/or other materials provided with the distribution.
+ * 3.  Neither the name of Apple Inc. ("Apple") nor the names of
+ *     its contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE AND ITS CONTRIBUTORS "AS IS" AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL APPLE OR ITS CONTRIBUTORS BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef ConcurrentVector_h
+#define ConcurrentVector_h
+
+#include <wtf/ConcurrentBuffer.h>
+#include <wtf/Noncopyable.h>
+
+namespace WTF {
+
+// An iterator for ConcurrentVector. It supports only the prefix ++ operator.
+template <typename T, size_t SegmentSize = 8> class ConcurrentVector;
+template <typename T, size_t SegmentSize = 8> class ConcurrentVectorIterator {
+private:
+    friend class ConcurrentVector<T, SegmentSize>;
+public:
+    typedef ConcurrentVectorIterator<T, SegmentSize> Iterator;
+
+    ~ConcurrentVectorIterator() { }
+
+    T& operator*() const { return m_vector.at(m_index); }
+    T* operator->() const { return &m_vector.at(m_index); }
+
+    // Only prefix ++ operator supported
+    Iterator& operator++()
+    {
+        m_index++;
+        return *this;
+    }
+
+    bool operator==(const Iterator& other) const
+    {
+        return m_index == other.m_index && &m_vector == &other.m_vector;
+    }
+
+    bool operator!=(const Iterator& other) const
+    {
+        return m_index != other.m_index || &m_vector != &other.m_vector;
+    }
+
+    ConcurrentVectorIterator& operator=(const ConcurrentVectorIterator<T, SegmentSize>& other)
+    {
+        m_vector = other.m_vector;
+        m_index = other.m_index;
+        return *this;
+    }
+
+private:
+    ConcurrentVectorIterator(ConcurrentVector<T, SegmentSize>& vector, size_t index)
+        : m_vector(vector)
+        , m_index(index)
+    {
+    }
+
+    ConcurrentVector<T, SegmentSize>& m_vector;
+    size_t m_index;
+};
+
+// ConcurrentVector is like SegmentedVector, but suitable for scenarios where one thread appends
+// elements and another thread continues to access elements at lower indices. Only one thread can
+// append at a time, so that activity still needs locking. size() and last() are racy with append(),
+// in the sense that last() may crash if an append() is running concurrently because size()-1 does
+// not yet have a segment.
+//
+// Typical users of ConcurrentVector already have some way of ensuring that by the time someone is
+// trying to use an index, some synchronization has happened to ensure that this index contains fully
+// initialized data. Thereafter, the keeper of that index is allowed to use it on this vector without
+// any locking other than what is needed to protect the integrity of the element at that index. This
+// works because we guarantee shrinking the vector is impossible and that growing the vector doesn't
+// delete old vector spines.
+template <typename T, size_t SegmentSize>
+class ConcurrentVector {
+    friend class ConcurrentVectorIterator<T, SegmentSize>;
+    WTF_MAKE_NONCOPYABLE(ConcurrentVector);
+    WTF_MAKE_FAST_ALLOCATED;
+
+public:
+    typedef ConcurrentVectorIterator<T, SegmentSize> Iterator;
+
+    ConcurrentVector() = default;
+
+    ~ConcurrentVector()
+    {
+    }
+
+    // This may return a size that is bigger than the underlying storage, since this does not fence
+    // manipulations of size. So if you access at size()-1, you may crash because this hasn't
+    // allocated storage for that index yet.
+    size_t size() const { return m_size; }
+
+    bool isEmpty() const { return !size(); }
+
+    T& at(size_t index)
+    {
+        ASSERT_WITH_SECURITY_IMPLICATION(index < m_size);
+        return segmentFor(index)->entries[subscriptFor(index)];
+    }
+
+    const T& at(size_t index) const
+    {
+        return const_cast<ConcurrentVector<T, SegmentSize>*>(this)->at(index);
+    }
+
+    T& operator[](size_t index)
+    {
+        return at(index);
+    }
+
+    const T& operator[](size_t index) const
+    {
+        return at(index);
+    }
+
+    T& first()
+    {
+        ASSERT_WITH_SECURITY_IMPLICATION(!isEmpty());
+        return at(0);
+    }
+    const T& first() const
+    {
+        ASSERT_WITH_SECURITY_IMPLICATION(!isEmpty());
+        return at(0);
+    }
+    
+    // This may crash if run concurrently to append(). If you want to accurately track the size of
+    // this vector, you'll have to do it yourself, with your own fencing.
+    T& last()
+    {
+        ASSERT_WITH_SECURITY_IMPLICATION(!isEmpty());
+        return at(size() - 1);
+    }
+    const T& last() const
+    {
+        ASSERT_WITH_SECURITY_IMPLICATION(!isEmpty());
+        return at(size() - 1);
+    }
+
+    T takeLast()
+    {
+        ASSERT_WITH_SECURITY_IMPLICATION(!isEmpty());
+        T result = WTFMove(last());
+        --m_size;
+        return result;
+    }
+
+    template<typename... Args>
+    void append(Args&&... args)
+    {
+        ++m_size;
+        if (!segmentExistsFor(m_size - 1))
+            allocateSegment();
+        new (NotNull, &last()) T(std::forward<Args>(args)...);
+    }
+
+    template<typename... Args>
+    T& alloc(Args&&... args)
+    {
+        append(std::forward<Args>(args)...);
+        return last();
+    }
+
+    void removeLast()
+    {
+        last().~T();
+        --m_size;
+    }
+
+    void grow(size_t size)
+    {
+        if (size == m_size)
+            return;
+        ASSERT(size > m_size);
+        ensureSegmentsFor(size);
+        size_t oldSize = m_size;
+        m_size = size;
+        for (size_t i = oldSize; i < m_size; ++i)
+            new (NotNull, &at(i)) T();
+    }
+
+    Iterator begin()
+    {
+        return Iterator(*this, 0);
+    }
+
+    Iterator end()
+    {
+        return Iterator(*this, m_size);
+    }
+
+private:
+    struct Segment {
+        WTF_MAKE_STRUCT_FAST_ALLOCATED;
+            
+        T entries[SegmentSize];
+    };
+
+    bool segmentExistsFor(size_t index)
+    {
+        return index / SegmentSize < m_numSegments;
+    }
+
+    Segment* segmentFor(size_t index)
+    {
+        return m_segments[index / SegmentSize].get();
+    }
+
+    size_t subscriptFor(size_t index)
+    {
+        return index % SegmentSize;
+    }
+
+    void ensureSegmentsFor(size_t size)
+    {
+        size_t segmentCount = (m_size + SegmentSize - 1) / SegmentSize;
+        size_t neededSegmentCount = (size + SegmentSize - 1) / SegmentSize;
+
+        for (size_t i = segmentCount ? segmentCount - 1 : 0; i < neededSegmentCount; ++i)
+            ensureSegment(i);
+    }
+
+    void ensureSegment(size_t segmentIndex)
+    {
+        ASSERT_WITH_SECURITY_IMPLICATION(segmentIndex <= m_numSegments);
+        if (segmentIndex == m_numSegments)
+            allocateSegment();
+    }
+
+    void allocateSegment()
+    {
+        m_segments.grow(m_numSegments + 1);
+        m_segments[m_numSegments++] = std::make_unique<Segment>();
+    }
+
+    size_t m_size { 0 };
+    ConcurrentBuffer<std::unique_ptr<Segment>> m_segments;
+    size_t m_numSegments { 0 };
+};
+
+} // namespace WTF
+
+using WTF::ConcurrentVector;
+
+#endif // ConcurrentVector_h