Title: [191313] trunk/Source/JavaScriptCore
Revision: 191313
Author: sbar...@apple.com
Date: 2015-10-19 13:29:45 -0700 (Mon, 19 Oct 2015)

Log Message

FTL should generate a unique OSR exit for each duplicated OSR exit stackmap intrinsic.
https://bugs.webkit.org/show_bug.cgi?id=149970

Reviewed by Filip Pizlo.

When we lower DFG to LLVM, we generate a stackmap intrinsic for each OSR
exit and record the OSR exit inside FTL::JITCode during lowering.
This stackmap intrinsic may be duplicated or even removed by LLVM.
When the stackmap intrinsic was duplicated, we used to generate just
a single OSR exit data structure. Then, when we compiled an OSR exit, we
would look for the first record in the record list that had the same stackmap ID
as the OSR exit data structure, even when the OSR exit stackmap intrinsic
had been duplicated. This could lead us to grab the wrong FTL::StackMaps::Record.

Now, each OSR exit knows exactly which FTL::StackMaps::Record it corresponds to.
We accomplish this by having an OSRExitDescriptor that is recorded during
lowering; each descriptor may be referenced by zero, one, or more OSRExits.
As a result, each index inside JITCode's OSRExit Vector corresponds to no more
than one stackmap intrinsic, and each OSRExit jump now targets its own code location.
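
In outline, the new wiring looks like the condensed sketch below (adapted from the
FTLCompile.cpp hunk in this diff; comments are illustrative, and bookkeeping such as
the parallel OSRExitCompilationInfo vector is elided):

    // During lowering, appendOSRExit() records one OSRExitDescriptor per exit site.
    // After LLVM compiles the module, every surviving copy of that descriptor's
    // stackmap intrinsic shows up as its own StackMaps::Record, so we create one
    // OSRExit per record and remember that record's index directly.
    for (unsigned i = 0; i < jitCode->osrExitDescriptors.size(); ++i) {
        OSRExitDescriptor& descriptor = jitCode->osrExitDescriptors[i];
        auto iter = recordMap.find(descriptor.m_stackmapID);
        if (iter == recordMap.end())
            continue; // The intrinsic was optimized out entirely.
        for (unsigned j = 0; j < iter->value.size(); ++j)
            jitCode->osrExit.append(OSRExit(descriptor, iter->value[j].index));
    }

Later, compileStub() can index jitCode->stackmaps.records[exit.m_stackmapRecordIndex]
directly instead of scanning the record list for a matching patchpoint ID.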

* ftl/FTLCompile.cpp:
(JSC::FTL::mmAllocateDataSection):
* ftl/FTLJITCode.cpp:
(JSC::FTL::JITCode::validateReferences):
(JSC::FTL::JITCode::liveRegistersToPreserveAtExceptionHandlingCallSite):
* ftl/FTLJITCode.h:
* ftl/FTLJITFinalizer.cpp:
(JSC::FTL::JITFinalizer::finalizeFunction):
* ftl/FTLLowerDFGToLLVM.cpp:
(JSC::FTL::DFG::LowerDFGToLLVM::compileInvalidationPoint):
(JSC::FTL::DFG::LowerDFGToLLVM::compileIsUndefined):
(JSC::FTL::DFG::LowerDFGToLLVM::appendOSRExit):
(JSC::FTL::DFG::LowerDFGToLLVM::emitOSRExitCall):
(JSC::FTL::DFG::LowerDFGToLLVM::buildExitArguments):
(JSC::FTL::DFG::LowerDFGToLLVM::callStackmap):
* ftl/FTLOSRExit.cpp:
(JSC::FTL::OSRExitDescriptor::OSRExitDescriptor):
(JSC::FTL::OSRExitDescriptor::validateReferences):
(JSC::FTL::OSRExit::OSRExit):
(JSC::FTL::OSRExit::codeLocationForRepatch):
(JSC::FTL::OSRExit::validateReferences): Deleted.
* ftl/FTLOSRExit.h:
(JSC::FTL::OSRExit::considerAddingAsFrequentExitSite):
* ftl/FTLOSRExitCompilationInfo.h:
(JSC::FTL::OSRExitCompilationInfo::OSRExitCompilationInfo):
* ftl/FTLOSRExitCompiler.cpp:
(JSC::FTL::compileStub):
(JSC::FTL::compileFTLOSRExit):
* ftl/FTLStackMaps.cpp:
(JSC::FTL::StackMaps::computeRecordMap):
* ftl/FTLStackMaps.h:

Diff

Modified: trunk/Source/JavaScriptCore/ChangeLog (191312 => 191313)


--- trunk/Source/JavaScriptCore/ChangeLog	2015-10-19 20:18:43 UTC (rev 191312)
+++ trunk/Source/JavaScriptCore/ChangeLog	2015-10-19 20:29:45 UTC (rev 191313)
@@ -1,3 +1,57 @@
+2015-10-19  Saam barati  <sbar...@apple.com>
+
+        FTL should generate a unique OSR exit for each duplicated OSR exit stackmap intrinsic.
+        https://bugs.webkit.org/show_bug.cgi?id=149970
+
+        Reviewed by Filip Pizlo.
+
+        When we lower DFG to LLVM, we generate a stackmap intrinsic for each OSR
+        exit and record the OSR exit inside FTL::JITCode during lowering.
+        This stackmap intrinsic may be duplicated or even removed by LLVM.
+        When the stackmap intrinsic was duplicated, we used to generate just
+        a single OSR exit data structure. Then, when we compiled an OSR exit, we
+        would look for the first record in the record list that had the same stackmap ID
+        as the OSR exit data structure, even when the OSR exit stackmap intrinsic
+        had been duplicated. This could lead us to grab the wrong FTL::StackMaps::Record.
+
+        Now, each OSR exit knows exactly which FTL::StackMaps::Record it corresponds to.
+        We accomplish this by having an OSRExitDescriptor that is recorded during
+        lowering; each descriptor may be referenced by zero, one, or more OSRExits.
+        As a result, each index inside JITCode's OSRExit Vector corresponds to no more
+        than one stackmap intrinsic, and each OSRExit jump now targets its own code location.
+
+        * ftl/FTLCompile.cpp:
+        (JSC::FTL::mmAllocateDataSection):
+        * ftl/FTLJITCode.cpp:
+        (JSC::FTL::JITCode::validateReferences):
+        (JSC::FTL::JITCode::liveRegistersToPreserveAtExceptionHandlingCallSite):
+        * ftl/FTLJITCode.h:
+        * ftl/FTLJITFinalizer.cpp:
+        (JSC::FTL::JITFinalizer::finalizeFunction):
+        * ftl/FTLLowerDFGToLLVM.cpp:
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileInvalidationPoint):
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileIsUndefined):
+        (JSC::FTL::DFG::LowerDFGToLLVM::appendOSRExit):
+        (JSC::FTL::DFG::LowerDFGToLLVM::emitOSRExitCall):
+        (JSC::FTL::DFG::LowerDFGToLLVM::buildExitArguments):
+        (JSC::FTL::DFG::LowerDFGToLLVM::callStackmap):
+        * ftl/FTLOSRExit.cpp:
+        (JSC::FTL::OSRExitDescriptor::OSRExitDescriptor):
+        (JSC::FTL::OSRExitDescriptor::validateReferences):
+        (JSC::FTL::OSRExit::OSRExit):
+        (JSC::FTL::OSRExit::codeLocationForRepatch):
+        (JSC::FTL::OSRExit::validateReferences): Deleted.
+        * ftl/FTLOSRExit.h:
+        (JSC::FTL::OSRExit::considerAddingAsFrequentExitSite):
+        * ftl/FTLOSRExitCompilationInfo.h:
+        (JSC::FTL::OSRExitCompilationInfo::OSRExitCompilationInfo):
+        * ftl/FTLOSRExitCompiler.cpp:
+        (JSC::FTL::compileStub):
+        (JSC::FTL::compileFTLOSRExit):
+        * ftl/FTLStackMaps.cpp:
+        (JSC::FTL::StackMaps::computeRecordMap):
+        * ftl/FTLStackMaps.h:
+
 2015-10-16  Brian Burg  <bb...@apple.com>
 
         Unify handling of JavaScriptCore scripts that are used in WebCore

Modified: trunk/Source/JavaScriptCore/ftl/FTLCompile.cpp (191312 => 191313)


--- trunk/Source/JavaScriptCore/ftl/FTLCompile.cpp	2015-10-19 20:18:43 UTC (rev 191312)
+++ trunk/Source/JavaScriptCore/ftl/FTLCompile.cpp	2015-10-19 20:29:45 UTC (rev 191313)
@@ -144,9 +144,9 @@
     StackMaps::RecordMap::iterator iter = recordMap.find(stackmapID);
     RELEASE_ASSERT(iter != recordMap.end());
     RELEASE_ASSERT(iter->value.size() == 1);
-    RELEASE_ASSERT(iter->value[0].locations.size() == 1);
+    RELEASE_ASSERT(iter->value[0].record.locations.size() == 1);
     Location capturedLocation =
-        Location::forStackmaps(nullptr, iter->value[0].locations[0]);
+        Location::forStackmaps(nullptr, iter->value[0].record.locations[0]);
     RELEASE_ASSERT(capturedLocation.kind() == Location::Register);
     RELEASE_ASSERT(capturedLocation.gpr() == GPRInfo::callFrameRegister);
     RELEASE_ASSERT(!(capturedLocation.addend() % sizeof(Register)));
@@ -209,12 +209,12 @@
         return;
     }
     
-    Vector<StackMaps::Record>& records = iter->value;
+    Vector<StackMaps::RecordAndIndex>& records = iter->value;
     
     RELEASE_ASSERT(records.size() == ic.m_generators.size());
     
     for (unsigned i = records.size(); i--;) {
-        StackMaps::Record& record = records[i];
+        StackMaps::Record& record = records[i].record;
         auto generator = ic.m_generators[i];
 
         CCallHelpers fastPathJIT(&vm, codeBlock);
@@ -247,12 +247,12 @@
         return;
     }
     
-    Vector<StackMaps::Record>& records = iter->value;
+    Vector<StackMaps::RecordAndIndex>& records = iter->value;
     
     RELEASE_ASSERT(records.size() == ic.m_generators.size());
 
     for (unsigned i = records.size(); i--;) {
-        StackMaps::Record& record = records[i];
+        StackMaps::Record& record = records[i].record;
         auto generator = ic.m_generators[i];
 
         StructureStubInfo& stubInfo = *generator.m_stub;
@@ -318,7 +318,7 @@
         
         for (unsigned j = 0; j < iter->value.size(); ++j) {
             CallType copy = call;
-            copy.m_instructionOffset = iter->value[j].instructionOffset;
+            copy.m_instructionOffset = iter->value[j].record.instructionOffset;
             calls.append(copy);
         }
     }
@@ -389,6 +389,22 @@
         state.finalizer->handleExceptionsLinkBuffer = WTF::move(linkBuffer);
     }
 
+    RELEASE_ASSERT(state.jitCode->osrExit.size() == 0);
+    for (unsigned i = 0; i < state.jitCode->osrExitDescriptors.size(); i++) {
+        OSRExitDescriptor& exitDescriptor = state.jitCode->osrExitDescriptors[i];
+        auto iter = recordMap.find(exitDescriptor.m_stackmapID);
+        if (iter == recordMap.end()) {
+            // It was optimized out.
+            continue;
+        }
+
+        for (unsigned j = 0; j < iter->value.size(); j++) {
+            uint32_t stackmapRecordIndex = iter->value[j].index;
+            OSRExit exit(exitDescriptor, stackmapRecordIndex);
+            state.jitCode->osrExit.append(exit);
+            state.finalizer->osrExit.append(OSRExitCompilationInfo());
+        }
+    }
     ExitThunkGenerator exitThunkGenerator(state);
     exitThunkGenerator.emitThunks();
     if (exitThunkGenerator.didThings()) {
@@ -408,28 +424,22 @@
             OSRExit& exit = jitCode->osrExit[i];
             
             if (verboseCompilationEnabled())
-                dataLog("Handling OSR stackmap #", exit.m_stackmapID, " for ", exit.m_codeOrigin, "\n");
+                dataLog("Handling OSR stackmap #", exit.m_descriptor.m_stackmapID, " for ", exit.m_codeOrigin, "\n");
 
-            auto iter = recordMap.find(exit.m_stackmapID);
-            if (iter == recordMap.end()) {
-                // It was optimized out.
-                continue;
-            }
-            
             info.m_thunkAddress = linkBuffer->locationOf(info.m_thunkLabel);
             exit.m_patchableCodeOffset = linkBuffer->offsetOf(info.m_thunkJump);
             
-            for (unsigned j = exit.m_values.size(); j--;)
-                exit.m_values[j] = exit.m_values[j].withLocalsOffset(localsOffset);
-            for (ExitTimeObjectMaterialization* materialization : exit.m_materializations)
+            for (unsigned j = exit.m_descriptor.m_values.size(); j--;)
+                exit.m_descriptor.m_values[j] = exit.m_descriptor.m_values[j].withLocalsOffset(localsOffset);
+            for (ExitTimeObjectMaterialization* materialization : exit.m_descriptor.m_materializations)
                 materialization->accountForLocalsOffset(localsOffset);
             
             if (verboseCompilationEnabled()) {
                 DumpContext context;
-                dataLog("    Exit values: ", inContext(exit.m_values, &context), "\n");
-                if (!exit.m_materializations.isEmpty()) {
+                dataLog("    Exit values: ", inContext(exit.m_descriptor.m_values, &context), "\n");
+                if (!exit.m_descriptor.m_materializations.isEmpty()) {
                     dataLog("    Materializations: \n");
-                    for (ExitTimeObjectMaterialization* materialization : exit.m_materializations)
+                    for (ExitTimeObjectMaterialization* materialization : exit.m_descriptor.m_materializations)
                         dataLog("        Materialize(", pointerDump(materialization), ")\n");
                 }
             }
@@ -460,7 +470,7 @@
             
             CodeOrigin codeOrigin = getById.codeOrigin();
             for (unsigned i = 0; i < iter->value.size(); ++i) {
-                StackMaps::Record& record = iter->value[i];
+                StackMaps::Record& record = iter->value[i].record;
             
                 RegisterSet usedRegisters = usedRegistersFor(record);
                 
@@ -499,7 +509,7 @@
             
             CodeOrigin codeOrigin = putById.codeOrigin();
             for (unsigned i = 0; i < iter->value.size(); ++i) {
-                StackMaps::Record& record = iter->value[i];
+                StackMaps::Record& record = iter->value[i].record;
                 
                 RegisterSet usedRegisters = usedRegistersFor(record);
                 
@@ -539,7 +549,7 @@
             
             CodeOrigin codeOrigin = checkIn.codeOrigin();
             for (unsigned i = 0; i < iter->value.size(); ++i) {
-                StackMaps::Record& record = iter->value[i];
+                StackMaps::Record& record = iter->value[i].record;
                 RegisterSet usedRegisters = usedRegistersFor(record);
                 GPRReg result = record.locations[0].directGPR();
                 GPRReg obj = record.locations[1].directGPR();
@@ -576,7 +586,7 @@
             }
             CodeOrigin codeOrigin = descriptor.codeOrigin();
             for (unsigned i = 0; i < iter->value.size(); ++i) {
-                StackMaps::Record& record = iter->value[i];
+                StackMaps::Record& record = iter->value[i].record;
                 RegisterSet usedRegisters = usedRegistersFor(record);
                 Vector<Location> locations;
                 for (auto location : record.locations)
@@ -693,7 +703,7 @@
     // path, for some kinds of functions.
     if (iter != recordMap.end()) {
         for (unsigned i = iter->value.size(); i--;) {
-            StackMaps::Record& record = iter->value[i];
+            StackMaps::Record& record = iter->value[i].record;
             
             CodeLocationLabel source = CodeLocationLabel(
                 bitwise_cast<char*>(generatedFunction) + record.instructionOffset);
@@ -709,7 +719,7 @@
     // path, for some kinds of functions.
     if (iter != recordMap.end()) {
         for (unsigned i = iter->value.size(); i--;) {
-            StackMaps::Record& record = iter->value[i];
+            StackMaps::Record& record = iter->value[i].record;
             
             CodeLocationLabel source = CodeLocationLabel(
                 bitwise_cast<char*>(generatedFunction) + record.instructionOffset);
@@ -721,26 +731,21 @@
     for (unsigned exitIndex = 0; exitIndex < jitCode->osrExit.size(); ++exitIndex) {
         OSRExitCompilationInfo& info = state.finalizer->osrExit[exitIndex];
         OSRExit& exit = jitCode->osrExit[exitIndex];
-        iter = recordMap.find(exit.m_stackmapID);
         
         Vector<const void*> codeAddresses;
         
-        if (iter != recordMap.end()) {
-            for (unsigned i = iter->value.size(); i--;) {
-                StackMaps::Record& record = iter->value[i];
-                
-                CodeLocationLabel source = CodeLocationLabel(
-                    bitwise_cast<char*>(generatedFunction) + record.instructionOffset);
-                
-                codeAddresses.append(bitwise_cast<char*>(generatedFunction) + record.instructionOffset + MacroAssembler::maxJumpReplacementSize());
-                
-                if (info.m_isInvalidationPoint)
-                    jitCode->common.jumpReplacements.append(JumpReplacement(source, info.m_thunkAddress));
-                else
-                    MacroAssembler::replaceWithJump(source, info.m_thunkAddress);
-            }
-        }
+        StackMaps::Record& record = jitCode->stackmaps.records[exit.m_stackmapRecordIndex];
         
+        CodeLocationLabel source = CodeLocationLabel(
+            bitwise_cast<char*>(generatedFunction) + record.instructionOffset);
+        
+        codeAddresses.append(bitwise_cast<char*>(generatedFunction) + record.instructionOffset + MacroAssembler::maxJumpReplacementSize());
+        
+        if (exit.m_descriptor.m_isInvalidationPoint)
+            jitCode->common.jumpReplacements.append(JumpReplacement(source, info.m_thunkAddress));
+        else
+            MacroAssembler::replaceWithJump(source, info.m_thunkAddress);
+        
         if (graph.compilation())
             graph.compilation()->addOSRExitSite(codeAddresses);
     }

Modified: trunk/Source/JavaScriptCore/ftl/FTLJITCode.cpp (191312 => 191313)


--- trunk/Source/JavaScriptCore/ftl/FTLJITCode.cpp	2015-10-19 20:18:43 UTC (rev 191312)
+++ trunk/Source/JavaScriptCore/ftl/FTLJITCode.cpp	2015-10-19 20:29:45 UTC (rev 191313)
@@ -141,7 +141,7 @@
     common.validateReferences(trackedReferences);
     
     for (OSRExit& exit : osrExit)
-        exit.validateReferences(trackedReferences);
+        exit.m_descriptor.validateReferences(trackedReferences);
 }
 
 RegisterSet JITCode::liveRegistersToPreserveAtExceptionHandlingCallSite(CodeBlock*, CallSiteIndex)

Modified: trunk/Source/JavaScriptCore/ftl/FTLJITCode.h (191312 => 191313)


--- trunk/Source/JavaScriptCore/ftl/FTLJITCode.h	2015-10-19 20:18:43 UTC (rev 191312)
+++ trunk/Source/JavaScriptCore/ftl/FTLJITCode.h	2015-10-19 20:29:45 UTC (rev 191313)
@@ -86,6 +86,7 @@
     
     DFG::CommonData common;
     SegmentedVector<OSRExit, 8> osrExit;
+    SegmentedVector<OSRExitDescriptor, 8> osrExitDescriptors;
     StackMaps stackmaps;
     Vector<std::unique_ptr<LazySlowPath>> lazySlowPaths;
     

Modified: trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp (191312 => 191313)


--- trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp	2015-10-19 20:18:43 UTC (rev 191312)
+++ trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp	2015-10-19 20:29:45 UTC (rev 191313)
@@ -79,17 +79,8 @@
     }
     
     if (exitThunksLinkBuffer) {
-        StackMaps::RecordMap recordMap = jitCode->stackmaps.computeRecordMap();
-        
         for (unsigned i = 0; i < osrExit.size(); ++i) {
             OSRExitCompilationInfo& info = osrExit[i];
-            OSRExit& exit = jitCode->osrExit[i];
-            StackMaps::RecordMap::iterator iter = recordMap.find(exit.m_stackmapID);
-            if (iter == recordMap.end()) {
-                // It's OK, it was optimized out.
-                continue;
-            }
-            
             exitThunksLinkBuffer->link(
                 info.m_thunkJump,
                 CodeLocationLabel(

Modified: trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp (191312 => 191313)


--- trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp	2015-10-19 20:18:43 UTC (rev 191312)
+++ trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp	2015-10-19 20:29:45 UTC (rev 191313)
@@ -4921,22 +4921,20 @@
 
         DFG_ASSERT(m_graph, m_node, m_origin.exitOK);
         
-        m_ftlState.jitCode->osrExit.append(OSRExit(
+        m_ftlState.jitCode->osrExitDescriptors.append(OSRExitDescriptor(
             UncountableInvalidation, DataFormatNone, MethodOfGettingAValueProfile(),
             m_origin.forExit, m_origin.semantic,
             availabilityMap().m_locals.numberOfArguments(),
             availabilityMap().m_locals.numberOfLocals()));
-        m_ftlState.finalizer->osrExit.append(OSRExitCompilationInfo());
         
-        OSRExit& exit = m_ftlState.jitCode->osrExit.last();
-        OSRExitCompilationInfo& info = m_ftlState.finalizer->osrExit.last();
+        OSRExitDescriptor& exitDescriptor = m_ftlState.jitCode->osrExitDescriptors.last();
         
         ExitArgumentList arguments;
         
-        buildExitArguments(exit, arguments, FormattedValue(), exit.m_codeOrigin);
-        callStackmap(exit, arguments);
+        buildExitArguments(exitDescriptor, arguments, FormattedValue(), exitDescriptor.m_codeOrigin);
+        callStackmap(exitDescriptor, arguments);
         
-        info.m_isInvalidationPoint = true;
+        exitDescriptor.m_isInvalidationPoint = true;
     }
     
     void compileIsUndefined()
@@ -8703,7 +8701,7 @@
         ExitKind kind, FormattedValue lowValue, Node* highValue, LValue failCondition)
     {
         if (verboseCompilationEnabled()) {
-            dataLog("    OSR exit #", m_ftlState.jitCode->osrExit.size(), " with availability: ", availabilityMap(), "\n");
+            dataLog("    OSR exit #", m_ftlState.jitCode->osrExitDescriptors.size(), " with availability: ", availabilityMap(), "\n");
             if (!m_availableRecoveries.isEmpty())
                 dataLog("        Available recoveries: ", listDump(m_availableRecoveries), "\n");
         }
@@ -8732,19 +8730,16 @@
         if (failCondition == m_out.booleanFalse)
             return;
 
-        ASSERT(m_ftlState.jitCode->osrExit.size() == m_ftlState.finalizer->osrExit.size());
-        
-        m_ftlState.jitCode->osrExit.append(OSRExit(
+        m_ftlState.jitCode->osrExitDescriptors.append(OSRExitDescriptor(
             kind, lowValue.format(), m_graph.methodOfGettingAValueProfileFor(highValue),
             m_origin.forExit, m_origin.semantic,
             availabilityMap().m_locals.numberOfArguments(),
             availabilityMap().m_locals.numberOfLocals()));
-        m_ftlState.finalizer->osrExit.append(OSRExitCompilationInfo());
 
-        OSRExit& exit = m_ftlState.jitCode->osrExit.last();
+        OSRExitDescriptor& exitDescriptor = m_ftlState.jitCode->osrExitDescriptors.last();
 
         if (failCondition == m_out.booleanTrue) {
-            emitOSRExitCall(exit, lowValue);
+            emitOSRExitCall(exitDescriptor, lowValue);
             return;
         }
 
@@ -8758,26 +8753,26 @@
         
         lastNext = m_out.appendTo(failCase, continuation);
         
-        emitOSRExitCall(exit, lowValue);
+        emitOSRExitCall(exitDescriptor, lowValue);
         
         m_out.unreachable();
         
         m_out.appendTo(continuation, lastNext);
     }
     
-    void emitOSRExitCall(OSRExit& exit, FormattedValue lowValue)
+    void emitOSRExitCall(OSRExitDescriptor& exitDescriptor, FormattedValue lowValue)
     {
         ExitArgumentList arguments;
         
-        CodeOrigin codeOrigin = exit.m_codeOrigin;
+        CodeOrigin codeOrigin = exitDescriptor.m_codeOrigin;
         
-        buildExitArguments(exit, arguments, lowValue, codeOrigin);
+        buildExitArguments(exitDescriptor, arguments, lowValue, codeOrigin);
         
-        callStackmap(exit, arguments);
+        callStackmap(exitDescriptor, arguments);
     }
     
     void buildExitArguments(
-        OSRExit& exit, ExitArgumentList& arguments, FormattedValue lowValue,
+        OSRExitDescriptor& exitDescriptor, ExitArgumentList& arguments, FormattedValue lowValue,
         CodeOrigin codeOrigin)
     {
         if (!!lowValue)
@@ -8799,12 +8794,12 @@
                 auto result = map.add(node, nullptr);
                 if (result.isNewEntry) {
                     result.iterator->value =
-                        exit.m_materializations.add(node->op(), node->origin.semantic);
+                        exitDescriptor.m_materializations.add(node->op(), node->origin.semantic);
                 }
             });
         
-        for (unsigned i = 0; i < exit.m_values.size(); ++i) {
-            int operand = exit.m_values.operandForIndex(i);
+        for (unsigned i = 0; i < exitDescriptor.m_values.size(); ++i) {
+            int operand = exitDescriptor.m_values.operandForIndex(i);
             
             Availability availability = availabilityMap.m_locals[i];
             
@@ -8814,7 +8809,7 @@
                     (!(availability.isDead() && m_graph.isLiveInBytecode(VirtualRegister(operand), codeOrigin))) || m_graph.m_plan.mode == FTLForOSREntryMode);
             }
             
-            exit.m_values[i] = exitValueForAvailability(arguments, map, availability);
+            exitDescriptor.m_values[i] = exitValueForAvailability(arguments, map, availability);
         }
         
         for (auto heapPair : availabilityMap.m_heap) {
@@ -8826,20 +8821,20 @@
         }
         
         if (verboseCompilationEnabled()) {
-            dataLog("        Exit values: ", exit.m_values, "\n");
-            if (!exit.m_materializations.isEmpty()) {
+            dataLog("        Exit values: ", exitDescriptor.m_values, "\n");
+            if (!exitDescriptor.m_materializations.isEmpty()) {
                 dataLog("        Materializations: \n");
-                for (ExitTimeObjectMaterialization* materialization : exit.m_materializations)
+                for (ExitTimeObjectMaterialization* materialization : exitDescriptor.m_materializations)
                     dataLog("            ", pointerDump(materialization), "\n");
             }
         }
     }
     
-    void callStackmap(OSRExit& exit, ExitArgumentList& arguments)
+    void callStackmap(OSRExitDescriptor& exitDescriptor, ExitArgumentList& arguments)
     {
-        exit.m_stackmapID = m_stackmapIDs++;
+        exitDescriptor.m_stackmapID = m_stackmapIDs++;
         arguments.insert(0, m_out.constInt32(MacroAssembler::maxJumpReplacementSize()));
-        arguments.insert(0, m_out.constInt64(exit.m_stackmapID));
+        arguments.insert(0, m_out.constInt64(exitDescriptor.m_stackmapID));
         
         m_out.call(m_out.stackmapIntrinsic(), arguments);
     }

Modified: trunk/Source/JavaScriptCore/ftl/FTLOSRExit.cpp (191312 => 191313)


--- trunk/Source/JavaScriptCore/ftl/FTLOSRExit.cpp	2015-10-19 20:18:43 UTC (rev 191312)
+++ trunk/Source/JavaScriptCore/ftl/FTLOSRExit.cpp	2015-10-19 20:29:45 UTC (rev 191313)
@@ -40,19 +40,38 @@
 
 using namespace DFG;
 
-OSRExit::OSRExit(
+OSRExitDescriptor::OSRExitDescriptor(
     ExitKind exitKind, DataFormat profileDataFormat,
     MethodOfGettingAValueProfile valueProfile, CodeOrigin codeOrigin,
     CodeOrigin originForProfile, unsigned numberOfArguments,
     unsigned numberOfLocals)
-    : OSRExitBase(exitKind, codeOrigin, originForProfile)
+    : m_kind(exitKind)
+    , m_codeOrigin(codeOrigin)
+    , m_codeOriginForExitProfile(originForProfile)
     , m_profileDataFormat(profileDataFormat)
     , m_valueProfile(valueProfile)
-    , m_patchableCodeOffset(0)
     , m_values(numberOfArguments, numberOfLocals)
+    , m_isInvalidationPoint(false)
 {
 }
 
+void OSRExitDescriptor::validateReferences(const TrackedReferences& trackedReferences)
+{
+    for (unsigned i = m_values.size(); i--;)
+        m_values[i].validateReferences(trackedReferences);
+    
+    for (ExitTimeObjectMaterialization* materialization : m_materializations)
+        materialization->validateReferences(trackedReferences);
+}
+
+
+OSRExit::OSRExit(OSRExitDescriptor& descriptor, uint32_t stackmapRecordIndex)
+    : OSRExitBase(descriptor.m_kind, descriptor.m_codeOrigin, descriptor.m_codeOriginForExitProfile)
+    , m_descriptor(descriptor)
+    , m_stackmapRecordIndex(stackmapRecordIndex)
+{
+}
+
 CodeLocationJump OSRExit::codeLocationForRepatch(CodeBlock* ftlCodeBlock) const
 {
     return CodeLocationJump(
@@ -61,15 +80,6 @@
         m_patchableCodeOffset);
 }
 
-void OSRExit::validateReferences(const TrackedReferences& trackedReferences)
-{
-    for (unsigned i = m_values.size(); i--;)
-        m_values[i].validateReferences(trackedReferences);
-    
-    for (ExitTimeObjectMaterialization* materialization : m_materializations)
-        materialization->validateReferences(trackedReferences);
-}
-
 } } // namespace JSC::FTL
 
 #endif // ENABLE(FTL_JIT)

Modified: trunk/Source/JavaScriptCore/ftl/FTLOSRExit.h (191312 => 191313)


--- trunk/Source/JavaScriptCore/ftl/FTLOSRExit.h	2015-10-19 20:18:43 UTC (rev 191312)
+++ trunk/Source/JavaScriptCore/ftl/FTLOSRExit.h	2015-10-19 20:29:45 UTC (rev 191313)
@@ -133,14 +133,16 @@
 //   intrinsics (or meta-data, or something) to inform the backend that it's safe to
 //   make the predicate passed to 'exitIf()' more truthy.
 
-struct OSRExit : public DFG::OSRExitBase {
-    OSRExit(
+struct OSRExitDescriptor {
+    OSRExitDescriptor(
         ExitKind, DataFormat profileDataFormat, MethodOfGettingAValueProfile,
         CodeOrigin, CodeOrigin originForProfile,
         unsigned numberOfArguments, unsigned numberOfLocals);
+
+    ExitKind m_kind;
+    CodeOrigin m_codeOrigin;
+    CodeOrigin m_codeOriginForExitProfile;
     
-    MacroAssemblerCodeRef m_code;
-    
     // The first argument to the exit call may be a value we wish to profile.
     // If that's the case, the format will be not Invalid and we'll have a
     // method of getting a value profile. Note that all of the ExitArgument's
@@ -149,22 +151,30 @@
     DataFormat m_profileDataFormat;
     MethodOfGettingAValueProfile m_valueProfile;
     
-    // Offset within the exit stubs of the stub for this exit.
-    unsigned m_patchableCodeOffset;
-    
     Operands<ExitValue> m_values;
     Bag<ExitTimeObjectMaterialization> m_materializations;
     
     uint32_t m_stackmapID;
+    bool m_isInvalidationPoint;
     
+    void validateReferences(const TrackedReferences&);
+};
+
+struct OSRExit : public DFG::OSRExitBase {
+    OSRExit(OSRExitDescriptor&, uint32_t stackmapRecordIndex);
+
+    OSRExitDescriptor& m_descriptor;
+    MacroAssemblerCodeRef m_code;
+    // Offset within the exit stubs of the stub for this exit.
+    unsigned m_patchableCodeOffset;
+    // Offset within Stackmap::records
+    uint32_t m_stackmapRecordIndex;
+
     CodeLocationJump codeLocationForRepatch(CodeBlock* ftlCodeBlock) const;
-    
     void considerAddingAsFrequentExitSite(CodeBlock* profiledCodeBlock)
     {
         OSRExitBase::considerAddingAsFrequentExitSite(profiledCodeBlock, ExitFromFTL);
     }
-    
-    void validateReferences(const TrackedReferences&);
 };
 
 } } // namespace JSC::FTL

Modified: trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompilationInfo.h (191312 => 191313)


--- trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompilationInfo.h	2015-10-19 20:18:43 UTC (rev 191312)
+++ trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompilationInfo.h	2015-10-19 20:29:45 UTC (rev 191313)
@@ -35,14 +35,12 @@
 
 struct OSRExitCompilationInfo {
     OSRExitCompilationInfo()
-        : m_isInvalidationPoint(false)
     {
     }
     
     MacroAssembler::Label m_thunkLabel;
     MacroAssembler::PatchableJump m_thunkJump;
     CodeLocationLabel m_thunkAddress;
-    bool m_isInvalidationPoint;
 };
 
 } } // namespace JSC::FTL

Modified: trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp (191312 => 191313)


--- trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp	2015-10-19 20:18:43 UTC (rev 191312)
+++ trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp	2015-10-19 20:29:45 UTC (rev 191313)
@@ -176,16 +176,9 @@
 static void compileStub(
     unsigned exitID, JITCode* jitCode, OSRExit& exit, VM* vm, CodeBlock* codeBlock)
 {
-    StackMaps::Record* record = nullptr;
+    StackMaps::Record* record = &jitCode->stackmaps.records[exit.m_stackmapRecordIndex];
+    RELEASE_ASSERT(record->patchpointID == exit.m_descriptor.m_stackmapID);
     
-    for (unsigned i = jitCode->stackmaps.records.size(); i--;) {
-        record = &jitCode->stackmaps.records[i];
-        if (record->patchpointID == exit.m_stackmapID)
-            break;
-    }
-    
-    RELEASE_ASSERT(record->patchpointID == exit.m_stackmapID);
-    
     // This code requires framePointerRegister is the same as callFrameRegister
     static_assert(MacroAssembler::framePointerRegister == GPRInfo::callFrameRegister, "MacroAssembler::framePointerRegister and GPRInfo::callFrameRegister must be the same");
 
@@ -198,7 +191,7 @@
     // Figure out how much space we need for those object allocations.
     unsigned numMaterializations = 0;
     size_t maxMaterializationNumArguments = 0;
-    for (ExitTimeObjectMaterialization* materialization : exit.m_materializations) {
+    for (ExitTimeObjectMaterialization* materialization : exit.m_descriptor.m_materializations) {
         numMaterializations++;
         
         maxMaterializationNumArguments = std::max(
@@ -208,18 +201,18 @@
     
     ScratchBuffer* scratchBuffer = vm->scratchBufferForSize(
         sizeof(EncodedJSValue) * (
-            exit.m_values.size() + numMaterializations + maxMaterializationNumArguments) +
+            exit.m_descriptor.m_values.size() + numMaterializations + maxMaterializationNumArguments) +
         requiredScratchMemorySizeInBytes() +
         codeBlock->calleeSaveRegisters()->size() * sizeof(uint64_t));
     EncodedJSValue* scratch = scratchBuffer ? static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) : 0;
-    EncodedJSValue* materializationPointers = scratch + exit.m_values.size();
+    EncodedJSValue* materializationPointers = scratch + exit.m_descriptor.m_values.size();
     EncodedJSValue* materializationArguments = materializationPointers + numMaterializations;
     char* registerScratch = bitwise_cast<char*>(materializationArguments + maxMaterializationNumArguments);
     uint64_t* unwindScratch = bitwise_cast<uint64_t*>(registerScratch + requiredScratchMemorySizeInBytes());
     
     HashMap<ExitTimeObjectMaterialization*, EncodedJSValue*> materializationToPointer;
     unsigned materializationCount = 0;
-    for (ExitTimeObjectMaterialization* materialization : exit.m_materializations) {
+    for (ExitTimeObjectMaterialization* materialization : exit.m_descriptor.m_materializations) {
         materializationToPointer.add(
             materialization, materializationPointers + materializationCount++);
     }
@@ -253,10 +246,10 @@
     jit.move(MacroAssembler::TrustedImm64(TagMask), GPRInfo::tagMaskRegister);
     
     // Do some value profiling.
-    if (exit.m_profileDataFormat != DataFormatNone) {
+    if (exit.m_descriptor.m_profileDataFormat != DataFormatNone) {
         record->locations[0].restoreInto(jit, jitCode->stackmaps, registerScratch, GPRInfo::regT0);
         reboxAccordingToFormat(
-            exit.m_profileDataFormat, jit, GPRInfo::regT0, GPRInfo::regT1, GPRInfo::regT2);
+            exit.m_descriptor.m_profileDataFormat, jit, GPRInfo::regT0, GPRInfo::regT1, GPRInfo::regT2);
         
         if (exit.m_kind == BadCache || exit.m_kind == BadIndexingType) {
             CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
@@ -270,8 +263,8 @@
             }
         }
 
-        if (!!exit.m_valueProfile)
-            jit.store64(GPRInfo::regT0, exit.m_valueProfile.getSpecFailBucket(0));
+        if (!!exit.m_descriptor.m_valueProfile)
+            jit.store64(GPRInfo::regT0, exit.m_descriptor.m_valueProfile.getSpecFailBucket(0));
     }
 
     // Materialize all objects. Don't materialize an object until all
@@ -281,7 +274,7 @@
     // allocation of the former.
 
     HashSet<ExitTimeObjectMaterialization*> toMaterialize;
-    for (ExitTimeObjectMaterialization* materialization : exit.m_materializations)
+    for (ExitTimeObjectMaterialization* materialization : exit.m_descriptor.m_materializations)
         toMaterialize.add(materialization);
 
     while (!toMaterialize.isEmpty()) {
@@ -344,7 +337,7 @@
     // Now that all the objects have been allocated, we populate them
     // with the correct values. This time we can recover all the
     // fields, including those that are only needed for the allocation.
-    for (ExitTimeObjectMaterialization* materialization : exit.m_materializations) {
+    for (ExitTimeObjectMaterialization* materialization : exit.m_descriptor.m_materializations) {
         for (unsigned propertyIndex = materialization->properties().size(); propertyIndex--;) {
             const ExitValue& value = materialization->properties()[propertyIndex].value();
             compileRecovery(
@@ -365,16 +358,16 @@
     // Save all state from wherever the exit data tells us it was, into the appropriate place in
     // the scratch buffer. This also does the reboxing.
     
-    for (unsigned index = exit.m_values.size(); index--;) {
+    for (unsigned index = exit.m_descriptor.m_values.size(); index--;) {
         compileRecovery(
-            jit, exit.m_values[index], record, jitCode->stackmaps, registerScratch,
+            jit, exit.m_descriptor.m_values[index], record, jitCode->stackmaps, registerScratch,
             materializationToPointer);
         jit.store64(GPRInfo::regT0, scratch + index);
     }
     
     // Henceforth we make it look like the exiting function was called through a register
     // preservation wrapper. This implies that FP must be nudged down by a certain amount. Then
-    // we restore the various things according to either exit.m_values or by copying from the
+    // we restore the various things according to either exit.m_descriptor.m_values or by copying from the
     // old frame, and finally we save the various callee-save registers into where the
     // restoration thunk would restore them from.
     
@@ -422,7 +415,7 @@
 
     // First set up SP so that our data doesn't get clobbered by signals.
     unsigned conservativeStackDelta =
-        (exit.m_values.numberOfLocals() + baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters()) * sizeof(Register) +
+        (exit.m_descriptor.m_values.numberOfLocals() + baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters()) * sizeof(Register) +
         maxFrameExtentForSlowPathCall;
     conservativeStackDelta = WTF::roundUpToMultipleOf(
         stackAlignmentBytes(), conservativeStackDelta);
@@ -476,8 +469,8 @@
 
     // Now get state out of the scratch buffer and place it back into the stack. The values are
     // already reboxed so we just move them.
-    for (unsigned index = exit.m_values.size(); index--;) {
-        VirtualRegister reg = exit.m_values.virtualRegisterForIndex(index);
+    for (unsigned index = exit.m_descriptor.m_values.size(); index--;) {
+        VirtualRegister reg = exit.m_descriptor.m_values.virtualRegisterForIndex(index);
 
         if (reg.isLocal() && reg.toLocal() < static_cast<int>(baselineVirtualRegistersForCalleeSaves))
             continue;
@@ -497,7 +490,7 @@
         ("FTL OSR exit #%u (%s, %s) from %s, with operands = %s, and record = %s",
             exitID, toCString(exit.m_codeOrigin).data(),
             exitKindToString(exit.m_kind), toCString(*codeBlock).data(),
-            toCString(ignoringContext<DumpContext>(exit.m_values)).data(),
+            toCString(ignoringContext<DumpContext>(exit.m_descriptor.m_values)).data(),
             toCString(*record).data()));
 }
 
@@ -527,10 +520,10 @@
         dataLog("    Origin: ", exit.m_codeOrigin, "\n");
         if (exit.m_codeOriginForExitProfile != exit.m_codeOrigin)
             dataLog("    Origin for exit profile: ", exit.m_codeOriginForExitProfile, "\n");
-        dataLog("    Exit values: ", exit.m_values, "\n");
-        if (!exit.m_materializations.isEmpty()) {
+        dataLog("    Exit values: ", exit.m_descriptor.m_values, "\n");
+        if (!exit.m_descriptor.m_materializations.isEmpty()) {
             dataLog("    Materializations:\n");
-            for (ExitTimeObjectMaterialization* materialization : exit.m_materializations)
+            for (ExitTimeObjectMaterialization* materialization : exit.m_descriptor.m_materializations)
                 dataLog("        ", pointerDump(materialization), "\n");
         }
     }

Modified: trunk/Source/JavaScriptCore/ftl/FTLStackMaps.cpp (191312 => 191313)


--- trunk/Source/JavaScriptCore/ftl/FTLStackMaps.cpp	2015-10-19 20:18:43 UTC (rev 191312)
+++ trunk/Source/JavaScriptCore/ftl/FTLStackMaps.cpp	2015-10-19 20:29:45 UTC (rev 191313)
@@ -251,7 +251,7 @@
 {
     RecordMap result;
     for (unsigned i = records.size(); i--;)
-        result.add(records[i].patchpointID, Vector<Record>()).iterator->value.append(records[i]);
+        result.add(records[i].patchpointID, Vector<RecordAndIndex>()).iterator->value.append(RecordAndIndex{ records[i], i });
     return result;
 }
 

Modified: trunk/Source/JavaScriptCore/ftl/FTLStackMaps.h (191312 => 191313)


--- trunk/Source/JavaScriptCore/ftl/FTLStackMaps.h	2015-10-19 20:18:43 UTC (rev 191312)
+++ trunk/Source/JavaScriptCore/ftl/FTLStackMaps.h	2015-10-19 20:29:45 UTC (rev 191313)
@@ -122,7 +122,11 @@
     void dump(PrintStream&) const;
     void dumpMultiline(PrintStream&, const char* prefix) const;
     
-    typedef HashMap<uint32_t, Vector<Record>, WTF::IntHash<uint32_t>, WTF::UnsignedWithZeroKeyHashTraits<uint32_t>> RecordMap;
+    struct RecordAndIndex {
+        Record record;
+        uint32_t index;
+    };
+    typedef HashMap<uint32_t, Vector<RecordAndIndex>, WTF::IntHash<uint32_t>, WTF::UnsignedWithZeroKeyHashTraits<uint32_t>> RecordMap;
     
     RecordMap computeRecordMap() const;
 