Title: [291736] trunk
Revision: 291736
Author: ysuz...@apple.com
Date: 2022-03-22 23:45:52 -0700 (Tue, 22 Mar 2022)

Log Message

[JSC] Test DFG / FTL DataIC
https://bugs.webkit.org/show_bug.cgi?id=231224

Reviewed by Saam Barati.

JSTests:

* microbenchmarks/deltablue-varargs.js:
* microbenchmarks/richards-try-catch.js:

Source/_javascript_Core:

This patch revives DataIC in the DFG and FTL, and re-enables its testing so it can be used
for unlinked DFG. Currently, only x64 / ARM64 are supported.
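The DFG hunks below repeat one mechanical pattern: a scratch `GPRTemporary` is allocated only when DataIC is enabled, and the corresponding register otherwise stays `InvalidGPRReg` so non-DataIC paths are untouched. A minimal standalone sketch of that pattern, using hypothetical simplified stand-ins for the real JSC types (`GPRReg`, `GPRTemporary` here are placeholders, not the actual classes):

```cpp
#include <optional>

// Hypothetical simplified stand-ins for JSC's register-allocation types.
using GPRReg = int;
constexpr GPRReg InvalidGPRReg = -1;

struct GPRTemporary {
    explicit GPRTemporary(GPRReg reg) : m_reg(reg) {}
    GPRReg gpr() const { return m_reg; }
    GPRReg m_reg;
};

// Sketch of the conditional-allocation pattern threaded through the DFG
// hunks: emplace a scratch temporary only under DataIC; otherwise report
// InvalidGPRReg so downstream code takes the non-DataIC path unchanged.
GPRReg maybeAllocateScratch(bool useDataIC, std::optional<GPRTemporary>& scratch)
{
    GPRReg scratchGPR = InvalidGPRReg;
    if (useDataIC) {
        scratch.emplace(GPRReg { 7 }); // placeholder register number
        scratchGPR = scratch->gpr();
    }
    return scratchGPR;
}
```

The `std::optional` keeps the temporary's lifetime tied to the enclosing scope while making the allocation itself conditional, which is why the real hunks pair `std::optional<GPRTemporary> scratch;` with a guarded `scratch.emplace(this)`.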

* bytecode/InlineAccess.cpp:
(JSC::InlineAccess::isCacheableArrayLength):
(JSC::InlineAccess::isCacheableStringLength):
(JSC::InlineAccess::rewireStubAsJumpInAccess):
(JSC::InlineAccess::resetStubAsJumpInAccess):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::compileGetById):
(JSC::DFG::SpeculativeJIT::compileGetByIdFlush):
(JSC::DFG::SpeculativeJIT::compileInById):
* dfg/DFGSpeculativeJIT.h:
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetById):
(JSC::DFG::SpeculativeJIT::cachedGetByIdWithThis):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetById):
(JSC::DFG::SpeculativeJIT::cachedGetByIdWithThis):
(JSC::DFG::SpeculativeJIT::compile):
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::cachedPutById):
(JSC::FTL::DFG::LowerDFGToB3::compileCompareStrictEq):
* jit/JITCode.h:
(JSC::JITCode::useDataIC):
* jit/JITInlineCacheGenerator.cpp:
(JSC::JITByIdGenerator::generateFastCommon):
(JSC::generateGetByIdInlineAccess):
(JSC::JITGetByIdGenerator::generateFastPath):
(JSC::JITGetByIdWithThisGenerator::generateFastPath):
(JSC::generatePutByIdInlineAccess):
(JSC::JITPutByIdGenerator::generateBaselineDataICFastPath):
(JSC::JITPutByIdGenerator::generateFastPath):
(JSC::JITDelByValGenerator::generateFastPath):
(JSC::JITDelByIdGenerator::generateFastPath):
(JSC::JITInByValGenerator::generateFastPath):
(JSC::generateInByIdInlineAccess):
(JSC::JITInByIdGenerator::generateFastPath):
(JSC::JITInByIdGenerator::generateBaselineDataICFastPath):
(JSC::JITInstanceOfGenerator::generateFastPath):
(JSC::JITGetByValGenerator::generateFastPath):
(JSC::JITPutByValGenerator::generateFastPath):
(JSC::JITPrivateBrandAccessGenerator::generateFastPath):
* jit/JITInlineCacheGenerator.h:
(JSC::JITInlineCacheGenerator::reportSlowPathCall):
(JSC::JITInlineCacheGenerator::slowPathBegin const):
(JSC::JITByIdGenerator::slowPathJump const):
(JSC::JITInByValGenerator::slowPathJump const):
* runtime/Options.cpp:
(JSC::Options::recomputeDependentOptions):
* runtime/OptionsList.h:

Tools:

* Scripts/run-jsc-stress-tests:

Diff

Modified: trunk/JSTests/ChangeLog (291735 => 291736)


--- trunk/JSTests/ChangeLog	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/JSTests/ChangeLog	2022-03-23 06:45:52 UTC (rev 291736)
@@ -1,3 +1,13 @@
+2022-03-22  Yusuke Suzuki  <ysuz...@apple.com>
+
+        [JSC] Test DFG / FTL DataIC
+        https://bugs.webkit.org/show_bug.cgi?id=231224
+
+        Reviewed by Saam Barati.
+
+        * microbenchmarks/deltablue-varargs.js:
+        * microbenchmarks/richards-try-catch.js:
+
 2022-03-21  Yusuke Suzuki  <ysuz...@apple.com>
 
         [JSC] Change Date.parse to stop returning numbers with fractional part

Modified: trunk/JSTests/microbenchmarks/deltablue-varargs.js (291735 => 291736)


--- trunk/JSTests/microbenchmarks/deltablue-varargs.js	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/JSTests/microbenchmarks/deltablue-varargs.js	2022-03-23 06:45:52 UTC (rev 291736)
@@ -1,5 +1,5 @@
 //@ skip if $model == "Apple Watch Series 3" # added by mark-jsc-stress-test.py
-//@ requireOptions("--useDataIC=true", "--useDataICSharing=true")
+//@ requireOptions("--useDataICInOptimizingJIT=true", "--useDataICSharing=true")
 
 // Copyright 2008 the V8 project authors. All rights reserved.
 // Copyright 1996 John Maloney and Mario Wolczko.

Modified: trunk/JSTests/microbenchmarks/richards-try-catch.js (291735 => 291736)


--- trunk/JSTests/microbenchmarks/richards-try-catch.js	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/JSTests/microbenchmarks/richards-try-catch.js	2022-03-23 06:45:52 UTC (rev 291736)
@@ -1,5 +1,5 @@
 //@ skip if $model == "Apple Watch Series 3" # added by mark-jsc-stress-test.py
-//@ requireOptions("--useDataIC=true", "--useDataICSharing=true")
+//@ requireOptions("--useDataICInOptimizingJIT=true", "--useDataICSharing=true")
 
 // Copyright 2006-2008 the V8 project authors. All rights reserved.
 // Redistribution and use in source and binary forms, with or without

Modified: trunk/Source/_javascript_Core/ChangeLog (291735 => 291736)


--- trunk/Source/_javascript_Core/ChangeLog	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Source/_javascript_Core/ChangeLog	2022-03-23 06:45:52 UTC (rev 291736)
@@ -1,3 +1,63 @@
+2022-03-22  Yusuke Suzuki  <ysuz...@apple.com>
+
+        [JSC] Test DFG / FTL DataIC
+        https://bugs.webkit.org/show_bug.cgi?id=231224
+
+        Reviewed by Saam Barati.
+
+        This patch revives DataIC in the DFG and FTL, and re-enables its testing so it can be used
+        for unlinked DFG. Currently, only x64 / ARM64 are supported.
+
+        * bytecode/InlineAccess.cpp:
+        (JSC::InlineAccess::isCacheableArrayLength):
+        (JSC::InlineAccess::isCacheableStringLength):
+        (JSC::InlineAccess::rewireStubAsJumpInAccess):
+        (JSC::InlineAccess::resetStubAsJumpInAccess):
+        * dfg/DFGSpeculativeJIT.cpp:
+        (JSC::DFG::SpeculativeJIT::compileGetById):
+        (JSC::DFG::SpeculativeJIT::compileGetByIdFlush):
+        (JSC::DFG::SpeculativeJIT::compileInById):
+        * dfg/DFGSpeculativeJIT.h:
+        * dfg/DFGSpeculativeJIT32_64.cpp:
+        (JSC::DFG::SpeculativeJIT::cachedGetById):
+        (JSC::DFG::SpeculativeJIT::cachedGetByIdWithThis):
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGSpeculativeJIT64.cpp:
+        (JSC::DFG::SpeculativeJIT::cachedGetById):
+        (JSC::DFG::SpeculativeJIT::cachedGetByIdWithThis):
+        (JSC::DFG::SpeculativeJIT::compile):
+        * ftl/FTLLowerDFGToB3.cpp:
+        (JSC::FTL::DFG::LowerDFGToB3::cachedPutById):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCompareStrictEq):
+        * jit/JITCode.h:
+        (JSC::JITCode::useDataIC):
+        * jit/JITInlineCacheGenerator.cpp:
+        (JSC::JITByIdGenerator::generateFastCommon):
+        (JSC::generateGetByIdInlineAccess):
+        (JSC::JITGetByIdGenerator::generateFastPath):
+        (JSC::JITGetByIdWithThisGenerator::generateFastPath):
+        (JSC::generatePutByIdInlineAccess):
+        (JSC::JITPutByIdGenerator::generateBaselineDataICFastPath):
+        (JSC::JITPutByIdGenerator::generateFastPath):
+        (JSC::JITDelByValGenerator::generateFastPath):
+        (JSC::JITDelByIdGenerator::generateFastPath):
+        (JSC::JITInByValGenerator::generateFastPath):
+        (JSC::generateInByIdInlineAccess):
+        (JSC::JITInByIdGenerator::generateFastPath):
+        (JSC::JITInByIdGenerator::generateBaselineDataICFastPath):
+        (JSC::JITInstanceOfGenerator::generateFastPath):
+        (JSC::JITGetByValGenerator::generateFastPath):
+        (JSC::JITPutByValGenerator::generateFastPath):
+        (JSC::JITPrivateBrandAccessGenerator::generateFastPath):
+        * jit/JITInlineCacheGenerator.h:
+        (JSC::JITInlineCacheGenerator::reportSlowPathCall):
+        (JSC::JITInlineCacheGenerator::slowPathBegin const):
+        (JSC::JITByIdGenerator::slowPathJump const):
+        (JSC::JITInByValGenerator::slowPathJump const):
+        * runtime/Options.cpp:
+        (JSC::Options::recomputeDependentOptions):
+        * runtime/OptionsList.h:
+
 2022-03-22  Chris Dumez  <cdu...@apple.com>
 
         Use ASCIILiteral in a few more places where it is useful

Modified: trunk/Source/_javascript_Core/bytecode/InlineAccess.cpp (291735 => 291736)


--- trunk/Source/_javascript_Core/bytecode/InlineAccess.cpp	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Source/_javascript_Core/bytecode/InlineAccess.cpp	2022-03-23 06:45:52 UTC (rev 291736)
@@ -299,7 +299,7 @@
     if (!stubInfo.hasConstantIdentifier)
         return false;
 
-    if (codeBlock->jitType() == JITType::BaselineJIT)
+    if (codeBlock->useDataIC())
         return false;
 
     if (!hasFreeRegister(stubInfo))
@@ -340,7 +340,7 @@
     if (!stubInfo.hasConstantIdentifier)
         return false;
 
-    if (codeBlock->jitType() == JITType::BaselineJIT)
+    if (codeBlock->useDataIC())
         return false;
 
     return hasFreeRegister(stubInfo);
@@ -428,24 +428,6 @@
 void InlineAccess::rewireStubAsJumpInAccess(CodeBlock* codeBlock, StructureStubInfo& stubInfo, CodeLocationLabel<JITStubRoutinePtrTag> target)
 {
     if (codeBlock->useDataIC()) {
-        // If it is not GetById-like-thing, we do not emit nop sled (e.g. GetByVal).
-        // The code is already an indirect jump, and only thing we should do is replacing m_codePtr.
-        if (codeBlock->jitType() != JITType::BaselineJIT && stubInfo.hasConstantIdentifier) {
-            // If m_codePtr is pointing to stubInfo.slowPathStartLocation, this means that InlineAccess code is not a stub one.
-            // We rewrite this with the stub-based dispatching code once, and continue using it until we reset the code.
-            if (stubInfo.m_codePtr.executableAddress() == stubInfo.slowPathStartLocation.executableAddress()) {
-                CCallHelpers::emitJITCodeOver(stubInfo.start.retagged<JSInternalPtrTag>(), scopedLambda<void(CCallHelpers&)>([&](CCallHelpers& jit) {
-                    jit.move(CCallHelpers::TrustedImmPtr(&stubInfo), stubInfo.m_stubInfoGPR);
-                    jit.farJump(CCallHelpers::Address(stubInfo.m_stubInfoGPR, StructureStubInfo::offsetOfCodePtr()), JITStubRoutinePtrTag);
-                    auto jump = jit.jump();
-                    auto doneLocation = stubInfo.doneLocation;
-                    jit.addLinkTask([=](LinkBuffer& linkBuffer) {
-                        linkBuffer.link(jump, doneLocation);
-                    });
-                }), "InlineAccess: linking stub call");
-            }
-        }
-
         stubInfo.m_codePtr = target;
         stubInfo.m_inlineAccessBaseStructureID.clear(); // Clear out the inline access code.
         return;
@@ -462,7 +444,7 @@
 
 void InlineAccess::resetStubAsJumpInAccess(CodeBlock* codeBlock, StructureStubInfo& stubInfo)
 {
-    if (codeBlock->useDataIC() && codeBlock->jitType() == JITType::BaselineJIT) {
+    if (codeBlock->useDataIC()) {
         stubInfo.m_codePtr = stubInfo.slowPathStartLocation;
         stubInfo.m_inlineAccessBaseStructureID.clear(); // Clear out the inline access code.
         return;
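The InlineAccess hunks above drop the per-tier special casing: after this patch, any DataIC code block resets an access by swapping a dispatch pointer back to the slow path, with no machine-code repatching at any tier. A sketch of that reset path under hypothetical simplified types (placeholder fields and addresses, not the real `StructureStubInfo`):

```cpp
#include <cstdint>

// Hypothetical simplified model of a stub-info record; field names echo
// the diff above but the types and values are placeholders.
struct StubInfo {
    uintptr_t codePtr = 0;
    uintptr_t slowPathStartLocation = 0x1000; // placeholder address
    uint32_t inlineAccessBaseStructureID = 42; // placeholder cached structure
};

// For DataIC blocks, resetting the access is pure data mutation: point the
// dispatch back at the slow path and clear the cached structure. Non-DataIC
// tiers still repatch machine code (not modeled here).
bool resetAsJump(bool useDataIC, StubInfo& stubInfo)
{
    if (useDataIC) {
        stubInfo.codePtr = stubInfo.slowPathStartLocation;
        stubInfo.inlineAccessBaseStructureID = 0;
        return true;
    }
    return false;
}
```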

Modified: trunk/Source/_javascript_Core/dfg/DFGSpeculativeJIT.cpp (291735 => 291736)


--- trunk/Source/_javascript_Core/dfg/DFGSpeculativeJIT.cpp	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Source/_javascript_Core/dfg/DFGSpeculativeJIT.cpp	2022-03-23 06:45:52 UTC (rev 291736)
@@ -1049,13 +1049,17 @@
     switch (node->child1().useKind()) {
     case CellUse: {
         std::optional<GPRTemporary> stubInfo;
+        std::optional<GPRTemporary> scratch;
         SpeculateCellOperand base(this, node->child1());
         JSValueRegsTemporary result(this, Reuse, base);
 
         GPRReg stubInfoGPR = InvalidGPRReg;
+        GPRReg scratchGPR = InvalidGPRReg;
         if (JITCode::useDataIC(JITType::DFGJIT)) {
             stubInfo.emplace(this);
+            scratch.emplace(this);
             stubInfoGPR = stubInfo->gpr();
+            scratchGPR = scratch->gpr();
         }
         JSValueRegs baseRegs = JSValueRegs::payloadOnly(base.gpr());
         JSValueRegs resultRegs = result.regs();
@@ -1062,7 +1066,7 @@
 
         base.use();
 
-        cachedGetById(node->origin.semantic, baseRegs, resultRegs, stubInfoGPR, node->cacheableIdentifier(), JITCompiler::Jump(), NeedToSpill, accessType);
+        cachedGetById(node->origin.semantic, baseRegs, resultRegs, stubInfoGPR, scratchGPR, node->cacheableIdentifier(), JITCompiler::Jump(), NeedToSpill, accessType);
 
         jsValueResult(resultRegs, node, DataFormatJS, UseChildrenCalledExplicitly);
         break;
@@ -1070,13 +1074,17 @@
 
     case UntypedUse: {
         std::optional<GPRTemporary> stubInfo;
+        std::optional<GPRTemporary> scratch;
         JSValueOperand base(this, node->child1());
         JSValueRegsTemporary result(this, Reuse, base);
 
         GPRReg stubInfoGPR = InvalidGPRReg;
+        GPRReg scratchGPR = InvalidGPRReg;
         if (JITCode::useDataIC(JITType::DFGJIT)) {
             stubInfo.emplace(this);
+            scratch.emplace(this);
             stubInfoGPR = stubInfo->gpr();
+            scratchGPR = scratch->gpr();
         }
         JSValueRegs baseRegs = base.jsValueRegs();
         JSValueRegs resultRegs = result.regs();
@@ -1085,7 +1093,7 @@
 
         JITCompiler::Jump notCell = m_jit.branchIfNotCell(baseRegs);
 
-        cachedGetById(node->origin.semantic, baseRegs, resultRegs, stubInfoGPR, node->cacheableIdentifier(), notCell, NeedToSpill, accessType);
+        cachedGetById(node->origin.semantic, baseRegs, resultRegs, stubInfoGPR, scratchGPR, node->cacheableIdentifier(), notCell, NeedToSpill, accessType);
 
         jsValueResult(resultRegs, node, DataFormatJS, UseChildrenCalledExplicitly);
         break;
@@ -1102,13 +1110,17 @@
     switch (node->child1().useKind()) {
     case CellUse: {
         std::optional<GPRTemporary> stubInfo;
+        std::optional<GPRTemporary> scratch;
         SpeculateCellOperand base(this, node->child1());
         JSValueRegsFlushedCallResult result(this);
 
         GPRReg stubInfoGPR = InvalidGPRReg;
+        GPRReg scratchGPR = InvalidGPRReg;
         if (JITCode::useDataIC(JITType::DFGJIT)) {
             stubInfo.emplace(this);
+            scratch.emplace(this);
             stubInfoGPR = stubInfo->gpr();
+            scratchGPR = scratch->gpr();
         }
         JSValueRegs baseRegs = JSValueRegs::payloadOnly(base.gpr());
         JSValueRegs resultRegs = result.regs();
@@ -1117,7 +1129,7 @@
 
         flushRegisters();
 
-        cachedGetById(node->origin.semantic, baseRegs, resultRegs, stubInfoGPR, node->cacheableIdentifier(), JITCompiler::Jump(), DontSpill, accessType);
+        cachedGetById(node->origin.semantic, baseRegs, resultRegs, stubInfoGPR, scratchGPR, node->cacheableIdentifier(), JITCompiler::Jump(), DontSpill, accessType);
 
         jsValueResult(resultRegs, node, DataFormatJS, UseChildrenCalledExplicitly);
         break;
@@ -1125,13 +1137,17 @@
 
     case UntypedUse: {
         std::optional<GPRTemporary> stubInfo;
+        std::optional<GPRTemporary> scratch;
         JSValueOperand base(this, node->child1());
         JSValueRegsFlushedCallResult result(this);
 
         GPRReg stubInfoGPR = InvalidGPRReg;
+        GPRReg scratchGPR = InvalidGPRReg;
         if (JITCode::useDataIC(JITType::DFGJIT)) {
             stubInfo.emplace(this);
+            scratch.emplace(this);
             stubInfoGPR = stubInfo->gpr();
+            scratchGPR = scratch->gpr();
         }
         JSValueRegs baseRegs = base.jsValueRegs();
         JSValueRegs resultRegs = result.regs();
@@ -1142,7 +1158,7 @@
 
         JITCompiler::Jump notCell = m_jit.branchIfNotCell(baseRegs);
 
-        cachedGetById(node->origin.semantic, baseRegs, resultRegs, stubInfoGPR, node->cacheableIdentifier(), notCell, DontSpill, accessType);
+        cachedGetById(node->origin.semantic, baseRegs, resultRegs, stubInfoGPR, scratchGPR, node->cacheableIdentifier(), notCell, DontSpill, accessType);
 
         jsValueResult(resultRegs, node, DataFormatJS, UseChildrenCalledExplicitly);
         break;
@@ -1323,13 +1339,17 @@
 void SpeculativeJIT::compileInById(Node* node)
 {
     std::optional<GPRTemporary> stubInfo;
+    std::optional<GPRTemporary> scratch;
     SpeculateCellOperand base(this, node->child1());
     JSValueRegsTemporary result(this, Reuse, base, PayloadWord);
 
     GPRReg stubInfoGPR = InvalidGPRReg;
+    GPRReg scratchGPR = InvalidGPRReg;
     if (JITCode::useDataIC(JITType::DFGJIT)) {
         stubInfo.emplace(this);
+        scratch.emplace(this);
         stubInfoGPR = stubInfo->gpr();
+        scratchGPR = scratch->gpr();
     }
     GPRReg baseGPR = base.gpr();
     JSValueRegs resultRegs = result.regs();
@@ -1342,7 +1362,7 @@
     JITInByIdGenerator gen(
         m_jit.codeBlock(), &m_jit.jitCode()->common.m_stubInfos, JITType::DFGJIT, codeOrigin, callSite, usedRegisters, node->cacheableIdentifier(),
         JSValueRegs::payloadOnly(baseGPR), resultRegs, stubInfoGPR);
-    gen.generateFastPath(m_jit);
+    gen.generateFastPath(m_jit, scratchGPR);
 
     JITCompiler::JumpList slowCases;
     slowCases.append(gen.slowPathJump());
@@ -4154,18 +4174,22 @@
     switch (m_graph.child(node, 0).useKind()) {
     case CellUse: {
         std::optional<GPRTemporary> stubInfo;
+        std::optional<GPRTemporary> scratch;
         SpeculateCellOperand base(this, m_graph.child(node, 0));
         JSValueRegsTemporary result(this, Reuse, base);
 
         GPRReg stubInfoGPR = InvalidGPRReg;
+        GPRReg scratchGPR = InvalidGPRReg;
         if (JITCode::useDataIC(JITType::DFGJIT)) {
             stubInfo.emplace(this);
+            scratch.emplace(this);
             stubInfoGPR = stubInfo->gpr();
+            scratchGPR = scratch->gpr();
         }
         JSValueRegs baseRegs = JSValueRegs::payloadOnly(base.gpr());
         JSValueRegs resultRegs = result.regs();
 
-        cachedGetById(node->origin.semantic, baseRegs, resultRegs, stubInfoGPR, node->cacheableIdentifier(), JITCompiler::Jump(), NeedToSpill, AccessType::GetPrivateName);
+        cachedGetById(node->origin.semantic, baseRegs, resultRegs, stubInfoGPR, scratchGPR, node->cacheableIdentifier(), JITCompiler::Jump(), NeedToSpill, AccessType::GetPrivateName);
 
         jsValueResult(resultRegs, node, DataFormatJS);
         break;
@@ -4173,13 +4197,17 @@
 
     case UntypedUse: {
         std::optional<GPRTemporary> stubInfo;
+        std::optional<GPRTemporary> scratch;
         JSValueOperand base(this, m_graph.child(node, 0));
         JSValueRegsTemporary result(this, Reuse, base);
 
         GPRReg stubInfoGPR = InvalidGPRReg;
+        GPRReg scratchGPR = InvalidGPRReg;
         if (JITCode::useDataIC(JITType::DFGJIT)) {
             stubInfo.emplace(this);
+            scratch.emplace(this);
             stubInfoGPR = stubInfo->gpr();
+            scratchGPR = scratch->gpr();
         }
         JSValueRegs baseRegs = base.jsValueRegs();
         JSValueRegs resultRegs = result.regs();
@@ -4186,7 +4214,7 @@
 
         JITCompiler::Jump notCell = m_jit.branchIfNotCell(baseRegs);
 
-        cachedGetById(node->origin.semantic, baseRegs, resultRegs, stubInfoGPR, node->cacheableIdentifier(), notCell, NeedToSpill, AccessType::GetPrivateName);
+        cachedGetById(node->origin.semantic, baseRegs, resultRegs, stubInfoGPR, scratchGPR, node->cacheableIdentifier(), notCell, NeedToSpill, AccessType::GetPrivateName);
 
         jsValueResult(resultRegs, node, DataFormatJS);
         break;
@@ -4342,14 +4370,18 @@
 void SpeculativeJIT::compilePutPrivateNameById(Node* node)
 {
     std::optional<GPRTemporary> stubInfo;
+    std::optional<GPRTemporary> scratch2;
     SpeculateCellOperand base(this, node->child1());
     JSValueOperand value(this, node->child2());
     GPRTemporary scratch(this);
 
     GPRReg stubInfoGPR = InvalidGPRReg;
+    GPRReg scratch2GPR = InvalidGPRReg;
     if (JITCode::useDataIC(JITType::DFGJIT)) {
         stubInfo.emplace(this);
+        scratch2.emplace(this);
         stubInfoGPR = stubInfo->gpr();
+        scratch2GPR = scratch2->gpr();
     }
     JSValueRegs valueRegs = value.jsValueRegs();
     GPRReg baseGPR = base.gpr();
@@ -4358,7 +4390,7 @@
     // We emit property check during DFG generation, so we don't need
     // to check it here.
     auto putKind = node->privateFieldPutKind().isDefine() ? PutKind::DirectPrivateFieldDefine : PutKind::DirectPrivateFieldSet;
-    cachedPutById(node->origin.semantic, baseGPR, valueRegs, stubInfoGPR, scratchGPR, node->cacheableIdentifier(), putKind, ECMAMode::strict());
+    cachedPutById(node->origin.semantic, baseGPR, valueRegs, stubInfoGPR, scratchGPR, scratch2GPR, node->cacheableIdentifier(), putKind, ECMAMode::strict());
 
     noResult(node);
 }
@@ -14128,14 +14160,18 @@
 void SpeculativeJIT::compilePutByIdFlush(Node* node)
 {
     std::optional<GPRTemporary> stubInfo;
+    std::optional<GPRTemporary> scratch2;
     SpeculateCellOperand base(this, node->child1());
     JSValueOperand value(this, node->child2());
     GPRTemporary scratch(this);
 
     GPRReg stubInfoGPR = InvalidGPRReg;
+    GPRReg scratch2GPR = InvalidGPRReg;
     if (JITCode::useDataIC(JITType::DFGJIT)) {
         stubInfo.emplace(this);
+        scratch2.emplace(this);
         stubInfoGPR = stubInfo->gpr();
+        scratch2GPR = scratch2->gpr();
     }
     GPRReg baseGPR = base.gpr();
     JSValueRegs valueRegs = value.jsValueRegs();
@@ -14142,7 +14178,7 @@
     GPRReg scratchGPR = scratch.gpr();
     flushRegisters();
 
-    cachedPutById(node->origin.semantic, baseGPR, valueRegs, stubInfoGPR, scratchGPR, node->cacheableIdentifier(), PutKind::NotDirect, node->ecmaMode(), MacroAssembler::Jump(), DontSpill);
+    cachedPutById(node->origin.semantic, baseGPR, valueRegs, stubInfoGPR, scratchGPR, scratch2GPR, node->cacheableIdentifier(), PutKind::NotDirect, node->ecmaMode(), MacroAssembler::Jump(), DontSpill);
 
     noResult(node);
 }
@@ -14150,20 +14186,24 @@
 void SpeculativeJIT::compilePutById(Node* node)
 {
     std::optional<GPRTemporary> stubInfo;
+    std::optional<GPRTemporary> scratch2;
     SpeculateCellOperand base(this, node->child1());
     JSValueOperand value(this, node->child2());
     GPRTemporary scratch(this);
 
     GPRReg stubInfoGPR = InvalidGPRReg;
+    GPRReg scratch2GPR = InvalidGPRReg;
     if (JITCode::useDataIC(JITType::DFGJIT)) {
         stubInfo.emplace(this);
+        scratch2.emplace(this);
         stubInfoGPR = stubInfo->gpr();
+        scratch2GPR = scratch2->gpr();
     }
     GPRReg baseGPR = base.gpr();
     JSValueRegs valueRegs = value.jsValueRegs();
     GPRReg scratchGPR = scratch.gpr();
 
-    cachedPutById(node->origin.semantic, baseGPR, valueRegs, stubInfoGPR, scratchGPR, node->cacheableIdentifier(), PutKind::NotDirect, node->ecmaMode());
+    cachedPutById(node->origin.semantic, baseGPR, valueRegs, stubInfoGPR, scratchGPR, scratch2GPR, node->cacheableIdentifier(), PutKind::NotDirect, node->ecmaMode());
 
     noResult(node);
 }
@@ -14171,20 +14211,24 @@
 void SpeculativeJIT::compilePutByIdDirect(Node* node)
 {
     std::optional<GPRTemporary> stubInfo;
+    std::optional<GPRTemporary> scratch2;
     SpeculateCellOperand base(this, node->child1());
     JSValueOperand value(this, node->child2());
     GPRTemporary scratch(this);
 
     GPRReg stubInfoGPR = InvalidGPRReg;
+    GPRReg scratch2GPR = InvalidGPRReg;
     if (JITCode::useDataIC(JITType::DFGJIT)) {
         stubInfo.emplace(this);
+        scratch2.emplace(this);
         stubInfoGPR = stubInfo->gpr();
+        scratch2GPR = scratch2->gpr();
     }
     GPRReg baseGPR = base.gpr();
     JSValueRegs valueRegs = value.jsValueRegs();
     GPRReg scratchGPR = scratch.gpr();
 
-    cachedPutById(node->origin.semantic, baseGPR, valueRegs, stubInfoGPR, scratchGPR, node->cacheableIdentifier(), PutKind::Direct, node->ecmaMode());
+    cachedPutById(node->origin.semantic, baseGPR, valueRegs, stubInfoGPR, scratchGPR, scratch2GPR, node->cacheableIdentifier(), PutKind::Direct, node->ecmaMode());
 
     noResult(node);
 }
@@ -15778,7 +15822,7 @@
     noResult(node);
 }
 
-void SpeculativeJIT::cachedPutById(CodeOrigin codeOrigin, GPRReg baseGPR, JSValueRegs valueRegs, GPRReg stubInfoGPR, GPRReg scratchGPR, CacheableIdentifier identifier, PutKind putKind, ECMAMode ecmaMode, JITCompiler::Jump slowPathTarget, SpillRegistersMode spillMode)
+void SpeculativeJIT::cachedPutById(CodeOrigin codeOrigin, GPRReg baseGPR, JSValueRegs valueRegs, GPRReg stubInfoGPR, GPRReg scratchGPR, GPRReg scratch2GPR, CacheableIdentifier identifier, PutKind putKind, ECMAMode ecmaMode, JITCompiler::Jump slowPathTarget, SpillRegistersMode spillMode)
 {
     RegisterSet usedRegisters = this->usedRegisters();
     if (spillMode == DontSpill) {
@@ -15787,6 +15831,10 @@
         usedRegisters.set(valueRegs, false);
         if (stubInfoGPR != InvalidGPRReg)
             usedRegisters.set(stubInfoGPR, false);
+        if (scratchGPR != InvalidGPRReg)
+            usedRegisters.set(scratchGPR, false);
+        if (scratch2GPR != InvalidGPRReg)
+            usedRegisters.set(scratch2GPR, false);
     }
     CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(codeOrigin, m_stream->size());
     JITPutByIdGenerator gen(
@@ -15794,7 +15842,7 @@
         JSValueRegs::payloadOnly(baseGPR), valueRegs, stubInfoGPR,
         scratchGPR, ecmaMode, putKind);
 
-    gen.generateFastPath(m_jit);
+    gen.generateFastPath(m_jit, scratchGPR, scratch2GPR);
 
     JITCompiler::JumpList slowCases;
     if (slowPathTarget.isSet())

Modified: trunk/Source/_javascript_Core/dfg/DFGSpeculativeJIT.h (291735 => 291736)


--- trunk/Source/_javascript_Core/dfg/DFGSpeculativeJIT.h	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Source/_javascript_Core/dfg/DFGSpeculativeJIT.h	2022-03-23 06:45:52 UTC (rev 291736)
@@ -723,16 +723,16 @@
 
     void compileCheckDetached(Node*);
 
-    void cachedGetById(CodeOrigin, JSValueRegs base, JSValueRegs result, GPRReg stubInfoGPR, CacheableIdentifier, JITCompiler::Jump slowPathTarget, SpillRegistersMode, AccessType);
-    void cachedPutById(CodeOrigin, GPRReg baseGPR, JSValueRegs valueRegs, GPRReg stubInfoGPR, GPRReg scratchGPR, CacheableIdentifier, PutKind, ECMAMode, JITCompiler::Jump slowPathTarget = JITCompiler::Jump(), SpillRegistersMode = NeedToSpill);
+    void cachedGetById(CodeOrigin, JSValueRegs base, JSValueRegs result, GPRReg stubInfoGPR, GPRReg scratchGPR, CacheableIdentifier, JITCompiler::Jump slowPathTarget, SpillRegistersMode, AccessType);
+    void cachedPutById(CodeOrigin, GPRReg baseGPR, JSValueRegs valueRegs, GPRReg stubInfoGPR, GPRReg scratchGPR, GPRReg scratch2GPR, CacheableIdentifier, PutKind, ECMAMode, JITCompiler::Jump slowPathTarget = JITCompiler::Jump(), SpillRegistersMode = NeedToSpill);
     void cachedGetByVal(CodeOrigin, JSValueRegs base, JSValueRegs property, JSValueRegs result, JITCompiler::Jump slowPathTarget);
 
 #if USE(JSVALUE64)
-    void cachedGetById(CodeOrigin, GPRReg baseGPR, GPRReg resultGPR, GPRReg stubInfoGPR, CacheableIdentifier, JITCompiler::Jump slowPathTarget, SpillRegistersMode, AccessType);
-    void cachedGetByIdWithThis(CodeOrigin, GPRReg baseGPR, GPRReg thisGPR, GPRReg resultGPR, GPRReg stubInfoGPR, CacheableIdentifier, const JITCompiler::JumpList& slowPathTarget = JITCompiler::JumpList());
+    void cachedGetById(CodeOrigin, GPRReg baseGPR, GPRReg resultGPR, GPRReg stubInfoGPR, GPRReg scratchGPR, CacheableIdentifier, JITCompiler::Jump slowPathTarget, SpillRegistersMode, AccessType);
+    void cachedGetByIdWithThis(CodeOrigin, GPRReg baseGPR, GPRReg thisGPR, GPRReg resultGPR, GPRReg stubInfoGPR, GPRReg scratchGPR, CacheableIdentifier, const JITCompiler::JumpList& slowPathTarget = JITCompiler::JumpList());
 #elif USE(JSVALUE32_64)
-    void cachedGetById(CodeOrigin, GPRReg baseTagGPROrNone, GPRReg basePayloadGPR, GPRReg resultTagGPR, GPRReg resultPayloadGPR, GPRReg stubInfoGPR, CacheableIdentifier, JITCompiler::Jump slowPathTarget, SpillRegistersMode, AccessType);
-    void cachedGetByIdWithThis(CodeOrigin, GPRReg baseTagGPROrNone, GPRReg basePayloadGPR, GPRReg thisTagGPROrNone, GPRReg thisPayloadGPR, GPRReg resultTagGPR, GPRReg resultPayloadGPR, GPRReg stubInfoGPR, CacheableIdentifier, const JITCompiler::JumpList& slowPathTarget = JITCompiler::JumpList());
+    void cachedGetById(CodeOrigin, GPRReg baseTagGPROrNone, GPRReg basePayloadGPR, GPRReg resultTagGPR, GPRReg resultPayloadGPR, GPRReg stubInfoGPR, GPRReg scratchGPR, CacheableIdentifier, JITCompiler::Jump slowPathTarget, SpillRegistersMode, AccessType);
+    void cachedGetByIdWithThis(CodeOrigin, GPRReg baseTagGPROrNone, GPRReg basePayloadGPR, GPRReg thisTagGPROrNone, GPRReg thisPayloadGPR, GPRReg resultTagGPR, GPRReg resultPayloadGPR, GPRReg stubInfoGPR, GPRReg scratchGPR, CacheableIdentifier, const JITCompiler::JumpList& slowPathTarget = JITCompiler::JumpList());
 #endif
 
     void compileDeleteById(Node*);

Modified: trunk/Source/_javascript_Core/dfg/DFGSpeculativeJIT32_64.cpp (291735 => 291736)


--- trunk/Source/_javascript_Core/dfg/DFGSpeculativeJIT32_64.cpp	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Source/_javascript_Core/dfg/DFGSpeculativeJIT32_64.cpp	2022-03-23 06:45:52 UTC (rev 291736)
@@ -168,14 +168,14 @@
     }
 }
 
-void SpeculativeJIT::cachedGetById(CodeOrigin origin, JSValueRegs base, JSValueRegs result, GPRReg stubInfoGPR, CacheableIdentifier identifier, JITCompiler::Jump slowPathTarget , SpillRegistersMode mode, AccessType type)
+void SpeculativeJIT::cachedGetById(CodeOrigin origin, JSValueRegs base, JSValueRegs result, GPRReg stubInfoGPR, GPRReg scratchGPR, CacheableIdentifier identifier, JITCompiler::Jump slowPathTarget , SpillRegistersMode mode, AccessType type)
 {
-    cachedGetById(origin, base.tagGPR(), base.payloadGPR(), result.tagGPR(), result.payloadGPR(), stubInfoGPR, identifier, slowPathTarget, mode, type);
+    cachedGetById(origin, base.tagGPR(), base.payloadGPR(), result.tagGPR(), result.payloadGPR(), stubInfoGPR, scratchGPR, identifier, slowPathTarget, mode, type);
 }
 
 void SpeculativeJIT::cachedGetById(
     CodeOrigin codeOrigin, GPRReg baseTagGPROrNone, GPRReg basePayloadGPR, GPRReg resultTagGPR, GPRReg resultPayloadGPR, GPRReg stubInfoGPR,
-    CacheableIdentifier identifier, JITCompiler::Jump slowPathTarget, SpillRegistersMode spillMode, AccessType type)
+    GPRReg scratchGPR, CacheableIdentifier identifier, JITCompiler::Jump slowPathTarget, SpillRegistersMode spillMode, AccessType type)
 {
     UNUSED_PARAM(stubInfoGPR);
     // This is a hacky fix for when the register allocator decides to alias the base payload with the result tag. This only happens
@@ -204,7 +204,7 @@
         m_jit.codeBlock(), &m_jit.jitCode()->common.m_stubInfos, JITType::DFGJIT, codeOrigin, callSite, usedRegisters, identifier,
         JSValueRegs(baseTagGPROrNone, basePayloadGPR), JSValueRegs(resultTagGPR, resultPayloadGPR), InvalidGPRReg, type);
     
-    gen.generateFastPath(m_jit);
+    gen.generateFastPath(m_jit, scratchGPR);
     
     JITCompiler::JumpList slowCases;
     if (slowPathTarget.isSet())
@@ -229,7 +229,7 @@
 }
 
 void SpeculativeJIT::cachedGetByIdWithThis(
-    CodeOrigin codeOrigin, GPRReg baseTagGPROrNone, GPRReg basePayloadGPR, GPRReg thisTagGPR, GPRReg thisPayloadGPR, GPRReg resultTagGPR, GPRReg resultPayloadGPR, GPRReg stubInfoGPR,
+    CodeOrigin codeOrigin, GPRReg baseTagGPROrNone, GPRReg basePayloadGPR, GPRReg thisTagGPR, GPRReg thisPayloadGPR, GPRReg resultTagGPR, GPRReg resultPayloadGPR, GPRReg stubInfoGPR, GPRReg scratchGPR,
     CacheableIdentifier identifier, const JITCompiler::JumpList& slowPathTarget)
 {
     UNUSED_PARAM(stubInfoGPR);
@@ -240,7 +240,7 @@
         m_jit.codeBlock(), &m_jit.jitCode()->common.m_stubInfos, JITType::DFGJIT, codeOrigin, callSite, usedRegisters, identifier,
         JSValueRegs(resultTagGPR, resultPayloadGPR), JSValueRegs(baseTagGPROrNone, basePayloadGPR), JSValueRegs(thisTagGPR, thisPayloadGPR), InvalidGPRReg);
     
-    gen.generateFastPath(m_jit);
+    gen.generateFastPath(m_jit, scratchGPR);
     
     JITCompiler::JumpList slowCases;
     if (!slowPathTarget.empty())
@@ -3232,7 +3232,7 @@
             GPRReg resultTagGPR = resultTag.gpr();
             GPRReg resultPayloadGPR = resultPayload.gpr();
             
-            cachedGetByIdWithThis(node->origin.semantic, InvalidGPRReg, baseGPR, InvalidGPRReg, thisGPR, resultTagGPR, resultPayloadGPR, InvalidGPRReg, node->cacheableIdentifier());
+            cachedGetByIdWithThis(node->origin.semantic, InvalidGPRReg, baseGPR, InvalidGPRReg, thisGPR, resultTagGPR, resultPayloadGPR, InvalidGPRReg, InvalidGPRReg, node->cacheableIdentifier());
             
             jsValueResult(resultTagGPR, resultPayloadGPR, node);
         } else {
@@ -3252,7 +3252,7 @@
             notCellList.append(m_jit.branchIfNotCell(base.jsValueRegs()));
             notCellList.append(m_jit.branchIfNotCell(thisValue.jsValueRegs()));
             
-            cachedGetByIdWithThis(node->origin.semantic, baseTagGPR, basePayloadGPR, thisTagGPR, thisPayloadGPR, resultTagGPR, resultPayloadGPR, InvalidGPRReg, node->cacheableIdentifier(), notCellList);
+            cachedGetByIdWithThis(node->origin.semantic, baseTagGPR, basePayloadGPR, thisTagGPR, thisPayloadGPR, resultTagGPR, resultPayloadGPR, InvalidGPRReg, InvalidGPRReg, node->cacheableIdentifier(), notCellList);
             
             jsValueResult(resultTagGPR, resultPayloadGPR, node);
         }

Modified: trunk/Source/_javascript_Core/dfg/DFGSpeculativeJIT64.cpp (291735 => 291736)


--- trunk/Source/_javascript_Core/dfg/DFGSpeculativeJIT64.cpp	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Source/_javascript_Core/dfg/DFGSpeculativeJIT64.cpp	2022-03-23 06:45:52 UTC (rev 291736)
@@ -148,12 +148,12 @@
     }
 }
 
-void SpeculativeJIT::cachedGetById(CodeOrigin origin, JSValueRegs base, JSValueRegs result, GPRReg stubInfoGPR, CacheableIdentifier identifier, JITCompiler::Jump slowPathTarget , SpillRegistersMode mode, AccessType type)
+void SpeculativeJIT::cachedGetById(CodeOrigin origin, JSValueRegs base, JSValueRegs result, GPRReg stubInfoGPR, GPRReg scratchGPR, CacheableIdentifier identifier, JITCompiler::Jump slowPathTarget , SpillRegistersMode mode, AccessType type)
 {
-    cachedGetById(origin, base.gpr(), result.gpr(), stubInfoGPR, identifier, slowPathTarget, mode, type);
+    cachedGetById(origin, base.gpr(), result.gpr(), stubInfoGPR, scratchGPR, identifier, slowPathTarget, mode, type);
 }
 
-void SpeculativeJIT::cachedGetById(CodeOrigin codeOrigin, GPRReg baseGPR, GPRReg resultGPR, GPRReg stubInfoGPR, CacheableIdentifier identifier, JITCompiler::Jump slowPathTarget, SpillRegistersMode spillMode, AccessType type)
+void SpeculativeJIT::cachedGetById(CodeOrigin codeOrigin, GPRReg baseGPR, GPRReg resultGPR, GPRReg stubInfoGPR, GPRReg scratchGPR, CacheableIdentifier identifier, JITCompiler::Jump slowPathTarget, SpillRegistersMode spillMode, AccessType type)
 {
     CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(codeOrigin, m_stream->size());
     RegisterSet usedRegisters = this->usedRegisters();
@@ -163,15 +163,18 @@
         usedRegisters.set(resultGPR, false);
         if (stubInfoGPR != InvalidGPRReg)
             usedRegisters.set(stubInfoGPR, false);
+        if (scratchGPR != InvalidGPRReg)
+            usedRegisters.set(scratchGPR, false);
     }
     JITGetByIdGenerator gen(
         m_jit.codeBlock(), &m_jit.jitCode()->common.m_stubInfos, JITType::DFGJIT, codeOrigin, callSite, usedRegisters, identifier,
         JSValueRegs(baseGPR), JSValueRegs(resultGPR), stubInfoGPR, type);
-    gen.generateFastPath(m_jit);
+    gen.generateFastPath(m_jit, scratchGPR);
     
     JITCompiler::JumpList slowCases;
     slowCases.append(slowPathTarget);
-    slowCases.append(gen.slowPathJump());
+    if (!JITCode::useDataIC(JITType::DFGJIT))
+        slowCases.append(gen.slowPathJump());
 
     std::unique_ptr<SlowPathGenerator> slowPath;
     if (JITCode::useDataIC(JITType::DFGJIT)) {
@@ -190,7 +193,7 @@
     addSlowPathGenerator(WTFMove(slowPath));
 }
 
-void SpeculativeJIT::cachedGetByIdWithThis(CodeOrigin codeOrigin, GPRReg baseGPR, GPRReg thisGPR, GPRReg resultGPR, GPRReg stubInfoGPR, CacheableIdentifier identifier, const JITCompiler::JumpList& slowPathTarget)
+void SpeculativeJIT::cachedGetByIdWithThis(CodeOrigin codeOrigin, GPRReg baseGPR, GPRReg thisGPR, GPRReg resultGPR, GPRReg stubInfoGPR, GPRReg scratchGPR, CacheableIdentifier identifier, const JITCompiler::JumpList& slowPathTarget)
 {
     CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(codeOrigin, m_stream->size());
     RegisterSet usedRegisters = this->usedRegisters();
@@ -200,15 +203,18 @@
     usedRegisters.set(resultGPR, false);
     if (stubInfoGPR != InvalidGPRReg)
         usedRegisters.set(stubInfoGPR, false);
+    if (scratchGPR != InvalidGPRReg)
+        usedRegisters.set(scratchGPR, false);
     
     JITGetByIdWithThisGenerator gen(
         m_jit.codeBlock(), &m_jit.jitCode()->common.m_stubInfos, JITType::DFGJIT, codeOrigin, callSite, usedRegisters, identifier,
         JSValueRegs(resultGPR), JSValueRegs(baseGPR), JSValueRegs(thisGPR), stubInfoGPR);
-    gen.generateFastPath(m_jit);
+    gen.generateFastPath(m_jit, scratchGPR);
     
     JITCompiler::JumpList slowCases;
     slowCases.append(slowPathTarget);
-    slowCases.append(gen.slowPathJump());
+    if (!JITCode::useDataIC(JITType::DFGJIT))
+        slowCases.append(gen.slowPathJump());
     
     std::unique_ptr<SlowPathGenerator> slowPath;
     if (JITCode::useDataIC(JITType::DFGJIT)) {
@@ -4242,43 +4248,53 @@
 
     case GetByIdWithThis: {
         if (node->child1().useKind() == CellUse && node->child2().useKind() == CellUse) {
-            std::optional<GPRTemporary> stubInfo;
             SpeculateCellOperand base(this, node->child1());
             SpeculateCellOperand thisValue(this, node->child2());
 
+            GPRReg baseGPR = base.gpr();
+            GPRReg thisValueGPR = thisValue.gpr();
+            
+            GPRFlushedCallResult result(this);
+            GPRReg resultGPR = result.gpr();
+
+            std::optional<GPRTemporary> stubInfo;
+            std::optional<GPRTemporary> scratch;
             GPRReg stubInfoGPR = InvalidGPRReg;
+            GPRReg scratchGPR = InvalidGPRReg;
             if (JITCode::useDataIC(JITType::DFGJIT)) {
                 stubInfo.emplace(this);
+                scratch.emplace(this);
                 stubInfoGPR = stubInfo->gpr();
+                scratchGPR = scratch->gpr();
             }
-            GPRReg baseGPR = base.gpr();
-            GPRReg thisValueGPR = thisValue.gpr();
             
-            GPRFlushedCallResult result(this);
-            GPRReg resultGPR = result.gpr();
-            
             flushRegisters();
             
-            cachedGetByIdWithThis(node->origin.semantic, baseGPR, thisValueGPR, resultGPR, stubInfoGPR, node->cacheableIdentifier(), JITCompiler::JumpList());
+            cachedGetByIdWithThis(node->origin.semantic, baseGPR, thisValueGPR, resultGPR, stubInfoGPR, scratchGPR, node->cacheableIdentifier(), JITCompiler::JumpList());
             
             jsValueResult(resultGPR, node);
             
         } else {
-            std::optional<GPRTemporary> stubInfo;
             JSValueOperand base(this, node->child1());
             JSValueOperand thisValue(this, node->child2());
 
+            GPRReg baseGPR = base.gpr();
+            GPRReg thisValueGPR = thisValue.gpr();
+            
+            GPRFlushedCallResult result(this);
+            GPRReg resultGPR = result.gpr();
+
+            std::optional<GPRTemporary> stubInfo;
+            std::optional<GPRTemporary> scratch;
             GPRReg stubInfoGPR = InvalidGPRReg;
+            GPRReg scratchGPR = InvalidGPRReg;
             if (JITCode::useDataIC(JITType::DFGJIT)) {
                 stubInfo.emplace(this);
+                scratch.emplace(this);
                 stubInfoGPR = stubInfo->gpr();
+                scratchGPR = scratch->gpr();
             }
-            GPRReg baseGPR = base.gpr();
-            GPRReg thisValueGPR = thisValue.gpr();
             
-            GPRFlushedCallResult result(this);
-            GPRReg resultGPR = result.gpr();
-            
             flushRegisters();
             
             JITCompiler::JumpList notCellList;
@@ -4285,7 +4301,7 @@
             notCellList.append(m_jit.branchIfNotCell(JSValueRegs(baseGPR)));
             notCellList.append(m_jit.branchIfNotCell(JSValueRegs(thisValueGPR)));
             
-            cachedGetByIdWithThis(node->origin.semantic, baseGPR, thisValueGPR, resultGPR, stubInfoGPR, node->cacheableIdentifier(), notCellList);
+            cachedGetByIdWithThis(node->origin.semantic, baseGPR, thisValueGPR, resultGPR, stubInfoGPR, scratchGPR, node->cacheableIdentifier(), notCellList);
             
             jsValueResult(resultGPR, node);
         }
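
The DFG changes above follow one pattern: when data ICs are in use, `generateFastPath` ends in a far jump through the stub info rather than a patchable `slowPathJump()`, so callers append that jump to their slow cases only in the code-patching configuration. A minimal standalone sketch of that wiring (all names here are illustrative stand-ins, not JSC API):

```cpp
#include <cassert>
#include <optional>

// Sketch of the slow-case wiring in cachedGetById after this change. With
// data ICs the fast path dispatches through the stub info's code pointer,
// so no patchable slow-path jump exists and callers must not request one.
struct Generator {
    bool usesDataIC;

    std::optional<int> slowPathJump() const
    {
        assert(!usesDataIC); // the jump only exists for code-patching ICs
        return 1;            // stand-in for a MacroAssembler jump
    }
};

// Mirrors: slowCases.append(slowPathTarget);
//          if (!JITCode::useDataIC(...)) slowCases.append(gen.slowPathJump());
static int countSlowCases(const Generator& gen, bool haveSlowPathTarget)
{
    int slowCases = 0;
    if (haveSlowPathTarget)
        ++slowCases;
    if (!gen.usesDataIC)
        slowCases += gen.slowPathJump().has_value();
    return slowCases;
}
```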

Modified: trunk/Source/_javascript_Core/ftl/FTLLowerDFGToB3.cpp (291735 => 291736)


--- trunk/Source/_javascript_Core/ftl/FTLLowerDFGToB3.cpp	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Source/_javascript_Core/ftl/FTLLowerDFGToB3.cpp	2022-03-23 06:45:52 UTC (rev 291736)
@@ -4665,7 +4665,7 @@
         patchpoint->append(m_notCellMask, ValueRep::reg(GPRInfo::notCellMaskRegister));
         patchpoint->append(m_numberTag, ValueRep::reg(GPRInfo::numberTagRegister));
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
-        patchpoint->numGPScratchRegisters = JITCode::useDataIC(JITType::FTLJIT) ? 1 : 0;
+        patchpoint->numGPScratchRegisters = JITCode::useDataIC(JITType::FTLJIT) ? 3 : 0;
 
         // FIXME: If this is a PutByIdFlush, we might want to late-clobber volatile registers.
         // https://bugs.webkit.org/show_bug.cgi?id=152848
@@ -4689,7 +4689,14 @@
                 // JS setter call ICs generated by the PutById IC will need this.
                 exceptionHandle->scheduleExitCreationForUnwind(params, callSiteIndex);
 
-                GPRReg stubInfoGPR = JITCode::useDataIC(JITType::FTLJIT) ? params.gpScratch(0) : InvalidGPRReg;
+                GPRReg stubInfoGPR = InvalidGPRReg;
+                GPRReg scratchGPR = InvalidGPRReg;
+                GPRReg scratch2GPR = InvalidGPRReg;
+                if (JITCode::useDataIC(JITType::FTLJIT)) {
+                    stubInfoGPR = params.gpScratch(0);
+                    scratchGPR = params.gpScratch(1);
+                    scratch2GPR = params.gpScratch(2);
+                }
 
                 auto generator = Box<JITPutByIdGenerator>::create(
                     jit.codeBlock(), &state->jitCode->common.m_stubInfos, JITType::FTLJIT, nodeSemanticOrigin, callSiteIndex,
@@ -4697,7 +4704,7 @@
                     JSValueRegs(params[1].gpr()), stubInfoGPR, GPRInfo::patchpointScratchRegister, ecmaMode,
                     putKind);
 
-                generator->generateFastPath(jit);
+                generator->generateFastPath(jit, scratchGPR, scratch2GPR);
                 CCallHelpers::Label done = jit.label();
 
                 params.addLatePath(
@@ -4704,7 +4711,8 @@
                     [=] (CCallHelpers& jit) {
                         AllowMacroScratchRegisterUsage allowScratch(jit);
 
-                        generator->slowPathJump().link(&jit);
+                        if (!JITCode::useDataIC(JITType::FTLJIT))
+                            generator->slowPathJump().link(&jit);
                         CCallHelpers::Label slowPathBegin = jit.label();
                         CCallHelpers::Call slowPathCall;
                         if (JITCode::useDataIC(JITType::FTLJIT)) {
@@ -12841,7 +12849,10 @@
         patchpoint->append(m_notCellMask, ValueRep::lateReg(GPRInfo::notCellMaskRegister));
         patchpoint->append(m_numberTag, ValueRep::lateReg(GPRInfo::numberTagRegister));
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
-        patchpoint->numGPScratchRegisters = JITCode::useDataIC(JITType::FTLJIT) ? 1 : 0;
+        if constexpr (type == AccessType::InById)
+            patchpoint->numGPScratchRegisters = JITCode::useDataIC(JITType::FTLJIT) ? 2 : 0;
+        else
+            patchpoint->numGPScratchRegisters = JITCode::useDataIC(JITType::FTLJIT) ? 1 : 0;
 
         RefPtr<PatchpointExceptionHandle> exceptionHandle = preparePatchpointForExceptions(patchpoint);
 
@@ -12857,7 +12868,13 @@
                 // This is the direct exit target for operation calls.
                 Box<CCallHelpers::JumpList> exceptions = exceptionHandle->scheduleExitCreation(params)->jumps(jit);
 
-                GPRReg stubInfoGPR = JITCode::useDataIC(JITType::FTLJIT) ? params.gpScratch(0) : InvalidGPRReg;
+                GPRReg stubInfoGPR = InvalidGPRReg;
+                GPRReg scratchGPR = InvalidGPRReg;
+                if (JITCode::useDataIC(JITType::FTLJIT)) {
+                    stubInfoGPR = params.gpScratch(0);
+                    if constexpr (type == AccessType::InById)
+                        scratchGPR = params.gpScratch(1);
+                }
                 auto returnGPR = params[0].gpr();
                 auto base = JSValueRegs(params[1].gpr());
 
@@ -12896,13 +12913,12 @@
                 }();
 
                 CCallHelpers::JumpList slowCases;
-                generator->generateFastPath(jit);
                 if constexpr (type == AccessType::InById)
+                    generator->generateFastPath(jit, scratchGPR);
+                else
+                    generator->generateFastPath(jit);
+                if (!JITCode::useDataIC(JITType::FTLJIT))
                     slowCases.append(generator->slowPathJump());
-                else {
-                    if (!JITCode::useDataIC(JITType::FTLJIT))
-                        slowCases.append(generator->slowPathJump());
-                }
                 CCallHelpers::Label done = jit.label();
 
                 params.addLatePath(
@@ -15068,7 +15084,7 @@
         patchpoint->appendSomeRegister(base);
         patchpoint->append(m_notCellMask, ValueRep::lateReg(GPRInfo::notCellMaskRegister));
         patchpoint->append(m_numberTag, ValueRep::lateReg(GPRInfo::numberTagRegister));
-        patchpoint->numGPScratchRegisters = JITCode::useDataIC(JITType::FTLJIT) ? 1 : 0;
+        patchpoint->numGPScratchRegisters = JITCode::useDataIC(JITType::FTLJIT) ? 2 : 0;
 
         // FIXME: If this is a GetByIdFlush/GetByIdDirectFlush, we might get some performance boost if we claim that it
         // clobbers volatile registers late. It's not necessary for correctness, though, since the
@@ -15098,7 +15114,12 @@
                 // the callsite index.
                 exceptionHandle->scheduleExitCreationForUnwind(params, callSiteIndex);
 
-                GPRReg stubInfoGPR = JITCode::useDataIC(JITType::FTLJIT) ? params.gpScratch(0) : InvalidGPRReg;
+                GPRReg stubInfoGPR = InvalidGPRReg;
+                GPRReg scratchGPR = InvalidGPRReg;
+                if (JITCode::useDataIC(JITType::FTLJIT)) {
+                    stubInfoGPR = params.gpScratch(0);
+                    scratchGPR = params.gpScratch(1);
+                }
 
                 auto generator = Box<JITGetByIdGenerator>::create(
                     jit.codeBlock(), &state->jitCode->common.m_stubInfos, JITType::FTLJIT, semanticNodeOrigin, callSiteIndex,
@@ -15105,7 +15126,7 @@
                     params.unavailableRegisters(), identifier, JSValueRegs(params[1].gpr()),
                     JSValueRegs(params[0].gpr()), stubInfoGPR, type);
 
-                generator->generateFastPath(jit);
+                generator->generateFastPath(jit, scratchGPR);
                 CCallHelpers::Label done = jit.label();
 
                 params.addLatePath(
@@ -15114,7 +15135,8 @@
 
                         auto optimizationFunction = appropriateOptimizingGetByIdFunction(type);
 
-                        generator->slowPathJump().link(&jit);
+                        if (!JITCode::useDataIC(JITType::FTLJIT))
+                            generator->slowPathJump().link(&jit);
                         CCallHelpers::Label slowPathBegin = jit.label();
                         CCallHelpers::Call slowPathCall;
                         if (JITCode::useDataIC(JITType::FTLJIT)) {
@@ -15159,7 +15181,7 @@
         patchpoint->append(m_notCellMask, ValueRep::lateReg(GPRInfo::notCellMaskRegister));
         patchpoint->append(m_numberTag, ValueRep::lateReg(GPRInfo::numberTagRegister));
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
-        patchpoint->numGPScratchRegisters = JITCode::useDataIC(JITType::FTLJIT) ? 1 : 0;
+        patchpoint->numGPScratchRegisters = JITCode::useDataIC(JITType::FTLJIT) ? 2 : 0;
 
         RefPtr<PatchpointExceptionHandle> exceptionHandle =
             preparePatchpointForExceptions(patchpoint);
@@ -15182,7 +15204,12 @@
                 // the callsite index.
                 exceptionHandle->scheduleExitCreationForUnwind(params, callSiteIndex);
 
-                GPRReg stubInfoGPR = JITCode::useDataIC(JITType::FTLJIT) ? params.gpScratch(0) : InvalidGPRReg;
+                GPRReg stubInfoGPR = InvalidGPRReg;
+                GPRReg scratchGPR = InvalidGPRReg;
+                if (JITCode::useDataIC(JITType::FTLJIT)) {
+                    stubInfoGPR = params.gpScratch(0);
+                    scratchGPR = params.gpScratch(1);
+                }
 
                 auto generator = Box<JITGetByIdWithThisGenerator>::create(
                     jit.codeBlock(), &state->jitCode->common.m_stubInfos, JITType::FTLJIT, semanticNodeOrigin, callSiteIndex,
@@ -15189,7 +15216,7 @@
                     params.unavailableRegisters(), identifier, JSValueRegs(params[0].gpr()),
                     JSValueRegs(params[1].gpr()), JSValueRegs(params[2].gpr()), stubInfoGPR);
 
-                generator->generateFastPath(jit);
+                generator->generateFastPath(jit, scratchGPR);
                 CCallHelpers::Label done = jit.label();
 
                 params.addLatePath(
@@ -15198,7 +15225,8 @@
 
                         auto optimizationFunction = operationGetByIdWithThisOptimize;
 
-                        generator->slowPathJump().link(&jit);
+                        if (!JITCode::useDataIC(JITType::FTLJIT))
+                            generator->slowPathJump().link(&jit);
                         CCallHelpers::Label slowPathBegin = jit.label();
                         CCallHelpers::Call slowPathCall;
                         if (JITCode::useDataIC(JITType::FTLJIT)) {
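
A recurring invariant in the FTL hunks: each patchpoint's `numGPScratchRegisters` must cover every `params.gpScratch(i)` index the generator lambda later consumes, so PutById now requests 3 (stub info plus two scratches), GetById/GetByIdWithThis/InById request 2, and the remaining cases keep 1. A toy model of that contract, with hypothetical types that are not JSC code:

```cpp
#include <array>
#include <cassert>

// Toy model of the patchpoint scratch-register contract. The FTL side
// requests numGPScratchRegisters up front; the generator may only consume
// gpScratch(i) for i below that count.
struct PatchpointParams {
    unsigned numGPScratchRegisters { 0 };
    std::array<int, 4> scratchRegs { 10, 11, 12, 13 }; // stand-in GPR ids

    int gpScratch(unsigned index) const
    {
        assert(index < numGPScratchRegisters); // over-consumption is a bug
        return scratchRegs[index];
    }
};
```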

Modified: trunk/Source/_javascript_Core/jit/JITCode.h (291735 => 291736)


--- trunk/Source/_javascript_Core/jit/JITCode.h	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Source/_javascript_Core/jit/JITCode.h	2022-03-23 06:45:52 UTC (rev 291736)
@@ -165,9 +165,11 @@
     {
         if (JITCode::isBaselineCode(jitType))
             return true;
-        if (!Options::useDataIC())
-            return false;
+#if CPU(X86_64) || CPU(ARM64) || CPU(RISCV64)
         return Options::useDataICInOptimizingJIT();
+#else
+        return false;
+#endif
     }
 
     virtual const DOMJIT::Signature* signature() const { return nullptr; }
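
The `useDataIC` change makes baseline tiers unconditionally use data ICs, while the optimizing tiers opt in only on the architectures this patch supports, dropping the old `Options::useDataIC()` gate. A rough standalone mirror of that logic, with the option plumbing simplified to a bool parameter:

```cpp
#include <cassert>

// Rough mirror of JSC::JITCode::useDataIC after this patch. Baseline code
// always uses data ICs; DFG/FTL use them only where the port supports it,
// gated by a runtime option (modeled here as a plain bool).
enum class JITType { BaselineJIT, DFGJIT, FTLJIT };

static bool isBaselineCode(JITType type) { return type == JITType::BaselineJIT; }

static bool useDataIC(JITType type, bool useDataICInOptimizingJIT)
{
    if (isBaselineCode(type))
        return true;
#if defined(__x86_64__) || defined(__aarch64__) || defined(__riscv)
    return useDataICInOptimizingJIT;
#else
    (void)useDataICInOptimizingJIT;
    return false;
#endif
}
```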

Modified: trunk/Source/_javascript_Core/jit/JITInlineCacheGenerator.cpp (291735 => 291736)


--- trunk/Source/_javascript_Core/jit/JITInlineCacheGenerator.cpp	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Source/_javascript_Core/jit/JITInlineCacheGenerator.cpp	2022-03-23 06:45:52 UTC (rev 291736)
@@ -105,17 +105,20 @@
         m_stubInfo->m_codePtr = m_stubInfo->slowPathStartLocation;
 }
 
-void JITByIdGenerator::generateFastCommon(MacroAssembler& jit, size_t inlineICSize)
+void JITByIdGenerator::generateFastCommon(CCallHelpers& jit, size_t inlineICSize)
 {
-    // We generate the same code regardless of whether SharedIC is enabled because we still need to use InlineAccess
-    // for the performance reason.
     m_start = jit.label();
-    size_t startSize = jit.m_assembler.buffer().codeSize();
-    m_slowPathJump = jit.jump();
-    size_t jumpSize = jit.m_assembler.buffer().codeSize() - startSize;
-    size_t nopsToEmitInBytes = inlineICSize - jumpSize;
-    jit.emitNops(nopsToEmitInBytes);
-    ASSERT(jit.m_assembler.buffer().codeSize() - startSize == inlineICSize);
+    if (JITCode::useDataIC(m_jitType)) {
+        jit.move(CCallHelpers::TrustedImmPtr(m_stubInfo), m_stubInfo->m_stubInfoGPR);
+        jit.farJump(CCallHelpers::Address(m_stubInfo->m_stubInfoGPR, StructureStubInfo::offsetOfCodePtr()), JITStubRoutinePtrTag);
+    } else {
+        size_t startSize = jit.m_assembler.buffer().codeSize();
+        m_slowPathJump = jit.jump();
+        size_t jumpSize = jit.m_assembler.buffer().codeSize() - startSize;
+        size_t nopsToEmitInBytes = inlineICSize - jumpSize;
+        jit.emitNops(nopsToEmitInBytes);
+        ASSERT(jit.m_assembler.buffer().codeSize() - startSize == inlineICSize);
+    }
     m_done = jit.label();
 }
 
@@ -128,14 +131,8 @@
     RELEASE_ASSERT(base.payloadGPR() != value.tagGPR());
 }
 
-void JITGetByIdGenerator::generateFastPath(MacroAssembler& jit)
+static void generateGetByIdInlineAccess(CCallHelpers& jit, GPRReg stubInfoGPR, JSValueRegs baseJSR, GPRReg scratchGPR, JSValueRegs resultJSR)
 {
-    ASSERT(m_stubInfo);
-    generateFastCommon(jit, m_isLengthAccess ? InlineAccess::sizeForLengthAccess() : InlineAccess::sizeForPropertyAccess());
-}
-
-static void generateGetByIdInlineAccess(JIT& jit, GPRReg stubInfoGPR, JSValueRegs baseJSR, GPRReg scratchGPR, JSValueRegs resultJSR)
-{
     jit.load32(CCallHelpers::Address(baseJSR.payloadGPR(), JSCell::structureIDOffset()), scratchGPR);
     auto doInlineAccess = jit.branch32(CCallHelpers::Equal, scratchGPR, CCallHelpers::Address(stubInfoGPR, StructureStubInfo::offsetOfInlineAccessBaseStructureID()));
     jit.farJump(CCallHelpers::Address(stubInfoGPR, StructureStubInfo::offsetOfCodePtr()), JITStubRoutinePtrTag);
@@ -144,6 +141,21 @@
     jit.loadProperty(baseJSR.payloadGPR(), scratchGPR, resultJSR);
 }
 
+void JITGetByIdGenerator::generateFastPath(CCallHelpers& jit, GPRReg scratchGPR)
+{
+    ASSERT(m_stubInfo);
+    if (!JITCode::useDataIC(m_jitType)) {
+        generateFastCommon(jit, m_isLengthAccess ? InlineAccess::sizeForLengthAccess() : InlineAccess::sizeForPropertyAccess());
+        return;
+    }
+
+    ASSERT(scratchGPR != InvalidGPRReg);
+    m_start = jit.label();
+    jit.move(CCallHelpers::TrustedImmPtr(m_stubInfo), m_stubInfo->m_stubInfoGPR);
+    generateGetByIdInlineAccess(jit, m_stubInfo->m_stubInfoGPR, m_base, scratchGPR, m_value);
+    m_done = jit.label();
+}
+
 void JITGetByIdGenerator::generateBaselineDataICFastPath(JIT& jit, unsigned stubInfo, GPRReg stubInfoGPR)
 {
     RELEASE_ASSERT(JITCode::useDataIC(m_jitType));
@@ -174,10 +186,19 @@
     }
 }
 
-void JITGetByIdWithThisGenerator::generateFastPath(MacroAssembler& jit)
+void JITGetByIdWithThisGenerator::generateFastPath(CCallHelpers& jit, GPRReg scratchGPR)
 {
     ASSERT(m_stubInfo);
-    generateFastCommon(jit, InlineAccess::sizeForPropertyAccess());
+    if (!JITCode::useDataIC(m_jitType)) {
+        generateFastCommon(jit, InlineAccess::sizeForPropertyAccess());
+        return;
+    }
+
+    ASSERT(scratchGPR != InvalidGPRReg);
+    m_start = jit.label();
+    jit.move(CCallHelpers::TrustedImmPtr(m_stubInfo), m_stubInfo->m_stubInfoGPR);
+    generateGetByIdInlineAccess(jit, m_stubInfo->m_stubInfoGPR, m_base, scratchGPR, m_value);
+    m_done = jit.label();
 }
 
 void JITGetByIdWithThisGenerator::generateBaselineDataICFastPath(JIT& jit, unsigned stubInfo, GPRReg stubInfoGPR)
@@ -208,6 +229,16 @@
         m_stubInfo->usedRegisters.clear(scratch);
 }
 
+static void generatePutByIdInlineAccess(CCallHelpers& jit, GPRReg stubInfoGPR, JSValueRegs baseJSR, JSValueRegs valueJSR, GPRReg scratchGPR, GPRReg scratch2GPR)
+{
+    jit.load32(CCallHelpers::Address(baseJSR.payloadGPR(), JSCell::structureIDOffset()), scratchGPR);
+    auto doInlineAccess = jit.branch32(CCallHelpers::Equal, scratchGPR, CCallHelpers::Address(stubInfoGPR, StructureStubInfo::offsetOfInlineAccessBaseStructureID()));
+    jit.farJump(CCallHelpers::Address(stubInfoGPR, StructureStubInfo::offsetOfCodePtr()), JITStubRoutinePtrTag);
+    doInlineAccess.link(&jit);
+    jit.load32(CCallHelpers::Address(stubInfoGPR, StructureStubInfo::offsetOfByIdSelfOffset()), scratchGPR);
+    jit.storeProperty(valueJSR, baseJSR.payloadGPR(), scratchGPR, scratch2GPR);
+}
+
 void JITPutByIdGenerator::generateBaselineDataICFastPath(JIT& jit, unsigned stubInfo, GPRReg stubInfoGPR)
 {
     RELEASE_ASSERT(JITCode::useDataIC(m_jitType));
@@ -221,19 +252,23 @@
     using BaselineJITRegisters::PutById::FastPath::scratchGPR;
     using BaselineJITRegisters::PutById::FastPath::scratch2GPR;
 
-    jit.load32(CCallHelpers::Address(baseJSR.payloadGPR(), JSCell::structureIDOffset()), scratchGPR);
-    auto doInlineAccess = jit.branch32(CCallHelpers::Equal, scratchGPR, CCallHelpers::Address(stubInfoGPR, StructureStubInfo::offsetOfInlineAccessBaseStructureID()));
-    jit.farJump(CCallHelpers::Address(stubInfoGPR, StructureStubInfo::offsetOfCodePtr()), JITStubRoutinePtrTag);
-    doInlineAccess.link(&jit);
-    jit.load32(CCallHelpers::Address(stubInfoGPR, StructureStubInfo::offsetOfByIdSelfOffset()), scratchGPR);
-    jit.storeProperty(valueJSR, baseJSR.payloadGPR(), scratchGPR, scratch2GPR);
+    generatePutByIdInlineAccess(jit, stubInfoGPR, baseJSR, valueJSR, scratchGPR, scratch2GPR);
     m_done = jit.label();
 }
 
-void JITPutByIdGenerator::generateFastPath(MacroAssembler& jit)
+void JITPutByIdGenerator::generateFastPath(CCallHelpers& jit, GPRReg scratchGPR, GPRReg scratch2GPR)
 {
     ASSERT(m_stubInfo);
-    generateFastCommon(jit, InlineAccess::sizeForPropertyReplace());
+    if (!JITCode::useDataIC(m_jitType)) {
+        generateFastCommon(jit, InlineAccess::sizeForPropertyReplace());
+        return;
+    }
+
+    ASSERT(scratchGPR != InvalidGPRReg);
+    m_start = jit.label();
+    jit.move(CCallHelpers::TrustedImmPtr(m_stubInfo), m_stubInfo->m_stubInfoGPR);
+    generatePutByIdInlineAccess(jit, m_stubInfo->m_stubInfoGPR, m_base, m_value, scratchGPR, scratch2GPR);
+    m_done = jit.label();
 }
 
 V_JITOperation_GSsiJJC JITPutByIdGenerator::slowPathFunction()
@@ -281,7 +316,7 @@
     }
 }
 
-void JITDelByValGenerator::generateFastPath(MacroAssembler& jit)
+void JITDelByValGenerator::generateFastPath(CCallHelpers& jit)
 {
     ASSERT(m_stubInfo);
     m_start = jit.label();
@@ -323,7 +358,7 @@
     }
 }
 
-void JITDelByIdGenerator::generateFastPath(MacroAssembler& jit)
+void JITDelByIdGenerator::generateFastPath(CCallHelpers& jit)
 {
     ASSERT(m_stubInfo);
     m_start = jit.label();
@@ -360,7 +395,7 @@
     }
 }
 
-void JITInByValGenerator::generateFastPath(MacroAssembler& jit)
+void JITInByValGenerator::generateFastPath(CCallHelpers& jit)
 {
     ASSERT(m_stubInfo);
     m_start = jit.label();
@@ -392,10 +427,32 @@
     RELEASE_ASSERT(base.payloadGPR() != value.tagGPR());
 }
 
-void JITInByIdGenerator::generateFastPath(MacroAssembler& jit)
+static void generateInByIdInlineAccess(CCallHelpers& jit, GPRReg stubInfoGPR, JSValueRegs baseJSR, GPRReg scratchGPR, JSValueRegs resultJSR)
 {
+    jit.load32(CCallHelpers::Address(baseJSR.payloadGPR(), JSCell::structureIDOffset()), scratchGPR);
+    auto skipInlineAccess = jit.branch32(CCallHelpers::NotEqual, scratchGPR, CCallHelpers::Address(stubInfoGPR, StructureStubInfo::offsetOfInlineAccessBaseStructureID()));
+    jit.boxBoolean(true, resultJSR);
+    auto finished = jit.jump();
+
+    skipInlineAccess.link(&jit);
+    jit.farJump(CCallHelpers::Address(stubInfoGPR, StructureStubInfo::offsetOfCodePtr()), JITStubRoutinePtrTag);
+
+    finished.link(&jit);
+}
+
+void JITInByIdGenerator::generateFastPath(CCallHelpers& jit, GPRReg scratchGPR)
+{
     ASSERT(m_stubInfo);
-    generateFastCommon(jit, InlineAccess::sizeForPropertyAccess());
+    if (!JITCode::useDataIC(m_jitType)) {
+        generateFastCommon(jit, InlineAccess::sizeForPropertyAccess());
+        return;
+    }
+
+    ASSERT(scratchGPR != InvalidGPRReg);
+    m_start = jit.label();
+    jit.move(CCallHelpers::TrustedImmPtr(m_stubInfo), m_stubInfo->m_stubInfoGPR);
+    generateInByIdInlineAccess(jit, m_stubInfo->m_stubInfoGPR, m_base, scratchGPR, m_value);
+    m_done = jit.label();
 }
 
 void JITInByIdGenerator::generateBaselineDataICFastPath(JIT& jit, unsigned stubInfo, GPRReg stubInfoGPR)
@@ -410,17 +467,8 @@
     using BaselineJITRegisters::InById::resultJSR;
     using BaselineJITRegisters::InById::scratchGPR;
 
-    CCallHelpers::JumpList done;
+    generateInByIdInlineAccess(jit, stubInfoGPR, baseJSR, scratchGPR, resultJSR);
 
-    jit.load32(CCallHelpers::Address(baseJSR.payloadGPR(), JSCell::structureIDOffset()), scratchGPR);
-    auto skipInlineAccess = jit.branch32(CCallHelpers::NotEqual, scratchGPR, CCallHelpers::Address(stubInfoGPR, StructureStubInfo::offsetOfInlineAccessBaseStructureID()));
-    jit.boxBoolean(true, resultJSR);
-    auto finished = jit.jump();
-
-    skipInlineAccess.link(&jit);
-    jit.farJump(CCallHelpers::Address(stubInfoGPR, StructureStubInfo::offsetOfCodePtr()), JITStubRoutinePtrTag);
-
-    finished.link(&jit);
     m_done = jit.label();
 }
 
@@ -452,7 +500,7 @@
     }
 }
 
-void JITInstanceOfGenerator::generateFastPath(MacroAssembler& jit)
+void JITInstanceOfGenerator::generateFastPath(CCallHelpers& jit)
 {
     ASSERT(m_stubInfo);
     m_start = jit.label();
@@ -491,7 +539,7 @@
     }
 }
 
-void JITGetByValGenerator::generateFastPath(MacroAssembler& jit)
+void JITGetByValGenerator::generateFastPath(CCallHelpers& jit)
 {
     ASSERT(m_stubInfo);
     m_start = jit.label();
@@ -531,7 +579,7 @@
     }
 }
 
-void JITPutByValGenerator::generateFastPath(MacroAssembler& jit)
+void JITPutByValGenerator::generateFastPath(CCallHelpers& jit)
 {
     ASSERT(m_stubInfo);
     m_start = jit.label();
@@ -569,7 +617,7 @@
     }
 }
 
-void JITPrivateBrandAccessGenerator::generateFastPath(MacroAssembler& jit)
+void JITPrivateBrandAccessGenerator::generateFastPath(CCallHelpers& jit)
 {
     ASSERT(m_stubInfo);
     m_start = jit.label();
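
The shared inline-access helpers factored out above (`generateGetByIdInlineAccess`, `generatePutByIdInlineAccess`, `generateInByIdInlineAccess`) all emit the same shape: compare the base cell's structure ID against the one cached in the `StructureStubInfo`, perform the self access on a hit, and far-jump through the stub info's code pointer on a miss. A minimal model of the GetById variant, using hypothetical stand-in types rather than real JSC structures:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical model of the generateGetByIdInlineAccess fast path: check
// the cached structure ID, load the property at the cached offset on a hit,
// and dispatch through the stub info's code pointer on a miss.
struct Object {
    uint32_t structureID;
    int64_t properties[4];
};

struct StubInfo {
    uint32_t inlineAccessBaseStructureID;   // offsetOfInlineAccessBaseStructureID
    uint32_t byIdSelfOffset;                // offsetOfByIdSelfOffset
    int64_t (*codePtr)(StubInfo*, Object*); // offsetOfCodePtr, the stub target
};

static int64_t genericStub(StubInfo*, Object*)
{
    return -1; // stand-in for the out-of-line IC stub
}

static int64_t getByIdFastPath(StubInfo* stubInfo, Object* base)
{
    if (base->structureID == stubInfo->inlineAccessBaseStructureID)
        return base->properties[stubInfo->byIdSelfOffset]; // inline self access
    return stubInfo->codePtr(stubInfo, base);              // miss: far jump
}
```

The same skeleton explains why the data-IC fast paths need a scratch GPR: the loaded structure ID and property offset each live in `scratchGPR` between the compare and the load.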

Modified: trunk/Source/_javascript_Core/jit/JITInlineCacheGenerator.h (291735 => 291736)


--- trunk/Source/_javascript_Core/jit/JITInlineCacheGenerator.h	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Source/_javascript_Core/jit/JITInlineCacheGenerator.h	2022-03-23 06:45:52 UTC (rev 291736)
@@ -28,6 +28,7 @@
 #if ENABLE(JIT)
 
 #include "AssemblyHelpers.h"
+#include "CCallHelpers.h"
 #include "CodeOrigin.h"
 #include "JITOperationValidation.h"
 #include "JITOperations.h"
@@ -56,13 +57,13 @@
 public:
     StructureStubInfo* stubInfo() const { return m_stubInfo; }
 
-    void reportSlowPathCall(MacroAssembler::Label slowPathBegin, MacroAssembler::Call call)
+    void reportSlowPathCall(CCallHelpers::Label slowPathBegin, CCallHelpers::Call call)
     {
         m_slowPathBegin = slowPathBegin;
         m_slowPathCall = call;
     }
     
-    MacroAssembler::Label slowPathBegin() const { return m_slowPathBegin; }
+    CCallHelpers::Label slowPathBegin() const { return m_slowPathBegin; }
 
     void finalize(
         LinkBuffer& fastPathLinkBuffer, LinkBuffer& slowPathLinkBuffer,
@@ -78,10 +79,10 @@
     StructureStubInfo* m_stubInfo { nullptr };
 
 public:
-    MacroAssembler::Label m_start;
-    MacroAssembler::Label m_done;
-    MacroAssembler::Label m_slowPathBegin;
-    MacroAssembler::Call m_slowPathCall;
+    CCallHelpers::Label m_start;
+    CCallHelpers::Label m_done;
+    CCallHelpers::Label m_slowPathBegin;
+    CCallHelpers::Call m_slowPathCall;
 };
 
 class JITByIdGenerator : public JITInlineCacheGenerator {
@@ -93,7 +94,7 @@
         JSValueRegs base, JSValueRegs value, GPRReg stubInfoGPR);
 
 public:
-    MacroAssembler::Jump slowPathJump() const
+    CCallHelpers::Jump slowPathJump() const
     {
         ASSERT(m_slowPathJump.isSet());
         return m_slowPathJump;
@@ -104,13 +105,13 @@
     
 protected:
     
-    void generateFastCommon(MacroAssembler&, size_t size);
+    void generateFastCommon(CCallHelpers&, size_t size);
     
     JSValueRegs m_base;
     JSValueRegs m_value;
 
 public:
-    MacroAssembler::Jump m_slowPathJump;
+    CCallHelpers::Jump m_slowPathJump;
 };
 
 class JITGetByIdGenerator final : public JITByIdGenerator {
@@ -121,7 +122,7 @@
         CodeBlock*, Bag<StructureStubInfo>*, JITType, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, CacheableIdentifier,
         JSValueRegs base, JSValueRegs value, GPRReg stubInfoGPR, AccessType);
     
-    void generateFastPath(MacroAssembler&);
+    void generateFastPath(CCallHelpers&, GPRReg scratchGPR);
     void generateBaselineDataICFastPath(JIT&, unsigned stubInfoConstant, GPRReg stubInfoGPR);
 
 private:
@@ -136,8 +137,8 @@
         CodeBlock*, Bag<StructureStubInfo>*, JITType, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, CacheableIdentifier,
         JSValueRegs value, JSValueRegs base, JSValueRegs thisRegs, GPRReg stubInfoGPR);
 
+    void generateFastPath(CCallHelpers&, GPRReg scratchGPR);
     void generateBaselineDataICFastPath(JIT&, unsigned stubInfoConstant, GPRReg stubInfoGPR);
-    void generateFastPath(MacroAssembler&);
 };
 
 class JITPutByIdGenerator final : public JITByIdGenerator {
@@ -148,7 +149,7 @@
         CodeBlock*, Bag<StructureStubInfo>*, JITType, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, CacheableIdentifier,
         JSValueRegs base, JSValueRegs value, GPRReg stubInfoGPR, GPRReg scratch, ECMAMode, PutKind);
     
-    void generateFastPath(MacroAssembler&);
+    void generateFastPath(CCallHelpers&, GPRReg scratchGPR, GPRReg scratch2GPR);
     void generateBaselineDataICFastPath(JIT&, unsigned stubInfoConstant, GPRReg stubInfoGPR);
     
     V_JITOperation_GSsiJJC slowPathFunction();
@@ -167,7 +168,7 @@
         CodeBlock*, Bag<StructureStubInfo>*, JITType, CodeOrigin, CallSiteIndex, AccessType, const RegisterSet& usedRegisters,
         JSValueRegs base, JSValueRegs property, JSValueRegs result, GPRReg arrayProfileGPR, GPRReg stubInfoGPR);
 
-    MacroAssembler::Jump slowPathJump() const
+    CCallHelpers::Jump slowPathJump() const
     {
         ASSERT(m_slowPathJump.m_jump.isSet());
         return m_slowPathJump.m_jump;
@@ -175,12 +176,12 @@
 
     void finalize(LinkBuffer& fastPathLinkBuffer, LinkBuffer& slowPathLinkBuffer);
 
-    void generateFastPath(MacroAssembler&);
+    void generateFastPath(CCallHelpers&);
 
     JSValueRegs m_base;
     JSValueRegs m_value;
 
-    MacroAssembler::PatchableJump m_slowPathJump;
+    CCallHelpers::PatchableJump m_slowPathJump;
 };
 
 class JITDelByValGenerator final : public JITInlineCacheGenerator {
@@ -192,7 +193,7 @@
         CodeBlock*, Bag<StructureStubInfo>*, JITType, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters,
         JSValueRegs base, JSValueRegs property, JSValueRegs result, GPRReg stubInfoGPR, GPRReg scratch);
 
-    MacroAssembler::Jump slowPathJump() const
+    CCallHelpers::Jump slowPathJump() const
     {
         ASSERT(m_slowPathJump.m_jump.isSet());
         return m_slowPathJump.m_jump;
@@ -201,9 +202,9 @@
     void finalize(
         LinkBuffer& fastPathLinkBuffer, LinkBuffer& slowPathLinkBuffer);
 
-    void generateFastPath(MacroAssembler&);
+    void generateFastPath(CCallHelpers&);
 
-    MacroAssembler::PatchableJump m_slowPathJump;
+    CCallHelpers::PatchableJump m_slowPathJump;
 };
 
 class JITDelByIdGenerator final : public JITInlineCacheGenerator {
@@ -215,7 +216,7 @@
         CodeBlock*, Bag<StructureStubInfo>*, JITType, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, CacheableIdentifier,
         JSValueRegs base, JSValueRegs result, GPRReg stubInfoGPR, GPRReg scratch);
 
-    MacroAssembler::Jump slowPathJump() const
+    CCallHelpers::Jump slowPathJump() const
     {
         ASSERT(m_slowPathJump.m_jump.isSet());
         return m_slowPathJump.m_jump;
@@ -224,9 +225,9 @@
     void finalize(
         LinkBuffer& fastPathLinkBuffer, LinkBuffer& slowPathLinkBuffer);
 
-    void generateFastPath(MacroAssembler&);
+    void generateFastPath(CCallHelpers&);
 
-    MacroAssembler::PatchableJump m_slowPathJump;
+    CCallHelpers::PatchableJump m_slowPathJump;
 };
 
 class JITInByValGenerator : public JITInlineCacheGenerator {
@@ -238,7 +239,7 @@
         CodeBlock*, Bag<StructureStubInfo>*, JITType, CodeOrigin, CallSiteIndex, AccessType, const RegisterSet& usedRegisters,
         JSValueRegs base, JSValueRegs property, JSValueRegs result, GPRReg stubInfoGPR);
 
-    MacroAssembler::Jump slowPathJump() const
+    CCallHelpers::Jump slowPathJump() const
     {
         ASSERT(m_slowPathJump.m_jump.isSet());
         return m_slowPathJump.m_jump;
@@ -247,9 +248,9 @@
     void finalize(
         LinkBuffer& fastPathLinkBuffer, LinkBuffer& slowPathLinkBuffer);
 
-    void generateFastPath(MacroAssembler&);
+    void generateFastPath(CCallHelpers&);
 
-    MacroAssembler::PatchableJump m_slowPathJump;
+    CCallHelpers::PatchableJump m_slowPathJump;
 };
 
 class JITInByIdGenerator final : public JITByIdGenerator {
@@ -260,7 +261,7 @@
         CodeBlock*, Bag<StructureStubInfo>*, JITType, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, CacheableIdentifier,
         JSValueRegs base, JSValueRegs value, GPRReg stubInfoGPR);
 
-    void generateFastPath(MacroAssembler&);
+    void generateFastPath(CCallHelpers&, GPRReg scratchGPR);
     void generateBaselineDataICFastPath(JIT&, unsigned stubInfoConstant, GPRReg stubInfoGPR);
 };
 
@@ -274,9 +275,9 @@
         GPRReg value, GPRReg prototype, GPRReg stubInfoGPR, GPRReg scratch1, GPRReg scratch2,
         bool prototypeIsKnownObject = false);
     
-    void generateFastPath(MacroAssembler&);
+    void generateFastPath(CCallHelpers&);
 
-    MacroAssembler::Jump slowPathJump() const
+    CCallHelpers::Jump slowPathJump() const
     {
         ASSERT(m_slowPathJump.m_jump.isSet());
         return m_slowPathJump.m_jump;
@@ -284,7 +285,7 @@
 
     void finalize(LinkBuffer& fastPathLinkBuffer, LinkBuffer& slowPathLinkBuffer);
 
-    MacroAssembler::PatchableJump m_slowPathJump;
+    CCallHelpers::PatchableJump m_slowPathJump;
 };
 
 class JITGetByValGenerator final : public JITInlineCacheGenerator {
@@ -296,7 +297,7 @@
         CodeBlock*, Bag<StructureStubInfo>*, JITType, CodeOrigin, CallSiteIndex, AccessType, const RegisterSet& usedRegisters,
         JSValueRegs base, JSValueRegs property, JSValueRegs result, GPRReg stubInfoGPR);
 
-    MacroAssembler::Jump slowPathJump() const
+    CCallHelpers::Jump slowPathJump() const
     {
         ASSERT(m_slowPathJump.m_jump.isSet());
         return m_slowPathJump.m_jump;
@@ -305,12 +306,12 @@
     void finalize(
         LinkBuffer& fastPathLinkBuffer, LinkBuffer& slowPathLinkBuffer);
     
-    void generateFastPath(MacroAssembler&);
+    void generateFastPath(CCallHelpers&);
 
     JSValueRegs m_base;
     JSValueRegs m_result;
 
-    MacroAssembler::PatchableJump m_slowPathJump;
+    CCallHelpers::PatchableJump m_slowPathJump;
 };
 
 class JITPrivateBrandAccessGenerator final : public JITInlineCacheGenerator {
@@ -322,7 +323,7 @@
         CodeBlock*, Bag<StructureStubInfo>*, JITType, CodeOrigin, CallSiteIndex, AccessType, const RegisterSet& usedRegisters,
         JSValueRegs base, JSValueRegs brand, GPRReg stubInfoGPR);
 
-    MacroAssembler::Jump slowPathJump() const
+    CCallHelpers::Jump slowPathJump() const
     {
         ASSERT(m_slowPathJump.m_jump.isSet());
         return m_slowPathJump.m_jump;
@@ -331,9 +332,9 @@
     void finalize(
         LinkBuffer& fastPathLinkBuffer, LinkBuffer& slowPathLinkBuffer);
     
-    void generateFastPath(MacroAssembler&);
+    void generateFastPath(CCallHelpers&);
 
-    MacroAssembler::PatchableJump m_slowPathJump;
+    CCallHelpers::PatchableJump m_slowPathJump;
 };
 
 template<typename VectorType>

Modified: trunk/Source/_javascript_Core/runtime/Options.cpp (291735 => 291736)


--- trunk/Source/_javascript_Core/runtime/Options.cpp	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Source/_javascript_Core/runtime/Options.cpp	2022-03-23 06:45:52 UTC (rev 291736)
@@ -434,9 +434,6 @@
     Options::useConcurrentGC() = false;
 #endif
 
-    if (!Options::useDataIC())
-        Options::useDataICInOptimizingJIT() = false;
-
     // At initialization time, we may decide that useJIT should be false for any
     // number of reasons (including failing to allocate JIT memory), and therefore,
     // will / should not be able to enable any JIT related services.

Modified: trunk/Source/_javascript_Core/runtime/OptionsList.h (291735 => 291736)


--- trunk/Source/_javascript_Core/runtime/OptionsList.h	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Source/_javascript_Core/runtime/OptionsList.h	2022-03-23 06:45:52 UTC (rev 291736)
@@ -531,7 +531,6 @@
     v(Bool, dumpBaselineJITSizeStatistics, false, Normal, nullptr) \
     v(Bool, dumpDFGJITSizeStatistics, false, Normal, nullptr) \
     v(Bool, verboseExecutablePoolAllocation, false, Normal, nullptr) \
-    v(Bool, useDataIC, false, Normal, nullptr) \
     v(Bool, useDataICInOptimizingJIT, false, Normal, nullptr) \
     v(Bool, useDataICSharing, false, Normal, nullptr) \
     v(Bool, useBaselineJITCodeSharing, is64Bit(), Normal, nullptr) \

Modified: trunk/Tools/ChangeLog (291735 => 291736)


--- trunk/Tools/ChangeLog	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Tools/ChangeLog	2022-03-23 06:45:52 UTC (rev 291736)
@@ -1,3 +1,12 @@
+2022-03-22  Yusuke Suzuki  <ysuz...@apple.com>
+
+        [JSC] Test DFG / FTL DataIC
+        https://bugs.webkit.org/show_bug.cgi?id=231224
+
+        Reviewed by Saam Barati.
+
+        * Scripts/run-jsc-stress-tests:
+
 2022-03-22  Alex Christensen  <achristen...@webkit.org>
 
         Implement PCM to SKAdNetwork bridge

Modified: trunk/Tools/Scripts/run-jsc-stress-tests (291735 => 291736)


--- trunk/Tools/Scripts/run-jsc-stress-tests	2022-03-23 05:26:13 UTC (rev 291735)
+++ trunk/Tools/Scripts/run-jsc-stress-tests	2022-03-23 06:45:52 UTC (rev 291736)
@@ -1091,7 +1091,7 @@
         "FTLNoCJIT",
         "misc-ftl-no-cjit",
         [
-            "--useDataIC=true",
+            "--useDataICInOptimizingJIT=true",
         ] +
         FTL_OPTIONS +
         NO_CJIT_OPTIONS
@@ -1117,7 +1117,7 @@
             "--validateBCE=true",
             "--useSamplingProfiler=true",
             "--airForceIRCAllocator=true",
-            "--useDataIC=true",
+            "--useDataICInOptimizingJIT=true",
         ] +
         FTL_OPTIONS +
         NO_CJIT_OPTIONS
@@ -1178,7 +1178,7 @@
             "--airForceBriggsAllocator=true",
             "--useRandomizingExecutableIslandAllocation=true",
             "--forcePolyProto=true",
-            "--useDataIC=true",
+            "--useDataICInOptimizingJIT=true",
         ] +
         FTL_OPTIONS +
         EAGER_OPTIONS +