Log Message
CStack Branch: Fix baseline JIT for basic operation https://bugs.webkit.org/show_bug.cgi?id=125470
Not yet reviewed.

Fixed compileOpCall and its slow case to properly adjust the stack pointer before and after a call.

Cleaned up the calling convention in the various thunks. Adjusted the stack pointer at the end of the arity fixup thunk to account for the frame moving.

Added ctiNativeCallFallback() thunk generator for when another thunk that can't perform its operation inline needs to make a native call. This thunk generator differs from ctiNativeCall() in that it doesn't emit a function prologue, thus allowing the original thunk to jump to the "fallback" thunk. I'm open to another name besides "fallback". Maybe "ctiNativeTailCall()".

Fixed the OSR entry handling in the LLInt prologue macro to properly account for the callee saving the caller frame pointer.

Added a stack alignment check function for use in debug builds to find and break if the stack pointer is not appropriately aligned.

* jit/AssemblyHelpers.h: (JSC::AssemblyHelpers::checkStackPointerAlignment): * jit/JIT.cpp: (JSC::JIT::privateCompile): * jit/JIT.h: (JSC::JIT::frameRegisterCountFor): * jit/JITCall.cpp: (JSC::JIT::compileOpCall): (JSC::JIT::compileOpCallSlowCase): * jit/JITOpcodes.cpp: (JSC::JIT::emit_op_ret): (JSC::JIT::emit_op_enter): * jit/JITThunks.cpp: (JSC::JITThunks::ctiNativeCallFallback): * jit/JITThunks.h: * jit/ThunkGenerators.cpp: (JSC::slowPathFor): (JSC::nativeForGenerator): (JSC::nativeCallFallbackGenerator): (JSC::arityFixup): (JSC::charCodeAtThunkGenerator): (JSC::charAtThunkGenerator): (JSC::fromCharCodeThunkGenerator): (JSC::sqrtThunkGenerator): (JSC::floorThunkGenerator): (JSC::ceilThunkGenerator): (JSC::roundThunkGenerator): (JSC::expThunkGenerator): (JSC::logThunkGenerator): (JSC::absThunkGenerator): (JSC::powThunkGenerator): (JSC::imulThunkGenerator): (JSC::arrayIteratorNextThunkGenerator): * jit/ThunkGenerators.h: * llint/LowLevelInterpreter.asm: * llint/LowLevelInterpreter64.asm:
Modified Paths
- branches/jsCStack/Source/_javascript_Core/ChangeLog
- branches/jsCStack/Source/_javascript_Core/jit/AssemblyHelpers.h
- branches/jsCStack/Source/_javascript_Core/jit/JIT.cpp
- branches/jsCStack/Source/_javascript_Core/jit/JIT.h
- branches/jsCStack/Source/_javascript_Core/jit/JITCall.cpp
- branches/jsCStack/Source/_javascript_Core/jit/JITOpcodes.cpp
- branches/jsCStack/Source/_javascript_Core/jit/JITThunks.cpp
- branches/jsCStack/Source/_javascript_Core/jit/JITThunks.h
- branches/jsCStack/Source/_javascript_Core/jit/ThunkGenerators.cpp
- branches/jsCStack/Source/_javascript_Core/jit/ThunkGenerators.h
- branches/jsCStack/Source/_javascript_Core/llint/LowLevelInterpreter.asm
- branches/jsCStack/Source/_javascript_Core/llint/LowLevelInterpreter64.asm
Diff
Modified: branches/jsCStack/Source/_javascript_Core/ChangeLog (160339 => 160340)
--- branches/jsCStack/Source/_javascript_Core/ChangeLog 2013-12-09 23:48:59 UTC (rev 160339)
+++ branches/jsCStack/Source/_javascript_Core/ChangeLog 2013-12-09 23:50:26 UTC (rev 160340)
@@ -1,3 +1,65 @@
+2013-12-09 Michael Saboff <msab...@apple.com>
+
+ CStack Branch: Fix baseline JIT for basic operation
+ https://bugs.webkit.org/show_bug.cgi?id=125470
+
+ Not yet reviewed.
+
+ Fixed compileOpCall and its slow case to properly adjust the stack pointer before
+ and after a call.
+
+ Cleaned up the calling convention in the various thunks. Adjusted the stack
+ pointer at the end of the arity fixup thunk to account for the frame moving.
+
+ Added ctiNativeCallFallback() thunk generator for when another thunk that can't
+ perform its operation inline needs to make a native call. This thunk generator
+ differs from ctiNativeCall() in that it doesn't emit a function prologue, thus
+ allowing the original thunk to jump to the "fallback" thunk. I'm open to another
+ name besides "fallback". Maybe "ctiNativeTailCall()".
+
+ Fixed the OSR entry handling in the LLInt prologue macro to properly account
+ for the callee saving the caller frame pointer.
+
+ Added a stack alignment check function for use in debug builds to find and break
+ if the stack pointer is not appropriately aligned.
+
+ * jit/AssemblyHelpers.h:
+ (JSC::AssemblyHelpers::checkStackPointerAlignment):
+ * jit/JIT.cpp:
+ (JSC::JIT::privateCompile):
+ * jit/JIT.h:
+ (JSC::JIT::frameRegisterCountFor):
+ * jit/JITCall.cpp:
+ (JSC::JIT::compileOpCall):
+ (JSC::JIT::compileOpCallSlowCase):
+ * jit/JITOpcodes.cpp:
+ (JSC::JIT::emit_op_ret):
+ (JSC::JIT::emit_op_enter):
+ * jit/JITThunks.cpp:
+ (JSC::JITThunks::ctiNativeCallFallback):
+ * jit/JITThunks.h:
+ * jit/ThunkGenerators.cpp:
+ (JSC::slowPathFor):
+ (JSC::nativeForGenerator):
+ (JSC::nativeCallFallbackGenerator):
+ (JSC::arityFixup):
+ (JSC::charCodeAtThunkGenerator):
+ (JSC::charAtThunkGenerator):
+ (JSC::fromCharCodeThunkGenerator):
+ (JSC::sqrtThunkGenerator):
+ (JSC::floorThunkGenerator):
+ (JSC::ceilThunkGenerator):
+ (JSC::roundThunkGenerator):
+ (JSC::expThunkGenerator):
+ (JSC::logThunkGenerator):
+ (JSC::absThunkGenerator):
+ (JSC::powThunkGenerator):
+ (JSC::imulThunkGenerator):
+ (JSC::arrayIteratorNextThunkGenerator):
+ * jit/ThunkGenerators.h:
+ * llint/LowLevelInterpreter.asm:
+ * llint/LowLevelInterpreter64.asm:
+
2013-12-06 Michael Saboff <msab...@apple.com>
CStack Branch: Fix Specialized Thunks to use function prologues and epilogues
Modified: branches/jsCStack/Source/_javascript_Core/jit/AssemblyHelpers.h (160339 => 160340)
--- branches/jsCStack/Source/_javascript_Core/jit/AssemblyHelpers.h 2013-12-09 23:48:59 UTC (rev 160339)
+++ branches/jsCStack/Source/_javascript_Core/jit/AssemblyHelpers.h 2013-12-09 23:50:26 UTC (rev 160340)
@@ -60,6 +60,15 @@
AssemblerType_T& assembler() { return m_assembler; }
#if CPU(X86_64) || CPU(X86)
+ void checkStackPointerAlignment()
+ {
+#ifndef NDEBUG
+ Jump stackPointerAligned = branchTestPtr(Zero, stackPointerRegister, TrustedImm32(0xf));
+ breakpoint();
+ stackPointerAligned.link(this);
+#endif
+ }
+
void emitFunctionPrologue()
{
push(framePointerRegister);
Modified: branches/jsCStack/Source/_javascript_Core/jit/JIT.cpp (160339 => 160340)
--- branches/jsCStack/Source/_javascript_Core/jit/JIT.cpp 2013-12-09 23:48:59 UTC (rev 160339)
+++ branches/jsCStack/Source/_javascript_Core/jit/JIT.cpp 2013-12-09 23:50:26 UTC (rev 160340)
@@ -543,7 +543,11 @@
}
Label functionBody = label();
-
+
+ checkStackPointerAlignment();
+ addPtr(TrustedImm32(-frameRegisterCountFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister);
+ checkStackPointerAlignment();
+
privateCompileMainPass();
privateCompileLinkPass();
privateCompileSlowCases();
@@ -555,6 +559,7 @@
if (m_codeBlock->codeType() == FunctionCode) {
stackCheck.link(this);
m_bytecodeOffset = 0;
+ // &&&& This may need to have some stack space allocated to make the call
callOperationWithCallFrameRollbackOnException(operationStackCheck, m_codeBlock);
#ifndef NDEBUG
m_bytecodeOffset = (unsigned)-1; // Reset this, in order to guard its use with ASSERTs.
Modified: branches/jsCStack/Source/_javascript_Core/jit/JIT.h (160339 => 160340)
--- branches/jsCStack/Source/_javascript_Core/jit/JIT.h 2013-12-09 23:48:59 UTC (rev 160339)
+++ branches/jsCStack/Source/_javascript_Core/jit/JIT.h 2013-12-09 23:50:26 UTC (rev 160340)
@@ -246,6 +246,7 @@
static unsigned frameRegisterCountFor(CodeBlock* codeBlock)
{
+ ASSERT(!(codeBlock->m_numCalleeRegisters & 1));
return codeBlock->m_numCalleeRegisters;
}
Modified: branches/jsCStack/Source/_javascript_Core/jit/JITCall.cpp (160339 => 160340)
--- branches/jsCStack/Source/_javascript_Core/jit/JITCall.cpp 2013-12-09 23:48:59 UTC (rev 160339)
+++ branches/jsCStack/Source/_javascript_Core/jit/JITCall.cpp 2013-12-09 23:50:26 UTC (rev 160340)
@@ -179,9 +179,7 @@
store32(TrustedImm32(locationBits), Address(callFrameRegister, JSStack::ArgumentCount * static_cast<int>(sizeof(Register)) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag)));
emitGetVirtualRegister(callee, regT0); // regT0 holds callee.
- store64(callFrameRegister, Address(regT1, CallFrame::callerFrameOffset()));
store64(regT0, Address(regT1, JSStack::Callee * static_cast<int>(sizeof(Register))));
- move(regT1, callFrameRegister);
if (opcodeID == op_call_eval) {
compileCallEval(instruction);
@@ -198,10 +196,15 @@
m_callStructureStubCompilationInfo[callLinkInfoIndex].callType = CallLinkInfo::callTypeFor(opcodeID);
m_callStructureStubCompilationInfo[callLinkInfoIndex].bytecodeIndex = m_bytecodeOffset;
- loadPtr(Address(regT0, OBJECT_OFFSETOF(JSFunction, m_scope)), regT1);
- emitPutToCallFrameHeader(regT1, JSStack::ScopeChain);
+ loadPtr(Address(regT0, OBJECT_OFFSETOF(JSFunction, m_scope)), regT2);
+ store64(regT2, Address(regT1, JSStack::ScopeChain * sizeof(Register)));
+ addPtr(TrustedImm32(16), regT1, stackPointerRegister);
+
m_callStructureStubCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedCall();
+ addPtr(TrustedImm32(-frameRegisterCountFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister);
+ checkStackPointerAlignment();
+
sampleCodeBlock(m_codeBlock);
emitPutCallResult(instruction);
@@ -216,8 +219,13 @@
linkSlowCase(iter);
+ addPtr(TrustedImm32(16), regT1, stackPointerRegister);
+
m_callStructureStubCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(opcodeID == op_construct ? m_vm->getCTIStub(linkConstructThunkGenerator).code() : m_vm->getCTIStub(linkCallThunkGenerator).code());
+ addPtr(TrustedImm32(-frameRegisterCountFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister);
+ checkStackPointerAlignment();
+
sampleCodeBlock(m_codeBlock);
emitPutCallResult(instruction);
Modified: branches/jsCStack/Source/_javascript_Core/jit/JITOpcodes.cpp (160339 => 160340)
--- branches/jsCStack/Source/_javascript_Core/jit/JITOpcodes.cpp 2013-12-09 23:48:59 UTC (rev 160339)
+++ branches/jsCStack/Source/_javascript_Core/jit/JITOpcodes.cpp 2013-12-09 23:50:26 UTC (rev 160340)
@@ -262,6 +262,7 @@
// Return the result in %eax.
emitGetVirtualRegister(currentInstruction[1].u.operand, returnValueGPR);
+ checkStackPointerAlignment(); // &&&&
emitFunctionEpilogue();
ret();
}
@@ -777,6 +778,7 @@
void JIT::emit_op_enter(Instruction*)
{
+ checkStackPointerAlignment(); // &&&&
emitEnterOptimizationCheck();
// Even though CTI doesn't use them, we initialize our constant
Modified: branches/jsCStack/Source/_javascript_Core/jit/JITThunks.cpp (160339 => 160340)
--- branches/jsCStack/Source/_javascript_Core/jit/JITThunks.cpp 2013-12-09 23:48:59 UTC (rev 160339)
+++ branches/jsCStack/Source/_javascript_Core/jit/JITThunks.cpp 2013-12-09 23:50:26 UTC (rev 160340)
@@ -52,6 +52,7 @@
#endif
return ctiStub(vm, nativeCallGenerator).code();
}
+
MacroAssemblerCodePtr JITThunks::ctiNativeConstruct(VM* vm)
{
#if ENABLE(LLINT)
@@ -61,6 +62,12 @@
return ctiStub(vm, nativeConstructGenerator).code();
}
+MacroAssemblerCodePtr JITThunks::ctiNativeCallFallback(VM* vm)
+{
+ ASSERT(vm->canUseJIT());
+ return ctiStub(vm, nativeCallFallbackGenerator).code();
+}
+
MacroAssemblerCodeRef JITThunks::ctiStub(VM* vm, ThunkGenerator generator)
{
Locker locker(m_lock);
Modified: branches/jsCStack/Source/_javascript_Core/jit/JITThunks.h (160339 => 160340)
--- branches/jsCStack/Source/_javascript_Core/jit/JITThunks.h 2013-12-09 23:48:59 UTC (rev 160339)
+++ branches/jsCStack/Source/_javascript_Core/jit/JITThunks.h 2013-12-09 23:50:26 UTC (rev 160340)
@@ -54,6 +54,7 @@
MacroAssemblerCodePtr ctiNativeCall(VM*);
MacroAssemblerCodePtr ctiNativeConstruct(VM*);
+ MacroAssemblerCodePtr ctiNativeCallFallback(VM*);
MacroAssemblerCodeRef ctiStub(VM*, ThunkGenerator);
Modified: branches/jsCStack/Source/_javascript_Core/jit/ThunkGenerators.cpp (160339 => 160340)
--- branches/jsCStack/Source/_javascript_Core/jit/ThunkGenerators.cpp 2013-12-09 23:48:59 UTC (rev 160339)
+++ branches/jsCStack/Source/_javascript_Core/jit/ThunkGenerators.cpp 2013-12-09 23:50:26 UTC (rev 160340)
@@ -82,8 +82,6 @@
static void slowPathFor(
CCallHelpers& jit, VM* vm, P_JITOperation_E slowPathFunction)
{
- // &&&& FIXME: Need to cleanup frame below like emitFunctionEpilogue()
- jit.breakpoint();
jit.emitFunctionPrologue();
jit.storePtr(GPRInfo::callFrameRegister, &vm->topCallFrame);
jit.setupArgumentsExecState();
@@ -95,11 +93,8 @@
// 1) Exception throwing thunk.
// 2) Host call return value returner thingy.
// 3) The function to call.
- jit.emitGetReturnPCFromCallFrameHeaderPtr(GPRInfo::nonPreservedNonReturnGPR);
- jit.emitPutReturnPCToCallFrameHeader(CCallHelpers::TrustedImmPtr(0));
- emitPointerValidation(jit, GPRInfo::nonPreservedNonReturnGPR);
- jit.restoreReturnAddressBeforeReturn(GPRInfo::nonPreservedNonReturnGPR);
emitPointerValidation(jit, GPRInfo::returnValueGPR);
+ jit.emitFunctionEpilogue();
jit.jump(GPRInfo::returnValueGPR);
}
@@ -245,13 +240,15 @@
return virtualForThunkGenerator(vm, CodeForConstruct);
}
-static MacroAssemblerCodeRef nativeForGenerator(VM* vm, CodeSpecializationKind kind)
+static MacroAssemblerCodeRef nativeForGenerator(VM* vm, CodeSpecializationKind kind, bool fallBack = false)
{
int executableOffsetToFunction = NativeExecutable::offsetOfNativeFunctionFor(kind);
JSInterfaceJIT jit(vm);
- jit.emitFunctionPrologue();
+ if (!fallBack)
+ jit.emitFunctionPrologue();
+
jit.emitPutImmediateToCallFrameHeader(0, JSStack::CodeBlock);
jit.storePtr(JSInterfaceJIT::callFrameRegister, &vm->topCallFrame);
@@ -282,23 +279,15 @@
jit.emitGetCallerFrameFromCallFrameHeaderPtr(JSInterfaceJIT::regT0);
jit.emitGetFromCallFrameHeaderPtr(JSStack::ScopeChain, JSInterfaceJIT::regT1, JSInterfaceJIT::regT0);
jit.emitPutCellToCallFrameHeader(JSInterfaceJIT::regT1, JSStack::ScopeChain);
-
- jit.peek(JSInterfaceJIT::regT1);
- jit.emitPutReturnPCToCallFrameHeader(JSInterfaceJIT::regT1);
-
#if !OS(WINDOWS)
// Calling convention: f(edi, esi, edx, ecx, ...);
// Host function signature: f(ExecState*);
jit.move(JSInterfaceJIT::callFrameRegister, X86Registers::edi);
- jit.subPtr(JSInterfaceJIT::TrustedImm32(16 - sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister); // Align stack after call.
-
jit.emitGetFromCallFrameHeaderPtr(JSStack::Callee, X86Registers::esi);
jit.loadPtr(JSInterfaceJIT::Address(X86Registers::esi, JSFunction::offsetOfExecutable()), X86Registers::r9);
- jit.move(JSInterfaceJIT::regT0, JSInterfaceJIT::callFrameRegister); // Eagerly restore caller frame register to avoid loading from stack.
jit.call(JSInterfaceJIT::Address(X86Registers::r9, executableOffsetToFunction));
- jit.addPtr(JSInterfaceJIT::TrustedImm32(16 - sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister);
#else
// Calling convention: f(ecx, edx, r8, r9, ...);
// Host function signature: f(ExecState*);
@@ -417,7 +406,7 @@
jit.jumpToExceptionHandler();
LinkBuffer patchBuffer(*vm, &jit, GLOBAL_THUNK_ID);
- return FINALIZE_CODE(patchBuffer, ("native %s trampoline", toCString(kind).data()));
+ return FINALIZE_CODE(patchBuffer, ("native %s %strampoline", toCString(kind).data(), fallBack ? "fallback " : ""));
}
MacroAssemblerCodeRef nativeCallGenerator(VM* vm)
@@ -425,6 +414,11 @@
return nativeForGenerator(vm, CodeForCall);
}
+MacroAssemblerCodeRef nativeCallFallbackGenerator(VM* vm)
+{
+ return nativeForGenerator(vm, CodeForCall, true);
+}
+
MacroAssemblerCodeRef nativeConstructGenerator(VM* vm)
{
return nativeForGenerator(vm, CodeForConstruct);
@@ -459,9 +453,10 @@
jit.addPtr(JSInterfaceJIT::TrustedImm32(8), JSInterfaceJIT::regT3);
jit.branchAdd32(MacroAssembler::NonZero, JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::regT2).linkTo(fillUndefinedLoop, &jit);
- // Adjust call frame register to account for missing args
+ // Adjust call frame register and stack pointer to account for missing args
jit.lshift64(JSInterfaceJIT::TrustedImm32(3), JSInterfaceJIT::regT0);
jit.addPtr(JSInterfaceJIT::regT0, JSInterfaceJIT::callFrameRegister);
+ jit.addPtr(JSInterfaceJIT::regT0, JSInterfaceJIT::stackPointerRegister);
# if CPU(X86_64)
jit.push(JSInterfaceJIT::regT4);
@@ -553,7 +548,7 @@
SpecializedThunkJIT jit(vm, 1);
stringCharLoad(jit, vm);
jit.returnInt32(SpecializedThunkJIT::regT0);
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "charCodeAt");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "charCodeAt");
}
MacroAssemblerCodeRef charAtThunkGenerator(VM* vm)
@@ -562,7 +557,7 @@
stringCharLoad(jit, vm);
charToString(jit, vm, SpecializedThunkJIT::regT0, SpecializedThunkJIT::regT0, SpecializedThunkJIT::regT1);
jit.returnJSCell(SpecializedThunkJIT::regT0);
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "charAt");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "charAt");
}
MacroAssemblerCodeRef fromCharCodeThunkGenerator(VM* vm)
@@ -572,7 +567,7 @@
jit.loadInt32Argument(0, SpecializedThunkJIT::regT0);
charToString(jit, vm, SpecializedThunkJIT::regT0, SpecializedThunkJIT::regT0, SpecializedThunkJIT::regT1);
jit.returnJSCell(SpecializedThunkJIT::regT0);
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "fromCharCode");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "fromCharCode");
}
MacroAssemblerCodeRef sqrtThunkGenerator(VM* vm)
@@ -584,7 +579,7 @@
jit.loadDoubleArgument(0, SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::regT0);
jit.sqrtDouble(SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::fpRegT0);
jit.returnDouble(SpecializedThunkJIT::fpRegT0);
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "sqrt");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "sqrt");
}
@@ -732,7 +727,7 @@
doubleResult.link(&jit);
jit.returnDouble(SpecializedThunkJIT::fpRegT0);
#endif // CPU(ARM64)
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "floor");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "floor");
}
MacroAssemblerCodeRef ceilThunkGenerator(VM* vm)
@@ -755,7 +750,7 @@
jit.returnInt32(SpecializedThunkJIT::regT0);
doubleResult.link(&jit);
jit.returnDouble(SpecializedThunkJIT::fpRegT0);
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "ceil");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "ceil");
}
MacroAssemblerCodeRef roundThunkGenerator(VM* vm)
@@ -789,7 +784,7 @@
jit.returnInt32(SpecializedThunkJIT::regT0);
doubleResult.link(&jit);
jit.returnDouble(SpecializedThunkJIT::fpRegT0);
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "round");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "round");
}
MacroAssemblerCodeRef expThunkGenerator(VM* vm)
@@ -802,7 +797,7 @@
jit.loadDoubleArgument(0, SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::regT0);
jit.callDoubleToDoublePreservingReturn(UnaryDoubleOpWrapper(exp));
jit.returnDouble(SpecializedThunkJIT::fpRegT0);
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "exp");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "exp");
}
MacroAssemblerCodeRef logThunkGenerator(VM* vm)
@@ -815,7 +810,7 @@
jit.loadDoubleArgument(0, SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::regT0);
jit.callDoubleToDoublePreservingReturn(UnaryDoubleOpWrapper(log));
jit.returnDouble(SpecializedThunkJIT::fpRegT0);
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "log");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "log");
}
MacroAssemblerCodeRef absThunkGenerator(VM* vm)
@@ -835,7 +830,7 @@
jit.loadDoubleArgument(0, SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::regT0);
jit.absDouble(SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::fpRegT1);
jit.returnDouble(SpecializedThunkJIT::fpRegT1);
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "abs");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "abs");
}
MacroAssemblerCodeRef powThunkGenerator(VM* vm)
@@ -887,7 +882,7 @@
} else
jit.appendFailure(nonIntExponent);
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "pow");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "pow");
}
MacroAssemblerCodeRef imulThunkGenerator(VM* vm)
@@ -920,7 +915,7 @@
} else
jit.appendFailure(nonIntArg1Jump);
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "imul");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "imul");
}
static MacroAssemblerCodeRef arrayIteratorNextThunkGenerator(VM* vm, ArrayIterationKind kind)
@@ -960,7 +955,7 @@
if (kind == ArrayIterateKey) {
jit.add32(TrustedImm32(1), Address(SpecializedThunkJIT::regT4, JSArrayIterator::offsetOfNextIndex()));
jit.returnInt32(SpecializedThunkJIT::regT1);
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "array-iterator-next-key");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "array-iterator-next-key");
}
ASSERT(kind == ArrayIterateValue);
@@ -1011,7 +1006,7 @@
jit.add32(TrustedImm32(1), Address(SpecializedThunkJIT::regT4, JSArrayIterator::offsetOfNextIndex()));
jit.returnDouble(SpecializedThunkJIT::fpRegT0);
- return jit.finalize(vm->jitStubs->ctiNativeCall(vm), "array-iterator-next-value");
+ return jit.finalize(vm->jitStubs->ctiNativeCallFallback(vm), "array-iterator-next-value");
}
MacroAssemblerCodeRef arrayIteratorNextKeyThunkGenerator(VM* vm)
Modified: branches/jsCStack/Source/_javascript_Core/jit/ThunkGenerators.h (160339 => 160340)
--- branches/jsCStack/Source/_javascript_Core/jit/ThunkGenerators.h 2013-12-09 23:48:59 UTC (rev 160339)
+++ branches/jsCStack/Source/_javascript_Core/jit/ThunkGenerators.h 2013-12-09 23:50:26 UTC (rev 160340)
@@ -43,6 +43,7 @@
MacroAssemblerCodeRef nativeCallGenerator(VM*);
MacroAssemblerCodeRef nativeConstructGenerator(VM*);
+MacroAssemblerCodeRef nativeCallFallbackGenerator(VM*);
MacroAssemblerCodeRef arityFixup(VM*);
MacroAssemblerCodeRef charCodeAtThunkGenerator(VM*);
Modified: branches/jsCStack/Source/_javascript_Core/llint/LowLevelInterpreter.asm (160339 => 160340)
--- branches/jsCStack/Source/_javascript_Core/llint/LowLevelInterpreter.asm 2013-12-09 23:48:59 UTC (rev 160339)
+++ branches/jsCStack/Source/_javascript_Core/llint/LowLevelInterpreter.asm 2013-12-09 23:50:26 UTC (rev 160340)
@@ -219,6 +219,7 @@
end
end
+
macro restoreCallerPCAndCFR()
if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or ARM64 or MIPS or SH4
# In C_LOOP case, we're only preserving the bytecode vPC.
@@ -229,6 +230,8 @@
pop cfr
end
end
+
+
macro preserveReturnAddressAfterCall(destinationRegister)
if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or ARM64 or MIPS or SH4
# In C_LOOP case, we're only preserving the bytecode vPC.
@@ -350,12 +353,8 @@
if JIT_ENABLED
baddis 5, CodeBlock::m_llintExecuteCounter + ExecutionCounter::m_counter[t1], .continue
cCall2(osrSlowPath, cfr, PC)
- move t1, cfr
btpz t0, .recover
- # &&&& FIXME: Not sure this is right
- break
- # loadp ReturnPC[cfr], t2
- # restoreReturnAddressBeforeReturn(t2)
+ pop cfr # pop the callerFrame since we will jump to a function that wants to save it
jmp t0
.recover:
codeBlockGetter(t1)
Modified: branches/jsCStack/Source/_javascript_Core/llint/LowLevelInterpreter64.asm (160339 => 160340)
--- branches/jsCStack/Source/_javascript_Core/llint/LowLevelInterpreter64.asm 2013-12-09 23:48:59 UTC (rev 160339)
+++ branches/jsCStack/Source/_javascript_Core/llint/LowLevelInterpreter64.asm 2013-12-09 23:50:26 UTC (rev 160340)
@@ -470,6 +470,7 @@
_llint_op_enter:
traceExecution()
+ checkStackPointerAlignment(t2, 0xdead00e1)
loadp CodeBlock[cfr], t2 // t2<CodeBlock> = cfr.CodeBlock
loadi CodeBlock::m_numVars[t2], t2 // t2<size_t> = t2<CodeBlock>.m_numVars
btiz t2, .opEnterDone
_______________________________________________
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes