Title: [190649] trunk/Source/JavaScriptCore
Revision: 190649
Author: mark....@apple.com
Date: 2015-10-06 15:29:27 -0700 (Tue, 06 Oct 2015)

Log Message

Factoring out op_sub baseline code generation into JITSubGenerator.
https://bugs.webkit.org/show_bug.cgi?id=149600

Reviewed by Geoffrey Garen.

We're going to factor out baseline code generation into snippet generators so
that we can later use them in the DFG and FTL to emit code to perform the
JS operations where the operand types are predicted to be polymorphic.
We are starting in this patch with the implementation of op_sub.

What was done in this patch:
1. Created JITSubGenerator based on the baseline implementation of op_sub as
   expressed in compileBinaryArithOp() and compileBinaryArithOpSlowCase().
   I did not attempt to write a more optimal version of op_sub.  I'll
   leave that to a later patch.

2. Converted the 32-bit op_sub baseline implementation to use the same
   JITSubGenerator, which is based on the 64-bit implementation.  The
   pre-existing 32-bit baseline op_sub handled more optimization cases.
   However, a benchmark run shows that simply going with the 64-bit version
   (foregoing those extra optimizations) did not change the performance.

   Also, previously, the 32-bit version was able to move double results
   directly into the result location on the stack.  By using JITSubGenerator,
   we now always move that result into a pair of GPRs before storing it into
   the stack location.

3. Added some needed emitters to AssemblyHelpers that play nice with JSValueRegs.
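
   The fast path that JITSubGenerator emits can be summarized in plain C++.
   The sketch below is an illustrative, interpreter-style analogue (the
   Value struct and fastPathSub are hypothetical names for this sketch, not
   WebKit API): try an overflow-checked int32 subtraction first, fall back
   to double subtraction for number operands, and otherwise take the slow
   path (modeled here as returning nullopt).

```cpp
#include <cassert>
#include <cstdint>
#include <limits>
#include <optional>

// Illustrative operand representation: either an int32 or a double payload.
struct Value {
    bool isInt32;
    int32_t int32Payload;
    double doublePayload;
};

// Interpreter-style analogue of JITSubGenerator::generateFastPath():
// int32 - int32 with an overflow check, else convert and subDouble().
// A nullopt result models falling through to the slow-path jump list.
std::optional<Value> fastPathSub(Value left, Value right)
{
    if (left.isInt32 && right.isInt32) {
        int64_t result = int64_t(left.int32Payload) - int64_t(right.int32Payload);
        if (result < std::numeric_limits<int32_t>::min()
            || result > std::numeric_limits<int32_t>::max())
            return std::nullopt; // branchSub32(Overflow, ...) taken: slow path.
        return Value { true, int32_t(result), 0 };
    }
    // Mixed int32/double cases: convert the int32 side, then subtract as doubles.
    double l = left.isInt32 ? double(left.int32Payload) : left.doublePayload;
    double r = right.isInt32 ? double(right.int32Payload) : right.doublePayload;
    return Value { false, 0, l - r };
}
```

   (The real generator additionally bails to the slow path when an operand
   is not known to be a number; this sketch omits non-number operands.)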

* JavaScriptCore.xcodeproj/project.pbxproj:
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::boxDouble):
(JSC::AssemblyHelpers::unboxDouble):
(JSC::AssemblyHelpers::boxBooleanPayload):
* jit/JIT.h:
(JSC::JIT::linkDummySlowCase):
* jit/JITArithmetic.cpp:
(JSC::JIT::compileBinaryArithOp):
(JSC::JIT::compileBinaryArithOpSlowCase):
(JSC::JIT::emitSlow_op_div):
(JSC::JIT::emit_op_sub):
(JSC::JIT::emitSlow_op_sub):
* jit/JITArithmetic32_64.cpp:
(JSC::JIT::emitBinaryDoubleOp):
(JSC::JIT::emit_op_sub): Deleted.
(JSC::JIT::emitSub32Constant): Deleted.
(JSC::JIT::emitSlow_op_sub): Deleted.
* jit/JITInlines.h:
(JSC::JIT::linkSlowCaseIfNotJSCell):
(JSC::JIT::linkAllSlowCasesForBytecodeOffset):
(JSC::JIT::addSlowCase):
(JSC::JIT::emitLoad):
(JSC::JIT::emitGetVirtualRegister):
(JSC::JIT::emitPutVirtualRegister):
* jit/JITSubGenerator.h: Added.
(JSC::JITSubGenerator::JITSubGenerator):
(JSC::JITSubGenerator::generateFastPath):
(JSC::JITSubGenerator::slowPathJumpList):
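
The new boxDouble/unboxDouble emitters operate on JSValueRegs; on 64-bit
the underlying idea is NaN-boxing into a single GPR. Below is a minimal
standalone sketch of that style of encoding, using illustrative tag
constants rather than JSC's exact values: int32s carry a high tag, and
double bit patterns are stored offset so they never collide with tagged
values.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Illustrative constants for this sketch only, not JSC's encoding.
constexpr uint64_t int32Tag = 0xFFFF000000000000ull;
constexpr uint64_t doubleOffset = 1ull << 48;

uint64_t boxInt32(int32_t i) { return int32Tag | uint32_t(i); }
int32_t unboxInt32(uint64_t boxed) { return int32_t(boxed & 0xFFFFFFFFu); }

uint64_t boxDouble(double d)
{
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof(bits)); // like moveDoubleTo64
    return bits + doubleOffset;           // shift out of the tagged range
}

double unboxDouble(uint64_t boxed)
{
    uint64_t bits = boxed - doubleOffset;
    double d;
    std::memcpy(&d, &bits, sizeof(d));    // like move64ToDouble
    return d;
}
```

On 32-bit, no such packing is possible, which is why the JSValueRegs
overloads route through a tag GPR / payload GPR pair instead.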

Modified Paths

trunk/Source/JavaScriptCore/ChangeLog
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h
trunk/Source/JavaScriptCore/jit/JIT.h
trunk/Source/JavaScriptCore/jit/JITArithmetic.cpp
trunk/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp
trunk/Source/JavaScriptCore/jit/JITInlines.h

Added Paths

trunk/Source/JavaScriptCore/jit/JITSubGenerator.h

Diff

Modified: trunk/Source/JavaScriptCore/ChangeLog (190648 => 190649)


--- trunk/Source/JavaScriptCore/ChangeLog	2015-10-06 21:46:08 UTC (rev 190648)
+++ trunk/Source/JavaScriptCore/ChangeLog	2015-10-06 22:29:27 UTC (rev 190649)
@@ -1,3 +1,64 @@
+2015-10-06  Mark Lam  <mark....@apple.com>
+
+        Factoring out op_sub baseline code generation into JITSubGenerator.
+        https://bugs.webkit.org/show_bug.cgi?id=149600
+
+        Reviewed by Geoffrey Garen.
+
+        We're going to factor out baseline code generation into snippet generators so
+        that we can later use them in the DFG and FTL to emit code to perform the
+        JS operations where the operand types are predicted to be polymorphic.
+        We are starting in this patch with the implementation of op_sub.
+
+        What was done in this patch:
+        1. Created JITSubGenerator based on the baseline implementation of op_sub as
+           expressed in compileBinaryArithOp() and compileBinaryArithOpSlowCase().
+           I did not attempt to write a more optimal version of op_sub.  I'll
+           leave that to a later patch.
+
+        2. Converted the 32-bit op_sub baseline implementation to use the same
+           JITSubGenerator, which is based on the 64-bit implementation.  The
+           pre-existing 32-bit baseline op_sub handled more optimization cases.
+           However, a benchmark run shows that simply going with the 64-bit version
+           (foregoing those extra optimizations) did not change the performance.
+
+           Also, previously, the 32-bit version was able to move double results
+           directly into the result location on the stack.  By using JITSubGenerator,
+           we now always move that result into a pair of GPRs before storing it into
+           the stack location.
+
+        3. Added some needed emitters to AssemblyHelpers that play nice with JSValueRegs.
+
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * jit/AssemblyHelpers.h:
+        (JSC::AssemblyHelpers::boxDouble):
+        (JSC::AssemblyHelpers::unboxDouble):
+        (JSC::AssemblyHelpers::boxBooleanPayload):
+        * jit/JIT.h:
+        (JSC::JIT::linkDummySlowCase):
+        * jit/JITArithmetic.cpp:
+        (JSC::JIT::compileBinaryArithOp):
+        (JSC::JIT::compileBinaryArithOpSlowCase):
+        (JSC::JIT::emitSlow_op_div):
+        (JSC::JIT::emit_op_sub):
+        (JSC::JIT::emitSlow_op_sub):
+        * jit/JITArithmetic32_64.cpp:
+        (JSC::JIT::emitBinaryDoubleOp):
+        (JSC::JIT::emit_op_sub): Deleted.
+        (JSC::JIT::emitSub32Constant): Deleted.
+        (JSC::JIT::emitSlow_op_sub): Deleted.
+        * jit/JITInlines.h:
+        (JSC::JIT::linkSlowCaseIfNotJSCell):
+        (JSC::JIT::linkAllSlowCasesForBytecodeOffset):
+        (JSC::JIT::addSlowCase):
+        (JSC::JIT::emitLoad):
+        (JSC::JIT::emitGetVirtualRegister):
+        (JSC::JIT::emitPutVirtualRegister):
+        * jit/JITSubGenerator.h: Added.
+        (JSC::JITSubGenerator::JITSubGenerator):
+        (JSC::JITSubGenerator::generateFastPath):
+        (JSC::JITSubGenerator::slowPathJumpList):
+
 2015-10-06  Daniel Bates  <dba...@webkit.org>
 
         Enable XSLT when building WebKit for iOS using the public iOS SDK

Modified: trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj (190648 => 190649)


--- trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj	2015-10-06 21:46:08 UTC (rev 190648)
+++ trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj	2015-10-06 22:29:27 UTC (rev 190649)
@@ -3707,6 +3707,7 @@
 		FE7BA60D1A1A7CEC00F1F7B4 /* HeapVerifier.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = HeapVerifier.cpp; sourceTree = "<group>"; };
 		FE7BA60E1A1A7CEC00F1F7B4 /* HeapVerifier.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HeapVerifier.h; sourceTree = "<group>"; };
 		FE90BB3A1B7CF64E006B3F03 /* VMInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = VMInlines.h; sourceTree = "<group>"; };
+		FE98B5B61BB9AE110073E7A6 /* JITSubGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITSubGenerator.h; sourceTree = "<group>"; };
 		FEA0861E182B7A0400F6D851 /* Breakpoint.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Breakpoint.h; sourceTree = "<group>"; };
 		FEA0861F182B7A0400F6D851 /* DebuggerPrimitives.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = DebuggerPrimitives.h; sourceTree = "<group>"; };
 		FEB51F6A1A97B688001F921C /* Regress141809.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Regress141809.h; path = API/tests/Regress141809.h; sourceTree = "<group>"; };
@@ -4189,6 +4190,7 @@
 				FEF6835D174343CC00A32E25 /* JITStubsX86.h */,
 				FEF6835C174343CC00A32E25 /* JITStubsX86_64.h */,
 				A7A4AE0C17973B4D005612B1 /* JITStubsX86Common.h */,
+				FE98B5B61BB9AE110073E7A6 /* JITSubGenerator.h */,
 				0F5EF91B16878F78003E5C25 /* JITThunks.cpp */,
 				0F5EF91C16878F78003E5C25 /* JITThunks.h */,
 				0FC712E017CD878F008CC93C /* JITToDFGDeferredCompilationCallback.cpp */,

Modified: trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h (190648 => 190649)


--- trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h	2015-10-06 21:46:08 UTC (rev 190648)
+++ trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h	2015-10-06 22:29:27 UTC (rev 190649)
@@ -964,7 +964,11 @@
     {
         boxDouble(fpr, regs.gpr());
     }
-    
+    void unboxDouble(JSValueRegs regs, FPRReg destFPR, FPRReg)
+    {
+        unboxDouble(regs.payloadGPR(), destFPR);
+    }
+
     // Here are possible arrangements of source, target, scratch:
     // - source, target, scratch can all be separate registers.
     // - source and target can be the same but scratch is separate.
@@ -1002,6 +1006,10 @@
     {
         boxDouble(fpr, regs.tagGPR(), regs.payloadGPR());
     }
+    void unboxDouble(JSValueRegs regs, FPRReg fpr, FPRReg scratchFPR)
+    {
+        unboxDouble(regs.tagGPR(), regs.payloadGPR(), fpr, scratchFPR);
+    }
 #endif
     
     void boxBooleanPayload(GPRReg boolGPR, GPRReg payloadGPR)

Modified: trunk/Source/JavaScriptCore/jit/JIT.h (190648 => 190649)


--- trunk/Source/JavaScriptCore/jit/JIT.h	2015-10-06 21:46:08 UTC (rev 190648)
+++ trunk/Source/JavaScriptCore/jit/JIT.h	2015-10-06 22:29:27 UTC (rev 190649)
@@ -400,6 +400,9 @@
 
         enum FinalObjectMode { MayBeFinal, KnownNotFinal };
 
+        void emitGetVirtualRegister(int src, JSValueRegs dst);
+        void emitPutVirtualRegister(int dst, JSValueRegs src);
+
 #if USE(JSVALUE32_64)
         bool getOperandConstantInt(int op1, int op2, int& op, int32_t& constant);
 
@@ -708,6 +711,8 @@
             ++iter;
         }
         void linkSlowCaseIfNotJSCell(Vector<SlowCaseEntry>::iterator&, int virtualRegisterIndex);
+        void linkAllSlowCasesForBytecodeOffset(Vector<SlowCaseEntry>& slowCases,
+            Vector<SlowCaseEntry>::iterator&, unsigned bytecodeOffset);
 
         MacroAssembler::Call appendCallWithExceptionCheck(const FunctionPtr&);
 #if OS(WINDOWS) && CPU(X86_64)

Modified: trunk/Source/JavaScriptCore/jit/JITArithmetic.cpp (190648 => 190649)


--- trunk/Source/JavaScriptCore/jit/JITArithmetic.cpp	2015-10-06 21:46:08 UTC (rev 190648)
+++ trunk/Source/JavaScriptCore/jit/JITArithmetic.cpp	2015-10-06 22:29:27 UTC (rev 190649)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008 Apple Inc. All rights reserved.
+ * Copyright (C) 2008, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -32,6 +32,7 @@
 #include "JITInlines.h"
 #include "JITOperations.h"
 #include "JITStubs.h"
+#include "JITSubGenerator.h"
 #include "JSArray.h"
 #include "JSFunction.h"
 #include "Interpreter.h"
@@ -668,8 +669,6 @@
     emitJumpSlowCaseIfNotInt(regT1);
     if (opcodeID == op_add)
         addSlowCase(branchAdd32(Overflow, regT1, regT0));
-    else if (opcodeID == op_sub)
-        addSlowCase(branchSub32(Overflow, regT1, regT0));
     else {
         ASSERT(opcodeID == op_mul);
         if (shouldEmitProfiling()) {
@@ -721,7 +720,7 @@
 
     Label stubFunctionCall(this);
 
-    JITSlowPathCall slowPathCall(this, currentInstruction, opcodeID == op_add ? slow_path_add : opcodeID == op_sub ? slow_path_sub : slow_path_mul);
+    JITSlowPathCall slowPathCall(this, currentInstruction, opcodeID == op_add ? slow_path_add : slow_path_mul);
     slowPathCall.call();
     Jump end = jump();
 
@@ -767,8 +766,6 @@
 
     if (opcodeID == op_add)
         addDouble(fpRegT2, fpRegT1);
-    else if (opcodeID == op_sub)
-        subDouble(fpRegT2, fpRegT1);
     else if (opcodeID == op_mul)
         mulDouble(fpRegT2, fpRegT1);
     else {
@@ -961,6 +958,8 @@
     slowPathCall.call();
 }
 
+#endif // USE(JSVALUE64)
+
 void JIT::emit_op_sub(Instruction* currentInstruction)
 {
     int result = currentInstruction[1].u.operand;
@@ -968,24 +967,42 @@
     int op2 = currentInstruction[3].u.operand;
     OperandTypes types = OperandTypes::fromInt(currentInstruction[4].u.operand);
 
-    compileBinaryArithOp(op_sub, result, op1, op2, types);
-    emitPutVirtualRegister(result);
+#if USE(JSVALUE64)
+    JSValueRegs leftRegs = JSValueRegs(regT0);
+    JSValueRegs rightRegs = JSValueRegs(regT1);
+    JSValueRegs resultRegs = leftRegs;
+    GPRReg scratchGPR = InvalidGPRReg;
+    FPRReg scratchFPR = InvalidFPRReg;
+#else
+    JSValueRegs leftRegs = JSValueRegs(regT1, regT0);
+    JSValueRegs rightRegs = JSValueRegs(regT3, regT2);
+    JSValueRegs resultRegs = leftRegs;
+    GPRReg scratchGPR = regT4;
+    FPRReg scratchFPR = fpRegT2;
+#endif
+
+    emitGetVirtualRegister(op1, leftRegs);
+    emitGetVirtualRegister(op2, rightRegs);
+
+    JITSubGenerator gen(resultRegs, leftRegs, rightRegs, types.first(), types.second(),
+        fpRegT0, fpRegT1, scratchGPR, scratchFPR);
+
+    gen.generateFastPath(*this);
+    emitPutVirtualRegister(result, resultRegs);
+
+    addSlowCase(gen.slowPathJumpList());
 }
 
 void JIT::emitSlow_op_sub(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
 {
-    int result = currentInstruction[1].u.operand;
-    int op1 = currentInstruction[2].u.operand;
-    int op2 = currentInstruction[3].u.operand;
-    OperandTypes types = OperandTypes::fromInt(currentInstruction[4].u.operand);
+    linkAllSlowCasesForBytecodeOffset(m_slowCases, iter, m_bytecodeOffset);
 
-    compileBinaryArithOpSlowCase(currentInstruction, op_sub, iter, result, op1, op2, types, false, false);
+    JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_sub);
+    slowPathCall.call();
 }
 
 /* ------------------------------ END: OP_ADD, OP_SUB, OP_MUL ------------------------------ */
 
-#endif // USE(JSVALUE64)
-
 } // namespace JSC
 
 #endif // ENABLE(JIT)

Modified: trunk/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp (190648 => 190649)


--- trunk/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp	2015-10-06 21:46:08 UTC (rev 190648)
+++ trunk/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp	2015-10-06 22:29:27 UTC (rev 190649)
@@ -1,5 +1,5 @@
 /*
-* Copyright (C) 2008 Apple Inc. All rights reserved.
+* Copyright (C) 2008, 2015 Apple Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
@@ -594,99 +594,6 @@
 
 // Subtraction (-)
 
-void JIT::emit_op_sub(Instruction* currentInstruction)
-{
-    int dst = currentInstruction[1].u.operand;
-    int op1 = currentInstruction[2].u.operand;
-    int op2 = currentInstruction[3].u.operand;
-    OperandTypes types = OperandTypes::fromInt(currentInstruction[4].u.operand);
-
-    JumpList notInt32Op1;
-    JumpList notInt32Op2;
-
-    if (isOperandConstantInt(op2)) {
-        emitSub32Constant(dst, op1, getConstantOperand(op2).asInt32(), types.first());
-        return;
-    }
-
-    emitLoad2(op1, regT1, regT0, op2, regT3, regT2);
-    notInt32Op1.append(branch32(NotEqual, regT1, TrustedImm32(JSValue::Int32Tag)));
-    notInt32Op2.append(branch32(NotEqual, regT3, TrustedImm32(JSValue::Int32Tag)));
-
-    // Int32 case.
-    addSlowCase(branchSub32(Overflow, regT2, regT0));
-    emitStoreInt32(dst, regT0, (op1 == dst || op2 == dst));
-
-    if (!supportsFloatingPoint()) {
-        addSlowCase(notInt32Op1);
-        addSlowCase(notInt32Op2);
-        return;
-    }
-    Jump end = jump();
-
-    // Double case.
-    emitBinaryDoubleOp(op_sub, dst, op1, op2, types, notInt32Op1, notInt32Op2);
-    end.link(this);
-}
-
-void JIT::emitSub32Constant(int dst, int op, int32_t constant, ResultType opType)
-{
-    // Int32 case.
-    emitLoad(op, regT1, regT0);
-    Jump notInt32 = branch32(NotEqual, regT1, TrustedImm32(JSValue::Int32Tag));
-    addSlowCase(branchSub32(Overflow, regT0, Imm32(constant), regT2, regT3));   
-    emitStoreInt32(dst, regT2, (op == dst));
-
-    // Double case.
-    if (!supportsFloatingPoint()) {
-        addSlowCase(notInt32);
-        return;
-    }
-    Jump end = jump();
-
-    notInt32.link(this);
-    if (!opType.definitelyIsNumber())
-        addSlowCase(branch32(Above, regT1, TrustedImm32(JSValue::LowestTag)));
-    move(Imm32(constant), regT2);
-    convertInt32ToDouble(regT2, fpRegT0);
-    emitLoadDouble(op, fpRegT1);
-    subDouble(fpRegT0, fpRegT1);
-    emitStoreDouble(dst, fpRegT1);
-
-    end.link(this);
-}
-
-void JIT::emitSlow_op_sub(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
-{
-    int op2 = currentInstruction[3].u.operand;
-    OperandTypes types = OperandTypes::fromInt(currentInstruction[4].u.operand);
-
-    if (isOperandConstantInt(op2)) {
-        linkSlowCase(iter); // overflow check
-
-        if (!supportsFloatingPoint() || !types.first().definitelyIsNumber())
-            linkSlowCase(iter); // int32 or double check
-    } else {
-        linkSlowCase(iter); // overflow check
-
-        if (!supportsFloatingPoint()) {
-            linkSlowCase(iter); // int32 check
-            linkSlowCase(iter); // int32 check
-        } else {
-            if (!types.first().definitelyIsNumber())
-                linkSlowCase(iter); // double check
-
-            if (!types.second().definitelyIsNumber()) {
-                linkSlowCase(iter); // int32 check
-                linkSlowCase(iter); // double check
-            }
-        }
-    }
-
-    JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_sub);
-    slowPathCall.call();
-}
-
 void JIT::emitBinaryDoubleOp(OpcodeID opcodeID, int dst, int op1, int op2, OperandTypes types, JumpList& notInt32Op1, JumpList& notInt32Op2, bool op1IsInRegisters, bool op2IsInRegisters)
 {
     JumpList end;
@@ -729,11 +636,6 @@
                 addDouble(fpRegT2, fpRegT0);
                 emitStoreDouble(dst, fpRegT0);
                 break;
-            case op_sub:
-                emitLoadDouble(op1, fpRegT1);
-                subDouble(fpRegT0, fpRegT1);
-                emitStoreDouble(dst, fpRegT1);
-                break;
             case op_div: {
                 emitLoadDouble(op1, fpRegT1);
                 divDouble(fpRegT0, fpRegT1);
@@ -830,11 +732,6 @@
                 addDouble(fpRegT2, fpRegT0);
                 emitStoreDouble(dst, fpRegT0);
                 break;
-            case op_sub:
-                emitLoadDouble(op2, fpRegT2);
-                subDouble(fpRegT2, fpRegT0);
-                emitStoreDouble(dst, fpRegT0);
-                break;
             case op_div: {
                 emitLoadDouble(op2, fpRegT2);
                 divDouble(fpRegT2, fpRegT0);

Modified: trunk/Source/JavaScriptCore/jit/JITInlines.h (190648 => 190649)


--- trunk/Source/JavaScriptCore/jit/JITInlines.h	2015-10-06 21:46:08 UTC (rev 190648)
+++ trunk/Source/JavaScriptCore/jit/JITInlines.h	2015-10-06 22:29:27 UTC (rev 190649)
@@ -769,6 +769,14 @@
         linkSlowCase(iter);
 }
 
+ALWAYS_INLINE void JIT::linkAllSlowCasesForBytecodeOffset(Vector<SlowCaseEntry>& slowCases, Vector<SlowCaseEntry>::iterator& iter, unsigned bytecodeOffset)
+{
+    while (iter != slowCases.end() && iter->to == bytecodeOffset) {
+        iter->from.link(this);
+        ++iter;
+    }
+}
+
 ALWAYS_INLINE void JIT::addSlowCase(Jump jump)
 {
     ASSERT(m_bytecodeOffset != std::numeric_limits<unsigned>::max()); // This method should only be called during hot/cold path generation, so that m_bytecodeOffset is set.
@@ -998,6 +1006,16 @@
     move(Imm32(v.tag()), tag);
 }
 
+ALWAYS_INLINE void JIT::emitGetVirtualRegister(int src, JSValueRegs dst)
+{
+    emitLoad(src, dst.tagGPR(), dst.payloadGPR());
+}
+
+ALWAYS_INLINE void JIT::emitPutVirtualRegister(int dst, JSValueRegs from)
+{
+    emitStore(dst, from.tagGPR(), from.payloadGPR());
+}
+
 inline void JIT::emitLoad(int index, RegisterID tag, RegisterID payload, RegisterID base)
 {
     RELEASE_ASSERT(tag != payload);
@@ -1156,6 +1174,11 @@
     load64(Address(callFrameRegister, src * sizeof(Register)), dst);
 }
 
+ALWAYS_INLINE void JIT::emitGetVirtualRegister(int src, JSValueRegs dst)
+{
+    emitGetVirtualRegister(src, dst.payloadGPR());
+}
+
 ALWAYS_INLINE void JIT::emitGetVirtualRegister(VirtualRegister src, RegisterID dst)
 {
     emitGetVirtualRegister(src.offset(), dst);
@@ -1187,6 +1210,11 @@
     store64(from, Address(callFrameRegister, dst * sizeof(Register)));
 }
 
+ALWAYS_INLINE void JIT::emitPutVirtualRegister(int dst, JSValueRegs from)
+{
+    emitPutVirtualRegister(dst, from.payloadGPR());
+}
+
 ALWAYS_INLINE void JIT::emitPutVirtualRegister(VirtualRegister dst, RegisterID from)
 {
     emitPutVirtualRegister(dst.offset(), from);

Added: trunk/Source/JavaScriptCore/jit/JITSubGenerator.h (0 => 190649)


--- trunk/Source/JavaScriptCore/jit/JITSubGenerator.h	                        (rev 0)
+++ trunk/Source/JavaScriptCore/jit/JITSubGenerator.h	2015-10-06 22:29:27 UTC (rev 190649)
@@ -0,0 +1,117 @@
+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef JITSubGenerator_h
+#define JITSubGenerator_h
+
+#include "CCallHelpers.h"
+
+namespace JSC {
+    
+class JITSubGenerator {
+public:
+
+    JITSubGenerator(JSValueRegs result, JSValueRegs left, JSValueRegs right,
+        ResultType leftType, ResultType rightType, FPRReg leftFPR, FPRReg rightFPR,
+        GPRReg scratchGPR, FPRReg scratchFPR)
+        : m_result(result)
+        , m_left(left)
+        , m_right(right)
+        , m_leftType(leftType)
+        , m_rightType(rightType)
+        , m_leftFPR(leftFPR)
+        , m_rightFPR(rightFPR)
+        , m_scratchGPR(scratchGPR)
+        , m_scratchFPR(scratchFPR)
+    { }
+
+    void generateFastPath(CCallHelpers& jit)
+    {
+        CCallHelpers::JumpList slowPath;
+
+        CCallHelpers::Jump leftNotInt = jit.branchIfNotInt32(m_left);
+        CCallHelpers::Jump rightNotInt = jit.branchIfNotInt32(m_right);
+
+        m_slowPathJumpList.append(
+            jit.branchSub32(CCallHelpers::Overflow, m_right.payloadGPR(), m_left.payloadGPR()));
+
+        jit.boxInt32(m_left.payloadGPR(), m_result);
+
+        if (!jit.supportsFloatingPoint()) {
+            m_slowPathJumpList.append(leftNotInt);
+            m_slowPathJumpList.append(rightNotInt);
+            return;
+        }
+        
+        CCallHelpers::Jump end = jit.jump();
+
+        leftNotInt.link(&jit);
+        if (!m_leftType.definitelyIsNumber())
+            m_slowPathJumpList.append(jit.branchIfNotNumber(m_left, m_scratchGPR));
+        if (!m_rightType.definitelyIsNumber())
+            m_slowPathJumpList.append(jit.branchIfNotNumber(m_right, m_scratchGPR));
+
+        jit.unboxDouble(m_left, m_leftFPR, m_scratchFPR);
+        CCallHelpers::Jump rightIsDouble = jit.branchIfNotInt32(m_right);
+
+        jit.convertInt32ToDouble(m_right.payloadGPR(), m_rightFPR);
+        CCallHelpers::Jump rightWasInteger = jit.jump();
+
+        rightNotInt.link(&jit);
+        if (!m_rightType.definitelyIsNumber())
+            m_slowPathJumpList.append(jit.branchIfNotNumber(m_right, m_scratchGPR));
+
+        jit.convertInt32ToDouble(m_left.payloadGPR(), m_leftFPR);
+
+        rightIsDouble.link(&jit);
+        jit.unboxDouble(m_right, m_rightFPR, m_scratchFPR);
+
+        rightWasInteger.link(&jit);
+
+        jit.subDouble(m_rightFPR, m_leftFPR);
+        jit.boxDouble(m_leftFPR, m_result);
+
+        end.link(&jit);
+    }
+
+    CCallHelpers::JumpList slowPathJumpList() { return m_slowPathJumpList; }
+
+private:
+    JSValueRegs m_result;
+    JSValueRegs m_left;
+    JSValueRegs m_right;
+    ResultType m_leftType;
+    ResultType m_rightType;
+    FPRReg m_leftFPR;
+    FPRReg m_rightFPR;
+    GPRReg m_scratchGPR;
+    FPRReg m_scratchFPR;
+
+    CCallHelpers::JumpList m_slowPathJumpList;
+};
+
+} // namespace JSC
+
+#endif // JITSubGenerator_h