Title: [196799] trunk/Source/JavaScriptCore
Revision: 196799
Author: [email protected]
Date: 2016-02-18 22:42:06 -0800 (Thu, 18 Feb 2016)

Log Message

[JSC] Improve the instruction selection of Select
https://bugs.webkit.org/show_bug.cgi?id=154432

Patch by Benjamin Poulain <[email protected]> on 2016-02-18
Reviewed by Filip Pizlo.

Plenty of code, but this patch is pretty dumb:
-On ARM64: use the 3-operand form of CSEL instead of forcing a source
 to be aliased to the destination. This gives more freedom to the register
 allocator and it is one less Move to process per Select.
-On x86, introduce a fake 3-operand form and use aggressive aliasing
 to try to alias both sources to the destination.

 If aliasing succeeds on the "elseCase", the condition of the Select
 is inverted in the MacroAssembler.

 If no aliasing is possible and we end up with 3 registers, the missing
 move instruction is generated by the MacroAssembler.

 The missing move is generated after testing the values because the destination
 can use the same register as one of the test operands.
 Experimental testing seems to indicate there is no macro-fusion on CMOV,
 so there is no measurable cost to having the move there.
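The x86 aliasing strategy described above can be sketched in isolation. The following is a minimal model, not JSC's actual code: `Reg`, `Cond`, and `planCmov` are hypothetical stand-ins for `RegisterID`, `ResultCondition`, and the decision logic inside the new `moveConditionally*` overloads.

```cpp
#include <cassert>

// Hypothetical stand-ins for JSC's RegisterID / ResultCondition, used only
// to illustrate the aliasing decision described in the log message.
enum Reg { RAX, RCX, RDX };
enum Cond { Zero, NonZero };

Cond invert(Cond cond) { return cond == Zero ? NonZero : Zero; }

struct CmovPlan {
    bool moveElseToDestFirst;   // emit "move elseCase -> dest" before the cmov?
    Reg source;                 // the single cmov source register
    Cond condition;             // possibly inverted condition
};

// Mirrors the fake 3-operand lowering: try to treat one of the two sources
// as already living in dest, inverting the condition when it is the elseCase
// value that does not occupy dest's register.
CmovPlan planCmov(Cond cond, Reg thenCase, Reg elseCase, Reg dest)
{
    CmovPlan plan = { false, thenCase, cond };
    if (thenCase != dest && elseCase != dest) {
        // Neither source was coalesced with dest: materialize elseCase in
        // dest first, then conditionally overwrite it with thenCase.
        plan.moveElseToDestFirst = true;
        elseCase = dest;
    }
    if (elseCase == dest) {
        // dest already holds the elseCase value; cmov thenCase on cond.
        plan.source = thenCase;
        plan.condition = cond;
    } else {
        // thenCase == dest: flip the condition and cmov elseCase instead.
        plan.source = elseCase;
        plan.condition = invert(cond);
    }
    return plan;
}
```

Note that only the "no aliasing" case pays for an extra move; when the register allocator manages to coalesce either source with the destination, a single cmov (with the condition possibly inverted) suffices.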

* assembler/MacroAssembler.h:
(JSC::MacroAssembler::isInvertible):
(JSC::MacroAssembler::invert):
* assembler/MacroAssemblerARM64.h:
(JSC::MacroAssemblerARM64::moveConditionallyDouble):
(JSC::MacroAssemblerARM64::moveConditionallyFloat):
(JSC::MacroAssemblerARM64::moveConditionallyAfterFloatingPointCompare):
(JSC::MacroAssemblerARM64::moveConditionally32):
(JSC::MacroAssemblerARM64::moveConditionally64):
(JSC::MacroAssemblerARM64::moveConditionallyTest32):
(JSC::MacroAssemblerARM64::moveConditionallyTest64):
* assembler/MacroAssemblerX86Common.h:
(JSC::MacroAssemblerX86Common::moveConditionallyDouble):
(JSC::MacroAssemblerX86Common::moveConditionallyFloat):
(JSC::MacroAssemblerX86Common::moveConditionally32):
(JSC::MacroAssemblerX86Common::moveConditionallyTest32):
(JSC::MacroAssemblerX86Common::invert):
(JSC::MacroAssemblerX86Common::isInvertible):
* assembler/MacroAssemblerX86_64.h:
(JSC::MacroAssemblerX86_64::moveConditionally64):
(JSC::MacroAssemblerX86_64::moveConditionallyTest64):
* b3/B3LowerToAir.cpp:
(JSC::B3::Air::LowerToAir::createSelect):
(JSC::B3::Air::LowerToAir::lower):
* b3/air/AirInstInlines.h:
(JSC::B3::Air::Inst::shouldTryAliasingDef):
* b3/air/AirOpcode.opcodes:

Diff

Modified: trunk/Source/JavaScriptCore/ChangeLog (196798 => 196799)


--- trunk/Source/JavaScriptCore/ChangeLog	2016-02-19 05:26:53 UTC (rev 196798)
+++ trunk/Source/JavaScriptCore/ChangeLog	2016-02-19 06:42:06 UTC (rev 196799)
@@ -1,3 +1,56 @@
+2016-02-18  Benjamin Poulain  <[email protected]>
+
+        [JSC] Improve the instruction selection of Select
+        https://bugs.webkit.org/show_bug.cgi?id=154432
+
+        Reviewed by Filip Pizlo.
+
+        Plenty of code, but this patch is pretty dumb:
+        -On ARM64: use the 3-operand form of CSEL instead of forcing a source
+         to be aliased to the destination. This gives more freedom to the register
+         allocator and it is one less Move to process per Select.
+        -On x86, introduce a fake 3-operand form and use aggressive aliasing
+         to try to alias both sources to the destination.
+
+         If aliasing succeeds on the "elseCase", the condition of the Select
+         is inverted in the MacroAssembler.
+
+         If no aliasing is possible and we end up with 3 registers, the missing
+         move instruction is generated by the MacroAssembler.
+
+         The missing move is generated after testing the values because the destination
+         can use the same register as one of the test operands.
+         Experimental testing seems to indicate there is no macro-fusion on CMOV,
+         so there is no measurable cost to having the move there.
+
+        * assembler/MacroAssembler.h:
+        (JSC::MacroAssembler::isInvertible):
+        (JSC::MacroAssembler::invert):
+        * assembler/MacroAssemblerARM64.h:
+        (JSC::MacroAssemblerARM64::moveConditionallyDouble):
+        (JSC::MacroAssemblerARM64::moveConditionallyFloat):
+        (JSC::MacroAssemblerARM64::moveConditionallyAfterFloatingPointCompare):
+        (JSC::MacroAssemblerARM64::moveConditionally32):
+        (JSC::MacroAssemblerARM64::moveConditionally64):
+        (JSC::MacroAssemblerARM64::moveConditionallyTest32):
+        (JSC::MacroAssemblerARM64::moveConditionallyTest64):
+        * assembler/MacroAssemblerX86Common.h:
+        (JSC::MacroAssemblerX86Common::moveConditionallyDouble):
+        (JSC::MacroAssemblerX86Common::moveConditionallyFloat):
+        (JSC::MacroAssemblerX86Common::moveConditionally32):
+        (JSC::MacroAssemblerX86Common::moveConditionallyTest32):
+        (JSC::MacroAssemblerX86Common::invert):
+        (JSC::MacroAssemblerX86Common::isInvertible):
+        * assembler/MacroAssemblerX86_64.h:
+        (JSC::MacroAssemblerX86_64::moveConditionally64):
+        (JSC::MacroAssemblerX86_64::moveConditionallyTest64):
+        * b3/B3LowerToAir.cpp:
+        (JSC::B3::Air::LowerToAir::createSelect):
+        (JSC::B3::Air::LowerToAir::lower):
+        * b3/air/AirInstInlines.h:
+        (JSC::B3::Air::Inst::shouldTryAliasingDef):
+        * b3/air/AirOpcode.opcodes:
+
 2016-02-18  Gyuyoung Kim  <[email protected]>
 
         [CMake][GTK] Clean up llvm guard in PlatformGTK.cmake

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssembler.h (196798 => 196799)


--- trunk/Source/JavaScriptCore/assembler/MacroAssembler.h	2016-02-19 05:26:53 UTC (rev 196798)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssembler.h	2016-02-19 06:42:06 UTC (rev 196799)
@@ -177,6 +177,8 @@
         switch (cond) {
         case Zero:
         case NonZero:
+        case Signed:
+        case PositiveOrZero:
             return true;
         default:
             return false;
@@ -190,6 +192,10 @@
             return NonZero;
         case NonZero:
             return Zero;
+        case Signed:
+            return PositiveOrZero;
+        case PositiveOrZero:
+            return Signed;
         default:
             RELEASE_ASSERT_NOT_REACHED();
             return Zero; // Make compiler happy for release builds.

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h (196798 => 196799)


--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h	2016-02-19 05:26:53 UTC (rev 196798)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h	2016-02-19 06:42:06 UTC (rev 196799)
@@ -1601,12 +1601,24 @@
         moveConditionallyAfterFloatingPointCompare<64>(cond, src, dest);
     }
 
+    void moveConditionallyDouble(DoubleCondition cond, FPRegisterID left, FPRegisterID right, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        m_assembler.fcmp<64>(left, right);
+        moveConditionallyAfterFloatingPointCompare<64>(cond, thenCase, elseCase, dest);
+    }
+
     void moveConditionallyFloat(DoubleCondition cond, FPRegisterID left, FPRegisterID right, RegisterID src, RegisterID dest)
     {
         m_assembler.fcmp<32>(left, right);
         moveConditionallyAfterFloatingPointCompare<64>(cond, src, dest);
     }
 
+    void moveConditionallyFloat(DoubleCondition cond, FPRegisterID left, FPRegisterID right, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        m_assembler.fcmp<32>(left, right);
+        moveConditionallyAfterFloatingPointCompare<64>(cond, thenCase, elseCase, dest);
+    }
+
     template<int datasize>
     void moveConditionallyAfterFloatingPointCompare(DoubleCondition cond, RegisterID src, RegisterID dest)
     {
@@ -1628,6 +1640,27 @@
         m_assembler.csel<datasize>(dest, src, dest, ARM64Condition(cond));
     }
 
+    template<int datasize>
+    void moveConditionallyAfterFloatingPointCompare(DoubleCondition cond, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        if (cond == DoubleNotEqual) {
+            Jump unordered = makeBranch(ARM64Assembler::ConditionVS);
+            m_assembler.csel<datasize>(dest, thenCase, elseCase, ARM64Assembler::ConditionNE);
+            unordered.link(this);
+            return;
+        }
+        if (cond == DoubleEqualOrUnordered) {
+            // If the compare is unordered, thenCase is copied to dest and the
+            // next csel has all arguments equal to thenCase.
+            // If the compare is ordered, dest is unchanged and EQ decides
+            // what value to set.
+            m_assembler.csel<datasize>(dest, thenCase, elseCase, ARM64Assembler::ConditionVS);
+            m_assembler.csel<datasize>(dest, thenCase, dest, ARM64Assembler::ConditionEQ);
+            return;
+        }
+        m_assembler.csel<datasize>(dest, thenCase, elseCase, ARM64Condition(cond));
+    }
+
     void mulDouble(FPRegisterID src, FPRegisterID dest)
     {
         mulDouble(dest, src, dest);
@@ -1894,24 +1927,48 @@
         m_assembler.csel<32>(dest, src, dest, ARM64Condition(cond));
     }
 
+    void moveConditionally32(RelationalCondition cond, RegisterID left, RegisterID right, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        m_assembler.cmp<32>(left, right);
+        m_assembler.csel<32>(dest, thenCase, elseCase, ARM64Condition(cond));
+    }
+
     void moveConditionally64(RelationalCondition cond, RegisterID left, RegisterID right, RegisterID src, RegisterID dest)
     {
         m_assembler.cmp<64>(left, right);
         m_assembler.csel<64>(dest, src, dest, ARM64Condition(cond));
     }
 
+    void moveConditionally64(RelationalCondition cond, RegisterID left, RegisterID right, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        m_assembler.cmp<64>(left, right);
+        m_assembler.csel<64>(dest, thenCase, elseCase, ARM64Condition(cond));
+    }
+
     void moveConditionallyTest32(ResultCondition cond, RegisterID testReg, RegisterID mask, RegisterID src, RegisterID dest)
     {
         m_assembler.tst<32>(testReg, mask);
         m_assembler.csel<32>(dest, src, dest, ARM64Condition(cond));
     }
 
+    void moveConditionallyTest32(ResultCondition cond, RegisterID left, RegisterID right, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        m_assembler.tst<32>(left, right);
+        m_assembler.csel<32>(dest, thenCase, elseCase, ARM64Condition(cond));
+    }
+
     void moveConditionallyTest64(ResultCondition cond, RegisterID testReg, RegisterID mask, RegisterID src, RegisterID dest)
     {
         m_assembler.tst<64>(testReg, mask);
         m_assembler.csel<64>(dest, src, dest, ARM64Condition(cond));
     }
 
+    void moveConditionallyTest64(ResultCondition cond, RegisterID left, RegisterID right, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        m_assembler.tst<64>(left, right);
+        m_assembler.csel<64>(dest, thenCase, elseCase, ARM64Condition(cond));
+    }
+
     // Forwards / external control flow operations:
     //
     // This set of jump and conditional branch operations return a Jump

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h (196798 => 196799)


--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h	2016-02-19 05:26:53 UTC (rev 196798)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h	2016-02-19 06:42:06 UTC (rev 196799)
@@ -1577,6 +1577,30 @@
         moveConditionallyAfterFloatingPointCompare(cond, left, right, src, dest);
     }
 
+    void moveConditionallyDouble(DoubleCondition cond, FPRegisterID left, FPRegisterID right, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        ASSERT(isSSE2Present());
+
+        if (thenCase != dest && elseCase != dest) {
+            move(elseCase, dest);
+            elseCase = dest;
+        }
+
+        RegisterID src;
+        if (elseCase == dest)
+            src = thenCase;
+        else {
+            cond = invert(cond);
+            src = elseCase;
+        }
+
+        if (cond & DoubleConditionBitInvert)
+            m_assembler.ucomisd_rr(left, right);
+        else
+            m_assembler.ucomisd_rr(right, left);
+        moveConditionallyAfterFloatingPointCompare(cond, left, right, src, dest);
+    }
+
     void moveConditionallyFloat(DoubleCondition cond, FPRegisterID left, FPRegisterID right, RegisterID src, RegisterID dest)
     {
         ASSERT(isSSE2Present());
@@ -1587,6 +1611,30 @@
             m_assembler.ucomiss_rr(right, left);
         moveConditionallyAfterFloatingPointCompare(cond, left, right, src, dest);
     }
+
+    void moveConditionallyFloat(DoubleCondition cond, FPRegisterID left, FPRegisterID right, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        ASSERT(isSSE2Present());
+
+        if (thenCase != dest && elseCase != dest) {
+            move(elseCase, dest);
+            elseCase = dest;
+        }
+
+        RegisterID src;
+        if (elseCase == dest)
+            src = thenCase;
+        else {
+            cond = invert(cond);
+            src = elseCase;
+        }
+
+        if (cond & DoubleConditionBitInvert)
+            m_assembler.ucomiss_rr(left, right);
+        else
+            m_assembler.ucomiss_rr(right, left);
+        moveConditionallyAfterFloatingPointCompare(cond, left, right, src, dest);
+    }
     
     void swap(RegisterID reg1, RegisterID reg2)
     {
@@ -1700,19 +1748,69 @@
         cmov(x86Condition(cond), src, dest);
     }
 
+    void moveConditionally32(RelationalCondition cond, RegisterID left, RegisterID right, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        m_assembler.cmpl_rr(right, left);
+
+        if (thenCase != dest && elseCase != dest) {
+            move(elseCase, dest);
+            elseCase = dest;
+        }
+
+        if (elseCase == dest)
+            cmov(x86Condition(cond), thenCase, dest);
+        else
+            cmov(x86Condition(invert(cond)), elseCase, dest);
+    }
+
     void moveConditionallyTest32(ResultCondition cond, RegisterID testReg, RegisterID mask, RegisterID src, RegisterID dest)
     {
         m_assembler.testl_rr(testReg, mask);
         cmov(x86Condition(cond), src, dest);
     }
-    
+
+    void moveConditionallyTest32(ResultCondition cond, RegisterID left, RegisterID right, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        ASSERT(isInvertible(cond));
+        ASSERT_WITH_MESSAGE(cond != Overflow, "TEST does not set the Overflow Flag.");
+
+        m_assembler.testl_rr(right, left);
+
+        if (thenCase != dest && elseCase != dest) {
+            move(elseCase, dest);
+            elseCase = dest;
+        }
+
+        if (elseCase == dest)
+            cmov(x86Condition(cond), thenCase, dest);
+        else
+            cmov(x86Condition(invert(cond)), elseCase, dest);
+    }
+
     void moveConditionallyTest32(ResultCondition cond, RegisterID testReg, TrustedImm32 mask, RegisterID src, RegisterID dest)
     {
         test32(cond, testReg, mask);
         cmov(x86Condition(cond), src, dest);
     }
-    
 
+    void moveConditionallyTest32(ResultCondition cond, RegisterID testReg, TrustedImm32 mask, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        ASSERT(isInvertible(cond));
+        ASSERT_WITH_MESSAGE(cond != Overflow, "TEST does not set the Overflow Flag.");
+
+        test32(cond, testReg, mask);
+
+        if (thenCase != dest && elseCase != dest) {
+            move(elseCase, dest);
+            elseCase = dest;
+        }
+
+        if (elseCase == dest)
+            cmov(x86Condition(cond), thenCase, dest);
+        else
+            cmov(x86Condition(invert(cond)), elseCase, dest);
+    }
+
     // Forwards / external control flow operations:
     //
     // This set of jump and conditional branch operations return a Jump
@@ -2120,6 +2218,68 @@
         return static_cast<RelationalCondition>(cond ^ 1);
     }
 
+    static DoubleCondition invert(DoubleCondition cond)
+    {
+        switch (cond) {
+        case DoubleEqual:
+            return DoubleNotEqualOrUnordered;
+        case DoubleNotEqual:
+            return DoubleEqualOrUnordered;
+        case DoubleGreaterThan:
+            return DoubleLessThanOrEqualOrUnordered;
+        case DoubleGreaterThanOrEqual:
+            return DoubleLessThanOrUnordered;
+        case DoubleLessThan:
+            return DoubleGreaterThanOrEqualOrUnordered;
+        case DoubleLessThanOrEqual:
+            return DoubleGreaterThanOrUnordered;
+        case DoubleEqualOrUnordered:
+            return DoubleNotEqual;
+        case DoubleNotEqualOrUnordered:
+            return DoubleEqual;
+        case DoubleGreaterThanOrUnordered:
+            return DoubleLessThanOrEqual;
+        case DoubleGreaterThanOrEqualOrUnordered:
+            return DoubleLessThan;
+        case DoubleLessThanOrUnordered:
+            return DoubleGreaterThanOrEqual;
+        case DoubleLessThanOrEqualOrUnordered:
+            return DoubleGreaterThan;
+        }
+        RELEASE_ASSERT_NOT_REACHED();
+        return DoubleEqual; // make compiler happy
+    }
+
+    static bool isInvertible(ResultCondition cond)
+    {
+        switch (cond) {
+        case Zero:
+        case NonZero:
+        case Signed:
+        case PositiveOrZero:
+            return true;
+        default:
+            return false;
+        }
+    }
+
+    static ResultCondition invert(ResultCondition cond)
+    {
+        switch (cond) {
+        case Zero:
+            return NonZero;
+        case NonZero:
+            return Zero;
+        case Signed:
+            return PositiveOrZero;
+        case PositiveOrZero:
+            return Signed;
+        default:
+            RELEASE_ASSERT_NOT_REACHED();
+            return Zero; // Make compiler happy for release builds.
+        }
+    }
+
     void nop()
     {
         m_assembler.nop();

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h (196798 => 196799)


--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h	2016-02-19 05:26:53 UTC (rev 196798)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h	2016-02-19 06:42:06 UTC (rev 196799)
@@ -979,11 +979,44 @@
         cmov(x86Condition(cond), src, dest);
     }
 
+    void moveConditionally64(RelationalCondition cond, RegisterID left, RegisterID right, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        m_assembler.cmpq_rr(right, left);
+
+        if (thenCase != dest && elseCase != dest) {
+            move(elseCase, dest);
+            elseCase = dest;
+        }
+
+        if (elseCase == dest)
+            cmov(x86Condition(cond), thenCase, dest);
+        else
+            cmov(x86Condition(invert(cond)), elseCase, dest);
+    }
+
     void moveConditionallyTest64(ResultCondition cond, RegisterID testReg, RegisterID mask, RegisterID src, RegisterID dest)
     {
         m_assembler.testq_rr(testReg, mask);
         cmov(x86Condition(cond), src, dest);
     }
+
+    void moveConditionallyTest64(ResultCondition cond, RegisterID left, RegisterID right, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        ASSERT(isInvertible(cond));
+        ASSERT_WITH_MESSAGE(cond != Overflow, "TEST does not set the Overflow Flag.");
+
+        m_assembler.testq_rr(right, left);
+
+        if (thenCase != dest && elseCase != dest) {
+            move(elseCase, dest);
+            elseCase = dest;
+        }
+
+        if (elseCase == dest)
+            cmov(x86Condition(cond), thenCase, dest);
+        else
+            cmov(x86Condition(invert(cond)), elseCase, dest);
+    }
     
     void moveConditionallyTest64(ResultCondition cond, RegisterID testReg, TrustedImm32 mask, RegisterID src, RegisterID dest)
     {
@@ -996,6 +1029,29 @@
             m_assembler.testq_i32r(mask.m_value, testReg);
         cmov(x86Condition(cond), src, dest);
     }
+
+    void moveConditionallyTest64(ResultCondition cond, RegisterID testReg, TrustedImm32 mask, RegisterID thenCase, RegisterID elseCase, RegisterID dest)
+    {
+        ASSERT(isInvertible(cond));
+        ASSERT_WITH_MESSAGE(cond != Overflow, "TEST does not set the Overflow Flag.");
+
+        if (mask.m_value == -1)
+            m_assembler.testq_rr(testReg, testReg);
+        else if (!(mask.m_value & ~0x7f))
+            m_assembler.testb_i8r(mask.m_value, testReg);
+        else
+            m_assembler.testq_i32r(mask.m_value, testReg);
+
+        if (thenCase != dest && elseCase != dest) {
+            move(elseCase, dest);
+            elseCase = dest;
+        }
+
+        if (elseCase == dest)
+            cmov(x86Condition(cond), thenCase, dest);
+        else
+            cmov(x86Condition(invert(cond)), elseCase, dest);
+    }
     
     void abortWithReason(AbortReason reason)
     {

Modified: trunk/Source/JavaScriptCore/b3/B3LowerToAir.cpp (196798 => 196799)


--- trunk/Source/JavaScriptCore/b3/B3LowerToAir.cpp	2016-02-19 05:26:53 UTC (rev 196798)
+++ trunk/Source/JavaScriptCore/b3/B3LowerToAir.cpp	2016-02-19 06:42:06 UTC (rev 196799)
@@ -1599,13 +1599,32 @@
         Air::Opcode moveConditionallyTest64;
         Air::Opcode moveConditionallyDouble;
         Air::Opcode moveConditionallyFloat;
-        Tmp source;
-        Tmp destination;
     };
-    Inst createSelect(Value* value, const MoveConditionallyConfig& config, bool inverted = false)
+    Inst createSelect(const MoveConditionallyConfig& config)
     {
+        auto createSelectInstruction = [&] (Air::Opcode opcode, const Arg& condition, const ArgPromise& left, const ArgPromise& right) -> Inst {
+            if (isValidForm(opcode, condition.kind(), left.kind(), right.kind(), Arg::Tmp, Arg::Tmp, Arg::Tmp)) {
+                Tmp result = tmp(m_value);
+                Tmp thenCase = tmp(m_value->child(1));
+                Tmp elseCase = tmp(m_value->child(2));
+                append(relaxedMoveForType(m_value->type()), tmp(m_value->child(2)), result);
+                return Inst(
+                    opcode, m_value, condition,
+                    left.consume(*this), right.consume(*this), thenCase, elseCase, result);
+            }
+            if (isValidForm(opcode, condition.kind(), left.kind(), right.kind(), Arg::Tmp, Arg::Tmp)) {
+                Tmp result = tmp(m_value);
+                Tmp source = tmp(m_value->child(1));
+                append(relaxedMoveForType(m_value->type()), tmp(m_value->child(2)), result);
+                return Inst(
+                    opcode, m_value, condition,
+                    left.consume(*this), right.consume(*this), source, result);
+            }
+            return Inst();
+        };
+
         return createGenericCompare(
-            value,
+            m_value->child(0),
             [&] (
                 Arg::Width width, const Arg& relCond,
                 const ArgPromise& left, const ArgPromise& right) -> Inst {
@@ -1617,19 +1636,9 @@
                 case Arg::Width16:
                     return Inst();
                 case Arg::Width32:
-                    if (isValidForm(config.moveConditionally32, Arg::RelCond, left.kind(), right.kind(), Arg::Tmp, Arg::Tmp)) {
-                        return Inst(
-                            config.moveConditionally32, m_value, relCond,
-                            left.consume(*this), right.consume(*this), config.source, config.destination);
-                    }
-                    return Inst();
+                    return createSelectInstruction(config.moveConditionally32, relCond, left, right);
                 case Arg::Width64:
-                    if (isValidForm(config.moveConditionally64, Arg::RelCond, left.kind(), right.kind(), Arg::Tmp, Arg::Tmp)) {
-                        return Inst(
-                            config.moveConditionally64, m_value, relCond,
-                            left.consume(*this), right.consume(*this), config.source, config.destination);
-                    }
-                    return Inst();
+                    return createSelectInstruction(config.moveConditionally64, relCond, left, right);
                 }
                 ASSERT_NOT_REACHED();
             },
@@ -1644,39 +1653,19 @@
                 case Arg::Width16:
                     return Inst();
                 case Arg::Width32:
-                    if (isValidForm(config.moveConditionallyTest32, Arg::ResCond, left.kind(), right.kind(), Arg::Tmp, Arg::Tmp)) {
-                        return Inst(
-                            config.moveConditionallyTest32, m_value, resCond,
-                            left.consume(*this), right.consume(*this), config.source, config.destination);
-                    }
-                    return Inst();
+                    return createSelectInstruction(config.moveConditionallyTest32, resCond, left, right);
                 case Arg::Width64:
-                    if (isValidForm(config.moveConditionallyTest64, Arg::ResCond, left.kind(), right.kind(), Arg::Tmp, Arg::Tmp)) {
-                        return Inst(
-                            config.moveConditionallyTest64, m_value, resCond,
-                            left.consume(*this), right.consume(*this), config.source, config.destination);
-                    }
-                    return Inst();
+                    return createSelectInstruction(config.moveConditionallyTest64, resCond, left, right);
                 }
                 ASSERT_NOT_REACHED();
             },
             [&] (Arg doubleCond, const ArgPromise& left, const ArgPromise& right) -> Inst {
-                if (isValidForm(config.moveConditionallyDouble, Arg::DoubleCond, left.kind(), right.kind(), Arg::Tmp, Arg::Tmp)) {
-                    return Inst(
-                        config.moveConditionallyDouble, m_value, doubleCond,
-                        left.consume(*this), right.consume(*this), config.source, config.destination);
-                }
-                return Inst();
+                return createSelectInstruction(config.moveConditionallyDouble, doubleCond, left, right);
             },
             [&] (Arg doubleCond, const ArgPromise& left, const ArgPromise& right) -> Inst {
-                if (isValidForm(config.moveConditionallyFloat, Arg::DoubleCond, left.kind(), right.kind(), Arg::Tmp, Arg::Tmp)) {
-                    return Inst(
-                        config.moveConditionallyFloat, m_value, doubleCond,
-                        left.consume(*this), right.consume(*this), config.source, config.destination);
-                }
-                return Inst();
+                return createSelectInstruction(config.moveConditionallyFloat, doubleCond, left, right);
             },
-            inverted);
+            false);
     }
 
     void lower()
@@ -2027,13 +2016,7 @@
         }
 
         case Select: {
-            Tmp result = tmp(m_value);
-            append(relaxedMoveForType(m_value->type()), tmp(m_value->child(2)), result);
-
             MoveConditionallyConfig config;
-            config.source = tmp(m_value->child(1));
-            config.destination = result;
-
             if (isInt(m_value->type())) {
                 config.moveConditionally32 = MoveConditionally32;
                 config.moveConditionally64 = MoveConditionally64;
@@ -2051,7 +2034,7 @@
                 config.moveConditionallyFloat = MoveDoubleConditionallyFloat;
             }
             
-            m_insts.last().append(createSelect(m_value->child(0), config));
+            m_insts.last().append(createSelect(config));
             return;
         }
 

Modified: trunk/Source/JavaScriptCore/b3/air/AirInstInlines.h (196798 => 196799)


--- trunk/Source/JavaScriptCore/b3/air/AirInstInlines.h	2016-02-19 05:26:53 UTC (rev 196798)
+++ trunk/Source/JavaScriptCore/b3/air/AirInstInlines.h	2016-02-19 06:42:06 UTC (rev 196799)
@@ -196,6 +196,16 @@
         if (args.size() == 4)
             return 3;
         break;
+    case MoveConditionally32:
+    case MoveConditionally64:
+    case MoveConditionallyTest32:
+    case MoveConditionallyTest64:
+    case MoveConditionallyDouble:
+    case MoveConditionallyFloat:
+        if (args.size() == 6)
+            return 5;
+        break;
     case Patch:
         return PatchCustom::shouldTryAliasingDef(*this);
     default:

Modified: trunk/Source/JavaScriptCore/b3/air/AirOpcode.opcodes (196798 => 196799)


--- trunk/Source/JavaScriptCore/b3/air/AirOpcode.opcodes	2016-02-19 05:26:53 UTC (rev 196798)
+++ trunk/Source/JavaScriptCore/b3/air/AirOpcode.opcodes	2016-02-19 06:42:06 UTC (rev 196799)
@@ -713,20 +713,40 @@
 MoveConditionally32 U:G:32, U:G:32, U:G:32, U:G:Ptr, UD:G:Ptr
     RelCond, Tmp, Tmp, Tmp, Tmp
 
+MoveConditionally32 U:G:32, U:G:32, U:G:32, U:G:Ptr, U:G:Ptr, D:G:Ptr
+    RelCond, Tmp, Tmp, Tmp, Tmp, Tmp
+
 64: MoveConditionally64 U:G:32, U:G:64, U:G:64, U:G:Ptr, UD:G:Ptr
     RelCond, Tmp, Tmp, Tmp, Tmp
 
+64: MoveConditionally64 U:G:32, U:G:64, U:G:64, U:G:Ptr, U:G:Ptr, D:G:Ptr
+    RelCond, Tmp, Tmp, Tmp, Tmp, Tmp
+
 MoveConditionallyTest32 U:G:32, U:G:32, U:G:32, U:G:Ptr, UD:G:Ptr
     ResCond, Tmp, Tmp, Tmp, Tmp
     x86: ResCond, Tmp, Imm, Tmp, Tmp
 
+MoveConditionallyTest32 U:G:32, U:G:32, U:G:32, U:G:Ptr, U:G:Ptr, D:G:Ptr
+    ResCond, Tmp, Tmp, Tmp, Tmp, Tmp
+    x86: ResCond, Tmp, Imm, Tmp, Tmp, Tmp
+
 64: MoveConditionallyTest64 U:G:32, U:G:64, U:G:64, U:G:Ptr, UD:G:Ptr
     ResCond, Tmp, Tmp, Tmp, Tmp
     x86: ResCond, Tmp, Imm, Tmp, Tmp
 
+64: MoveConditionallyTest64 U:G:32, U:G:64, U:G:64, U:G:Ptr, U:G:Ptr, D:G:Ptr
+    ResCond, Tmp, Tmp, Tmp, Tmp, Tmp
+    x86_64: ResCond, Tmp, Imm, Tmp, Tmp, Tmp
+
+MoveConditionallyDouble U:G:32, U:F:64, U:F:64, U:G:Ptr, U:G:Ptr, D:G:Ptr
+    DoubleCond, Tmp, Tmp, Tmp, Tmp, Tmp
+
 MoveConditionallyDouble U:G:32, U:F:64, U:F:64, U:G:Ptr, UD:G:Ptr
     DoubleCond, Tmp, Tmp, Tmp, Tmp
 
+MoveConditionallyFloat U:G:32, U:F:32, U:F:32, U:G:Ptr, U:G:Ptr, D:G:Ptr
+    DoubleCond, Tmp, Tmp, Tmp, Tmp, Tmp
+
 MoveConditionallyFloat U:G:32, U:F:32, U:F:32, U:G:Ptr, UD:G:Ptr
     DoubleCond, Tmp, Tmp, Tmp, Tmp
 
_______________________________________________
webkit-changes mailing list
[email protected]
https://lists.webkit.org/mailman/listinfo/webkit-changes
