[240560] branches/safari-607-branch
Revision: 240560
Author: [email protected]
Date: 2019-01-28 01:40:33 -0800 (Mon, 28 Jan 2019)

Log Message

Cherry-pick r240364. rdar://problem/47586820

    [DFG] AvailabilityMap::pruneByLiveness should make non-live operands Availability::unavailable instead of Availability()
    https://bugs.webkit.org/show_bug.cgi?id=193711
    <rdar://problem/47250262>

    Reviewed by Saam Barati.

    JSTests:

    * stress/availability-was-cleared-when-locals-are-not-live.js: Added.
    (shouldBe):
    (foo):
    (bar):
    (baz):

    Source/JavaScriptCore:

    When pruning OSR Availability based on bytecode liveness, we accidentally clear the Availability (making it DeadFlush) instead of
    making it Availability::unavailable() (making it ConflictingFlush). OSRAvailabilityAnalysisPhase performs a forward analysis.
    We first clear the availability of every basic block to DeadFlush, which is an empty set, and then set the operands in the root
    block to ConflictingFlush. In this forward analysis, DeadFlush is BOTTOM and ConflictingFlush is TOP. We then propagate information
    by merging availability until we reach a fixed point. As an optimization, we "prune" the availability at the head of each basic
    block: we remove the availability of every operand that is not live in the bytecode liveness at the head of that block.
    The problem is that, when removing these availabilities, we set them to DeadFlush instead of ConflictingFlush. In other words,
    we set BOTTOM (an empty set) instead of TOP. Consider the following simple example: six basic blocks, connected as follows.

        BB0 -> BB1 -> BB2 -> BB4
         |        \        ^
         v          > BB3 /
        BB5
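The merge driving this forward analysis can be modeled with a tiny three-point lattice. This is an illustrative sketch, not JSC's actual Availability/FlushFormat types:

```cpp
#include <cassert>

// Illustrative flush lattice for a single operand:
// DeadFlush is BOTTOM (the empty set), ConflictingFlush is TOP.
enum class Flush { DeadFlush, FlushedJSValue, ConflictingFlush };

// Join applied when availability flows into a successor block.
// BOTTOM is the identity and TOP is absorbing, so anything merged
// with ConflictingFlush stays conservative.
Flush merge(Flush a, Flush b)
{
    if (a == Flush::DeadFlush)
        return b;
    if (b == Flush::DeadFlush)
        return a;
    return a == b ? a : Flush::ConflictingFlush;
}
```

Because BOTTOM is the identity of this join, a state wrongly reset to DeadFlush simply vanishes at the next merge point instead of forcing a conservative result there.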

    Now consider loc1 in FTL, which must be recovered at BB4's OSR exit.

        BB0 does nothing
            head: loc1 is dead
            tail: loc1 is dead

        BB1 has MovHint @1, loc1
            head: loc1 is dead
            tail: loc1 is live

        BB2 does nothing
            head: loc1 is live
            tail: loc1 is live

        BB3 has PutStack @1, loc1
            head: loc1 is live
            tail: loc1 is live

        BB4 has OSR exit using loc1
            head: loc1 is live
            tail: loc1 is live (in bytecode)

        BB5 does nothing
            head: loc1 is dead
            tail: loc1 is dead

    In our OSR Availability analysis, we always prune the loc1 result at BB1's head, since the head's liveness says "loc1 is dead".
    But at that point we clear the availability for loc1, making it DeadFlush instead of ConflictingFlush.

    So the flush format of loc1 at the tail of each basic block is as follows.

        BB0
            ConflictingFlush (because all the local operands are initialized with ConflictingFlush)
        BB1
            DeadFlush+@1 (pruning clears it)
        BB2
            DeadFlush+@1 (since it is propagated from BB1)
        BB3
            FlushedJSValue+@1 with loc1 (since it has PutStack)
        BB4
            FlushedJSValue+@1 with loc1 (since MERGE(DeadFlush, FlushedJSValue) = FlushedJSValue)
        BB5
            DeadFlush (pruning clears it)
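With the same illustrative lattice (a sketch, not JSC's types), the merge at BB4's head makes the failure concrete: the buggy DeadFlush from BB2 merges away, whereas a ConflictingFlush would survive and force a safe result:

```cpp
#include <cassert>

// Illustrative flush lattice: DeadFlush is BOTTOM, ConflictingFlush is TOP.
enum class Flush { DeadFlush, FlushedJSValue, ConflictingFlush };

Flush merge(Flush a, Flush b)
{
    if (a == Flush::DeadFlush)
        return b;
    if (b == Flush::DeadFlush)
        return a;
    return a == b ? a : Flush::ConflictingFlush;
}

// loc1 at BB4's head is the merge of the BB2 tail and the BB3 tail.
// Buggy pruning left BB2's tail at DeadFlush (BOTTOM), so BB3's
// PutStack-derived state wins even on the BB0->BB1->BB2->BB4 path:
const Flush buggyAtBB4 = merge(Flush::DeadFlush, Flush::FlushedJSValue);

// Fixed pruning leaves BB2's tail at ConflictingFlush (TOP), which
// survives the merge and forces a conservative recovery at the exit:
const Flush fixedAtBB4 = merge(Flush::ConflictingFlush, Flush::FlushedJSValue);
```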

    Then, if we take the path BB0->BB1->BB2->BB4, we read the value from the stack even though it was never flushed on that path.
    The correct fix is to make the availability "unavailable" when pruning based on bytecode liveness.
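The shape of the fix, mirroring the DFGAvailabilityMap.cpp change in the diff: the pruned copy starts out unavailable (TOP) and only bytecode-live operands are copied back. The types and liveness representation here are simplified stand-ins, not JSC's:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-in for JSC's Availability.
enum class Availability { DeadFlush /* BOTTOM */, Available, Unavailable /* TOP */ };

std::vector<Availability> pruneByLiveness(
    const std::vector<Availability>& locals,
    const std::vector<bool>& liveInBytecode)
{
    // The fix: default every operand to Unavailable (TOP). The buggy
    // version defaulted to DeadFlush (BOTTOM), letting dead operands
    // merge away so that a stale flush state could win downstream.
    std::vector<Availability> copy(locals.size(), Availability::Unavailable);
    for (std::size_t reg = 0; reg < locals.size(); ++reg) {
        if (liveInBytecode[reg])
            copy[reg] = locals[reg]; // live operands keep their computed availability
    }
    return copy;
}
```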

    * dfg/DFGAvailabilityMap.cpp:
    (JSC::DFG::AvailabilityMap::pruneByLiveness): When pruning availability, we first set all operands to Availability::unavailable(),
    then copy the computed availability of each live operand from the current availability map.
    * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
    (JSC::DFG::OSRAvailabilityAnalysisPhase::run): Add logging for debugging.

    git-svn-id: https://svn.webkit.org/repository/webkit/trunk@240364 268f45cc-cd09-0410-ab3c-d52691b4dbfc

Modified Paths

branches/safari-607-branch/JSTests/ChangeLog
branches/safari-607-branch/Source/JavaScriptCore/ChangeLog
branches/safari-607-branch/Source/JavaScriptCore/dfg/DFGAvailabilityMap.cpp
branches/safari-607-branch/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp

Added Paths

branches/safari-607-branch/JSTests/stress/availability-was-cleared-when-locals-are-not-live.js

Diff

Modified: branches/safari-607-branch/JSTests/ChangeLog (240559 => 240560)


--- branches/safari-607-branch/JSTests/ChangeLog	2019-01-28 09:29:20 UTC (rev 240559)
+++ branches/safari-607-branch/JSTests/ChangeLog	2019-01-28 09:40:33 UTC (rev 240560)
@@ -1,3 +1,107 @@
+2019-01-28  Babak Shafiei  <[email protected]>
+
+        Cherry-pick r240364. rdar://problem/47586820
+
+    [DFG] AvailabilityMap::pruneByLiveness should make non-live operands Availability::unavailable instead of Availability()
+    https://bugs.webkit.org/show_bug.cgi?id=193711
+    <rdar://problem/47250262>
+    
+    Reviewed by Saam Barati.
+    
+    JSTests:
+    
+    * stress/availability-was-cleared-when-locals-are-not-live.js: Added.
+    (shouldBe):
+    (foo):
+    (bar):
+    (baz):
+    
+    Source/JavaScriptCore:
+    
+    When pruning OSR Availability based on bytecode liveness, we accidentally clear the Availability (making it DeadFlush) instead of
+    making it Availability::unavailable() (Making it ConflictingFlush). In OSRAvailabilityAnalysisPhase, we perform forward analysis.
+    We first clear all the availability of basic blocks DeadFlush, which is an empty set. And then, we set operands in the root block
+    ConflictingFlush. In this forward analysis, DeadFlush is BOTTOM, and ConflictingFlush is TOP. Then, we propagate information by
+    merging availability until we reach to the fixed-point. As an optimization, we perform "pruning" of the availability in the head
+    of the basic blocks. We remove availabilities of operands which are not live in the bytecode liveness at the head of the basic block.
+    The problem is, when removing availabilities, we set DeadFlush for them instead of ConflictingFlush. Basically, it means that we set
+    BOTTOM (an empty set) instead of TOP. Let's consider the following simple example. We have 6 basic blocks, and they are connected
+    as follows.
+    
+        BB0 -> BB1 -> BB2 -> BB4
+         |        \        ^
+         v          > BB3 /
+        BB5
+    
+    And consider about loc1 in FTL, which is required to be recovered in BB4's OSR exit.
+    
+        BB0 does nothing
+            head: loc1 is dead
+            tail: loc1 is dead
+    
+        BB1 has MovHint @1, loc1
+            head: loc1 is dead
+            tail: loc1 is live
+    
+        BB2 does nothing
+            head: loc1 is live
+            tail: loc1 is live
+    
+        BB3 has PutStack @1, loc1
+            head: loc1 is live
+            tail: loc1 is live
+    
+        BB4 has OSR exit using loc1
+            head: loc1 is live
+            tail: loc1 is live (in bytecode)
+    
+        BB5 does nothing
+            head: loc1 is dead
+            tail: loc1 is dead
+    
+    In our OSR Availability analysis, we always prune loc1 result in BB1's head since its head says "loc1 is dead".
+    But at that time, we clear the availability for loc1, which makes it DeadFlush, instead of making it ConflictingFlush.
+    
+    So, the flush format of loc1 in each tail of BB is like this.
+    
+        BB0
+            ConflictingFlush (because all the local operands are initialized with ConflictingFlush)
+        BB1
+            DeadFlush+@1 (pruning clears it)
+        BB2
+            DeadFlush+@1 (since it is propagated from BB1)
+        BB3
+            FlushedJSValue+@1 with loc1 (since it has PutStack)
+        BB4
+            FlushedJSValue+@1 with loc1 (since MERGE(DeadFlush, FlushedJSValue) = FlushedJSValue)
+        BB5
+            DeadFlush (pruning clears it)
+    
+    Then, if we go the path BB0->BB1->BB2->BB4, we read the value from the stack while it is not flushed.
+    The correct fix is making availability "unavailable" when pruning based on bytecode liveness.
+    
+    * dfg/DFGAvailabilityMap.cpp:
+    (JSC::DFG::AvailabilityMap::pruneByLiveness): When pruning availability, we first set all the operands Availability::unavailable(),
+    and copy the calculated value from the current availability map.
+    * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
+    (JSC::DFG::OSRAvailabilityAnalysisPhase::run): Add logging things for debugging.
+    
+    git-svn-id: https://svn.webkit.org/repository/webkit/trunk@240364 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+
+    2019-01-23  Yusuke Suzuki  <[email protected]>
+
+            [DFG] AvailabilityMap::pruneByLiveness should make non-live operands Availability::unavailable instead of Availability()
+            https://bugs.webkit.org/show_bug.cgi?id=193711
+            <rdar://problem/47250262>
+
+            Reviewed by Saam Barati.
+
+            * stress/availability-was-cleared-when-locals-are-not-live.js: Added.
+            (shouldBe):
+            (foo):
+            (bar):
+            (baz):
+
 2019-01-25  Mark Lam  <[email protected]>
 
         Cherry-pick r240329. rdar://problem/47458354

Added: branches/safari-607-branch/JSTests/stress/availability-was-cleared-when-locals-are-not-live.js (0 => 240560)


--- branches/safari-607-branch/JSTests/stress/availability-was-cleared-when-locals-are-not-live.js	                        (rev 0)
+++ branches/safari-607-branch/JSTests/stress/availability-was-cleared-when-locals-are-not-live.js	2019-01-28 09:40:33 UTC (rev 240560)
@@ -0,0 +1,37 @@
+//@ runDefault("--jitPolicyScale=0", "--useConcurrentJIT=false")
+function shouldBe(actual, expected) {
+    if (actual !== expected)
+        throw new Error('bad value: ' + actual);
+}
+noInline(shouldBe);
+
+var a;
+
+function foo(x, y, z) {
+    baz(a);
+    0 + (x ? a : [] + 0);
+    return y;
+}
+
+function bar() {
+    return foo.apply(null, arguments);
+}
+
+function baz(p) {
+    if (p) {
+        return bar(1, 1, 0);
+    }
+}
+
+baz(1);
+
+for (let i = 0; i < 1; i++) {
+    foo(1);
+}
+
+for (let i = 0; i < 10000; i++) {
+    baz();
+}
+
+let hello = baz(1);
+shouldBe(hello, 1);

Modified: branches/safari-607-branch/Source/JavaScriptCore/ChangeLog (240559 => 240560)


--- branches/safari-607-branch/Source/JavaScriptCore/ChangeLog	2019-01-28 09:29:20 UTC (rev 240559)
+++ branches/safari-607-branch/Source/JavaScriptCore/ChangeLog	2019-01-28 09:40:33 UTC (rev 240560)
@@ -1,3 +1,169 @@
+2019-01-28  Babak Shafiei  <[email protected]>
+
+        Cherry-pick r240364. rdar://problem/47586820
+
+    [DFG] AvailabilityMap::pruneByLiveness should make non-live operands Availability::unavailable instead of Availability()
+    https://bugs.webkit.org/show_bug.cgi?id=193711
+    <rdar://problem/47250262>
+    
+    Reviewed by Saam Barati.
+    
+    JSTests:
+    
+    * stress/availability-was-cleared-when-locals-are-not-live.js: Added.
+    (shouldBe):
+    (foo):
+    (bar):
+    (baz):
+    
+    Source/JavaScriptCore:
+    
+    When pruning OSR Availability based on bytecode liveness, we accidentally clear the Availability (making it DeadFlush) instead of
+    making it Availability::unavailable() (Making it ConflictingFlush). In OSRAvailabilityAnalysisPhase, we perform forward analysis.
+    We first clear all the availability of basic blocks DeadFlush, which is an empty set. And then, we set operands in the root block
+    ConflictingFlush. In this forward analysis, DeadFlush is BOTTOM, and ConflictingFlush is TOP. Then, we propagate information by
+    merging availability until we reach to the fixed-point. As an optimization, we perform "pruning" of the availability in the head
+    of the basic blocks. We remove availabilities of operands which are not live in the bytecode liveness at the head of the basic block.
+    The problem is, when removing availabilities, we set DeadFlush for them instead of ConflictingFlush. Basically, it means that we set
+    BOTTOM (an empty set) instead of TOP. Let's consider the following simple example. We have 6 basic blocks, and they are connected
+    as follows.
+    
+        BB0 -> BB1 -> BB2 -> BB4
+         |        \        ^
+         v          > BB3 /
+        BB5
+    
+    And consider about loc1 in FTL, which is required to be recovered in BB4's OSR exit.
+    
+        BB0 does nothing
+            head: loc1 is dead
+            tail: loc1 is dead
+    
+        BB1 has MovHint @1, loc1
+            head: loc1 is dead
+            tail: loc1 is live
+    
+        BB2 does nothing
+            head: loc1 is live
+            tail: loc1 is live
+    
+        BB3 has PutStack @1, loc1
+            head: loc1 is live
+            tail: loc1 is live
+    
+        BB4 has OSR exit using loc1
+            head: loc1 is live
+            tail: loc1 is live (in bytecode)
+    
+        BB5 does nothing
+            head: loc1 is dead
+            tail: loc1 is dead
+    
+    In our OSR Availability analysis, we always prune loc1 result in BB1's head since its head says "loc1 is dead".
+    But at that time, we clear the availability for loc1, which makes it DeadFlush, instead of making it ConflictingFlush.
+    
+    So, the flush format of loc1 in each tail of BB is like this.
+    
+        BB0
+            ConflictingFlush (because all the local operands are initialized with ConflictingFlush)
+        BB1
+            DeadFlush+@1 (pruning clears it)
+        BB2
+            DeadFlush+@1 (since it is propagated from BB1)
+        BB3
+            FlushedJSValue+@1 with loc1 (since it has PutStack)
+        BB4
+            FlushedJSValue+@1 with loc1 (since MERGE(DeadFlush, FlushedJSValue) = FlushedJSValue)
+        BB5
+            DeadFlush (pruning clears it)
+    
+    Then, if we go the path BB0->BB1->BB2->BB4, we read the value from the stack while it is not flushed.
+    The correct fix is making availability "unavailable" when pruning based on bytecode liveness.
+    
+    * dfg/DFGAvailabilityMap.cpp:
+    (JSC::DFG::AvailabilityMap::pruneByLiveness): When pruning availability, we first set all the operands Availability::unavailable(),
+    and copy the calculated value from the current availability map.
+    * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
+    (JSC::DFG::OSRAvailabilityAnalysisPhase::run): Add logging things for debugging.
+    
+    git-svn-id: https://svn.webkit.org/repository/webkit/trunk@240364 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+
+    2019-01-23  Yusuke Suzuki  <[email protected]>
+
+            [DFG] AvailabilityMap::pruneByLiveness should make non-live operands Availability::unavailable instead of Availability()
+            https://bugs.webkit.org/show_bug.cgi?id=193711
+            <rdar://problem/47250262>
+
+            Reviewed by Saam Barati.
+
+            When pruning OSR Availability based on bytecode liveness, we accidentally clear the Availability (making it DeadFlush) instead of
+            making it Availability::unavailable() (Making it ConflictingFlush). In OSRAvailabilityAnalysisPhase, we perform forward analysis.
+            We first clear all the availability of basic blocks DeadFlush, which is an empty set. And then, we set operands in the root block
+            ConflictingFlush. In this forward analysis, DeadFlush is BOTTOM, and ConflictingFlush is TOP. Then, we propagate information by
+            merging availability until we reach to the fixed-point. As an optimization, we perform "pruning" of the availability in the head
+            of the basic blocks. We remove availabilities of operands which are not live in the bytecode liveness at the head of the basic block.
+            The problem is, when removing availabilities, we set DeadFlush for them instead of ConflictingFlush. Basically, it means that we set
+            BOTTOM (an empty set) instead of TOP. Let's consider the following simple example. We have 6 basic blocks, and they are connected
+            as follows.
+
+                BB0 -> BB1 -> BB2 -> BB4
+                 |        \        ^
+                 v          > BB3 /
+                BB5
+
+            And consider about loc1 in FTL, which is required to be recovered in BB4's OSR exit.
+
+                BB0 does nothing
+                    head: loc1 is dead
+                    tail: loc1 is dead
+
+                BB1 has MovHint @1, loc1
+                    head: loc1 is dead
+                    tail: loc1 is live
+
+                BB2 does nothing
+                    head: loc1 is live
+                    tail: loc1 is live
+
+                BB3 has PutStack @1, loc1
+                    head: loc1 is live
+                    tail: loc1 is live
+
+                BB4 has OSR exit using loc1
+                    head: loc1 is live
+                    tail: loc1 is live (in bytecode)
+
+                BB5 does nothing
+                    head: loc1 is dead
+                    tail: loc1 is dead
+
+            In our OSR Availability analysis, we always prune loc1 result in BB1's head since its head says "loc1 is dead".
+            But at that time, we clear the availability for loc1, which makes it DeadFlush, instead of making it ConflictingFlush.
+
+            So, the flush format of loc1 in each tail of BB is like this.
+
+                BB0
+                    ConflictingFlush (because all the local operands are initialized with ConflictingFlush)
+                BB1
+                    DeadFlush+@1 (pruning clears it)
+                BB2
+                    DeadFlush+@1 (since it is propagated from BB1)
+                BB3
+                    FlushedJSValue+@1 with loc1 (since it has PutStack)
+                BB4
+                    FlushedJSValue+@1 with loc1 (since MERGE(DeadFlush, FlushedJSValue) = FlushedJSValue)
+                BB5
+                    DeadFlush (pruning clears it)
+
+            Then, if we go the path BB0->BB1->BB2->BB4, we read the value from the stack while it is not flushed.
+            The correct fix is making availability "unavailable" when pruning based on bytecode liveness.
+
+            * dfg/DFGAvailabilityMap.cpp:
+            (JSC::DFG::AvailabilityMap::pruneByLiveness): When pruning availability, we first set all the operands Availability::unavailable(),
+            and copy the calculated value from the current availability map.
+            * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
+            (JSC::DFG::OSRAvailabilityAnalysisPhase::run): Add logging things for debugging.
+
 2019-01-25  Mark Lam  <[email protected]>
 
         Cherry-pick r240335. rdar://problem/47494779

Modified: branches/safari-607-branch/Source/JavaScriptCore/dfg/DFGAvailabilityMap.cpp (240559 => 240560)


--- branches/safari-607-branch/Source/JavaScriptCore/dfg/DFGAvailabilityMap.cpp	2019-01-28 09:29:20 UTC (rev 240559)
+++ branches/safari-607-branch/Source/JavaScriptCore/dfg/DFGAvailabilityMap.cpp	2019-01-28 09:40:33 UTC (rev 240560)
@@ -65,7 +65,7 @@
 
 void AvailabilityMap::pruneByLiveness(Graph& graph, CodeOrigin where)
 {
-    Operands<Availability> localsCopy(OperandsLike, m_locals);
+    Operands<Availability> localsCopy(m_locals.numberOfArguments(), m_locals.numberOfLocals(), Availability::unavailable());
     graph.forAllLiveInBytecode(
         where,
         [&] (VirtualRegister reg) {

Modified: branches/safari-607-branch/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp (240559 => 240560)


--- branches/safari-607-branch/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp	2019-01-28 09:29:20 UTC (rev 240559)
+++ branches/safari-607-branch/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp	2019-01-28 09:40:33 UTC (rev 240560)
@@ -35,6 +35,9 @@
 #include "JSCInlines.h"
 
 namespace JSC { namespace DFG {
+namespace DFGOSRAvailabilityAnalysisPhaseInternal {
+static constexpr bool verbose = false;
+}
 
 class OSRAvailabilityAnalysisPhase : public Phase {
 public:
@@ -63,6 +66,21 @@
 
         // This could be made more efficient by processing blocks in reverse postorder.
         
+        auto dumpAvailability = [] (BasicBlock* block) {
+            dataLogLn(block->ssa->availabilityAtHead);
+            dataLogLn(block->ssa->availabilityAtTail);
+        };
+
+        auto dumpBytecodeLivenessAtHead = [&] (BasicBlock* block) {
+            dataLog("Live: ");
+            m_graph.forAllLiveInBytecode(
+                block->at(0)->origin.forExit,
+                [&] (VirtualRegister reg) {
+                    dataLog(reg, " ");
+                });
+            dataLogLn("");
+        };
+
         LocalOSRAvailabilityCalculator calculator(m_graph);
         bool changed;
         do {
@@ -73,6 +91,10 @@
                 if (!block)
                     continue;
                 
+                if (DFGOSRAvailabilityAnalysisPhaseInternal::verbose) {
+                    dataLogLn("Before changing Block #", block->index);
+                    dumpAvailability(block);
+                }
                 calculator.beginBlock(block);
                 
                 for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex)
@@ -84,6 +106,11 @@
                 block->ssa->availabilityAtTail = calculator.m_availability;
                 changed = true;
 
+                if (DFGOSRAvailabilityAnalysisPhaseInternal::verbose) {
+                    dataLogLn("After changing Block #", block->index);
+                    dumpAvailability(block);
+                }
+
                 for (unsigned successorIndex = block->numSuccessors(); successorIndex--;) {
                     BasicBlock* successor = block->successor(successorIndex);
                     successor->ssa->availabilityAtHead.merge(calculator.m_availability);
@@ -93,6 +120,11 @@
                     BasicBlock* successor = block->successor(successorIndex);
                     successor->ssa->availabilityAtHead.pruneByLiveness(
                         m_graph, successor->at(0)->origin.forExit);
+                    if (DFGOSRAvailabilityAnalysisPhaseInternal::verbose) {
+                        dataLogLn("After pruning Block #", successor->index);
+                        dumpAvailability(successor);
+                        dumpBytecodeLivenessAtHead(successor);
+                    }
                 }
             }
         } while (changed);
_______________________________________________
webkit-changes mailing list
[email protected]
https://lists.webkit.org/mailman/listinfo/webkit-changes
