Title: [191594] trunk/Source/JavaScriptCore
Revision: 191594
Author: [email protected]
Date: 2015-10-26 12:48:29 -0700 (Mon, 26 Oct 2015)

Log Message

r190735 Caused us to maybe trample the base's tag-GPR on 32-bit inline cache when the cache allocates a scratch register and then jumps to the slow path
https://bugs.webkit.org/show_bug.cgi?id=150532

Reviewed by Geoffrey Garen.

The base's tag register used to show up in the used register set
before r190735 because of how the DFG kept track of used registers. I changed this
in my work on inline caching because we don't want to spill these registers
when we have a GetByIdFlush/PutByIdFlush and we use the used register set
as the metric of what to spill. That said, these registers should be locked
and not used as scratch registers by the scratch register allocator. The
reason is that our inline cache may fail and jump to the slow path. The slow
path then uses the base's tag register. If the inline cache uses the base's tag
register as a scratch and then fails and jumps to the slow path, we
have a problem because the tag may now be trampled.

Note that this doesn't mean that we can't trample the base's tag register when making
a call. We can totally trample the register as long as the inline cache succeeds in a GetByIdFlush/PutByIdFlush.
The problem is only when we trample it and then jump to the slow path.

This patch fixes this bug by making StructureStubInfo keep track of the base's
tag GPR. PolymorphicAccess then locks this register when using the ScratchRegisterAllocator.

* bytecode/PolymorphicAccess.cpp:
(JSC::AccessCase::generate):
(JSC::PolymorphicAccess::regenerate):
* bytecode/StructureStubInfo.h:
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::compileIn):
* jit/JITInlineCacheGenerator.cpp:
(JSC::JITByIdGenerator::JITByIdGenerator):
* tests/stress/regress-150532.js: Added.
(assert):
(randomFunction):
(foo):
(i.switch):

Modified Paths

trunk/Source/JavaScriptCore/ChangeLog
trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp

Added Paths

trunk/Source/JavaScriptCore/tests/stress/regress-150532.js
Diff

Modified: trunk/Source/JavaScriptCore/ChangeLog (191593 => 191594)


--- trunk/Source/JavaScriptCore/ChangeLog	2015-10-26 19:20:09 UTC (rev 191593)
+++ trunk/Source/JavaScriptCore/ChangeLog	2015-10-26 19:48:29 UTC (rev 191594)
@@ -1,3 +1,42 @@
+2015-10-26  Saam barati  <[email protected]>
+
+        r190735 Caused us to maybe trample the base's tag-GPR on 32-bit inline cache when the cache allocates a scratch register and then jumps to the slow path
+        https://bugs.webkit.org/show_bug.cgi?id=150532
+
+        Reviewed by Geoffrey Garen.
+
+        The base's tag register used to show up in the used register set
+        before r190735 because of how the DFG kept track of used registers. I changed this
+        in my work on inline caching because we don't want to spill these registers
+        when we have a GetByIdFlush/PutByIdFlush and we use the used register set
+        as the metric of what to spill. That said, these registers should be locked
+        and not used as scratch registers by the scratch register allocator. The
+        reason is that our inline cache may fail and jump to the slow path. The slow
+        path then uses the base's tag register. If the inline cache uses the base's tag
+        register as a scratch and then fails and jumps to the slow path, we
+        have a problem because the tag may now be trampled.
+
+        Note that this doesn't mean that we can't trample the base's tag register when making
+        a call. We can totally trample the register as long as the inline cache succeeds in a GetByIdFlush/PutByIdFlush.
+        The problem is only when we trample it and then jump to the slow path.
+
+        This patch fixes this bug by making StructureStubInfo keep track of the base's
+        tag GPR. PolymorphicAccess then locks this register when using the ScratchRegisterAllocator.
+
+        * bytecode/PolymorphicAccess.cpp:
+        (JSC::AccessCase::generate):
+        (JSC::PolymorphicAccess::regenerate):
+        * bytecode/StructureStubInfo.h:
+        * dfg/DFGSpeculativeJIT.cpp:
+        (JSC::DFG::SpeculativeJIT::compileIn):
+        * jit/JITInlineCacheGenerator.cpp:
+        (JSC::JITByIdGenerator::JITByIdGenerator):
+        * tests/stress/regress-150532.js: Added.
+        (assert):
+        (randomFunction):
+        (foo):
+        (i.switch):
+
 2015-10-24  Brian Burg  <[email protected]>
 
         Teach create_hash_table to omit builtins macros when generating tables for native-only objects

Modified: trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp (191593 => 191594)


--- trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp	2015-10-26 19:20:09 UTC (rev 191593)
+++ trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp	2015-10-26 19:48:29 UTC (rev 191594)
@@ -872,6 +872,9 @@
 
         ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
         allocator.lock(baseGPR);
+#if USE(JSVALUE32_64)
+        allocator.lock(static_cast<GPRReg>(stubInfo.patch.baseTagGPR));
+#endif
         allocator.lock(valueRegs);
         allocator.lock(scratchGPR);
 
@@ -1233,6 +1236,9 @@
     state.allocator = &allocator;
     allocator.lock(state.baseGPR);
     allocator.lock(state.valueRegs);
+#if USE(JSVALUE32_64)
+    allocator.lock(static_cast<GPRReg>(stubInfo.patch.baseTagGPR));
+#endif
 
     state.scratchGPR = allocator.allocateScratchGPR();
     

Modified: trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h (191593 => 191594)


--- trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h	2015-10-26 19:20:09 UTC (rev 191593)
+++ trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h	2015-10-26 19:48:29 UTC (rev 191594)
@@ -134,6 +134,7 @@
         int8_t baseGPR;
 #if USE(JSVALUE32_64)
         int8_t valueTagGPR;
+        int8_t baseTagGPR;
 #endif
         int8_t valueGPR;
         RegisterSet usedRegisters;

Modified: trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp (191593 => 191594)


--- trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp	2015-10-26 19:20:09 UTC (rev 191593)
+++ trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp	2015-10-26 19:48:29 UTC (rev 191594)
@@ -960,6 +960,7 @@
             stubInfo->patch.valueGPR = static_cast<int8_t>(resultGPR);
 #if USE(JSVALUE32_64)
             stubInfo->patch.valueTagGPR = static_cast<int8_t>(InvalidGPRReg);
+            stubInfo->patch.baseTagGPR = static_cast<int8_t>(InvalidGPRReg);
 #endif
             stubInfo->patch.usedRegisters = usedRegisters();
 

Modified: trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp (191593 => 191594)


--- trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp	2015-10-26 19:20:09 UTC (rev 191593)
+++ trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp	2015-10-26 19:48:29 UTC (rev 191594)
@@ -61,6 +61,7 @@
     m_stubInfo->patch.baseGPR = static_cast<int8_t>(base.payloadGPR());
     m_stubInfo->patch.valueGPR = static_cast<int8_t>(value.payloadGPR());
 #if USE(JSVALUE32_64)
+    m_stubInfo->patch.baseTagGPR = static_cast<int8_t>(base.tagGPR());
     m_stubInfo->patch.valueTagGPR = static_cast<int8_t>(value.tagGPR());
 #endif
 }

Added: trunk/Source/JavaScriptCore/tests/stress/regress-150532.js (0 => 191594)


--- trunk/Source/JavaScriptCore/tests/stress/regress-150532.js	                        (rev 0)
+++ trunk/Source/JavaScriptCore/tests/stress/regress-150532.js	2015-10-26 19:48:29 UTC (rev 191594)
@@ -0,0 +1,41 @@
+// Makes sure we don't use base's tag register on 32-bit when an inline cache fails and jumps to the slow path
+// because the slow path depends on the base being present.
+
+function assert(b) {
+    if (!b)
+        throw new Error("baddd");
+}
+noInline(assert);
+
+let customGetter = createCustomGetterObject();
+let otherObj = {
+    customGetter: 20
+};
+function randomFunction() {}
+noInline(randomFunction);
+
+function foo(o, c) {
+    let baz  = o.customGetter;
+    if (c) {
+        o = 42;
+    }
+    let jaz = o.foo;
+    let kaz = jaz + "hey";
+    let raz = kaz + "hey";
+    let result = o.customGetter;
+    randomFunction(!c, baz, jaz, kaz, raz);
+    return result;
+}
+noInline(foo);
+
+for (let i = 0; i < 10000; i++) {
+    switch (i % 2) {
+    case 0:
+        assert(foo(customGetter) === 100);
+        break;
+    case 1:
+        assert(foo(otherObj) === 20);
+        break;
+    }
+}
+assert(foo({hello: 20, world:50, customGetter: 40}) === 40); // Make sure we don't trample registers in "o.customGetter" inline cache failure in foo.
_______________________________________________
webkit-changes mailing list
[email protected]
https://lists.webkit.org/mailman/listinfo/webkit-changes