Diff
Modified: trunk/JSTests/ChangeLog (239939 => 239940)
--- trunk/JSTests/ChangeLog 2019-01-14 21:32:49 UTC (rev 239939)
+++ trunk/JSTests/ChangeLog 2019-01-14 21:34:47 UTC (rev 239940)
@@ -1,3 +1,15 @@
+2019-01-14 Mark Lam <[email protected]>
+
+ Fix all CLoop JSC test failures (including some LLInt bugs due to recent bytecode format change).
+ https://bugs.webkit.org/show_bug.cgi?id=193402
+ <rdar://problem/46012309>
+
+ Reviewed by Keith Miller.
+
+ * stress/regexp-compile-oom.js:
+ - Skip this test for !$jitTests because it is tuned for stack usage with the JIT
+ enabled. As a result, it will fail on CLoop builds even though there is no bug.
+
2019-01-11 Saam barati <[email protected]>
DFG combined liveness can be wrong for terminal basic blocks
Modified: trunk/JSTests/stress/regexp-compile-oom.js (239939 => 239940)
--- trunk/JSTests/stress/regexp-compile-oom.js 2019-01-14 21:32:49 UTC (rev 239939)
+++ trunk/JSTests/stress/regexp-compile-oom.js 2019-01-14 21:34:47 UTC (rev 239940)
@@ -1,4 +1,4 @@
-//@ skip if $hostOS != "darwin" or $architecture == "arm" or $architecture == "x86"
+//@ skip if $hostOS != "darwin" or $architecture == "arm" or $architecture == "x86" or not $jitTests
// Test that throw an OOM exception when compiling a pathological, but valid nested RegExp.
var failures = [];
Modified: trunk/Source/_javascript_Core/ChangeLog (239939 => 239940)
--- trunk/Source/_javascript_Core/ChangeLog 2019-01-14 21:32:49 UTC (rev 239939)
+++ trunk/Source/_javascript_Core/ChangeLog 2019-01-14 21:34:47 UTC (rev 239940)
@@ -1,3 +1,162 @@
+2019-01-14 Mark Lam <[email protected]>
+
+ Fix all CLoop JSC test failures (including some LLInt bugs due to recent bytecode format change).
+ https://bugs.webkit.org/show_bug.cgi?id=193402
+ <rdar://problem/46012309>
+
+ Reviewed by Keith Miller.
+
+ The CLoop builds via build-jsc were previously completely disabled after our
+ change to enable the ASM LLInt build without the JIT. As a result, JSC tests
+ have regressed on CLoop builds. The CLoop builds and tests will be re-enabled
+ when the fix for https://bugs.webkit.org/show_bug.cgi?id=192955 lands. This
+ patch fixes all the regressions (and some old bugs) so that the CLoop test bots
+ won't be red when the CLoop build gets re-enabled.
+
+ In this patch, we do the following:
+
+ 1. Change CLoopStack::grow() to set the new CLoop stack top at the maximum
+ allocated capacity (after discounting the reserved zone) as opposed to setting
+ it only at the level that the client requested.
+
+ This fixes a small performance bug that I happened to notice while debugging
+ a stack issue. It does not affect correctness.
+
+ 2. In LowLevelInterpreter32_64.asm:
+
+ 1. Fix loadConstantOrVariableTag() to use subi for computing the constant
+ index because the VirtualRegister offset and FirstConstantRegisterIndex
+ values it operates on are both signed ints. This is just being pedantic:
+ the previous use of subp still produced a correct value.
+
+ 2. Fix llintOpWithReturn() to use getu (instead of get) for reading
+ OpIsCellWithType::type because it is of type JSType, which is a uint8_t.
+
+ 3. Fix llintOpWithMetadata() to use loadis for loading
+ OpGetById::Metadata::modeMetadata.protoLoadMode.cachedOffset[t5] because it
+ is of type PropertyOffset, which is a signed int.
+
+ 4. Fix commonCallOp() to use getu for loading fields argv and argc because they
+ are of type unsigned for OpCall, OpConstruct, and OpTailCall, which are the
+ clients of commonCallOp.
+
+ 5. Fix llintOpWithMetadata() and getClosureVar() to use loadp for loading
+ OpGetFromScope::Metadata::operand because it is of type uintptr_t.
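+
+ For illustration only (not part of the patch): a minimal standalone C++
+ sketch of why the load width and signedness must match the field's C++
+ type. The names (cachedOffset, reg1, reg2) are hypothetical stand-ins,
+ not JSC code.
+
+     #include <cstdint>
+     #include <cstdio>
+     #include <cstring>
+
+     int main()
+     {
+         int32_t cachedOffset = -1; // e.g. a sentinel PropertyOffset-like value
+         uint8_t bytes[sizeof(int32_t)];
+         std::memcpy(bytes, &cachedOffset, sizeof(bytes));
+
+         // Correct (loadis-style): signed 32-bit load, then sign-extend into a
+         // register-sized value.
+         int32_t asSigned;
+         std::memcpy(&asSigned, bytes, sizeof(asSigned));
+         intptr_t reg1 = asSigned; // reg1 == -1, as the interpreter expects
+
+         // Wrong (loadi-style on a signed field): an unsigned 32-bit load
+         // zero-extends, producing 0xFFFFFFFF instead of -1 on 64-bit targets.
+         uint32_t asUnsigned;
+         std::memcpy(&asUnsigned, bytes, sizeof(asUnsigned));
+         intptr_t reg2 = asUnsigned;
+
+         std::printf("%lld %lld\n", (long long)reg1, (long long)reg2);
+         return 0;
+     }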
+
+ 3. In LowLevelInterpreter64.asm:
+
+ 1. Fix llintOpWithReturn() to use getu for reading OpIsCellWithType::type
+ because it is of type JSType, which is a uint8_t.
+
+ 2. Fix llintOpWithMetadata() to use loadi for loading
+ OpGetById::Metadata::modeMetadata.protoLoadMode.structure[t2] because it is
+ of type StructureID, which is a uint32_t.
+
+ Fix llintOpWithMetadata() to use loadis for loading
+ OpGetById::Metadata::modeMetadata.protoLoadMode.cachedOffset[t2] because it
+ is of type PropertyOffset, which is a signed int.
+
+ 3. commonOp() should reload the metadataTable for op_catch because, unlike
+ the ASM LLInt, the exception unwinding code cannot restore "callee saved
+ registers" for the CLoop interpreter: the CLoop uses pseudo-registers
+ (see the CLoopRegister class).
+
+ This was the source of many exotic CLoop failures after the bytecode format
+ change (which introduced the metadataTable callee-saved register). Hence,
+ we fix it by reloading metadataTable's value on re-entry via op_catch for
+ exception handling. We already take care of restoring it in op_ret.
+
+ 4. Fix llintOpWithMetadata() and getClosureVar() to use loadp for loading
+ OpGetFromScope::Metadata::operand because it is of type uintptr_t.
+
+ 4. In LowLevelInterpreter.asm:
+
+ Fix metadata() to use loadi for loading metadataTable offsets because they are
+ of type unsigned. This was also a source of many exotic CLoop test failures.
+
+ 5. Change CLoopRegister into a class with a uintptr_t as its storage element.
+ Previously, we were using a union to convert between various value types that
+ we would store in this pseudo-register. This method of type conversion is
+ undefined behavior according to the C++ spec. As a result, the C++ compiler
+ may choose to elide some CLoop statements, thereby resulting in some exotic
+ bugs.
+
+ We fix this by always using accessor methods and assignment operators that
+ apply bitwise_cast to do the type conversions. Since bitwise_cast uses a
+ memcpy, this ensures that there's no undefined behavior, and that CLoop
+ statements won't get elided willy-nilly by the compiler.
+
+ Ditto for CLoopDoubleRegister.
+
+ Similarly, use bitwise_cast for ints2Double() and double2Ints() utility
+ functions.
+
+ Also use bitwise_cast (instead of reinterpret_cast) for the CLoop CAST macro.
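+
+ For illustration only (not part of the patch): a minimal standalone C++
+ sketch of the idiom. bitCast and PseudoRegister below are simplified,
+ hypothetical stand-ins for WTF::bitwise_cast and CLoopRegister; reading a
+ union member other than the one last written is undefined behavior, while
+ the memcpy-based cast is well defined.
+
+     #include <cstdint>
+     #include <cstring>
+
+     // Simplified stand-in for WTF::bitwise_cast: convert via memcpy.
+     template<typename To, typename From>
+     To bitCast(From from)
+     {
+         static_assert(sizeof(To) == sizeof(From), "sizes must match");
+         To to;
+         std::memcpy(&to, &from, sizeof(to));
+         return to;
+     }
+
+     // A pseudo-register backed by a single uintptr_t instead of a union.
+     struct PseudoRegister {
+         uintptr_t m_value { 0 };
+
+         void* asPointer() const { return bitCast<void*>(m_value); }
+         void operator=(void* p) { m_value = bitCast<uintptr_t>(p); }
+     };
+
+     int main()
+     {
+         PseudoRegister reg;
+         int x = 42;
+         reg = static_cast<void*>(&x); // store the pointer's bits
+         int* p = static_cast<int*>(reg.asPointer()); // read them back, no UB
+         return (*p == 42) ? 0 : 1;
+     }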
+
+ 6. Fix cloop.rb to use the new CLoopRegister and CLoopDoubleRegister classes.
+
+ Add a clLValue accessor for offlineasm operand types to distinguish LValue
+ uses of the operands from RValue uses.
+
+ Replace the use of clearHighWord() with simply casting to uint32_t. This is
+ more efficient for the C++ compiler (and helps speed up debug build runs).
+
+ Also fix 32-bit arithmetic operations to only set the lower 32 bits of the
+ pseudo-registers. This fixes some CLoop JSC test failures.
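+
+ For illustration only (not part of the patch): a minimal standalone C++
+ sketch of the truncation idiom the generated code now uses (variable names
+ are hypothetical; a 64-bit build is assumed for the comments).
+
+     #include <cstdint>
+     #include <cstdio>
+
+     int main()
+     {
+         uintptr_t reg = 0xdeadbeefcafef00dull; // stale high bits in the pseudo-register
+
+         uint32_t a = 0x80000000u;
+         uint32_t b = 0x80000001u;
+
+         // 32-bit add as the CLoop now emits it: compute, truncate to 32 bits,
+         // and zero-extend on assignment -- the same effect clearHighWord() had.
+         reg = (uint32_t)(a + b); // reg == 0x0000000000000001
+
+         std::printf("%llx\n", (unsigned long long)reg);
+         return 0;
+     }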
+
+ This patch has been manually tested with the JSC tests on the following builds:
+ 64-bit X86 ASM LLInt (without JIT), 64-bit and 32-bit X86 CLoop, and ARMv7 CLoop.
+
+ * interpreter/CLoopStack.cpp:
+ (JSC::CLoopStack::grow):
+ * llint/LowLevelInterpreter.asm:
+ * llint/LowLevelInterpreter.cpp:
+ (JSC::CLoopRegister::i const):
+ (JSC::CLoopRegister::u const):
+ (JSC::CLoopRegister::i32 const):
+ (JSC::CLoopRegister::u32 const):
+ (JSC::CLoopRegister::i8 const):
+ (JSC::CLoopRegister::u8 const):
+ (JSC::CLoopRegister::ip const):
+ (JSC::CLoopRegister::i8p const):
+ (JSC::CLoopRegister::vp const):
+ (JSC::CLoopRegister::cvp const):
+ (JSC::CLoopRegister::callFrame const):
+ (JSC::CLoopRegister::execState const):
+ (JSC::CLoopRegister::instruction const):
+ (JSC::CLoopRegister::vm const):
+ (JSC::CLoopRegister::cell const):
+ (JSC::CLoopRegister::protoCallFrame const):
+ (JSC::CLoopRegister::nativeFunc const):
+ (JSC::CLoopRegister::i64 const):
+ (JSC::CLoopRegister::u64 const):
+ (JSC::CLoopRegister::encodedJSValue const):
+ (JSC::CLoopRegister::opcode const):
+ (JSC::CLoopRegister::operator ExecState*):
+ (JSC::CLoopRegister::operator const Instruction*):
+ (JSC::CLoopRegister::operator JSCell*):
+ (JSC::CLoopRegister::operator ProtoCallFrame*):
+ (JSC::CLoopRegister::operator Register*):
+ (JSC::CLoopRegister::operator VM*):
+ (JSC::CLoopRegister::operator=):
+ (JSC::CLoopRegister::bitsAsDouble const):
+ (JSC::CLoopRegister::bitsAsInt64 const):
+ (JSC::CLoopDoubleRegister::operator T const):
+ (JSC::CLoopDoubleRegister::d const):
+ (JSC::CLoopDoubleRegister::bitsAsInt64 const):
+ (JSC::CLoopDoubleRegister::operator=):
+ (JSC::LLInt::ints2Double):
+ (JSC::LLInt::double2Ints):
+ (JSC::LLInt::decodeResult):
+ (JSC::CLoop::execute):
+ (JSC::LLInt::Ints2Double): Deleted.
+ (JSC::LLInt::Double2Ints): Deleted.
+ (JSC::CLoopRegister::CLoopRegister): Deleted.
+ (JSC::CLoopRegister::clearHighWord): Deleted.
+ * llint/LowLevelInterpreter32_64.asm:
+ * llint/LowLevelInterpreter64.asm:
+ * offlineasm/cloop.rb:
+
2019-01-14 Keith Miller <[email protected]>
JSC should have a module loader API
Modified: trunk/Source/_javascript_Core/interpreter/CLoopStack.cpp (239939 => 239940)
--- trunk/Source/_javascript_Core/interpreter/CLoopStack.cpp 2019-01-14 21:32:49 UTC (rev 239939)
+++ trunk/Source/_javascript_Core/interpreter/CLoopStack.cpp 2019-01-14 21:34:47 UTC (rev 239940)
@@ -103,6 +103,7 @@
m_reservation.commit(newCommitTop, delta);
addToCommittedByteCount(delta);
m_commitTop = newCommitTop;
+ newTopOfStack = m_commitTop + m_softReservedZoneSizeInRegisters;
setCLoopStackLimit(newTopOfStack);
return true;
}
Modified: trunk/Source/_javascript_Core/llint/LowLevelInterpreter.asm (239939 => 239940)
--- trunk/Source/_javascript_Core/llint/LowLevelInterpreter.asm 2019-01-14 21:32:49 UTC (rev 239939)
+++ trunk/Source/_javascript_Core/llint/LowLevelInterpreter.asm 2019-01-14 21:34:47 UTC (rev 239940)
@@ -344,7 +344,7 @@
end
macro metadata(size, opcode, dst, scratch)
- loadp constexpr %opcode%::opcodeID * 4[metadataTable], dst # offset = metadataTable<unsigned*>[opcodeID]
+ loadi constexpr %opcode%::opcodeID * 4[metadataTable], dst # offset = metadataTable<unsigned*>[opcodeID]
getu(size, opcode, metadataID, scratch) # scratch = bytecode.metadataID
muli sizeof %opcode%::Metadata, scratch # scratch *= sizeof(Op::Metadata)
addi scratch, dst # offset += scratch
Modified: trunk/Source/_javascript_Core/llint/LowLevelInterpreter.cpp (239939 => 239940)
--- trunk/Source/_javascript_Core/llint/LowLevelInterpreter.cpp 2019-01-14 21:32:49 UTC (rev 239939)
+++ trunk/Source/_javascript_Core/llint/LowLevelInterpreter.cpp 2019-01-14 21:34:47 UTC (rev 239940)
@@ -123,154 +123,124 @@
#define OFFLINE_ASM_LOCAL_LABEL(label) label: TRACE_LABEL("OFFLINE_ASM_LOCAL_LABEL", #label); USE_LABEL(label);
+namespace JSC {
//============================================================================
+// CLoopRegister is the storage for an emulated CPU register.
+// It defines the policy of how ints smaller than intptr_t are packed into the
+// pseudo register, as well as hides endianness differences.
+
+class CLoopRegister {
+public:
+ ALWAYS_INLINE intptr_t i() const { return m_value; };
+ ALWAYS_INLINE uintptr_t u() const { return m_value; }
+ ALWAYS_INLINE int32_t i32() const { return m_value; }
+ ALWAYS_INLINE uint32_t u32() const { return m_value; }
+ ALWAYS_INLINE int8_t i8() const { return m_value; }
+ ALWAYS_INLINE uint8_t u8() const { return m_value; }
+
+ ALWAYS_INLINE intptr_t* ip() const { return bitwise_cast<intptr_t*>(m_value); }
+ ALWAYS_INLINE int8_t* i8p() const { return bitwise_cast<int8_t*>(m_value); }
+ ALWAYS_INLINE void* vp() const { return bitwise_cast<void*>(m_value); }
+ ALWAYS_INLINE const void* cvp() const { return bitwise_cast<const void*>(m_value); }
+ ALWAYS_INLINE CallFrame* callFrame() const { return bitwise_cast<CallFrame*>(m_value); }
+ ALWAYS_INLINE ExecState* execState() const { return bitwise_cast<ExecState*>(m_value); }
+ ALWAYS_INLINE const void* instruction() const { return bitwise_cast<const void*>(m_value); }
+ ALWAYS_INLINE VM* vm() const { return bitwise_cast<VM*>(m_value); }
+ ALWAYS_INLINE JSCell* cell() const { return bitwise_cast<JSCell*>(m_value); }
+ ALWAYS_INLINE ProtoCallFrame* protoCallFrame() const { return bitwise_cast<ProtoCallFrame*>(m_value); }
+ ALWAYS_INLINE NativeFunction nativeFunc() const { return bitwise_cast<NativeFunction>(m_value); }
+#if USE(JSVALUE64)
+ ALWAYS_INLINE int64_t i64() const { return m_value; }
+ ALWAYS_INLINE uint64_t u64() const { return m_value; }
+ ALWAYS_INLINE EncodedJSValue encodedJSValue() const { return bitwise_cast<EncodedJSValue>(m_value); }
+#endif
+ ALWAYS_INLINE Opcode opcode() const { return bitwise_cast<Opcode>(m_value); }
+
+ operator ExecState*() { return bitwise_cast<ExecState*>(m_value); }
+ operator const Instruction*() { return bitwise_cast<const Instruction*>(m_value); }
+ operator JSCell*() { return bitwise_cast<JSCell*>(m_value); }
+ operator ProtoCallFrame*() { return bitwise_cast<ProtoCallFrame*>(m_value); }
+ operator Register*() { return bitwise_cast<Register*>(m_value); }
+ operator VM*() { return bitwise_cast<VM*>(m_value); }
+
+ template<typename T, typename = std::enable_if_t<sizeof(T) == sizeof(uintptr_t)>>
+ ALWAYS_INLINE void operator=(T value) { m_value = bitwise_cast<uintptr_t>(value); }
+#if USE(JSVALUE64)
+ ALWAYS_INLINE void operator=(int32_t value) { m_value = static_cast<intptr_t>(value); }
+ ALWAYS_INLINE void operator=(uint32_t value) { m_value = static_cast<uintptr_t>(value); }
+#endif
+ ALWAYS_INLINE void operator=(int16_t value) { m_value = static_cast<intptr_t>(value); }
+ ALWAYS_INLINE void operator=(uint16_t value) { m_value = static_cast<uintptr_t>(value); }
+ ALWAYS_INLINE void operator=(int8_t value) { m_value = static_cast<intptr_t>(value); }
+ ALWAYS_INLINE void operator=(uint8_t value) { m_value = static_cast<uintptr_t>(value); }
+ ALWAYS_INLINE void operator=(bool value) { m_value = static_cast<uintptr_t>(value); }
+
+#if USE(JSVALUE64)
+ ALWAYS_INLINE double bitsAsDouble() const { return bitwise_cast<double>(m_value); }
+ ALWAYS_INLINE int64_t bitsAsInt64() const { return bitwise_cast<int64_t>(m_value); }
+#endif
+
+private:
+ uintptr_t m_value { static_cast<uintptr_t>(0xbadbeef0baddbeef) };
+};
+
+class CLoopDoubleRegister {
+public:
+ template<typename T>
+ explicit operator T() const { return bitwise_cast<T>(m_value); }
+
+ ALWAYS_INLINE double d() const { return m_value; }
+ ALWAYS_INLINE int64_t bitsAsInt64() const { return bitwise_cast<int64_t>(m_value); }
+
+ ALWAYS_INLINE void operator=(double value) { m_value = value; }
+
+ template<typename T, typename = std::enable_if_t<sizeof(T) == sizeof(uintptr_t) && std::is_integral<T>::value>>
+ ALWAYS_INLINE void operator=(T value) { m_value = bitwise_cast<double>(value); }
+
+private:
+ double m_value;
+};
+
+//============================================================================
// Some utilities:
//
-namespace JSC {
namespace LLInt {
#if USE(JSVALUE32_64)
-static double Ints2Double(uint32_t lo, uint32_t hi)
+static double ints2Double(uint32_t lo, uint32_t hi)
{
- union {
- double dval;
- uint64_t ival64;
- } u;
- u.ival64 = (static_cast<uint64_t>(hi) << 32) | lo;
- return u.dval;
+ uint64_t value = (static_cast<uint64_t>(hi) << 32) | lo;
+ return bitwise_cast<double>(value);
}
-static void Double2Ints(double val, uint32_t& lo, uint32_t& hi)
+static void double2Ints(double val, CLoopRegister& lo, CLoopRegister& hi)
{
- union {
- double dval;
- uint64_t ival64;
- } u;
- u.dval = val;
- hi = static_cast<uint32_t>(u.ival64 >> 32);
- lo = static_cast<uint32_t>(u.ival64);
+ uint64_t value = bitwise_cast<uint64_t>(val);
+ hi = static_cast<uint32_t>(value >> 32);
+ lo = static_cast<uint32_t>(value);
}
#endif // USE(JSVALUE32_64)
+static void decodeResult(SlowPathReturnType result, CLoopRegister& t0, CLoopRegister& t1)
+{
+ const void* t0Result;
+ const void* t1Result;
+ JSC::decodeResult(result, t0Result, t1Result);
+ t0 = t0Result;
+ t1 = t1Result;
+}
+
} // namespace LLint
-
//============================================================================
-// CLoopRegister is the storage for an emulated CPU register.
-// It defines the policy of how ints smaller than intptr_t are packed into the
-// pseudo register, as well as hides endianness differences.
-
-struct CLoopRegister {
- CLoopRegister() { i = static_cast<intptr_t>(0xbadbeef0baddbeef); }
- union {
- intptr_t i;
- uintptr_t u;
-#if USE(JSVALUE64)
-#if CPU(BIG_ENDIAN)
- struct {
- int32_t i32padding;
- int32_t i32;
- };
- struct {
- uint32_t u32padding;
- uint32_t u32;
- };
- struct {
- int8_t i8padding[7];
- int8_t i8;
- };
- struct {
- uint8_t u8padding[7];
- uint8_t u8;
- };
-#else // !CPU(BIG_ENDIAN)
- struct {
- int32_t i32;
- int32_t i32padding;
- };
- struct {
- uint32_t u32;
- uint32_t u32padding;
- };
- struct {
- int8_t i8;
- int8_t i8padding[7];
- };
- struct {
- uint8_t u8;
- uint8_t u8padding[7];
- };
-#endif // !CPU(BIG_ENDIAN)
-#else // !USE(JSVALUE64)
- int32_t i32;
- uint32_t u32;
-
-#if CPU(BIG_ENDIAN)
- struct {
- int8_t i8padding[3];
- int8_t i8;
- };
- struct {
- uint8_t u8padding[3];
- uint8_t u8;
- };
-
-#else // !CPU(BIG_ENDIAN)
- struct {
- int8_t i8;
- int8_t i8padding[3];
- };
- struct {
- uint8_t u8;
- uint8_t u8padding[3];
- };
-#endif // !CPU(BIG_ENDIAN)
-#endif // !USE(JSVALUE64)
-
- intptr_t* ip;
- int8_t* i8p;
- void* vp;
- const void* cvp;
- CallFrame* callFrame;
- ExecState* execState;
- const void* instruction;
- VM* vm;
- JSCell* cell;
- ProtoCallFrame* protoCallFrame;
- NativeFunction nativeFunc;
-#if USE(JSVALUE64)
- int64_t i64;
- uint64_t u64;
- EncodedJSValue encodedJSValue;
- double castToDouble;
-#endif
- Opcode opcode;
- };
-
- operator ExecState*() { return execState; }
- operator const Instruction*() { return reinterpret_cast<const Instruction*>(instruction); }
- operator VM*() { return vm; }
- operator ProtoCallFrame*() { return protoCallFrame; }
- operator Register*() { return reinterpret_cast<Register*>(vp); }
- operator JSCell*() { return cell; }
-
-#if USE(JSVALUE64)
- inline void clearHighWord() { i32padding = 0; }
-#else
- inline void clearHighWord() { }
-#endif
-};
-
-//============================================================================
// The llint C++ interpreter loop:
//
JSValue CLoop::execute(OpcodeID entryOpcodeID, void* executableAddress, VM* vm, ProtoCallFrame* protoCallFrame, bool isInitializationPass)
{
- #define CAST reinterpret_cast
- #define SIGN_BIT32(x) ((x) & 0x80000000)
+#define CAST bitwise_cast
// One-time initialization of our address tables. We have to put this code
// here because our labels are only in scope inside this function. The
@@ -317,13 +287,6 @@
// Define the pseudo registers used by the LLINT C Loop backend:
ASSERT(sizeof(CLoopRegister) == sizeof(intptr_t));
- union CLoopDoubleRegister {
- double d;
-#if USE(JSVALUE64)
- int64_t castToInt64;
-#endif
- };
-
// The CLoop llint backend is initially based on the ARMv7 backend, and
// then further enhanced with a few instructions from the x86 backend to
// support building for X64 targets. Hence, the shape of the generated
@@ -348,7 +311,7 @@
// 2. 32 bit result values will be in the low 32-bit of t0.
// 3. 64 bit result values will be in t0.
- CLoopRegister t0, t1, t2, t3, t5, t7, sp, cfr, lr, pc;
+ CLoopRegister t0, t1, t2, t3, t5, sp, cfr, lr, pc;
#if USE(JSVALUE64)
CLoopRegister pcBase, tagTypeNumber, tagMask;
#endif
@@ -374,24 +337,24 @@
CLoopStack& cloopStack = vm->interpreter->cloopStack();
StackPointerScope stackPointerScope(cloopStack);
- lr.opcode = getOpcode(llint_return_to_host);
- sp.vp = cloopStack.currentStackPointer();
- cfr.callFrame = vm->topCallFrame;
+ lr = getOpcode(llint_return_to_host);
+ sp = cloopStack.currentStackPointer();
+ cfr = vm->topCallFrame;
#ifndef NDEBUG
- void* startSP = sp.vp;
- CallFrame* startCFR = cfr.callFrame;
+ void* startSP = sp.vp();
+ CallFrame* startCFR = cfr.callFrame();
#endif
// Initialize the incoming args for doVMEntryToJavaScript:
- t0.vp = executableAddress;
- t1.vm = vm;
- t2.protoCallFrame = protoCallFrame;
+ t0 = executableAddress;
+ t1 = vm;
+ t2 = protoCallFrame;
#if USE(JSVALUE64)
// For the ASM llint, JITStubs takes care of this initialization. We do
// it explicitly here for the C loop:
- tagTypeNumber.i = 0xFFFF000000000000;
- tagMask.i = 0xFFFF000000000002;
+ tagTypeNumber = 0xFFFF000000000000;
+ tagMask = 0xFFFF000000000002;
#endif // USE(JSVALUE64)
// Interpreter variables for value passing between opcodes and/or helpers:
@@ -401,14 +364,14 @@
#define PUSH(cloopReg) \
do { \
- sp.ip--; \
- *sp.ip = cloopReg.i; \
+ sp = sp.ip() - 1; \
+ *sp.ip() = cloopReg.i(); \
} while (false)
#define POP(cloopReg) \
do { \
- cloopReg.i = *sp.ip; \
- sp.ip++; \
+ cloopReg = *sp.ip(); \
+ sp = sp.ip() + 1; \
} while (false)
#if ENABLE(OPCODE_STATS)
@@ -473,12 +436,12 @@
OFFLINE_ASM_GLUE_LABEL(llint_return_to_host)
{
- ASSERT(startSP == sp.vp);
- ASSERT(startCFR == cfr.callFrame);
+ ASSERT(startSP == sp.vp());
+ ASSERT(startCFR == cfr.callFrame());
#if USE(JSVALUE32_64)
- return JSValue(t1.i, t0.i); // returning JSValue(tag, payload);
+ return JSValue(t1.i(), t0.i()); // returning JSValue(tag, payload);
#else
- return JSValue::decode(t0.encodedJSValue);
+ return JSValue::decode(t0.encodedJSValue());
#endif
}
@@ -490,12 +453,12 @@
// The part in getHostCallReturnValueWithExecState():
JSValue result = vm->hostCallReturnValue;
#if USE(JSVALUE32_64)
- t1.i = result.tag();
- t0.i = result.payload();
+ t1 = result.tag();
+ t0 = result.payload();
#else
- t0.encodedJSValue = JSValue::encode(result);
+ t0 = JSValue::encode(result);
#endif
- opcode = lr.opcode;
+ opcode = lr.opcode();
DISPATCH_OPCODE();
}
@@ -523,7 +486,6 @@
#undef DEFINE_OPCODE
#undef CHECK_FOR_TIMEOUT
#undef CAST
- #undef SIGN_BIT32
return JSValue(); // to suppress a compiler warning.
} // Interpreter::llintCLoopExecute()
Modified: trunk/Source/_javascript_Core/llint/LowLevelInterpreter32_64.asm (239939 => 239940)
--- trunk/Source/_javascript_Core/llint/LowLevelInterpreter32_64.asm 2019-01-14 21:32:49 UTC (rev 239939)
+++ trunk/Source/_javascript_Core/llint/LowLevelInterpreter32_64.asm 2019-01-14 21:34:47 UTC (rev 239940)
@@ -470,7 +470,7 @@
.constant:
loadp CodeBlock[cfr], tag
loadp CodeBlock::m_constantRegisters + VectorBufferOffset[tag], tag
- subp FirstConstantRegisterIndex, index
+ subi FirstConstantRegisterIndex, index
loadp TagOffset[tag, index, 8], tag
.done:
end)
@@ -486,7 +486,7 @@
.constant:
loadp CodeBlock[cfr], tag
loadp CodeBlock::m_constantRegisters + VectorBufferOffset[tag], tag
- subp FirstConstantRegisterIndex, index
+ subi FirstConstantRegisterIndex, index
lshifti 3, index
addp index, tag
loadp PayloadOffset[tag], payload
@@ -1265,7 +1265,7 @@
get(operand, t1)
loadConstantOrVariable(size, t1, t0, t3)
bineq t0, CellTag, .notCellCase
- get(type, t0)
+ getu(size, OpIsCellWithType, type, t0)
cbeq JSCell::m_type[t3], t0, t1
return(BooleanTag, t1)
.notCellCase:
@@ -1351,7 +1351,7 @@
bbneq t1, constexpr GetByIdMode::ProtoLoad, .opGetByIdArrayLength
loadi OpGetById::Metadata::modeMetadata.protoLoadMode.structure[t5], t1
loadConstantOrVariablePayload(size, t0, CellTag, t3, .opGetByIdSlow)
- loadi OpGetById::Metadata::modeMetadata.protoLoadMode.cachedOffset[t5], t2
+ loadis OpGetById::Metadata::modeMetadata.protoLoadMode.cachedOffset[t5], t2
bineq JSCell::m_structureID[t3], t1, .opGetByIdSlow
loadp OpGetById::Metadata::modeMetadata.protoLoadMode.cachedSlot[t5], t3
loadPropertyAtVariableOffset(t2, t3, t0, t1)
@@ -1913,12 +1913,12 @@
loadp %op%::Metadata::callLinkInfo.callee[t5], t2
loadConstantOrVariablePayload(size, t0, CellTag, t3, .opCallSlow)
bineq t3, t2, .opCallSlow
- get(argv, t3)
+ getu(size, op, argv, t3)
lshifti 3, t3
negi t3
addp cfr, t3 # t3 contains the new value of cfr
storei t2, Callee + PayloadOffset[t3]
- get(argc, t2)
+ getu(size, op, argc, t2)
storei PC, ArgumentCount + TagOffset[cfr]
storei t2, ArgumentCount + PayloadOffset[t3]
storei CellTag, Callee + TagOffset[t3]
@@ -2248,7 +2248,7 @@
llintOpWithMetadata(op_get_from_scope, OpGetFromScope, macro (size, get, dispatch, metadata, return)
macro getProperty()
- loadis OpGetFromScope::Metadata::operand[t5], t3
+ loadp OpGetFromScope::Metadata::operand[t5], t3
loadPropertyAtVariableOffset(t3, t0, t1, t2)
valueProfile(OpGetFromScope, t5, t1, t2)
return(t1, t2)
@@ -2264,7 +2264,7 @@
end
macro getClosureVar()
- loadis OpGetFromScope::Metadata::operand[t5], t3
+ loadp OpGetFromScope::Metadata::operand[t5], t3
loadp JSLexicalEnvironment_variables + TagOffset[t0, t3, 8], t1
loadp JSLexicalEnvironment_variables + PayloadOffset[t0, t3, 8], t2
valueProfile(OpGetFromScope, t5, t1, t2)
Modified: trunk/Source/_javascript_Core/llint/LowLevelInterpreter64.asm (239939 => 239940)
--- trunk/Source/_javascript_Core/llint/LowLevelInterpreter64.asm 2019-01-14 21:32:49 UTC (rev 239939)
+++ trunk/Source/_javascript_Core/llint/LowLevelInterpreter64.asm 2019-01-14 21:34:47 UTC (rev 239940)
@@ -1215,7 +1215,7 @@
llintOpWithReturn(op_is_cell_with_type, OpIsCellWithType, macro (size, get, dispatch, return)
- get(type, t0)
+ getu(size, OpIsCellWithType, type, t0)
get(operand, t1)
loadConstantOrVariable(size, t1, t3)
btqnz t3, tagMask, .notCellCase
@@ -1302,9 +1302,9 @@
.opGetByIdProtoLoad:
bbneq t1, constexpr GetByIdMode::ProtoLoad, .opGetByIdArrayLength
loadi JSCell::m_structureID[t3], t1
- loadis OpGetById::Metadata::modeMetadata.protoLoadMode.structure[t2], t3
+ loadi OpGetById::Metadata::modeMetadata.protoLoadMode.structure[t2], t3
bineq t3, t1, .opGetByIdSlow
- loadi OpGetById::Metadata::modeMetadata.protoLoadMode.cachedOffset[t2], t1
+ loadis OpGetById::Metadata::modeMetadata.protoLoadMode.cachedOffset[t2], t1
loadp OpGetById::Metadata::modeMetadata.protoLoadMode.cachedSlot[t2], t3
loadPropertyAtVariableOffset(t1, t3, t0)
valueProfile(OpGetById, t2, t0)
@@ -2068,6 +2068,7 @@
restoreStackPointerAfterCall()
loadp CodeBlock[cfr], PB
+ loadp CodeBlock::m_metadata[PB], metadataTable
loadp CodeBlock::m_instructionsRawPointer[PB], PB
unpoison(_g_CodeBlockPoison, PB, t2)
loadp VM::targetInterpreterPCForThrow[t3], PC
@@ -2311,7 +2312,7 @@
metadata(t5, t0)
macro getProperty()
- loadis OpGetFromScope::Metadata::operand[t5], t1
+ loadp OpGetFromScope::Metadata::operand[t5], t1
loadPropertyAtVariableOffset(t1, t0, t2)
valueProfile(OpGetFromScope, t5, t2)
return(t2)
@@ -2326,7 +2327,7 @@
end
macro getClosureVar()
- loadis OpGetFromScope::Metadata::operand[t5], t1
+ loadp OpGetFromScope::Metadata::operand[t5], t1
loadq JSLexicalEnvironment_variables[t0, t1, 8], t0
valueProfile(OpGetFromScope, t5, t0)
return(t0)
Modified: trunk/Source/_javascript_Core/offlineasm/cloop.rb (239939 => 239940)
--- trunk/Source/_javascript_Core/offlineasm/cloop.rb 2019-01-14 21:32:49 UTC (rev 239939)
+++ trunk/Source/_javascript_Core/offlineasm/cloop.rb 2019-01-14 21:34:47 UTC (rev 239940)
@@ -33,21 +33,21 @@
def cloopMapType(type)
case type
- when :int; ".i"
- when :uint; ".u"
- when :int32; ".i32"
- when :uint32; ".u32"
- when :int64; ".i64"
- when :uint64; ".u64"
- when :int8; ".i8"
- when :uint8; ".u8"
- when :int8Ptr; ".i8p"
- when :voidPtr; ".vp"
- when :nativeFunc; ".nativeFunc"
- when :double; ".d"
- when :castToDouble; ".castToDouble"
- when :castToInt64; ".castToInt64"
- when :opcode; ".opcode"
+ when :int; ".i()"
+ when :uint; ".u()"
+ when :int32; ".i32()"
+ when :uint32; ".u32()"
+ when :int64; ".i64()"
+ when :uint64; ".u64()"
+ when :int8; ".i8()"
+ when :uint8; ".u8()"
+ when :int8Ptr; ".i8p()"
+ when :voidPtr; ".vp()"
+ when :nativeFunc; ".nativeFunc()"
+ when :double; ".d()"
+ when :bitsAsDouble; ".bitsAsDouble()"
+ when :bitsAsInt64; ".bitsAsInt64()"
+ when :opcode; ".opcode()"
else;
raise "Unsupported type"
end
@@ -55,6 +55,9 @@
class SpecialRegister < NoChildren
+ def clLValue(type=:int)
+ clDump
+ end
def clDump
@name
end
@@ -100,6 +103,9 @@
raise "Bad register #{name} for C_LOOP at #{codeOriginString}"
end
end
+ def clLValue(type=:int)
+ clDump
+ end
def clValue(type=:int)
clDump + cloopMapType(type)
end
@@ -124,6 +130,9 @@
raise "Bad register #{name} for C_LOOP at #{codeOriginString}"
end
end
+ def clLValue(type=:int)
+ clDump
+ end
def clValue(type=:int)
clDump + cloopMapType(type)
end
@@ -133,6 +142,9 @@
def clDump
"#{value}"
end
+ def clLValue(type=:int)
+ raise "Immediate cannot be used as an LValue"
+ end
def clValue(type=:int)
# There is a case of a very large unsigned number (0x8000000000000000)
# which we wish to encode. Unfortunately, the C/C++ compiler
@@ -165,6 +177,9 @@
def clDump
"[#{base.clDump}, #{offset.value}]"
end
+ def clLValue(type=:int)
+ clValue(type)
+ end
def clValue(type=:int)
case type
when :int8; int8MemRef
@@ -235,6 +250,9 @@
def clDump
"[#{base.clDump}, #{offset.clDump}, #{index.clDump} << #{scaleShift}]"
end
+ def clLValue(type=:int)
+ clValue(type)
+ end
def clValue(type=:int)
case type
when :int8; int8MemRef
@@ -299,6 +317,9 @@
def clDump
"#{codeOriginString}"
end
+ def clLValue(type=:int)
+ clValue(type)
+ end
def clValue
clDump
end
@@ -309,7 +330,7 @@
"*CAST<intptr_t*>(&#{cLabel})"
end
def cloopEmitLea(destination, type)
- $asm.putc "#{destination.clValue(:voidPtr)} = CAST<void*>(&#{cLabel});"
+ $asm.putc "#{destination.clLValue(:voidPtr)} = CAST<void*>(&#{cLabel});"
end
end
@@ -321,9 +342,9 @@
class Address
def cloopEmitLea(destination, type)
if destination == base
- $asm.putc "#{destination.clValue(:int8Ptr)} += #{offset.clValue(type)};"
+ $asm.putc "#{destination.clLValue(:int8Ptr)} += #{offset.clValue(type)};"
else
- $asm.putc "#{destination.clValue(:int8Ptr)} = #{base.clValue(:int8Ptr)} + #{offset.clValue(type)};"
+ $asm.putc "#{destination.clLValue(:int8Ptr)} = #{base.clValue(:int8Ptr)} + #{offset.clValue(type)};"
end
end
end
@@ -331,7 +352,7 @@
class BaseIndex
def cloopEmitLea(destination, type)
raise "Malformed BaseIndex, offset should be zero at #{codeOriginString}" unless offset.value == 0
- $asm.putc "#{destination.clValue(:int8Ptr)} = #{base.clValue(:int8Ptr)} + (#{index.clValue} << #{scaleShift});"
+ $asm.putc "#{destination.clLValue(:int8Ptr)} = #{base.clValue(:int8Ptr)} + (#{index.clValue} << #{scaleShift});"
end
end
@@ -367,35 +388,45 @@
raise unless type == :int || type == :uint || type == :int32 || type == :uint32 || \
type == :int64 || type == :uint64 || type == :double
if operands.size == 3
- $asm.putc "#{operands[2].clValue(type)} = #{operands[0].clValue(type)} #{operator} #{operands[1].clValue(type)};"
- if operands[2].is_a? RegisterID and (type == :int32 or type == :uint32)
- $asm.putc "#{operands[2].clDump}.clearHighWord();" # Just clear it. It does nothing on the 32-bit port.
- end
+ op1 = operands[0]
+ op2 = operands[1]
+ dst = operands[2]
else
raise unless operands.size == 2
- raise unless not operands[1].is_a? Immediate
- $asm.putc "#{operands[1].clValue(type)} = #{operands[1].clValue(type)} #{operator} #{operands[0].clValue(type)};"
- if operands[1].is_a? RegisterID and (type == :int32 or type == :uint32)
- $asm.putc "#{operands[1].clDump}.clearHighWord();" # Just clear it. It does nothing on the 32-bit port.
- end
+ op1 = operands[1]
+ op2 = operands[0]
+ dst = operands[1]
end
+ raise unless not dst.is_a? Immediate
+ if dst.is_a? RegisterID and (type == :int32 or type == :uint32)
+ truncationHeader = "(uint32_t)("
+ truncationFooter = ")"
+ else
+ truncationHeader = ""
+ truncationFooter = ""
+ end
+ $asm.putc "#{dst.clLValue(type)} = #{truncationHeader}#{op1.clValue(type)} #{operator} #{op2.clValue(type)}#{truncationFooter};"
end
def cloopEmitShiftOperation(operands, type, operator)
raise unless type == :int || type == :uint || type == :int32 || type == :uint32 || type == :int64 || type == :uint64
if operands.size == 3
- $asm.putc "#{operands[2].clValue(type)} = #{operands[1].clValue(type)} #{operator} (#{operands[0].clValue(:int)} & 0x1f);"
- if operands[2].is_a? RegisterID and (type == :int32 or type == :uint32)
- $asm.putc "#{operands[2].clDump}.clearHighWord();" # Just clear it. It does nothing on the 32-bit port.
- end
+ op1 = operands[0]
+ op2 = operands[1]
+ dst = operands[2]
else
- raise unless operands.size == 2
- raise unless not operands[1].is_a? Immediate
- $asm.putc "#{operands[1].clValue(type)} = #{operands[1].clValue(type)} #{operator} (#{operands[0].clValue(:int)} & 0x1f);"
- if operands[1].is_a? RegisterID and (type == :int32 or type == :uint32)
- $asm.putc "#{operands[1].clDump}.clearHighWord();" # Just clear it. It does nothing on the 32-bit port.
- end
+ op1 = operands[1]
+ op2 = operands[0]
+ dst = operands[1]
end
+ if dst.is_a? RegisterID and (type == :int32 or type == :uint32)
+ truncationHeader = "(uint32_t)("
+ truncationFooter = ")"
+ else
+ truncationHeader = ""
+ truncationFooter = ""
+ end
+ $asm.putc "#{dst.clLValue(type)} = #{truncationHeader}#{operands[1].clValue(type)} #{operator} (#{operands[0].clValue(:int)} & 0x1f)#{truncationFooter};"
end
def cloopEmitUnaryOperation(operands, type, operator)
@@ -402,10 +433,16 @@
raise unless type == :int || type == :uint || type == :int32 || type == :uint32 || type == :int64 || type == :uint64
raise unless operands.size == 1
raise unless not operands[0].is_a? Immediate
- $asm.putc "#{operands[0].clValue(type)} = #{operator}#{operands[0].clValue(type)};"
- if operands[0].is_a? RegisterID and (type == :int32 or type == :uint32)
- $asm.putc "#{operands[0].clDump}.clearHighWord();" # Just clear it. It does nothing on the 32-bit port.
+ op = operands[0]
+ dst = operands[0]
+ if dst.is_a? RegisterID and (type == :int32 or type == :uint32)
+ truncationHeader = "(uint32_t)("
+ truncationFooter = ")"
+ else
+ truncationHeader = ""
+ truncationFooter = ""
end
+ $asm.putc "#{dst.clLValue(type)} = #{truncationHeader}#{operator}#{op.clValue(type)}#{truncationFooter};"
end
def cloopEmitCompareDoubleWithNaNCheckAndBranch(operands, condition)
@@ -418,7 +455,7 @@
def cloopEmitCompareAndSet(operands, type, comparator)
# The result is a boolean. Hence, it doesn't need to be based on the type
# of the arguments being compared.
- $asm.putc "#{operands[2].clValue} = (#{operands[0].clValue(type)} #{comparator} #{operands[1].clValue(type)});"
+ $asm.putc "#{operands[2].clLValue(type)} = (#{operands[0].clValue(type)} #{comparator} #{operands[1].clValue(type)});"
end
@@ -459,7 +496,7 @@
# int. The passed in type is only used for the values being tested in
# the condition test.
conditionExpr = cloopGenerateConditionExpression(operands, type, conditionTest)
- $asm.putc "#{operands[-1].clValue} = (#{conditionExpr});"
+ $asm.putc "#{operands[-1].clLValue} = (#{conditionExpr});"
end
def cloopEmitOpAndBranch(operands, operator, type, conditionTest)
@@ -471,12 +508,9 @@
raise "Unimplemented type"
end
- op1 = operands[0].clValue(type)
- op2 = operands[1].clValue(type)
-
$asm.putc "{"
- $asm.putc " #{tempType} temp = #{op2} #{operator} #{op1};"
- $asm.putc " #{op2} = temp;"
+ $asm.putc " #{tempType} temp = #{operands[1].clValue(type)} #{operator} #{operands[0].clValue(type)};"
+ $asm.putc " #{operands[1].clLValue(type)} = temp;"
$asm.putc " if (temp #{conditionTest})"
$asm.putc " goto #{operands[2].cLabel};"
$asm.putc "}"
@@ -486,6 +520,8 @@
case type
when :int32
tempType = "int32_t"
+ truncationHeader = "(uint32_t)("
+ truncationFooter = ")"
else
raise "Unimplemented type"
end
@@ -501,7 +537,10 @@
raise "Unimplemented opeartor"
end
- $asm.putc " if (!WTF::ArithmeticOperations<#{tempType}, #{tempType}, #{tempType}>::#{operation}(#{operands[1].clValue(type)}, #{operands[0].clValue(type)}, #{operands[1].clValue(type)}))"
+ $asm.putc " #{tempType} result;"
+ $asm.putc " bool success = WTF::ArithmeticOperations<#{tempType}, #{tempType}, #{tempType}>::#{operation}(#{operands[1].clValue(type)}, #{operands[0].clValue(type)}, result);"
+ $asm.putc " #{operands[1].clLValue(type)} = #{truncationHeader}result#{truncationFooter};"
+ $asm.putc " if (!success)"
$asm.putc " goto #{operands[2].cLabel};"
$asm.putc "}"
end
@@ -509,14 +548,14 @@
# operands: callTarget, currentFrame, currentPC
def cloopEmitCallSlowPath(operands)
$asm.putc "{"
- $asm.putc " cloopStack.setCurrentStackPointer(sp.vp);"
+ $asm.putc " cloopStack.setCurrentStackPointer(sp.vp());"
$asm.putc " SlowPathReturnType result = #{operands[0].cLabel}(#{operands[1].clDump}, #{operands[2].clDump});"
- $asm.putc " decodeResult(result, t0.cvp, t1.cvp);"
+ $asm.putc " decodeResult(result, t0, t1);"
$asm.putc "}"
end
def cloopEmitCallSlowPathVoid(operands)
- $asm.putc "cloopStack.setCurrentStackPointer(sp.vp);"
+ $asm.putc "cloopStack.setCurrentStackPointer(sp.vp());"
$asm.putc "#{operands[0].cLabel}(#{operands[1].clDump}, #{operands[2].clDump});"
end
@@ -597,15 +636,15 @@
cloopEmitUnaryOperation(operands, :int32, "~")
when "loadi"
- $asm.putc "#{operands[1].clValue(:uint)} = #{operands[0].uint32MemRef};"
+ $asm.putc "#{operands[1].clLValue(:uint32)} = #{operands[0].uint32MemRef};"
# There's no need to call clearHighWord() here because the above will
# automatically take care of 0 extension.
when "loadis"
- $asm.putc "#{operands[1].clValue(:int)} = #{operands[0].int32MemRef};"
+ $asm.putc "#{operands[1].clLValue(:int32)} = #{operands[0].int32MemRef};"
when "loadq"
- $asm.putc "#{operands[1].clValue(:int64)} = #{operands[0].int64MemRef};"
+ $asm.putc "#{operands[1].clLValue(:int64)} = #{operands[0].int64MemRef};"
when "loadp"
- $asm.putc "#{operands[1].clValue(:int)} = #{operands[0].intMemRef};"
+ $asm.putc "#{operands[1].clLValue} = #{operands[0].intMemRef};"
when "storei"
$asm.putc "#{operands[1].int32MemRef} = #{operands[0].clValue(:int32)};"
when "storeq"
@@ -613,19 +652,21 @@
when "storep"
$asm.putc "#{operands[1].intMemRef} = #{operands[0].clValue(:int)};"
when "loadb"
- $asm.putc "#{operands[1].clValue(:int)} = #{operands[0].uint8MemRef};"
- when "loadbs", "loadbsp"
- $asm.putc "#{operands[1].clValue(:int)} = #{operands[0].int8MemRef};"
+ $asm.putc "#{operands[1].clLValue(:int)} = #{operands[0].uint8MemRef};"
+ when "loadbs"
+ $asm.putc "#{operands[1].clLValue(:int)} = (uint32_t)(#{operands[0].int8MemRef});"
+ when "loadbsp"
+ $asm.putc "#{operands[1].clLValue(:int)} = #{operands[0].int8MemRef};"
when "storeb"
$asm.putc "#{operands[1].uint8MemRef} = #{operands[0].clValue(:int8)};"
when "loadh"
- $asm.putc "#{operands[1].clValue(:int)} = #{operands[0].uint16MemRef};"
+ $asm.putc "#{operands[1].clLValue(:int)} = #{operands[0].uint16MemRef};"
when "loadhs"
- $asm.putc "#{operands[1].clValue(:int)} = #{operands[0].int16MemRef};"
+ $asm.putc "#{operands[1].clLValue(:int)} = (uint32_t)(#{operands[0].int16MemRef});"
when "storeh"
$asm.putc "*#{operands[1].uint16MemRef} = #{operands[0].clValue(:int16)};"
when "loadd"
- $asm.putc "#{operands[1].clValue(:double)} = #{operands[0].dblMemRef};"
+ $asm.putc "#{operands[1].clLValue(:double)} = #{operands[0].dblMemRef};"
when "stored"
$asm.putc "#{operands[1].dblMemRef} = #{operands[0].clValue(:double)};"
@@ -640,8 +681,8 @@
# Convert an int value to its double equivalent, and store it in a double register.
when "ci2d"
- $asm.putc "#{operands[1].clValue(:double)} = #{operands[0].clValue(:int32)};"
-
+ $asm.putc "#{operands[1].clLValue(:double)} = (double)#{operands[0].clValue(:int32)}; // ci2d"
+
when "bdeq"
cloopEmitCompareAndBranch(operands, :double, "==")
when "bdneq"
@@ -669,25 +710,23 @@
cloopEmitCompareDoubleWithNaNCheckAndBranch(operands, "<=")
when "td2i"
- $asm.putc "#{operands[1].clValue(:int)} = #{operands[0].clValue(:double)};"
- $asm.putc "#{operands[1].clDump}.clearHighWord();"
+ $asm.putc "#{operands[1].clLValue(:int)} = (uint32_t)(intptr_t)#{operands[0].clValue(:double)}; // td2i"
when "bcd2i" # operands: srcDbl dstInt slowPath
- $asm.putc "{"
+ $asm.putc "{ // bcd2i"
$asm.putc " double d = #{operands[0].clValue(:double)};"
$asm.putc " const int32_t asInt32 = int32_t(d);"
$asm.putc " if (asInt32 != d || (!asInt32 && std::signbit(d))) // true for -0.0"
$asm.putc " goto #{operands[2].cLabel};"
- $asm.putc " #{operands[1].clValue} = asInt32;"
- $asm.putc " #{operands[1].clDump}.clearHighWord();"
+ $asm.putc " #{operands[1].clLValue} = (uint32_t)asInt32;"
$asm.putc "}"
when "move"
- $asm.putc "#{operands[1].clValue(:int)} = #{operands[0].clValue(:int)};"
+ $asm.putc "#{operands[1].clLValue(:int)} = #{operands[0].clValue(:int)};"
when "sxi2q"
- $asm.putc "#{operands[1].clValue(:int64)} = #{operands[0].clValue(:int32)};"
+ $asm.putc "#{operands[1].clLValue(:int64)} = #{operands[0].clValue(:int32)};"
when "zxi2q"
- $asm.putc "#{operands[1].clValue(:uint64)} = #{operands[0].clValue(:uint32)};"
+ $asm.putc "#{operands[1].clLValue(:uint64)} = #{operands[0].clValue(:uint32)};"
when "nop"
$asm.putc "// nop"
when "bbeq"
@@ -831,7 +870,7 @@
when "break"
$asm.putc "CRASH(); // break instruction not implemented."
when "ret"
- $asm.putc "opcode = lr.opcode;"
+ $asm.putc "opcode = lr.opcode();"
$asm.putc "DISPATCH_OPCODE();"
when "cbeq"
@@ -955,12 +994,10 @@
# Sign extends the lower 32 bits of t0, but put the sign extension into
# the lower 32 bits of t1. Leave the upper 32 bits of t0 and t1 unchanged.
when "cdqi"
- $asm.putc "{"
- $asm.putc " int64_t temp = t0.i32; // sign extend the low 32bit"
- $asm.putc " t0.i32 = temp; // low word"
- $asm.putc " t0.clearHighWord();"
- $asm.putc " t1.i32 = uint64_t(temp) >> 32; // high word"
- $asm.putc " t1.clearHighWord();"
+ $asm.putc "{ // cdqi"
+ $asm.putc " int64_t temp = t0.i32(); // sign extend the low 32bit"
+ $asm.putc " t0 = (uint32_t)temp; // low word"
+ $asm.putc " t1 = (uint32_t)(temp >> 32); // high word"
$asm.putc "}"
# 64-bit instruction: idivi op1 (based on X64)
@@ -976,34 +1013,32 @@
when "idivi"
# Divide t1,t0 (EDX,EAX) by the specified arg, and store the remainder in t1,
# and quotient in t0:
- $asm.putc "{"
- $asm.putc " int64_t dividend = (int64_t(t1.u32) << 32) | t0.u32;"
+ $asm.putc "{ // idivi"
+ $asm.putc " int64_t dividend = (int64_t(t1.u32()) << 32) | t0.u32();"
$asm.putc " int64_t divisor = #{operands[0].clValue(:int)};"
- $asm.putc " t1.i32 = dividend % divisor; // remainder"
- $asm.putc " t1.clearHighWord();"
- $asm.putc " t0.i32 = dividend / divisor; // quotient"
- $asm.putc " t0.clearHighWord();"
+ $asm.putc " t1 = (uint32_t)(dividend % divisor); // remainder"
+ $asm.putc " t0 = (uint32_t)(dividend / divisor); // quotient"
$asm.putc "}"
# 32-bit instruction: fii2d int32LoOp int32HiOp dblOp (based on ARMv7)
# Decode 2 32-bit ints (low and high) into a 64-bit double.
when "fii2d"
- $asm.putc "#{operands[2].clValue(:double)} = Ints2Double(#{operands[0].clValue(:uint32)}, #{operands[1].clValue(:uint32)});"
+ $asm.putc "#{operands[2].clLValue(:double)} = ints2Double(#{operands[0].clValue(:uint32)}, #{operands[1].clValue(:uint32)}); // fii2d"
# 32-bit instruction: f2dii dblOp int32LoOp int32HiOp (based on ARMv7)
# Encode a 64-bit double into 2 32-bit ints (low and high).
when "fd2ii"
- $asm.putc "Double2Ints(#{operands[0].clValue(:double)}, #{operands[1].clValue(:uint32)}, #{operands[2].clValue(:uint32)});"
+ $asm.putc "double2Ints(#{operands[0].clValue(:double)}, #{operands[1].clDump}, #{operands[2].clDump}); // fd2ii"
# 64-bit instruction: fq2d int64Op dblOp (based on X64)
# Copy a bit-encoded double in a 64-bit int register to a double register.
when "fq2d"
- $asm.putc "#{operands[1].clValue(:double)} = #{operands[0].clValue(:castToDouble)};"
+ $asm.putc "#{operands[1].clLValue(:double)} = #{operands[0].clValue(:bitsAsDouble)}; // fq2d"
# 64-bit instruction: fd2q dblOp int64Op (based on X64 instruction set)
# Copy a double as a bit-encoded double into a 64-bit int register.
when "fd2q"
- $asm.putc "#{operands[1].clValue(:int64)} = #{operands[0].clValue(:castToInt64)};"
+ $asm.putc "#{operands[1].clLValue(:int64)} = #{operands[0].clValue(:bitsAsInt64)}; // fd2q"
when "leai"
operands[0].cloopEmitLea(operands[1], :int32)
@@ -1079,7 +1114,7 @@
# as an opcode dispatch.
when "cloopCallJSFunction"
uid = $asm.newUID
- $asm.putc "lr.opcode = getOpcode(llint_cloop_did_return_from_js_#{uid});"
+ $asm.putc "lr = getOpcode(llint_cloop_did_return_from_js_#{uid});"
$asm.putc "opcode = #{operands[0].clValue(:opcode)};"
$asm.putc "DISPATCH_OPCODE();"
$asm.putsLabel("llint_cloop_did_return_from_js_#{uid}", false)
@@ -1088,15 +1123,15 @@
# fortunately we don't have to here. All native function calls always
# have a fixed prototype of 1 args: the passed ExecState.
when "cloopCallNative"
- $asm.putc "cloopStack.setCurrentStackPointer(sp.vp);"
+ $asm.putc "cloopStack.setCurrentStackPointer(sp.vp());"
$asm.putc "nativeFunc = #{operands[0].clValue(:nativeFunc)};"
- $asm.putc "functionReturnValue = JSValue::decode(nativeFunc(t0.execState));"
+ $asm.putc "functionReturnValue = JSValue::decode(nativeFunc(t0.execState()));"
$asm.putc "#if USE(JSVALUE32_64)"
- $asm.putc " t1.i = functionReturnValue.tag();"
- $asm.putc " t0.i = functionReturnValue.payload();"
+ $asm.putc " t1 = functionReturnValue.tag();"
+ $asm.putc " t0 = functionReturnValue.payload();"
$asm.putc "#else // USE_JSVALUE64)"
- $asm.putc " t0.encodedJSValue = JSValue::encode(functionReturnValue);"
- $asm.putc "#endif // USE_JSVALUE64)"
+ $asm.putc " t0 = JSValue::encode(functionReturnValue);"
+ $asm.putc "#endif // USE_JSVALUE64)"
# We can't do generic function calls with an arbitrary set of args, but
# fortunately we don't have to here. All slow path function calls always