[269511] trunk/Source/JavaScriptCore

Revision: 269511
Author: [email protected]
Date: 2020-11-06 07:50:49 -0800 (Fri, 06 Nov 2020)

Log Message

Use address diversified PAC to ensure the integrity of opcode maps.
https://bugs.webkit.org/show_bug.cgi?id=218646

Reviewed by Yusuke Suzuki.

One reason for doing this is that space in the JSCConfig is limited, and
expanding it to accommodate new opcodes may hurt RAMification scores.
By putting the opcode maps in dirty global memory, we still use less memory
because dirty global memory does not incur internal fragmentation the way the
JSCConfig does.

In this patch, we move g_jscConfig.llint.opcodeMap, g_jscConfig.llint.opcodeMapWide16,
and g_jscConfig.llint.opcodeMapWide32 back to global arrays g_opcodeMap, g_opcodeMapWide16,
and g_opcodeMapWide32.
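
For background: "address diversified" signing mixes the 16-bit pointer tag with
the address of the memory slot that holds the pointer, so a signed Opcode that
is copied into a different slot (or overwritten wholesale) will fail to
authenticate at dispatch.  A minimal sketch of the idea, assuming the clang
ptrauth intrinsics available on ARM64E (the helper names here are hypothetical,
not WebKit API):

    #include <ptrauth.h>
    #include <cstdint>

    // Hypothetical helpers illustrating address-diversified signing. The
    // discriminator XORs a 16-bit tag (placed in the high bits) with the
    // address of the slot the pointer is stored in, mirroring what this
    // patch does for the opcode map entries.
    static inline void* signForSlot(void* ptr, uint16_t tag, const void* slot)
    {
        uint64_t diversifier = (static_cast<uint64_t>(tag) << 48)
            ^ reinterpret_cast<uint64_t>(slot);
        return __builtin_ptrauth_sign_unauthenticated(
            ptr, ptrauth_key_process_dependent_code, diversifier);
    }

    static inline void* authFromSlot(void* signedPtr, uint16_t tag, const void* slot)
    {
        uint64_t diversifier = (static_cast<uint64_t>(tag) << 48)
            ^ reinterpret_cast<uint64_t>(slot);
        return __builtin_ptrauth_auth(
            signedPtr, ptrauth_key_process_dependent_code, diversifier);
    }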

* interpreter/InterpreterInlines.h:
(JSC::Interpreter::getOpcodeID):
- Since this function is only used for debugging purposes during development, and
  is currently unused, we can just strip the PAC bits from the opcode when
  computing the opcodeID.  The alternative would require knowing how the Opcode
  is signed by the client.  Since this function is currently unused, we have no
  clients to study / fix up for now.
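
  As a rough illustration of "stripping the PAC bits" (a sketch assuming the
  clang <ptrauth.h> intrinsics; stripPACBits is a hypothetical name, not the
  helper used in the patch):

    #include <ptrauth.h>

    // Remove the PAC bits from a signed code pointer without verifying them.
    // This performs no authentication, so it is only acceptable on a
    // debug-only path like getOpcodeID(), where no control flow is derived
    // from the stripped pointer.
    static inline const void* stripPACBits(const void* signedPtr)
    {
        return ptrauth_strip(signedPtr, ptrauth_key_process_dependent_code);
    }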

* llint/LLIntData.cpp:
(JSC::LLInt::initialize):
- Changed an ASSERT for llint_throw_from_slow_path_trampoline to a static_assert,
  and added a second one for wasm_throw_from_slow_path_trampoline.
- Moved the signing of the Opcode pointers into llint_entry() and wasm_entry()
  instead.  Now, non-ARM64E ports don't need to execute this no-op assignment loop
  (assuming it wasn't already elided by the compiler).
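
  On ARM64E, the tagging now done inside llint_entry() / wasm_entry() is
  conceptually the loop that used to live here.  A hedged C++ rendering of
  what the assembly does for each map entry (the real work happens in
  offlineasm, not C++):

    for (unsigned i = 0; i < numOpcodeIDs + numWasmOpcodeIDs; ++i) {
        // Sign the label address with BytecodePtrTag, diversified by the
        // address of the slot it is stored into.
        uint64_t diversifier = (static_cast<uint64_t>(BytecodePtrTag) << 48)
            ^ reinterpret_cast<uint64_t>(&g_opcodeMap[i]);
        g_opcodeMap[i] = __builtin_ptrauth_sign_unauthenticated(
            g_opcodeMap[i], ptrauth_key_process_dependent_code, diversifier);
    }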

* llint/LLIntData.h:
(JSC::LLInt::opcodeMap):
(JSC::LLInt::opcodeMapWide16):
(JSC::LLInt::opcodeMapWide32):
(JSC::LLInt::getOpcode):
(JSC::LLInt::getOpcodeWide16):
(JSC::LLInt::getOpcodeWide32):
- Changed getOpcode(), getOpcodeWide16(), and getOpcodeWide32() to return a
  reference to the entry in the corresponding opcode map.  This is necessary
  because we must be able to compute the address of the Opcode entry in order
  to retag the Opcode (see the sketch after this entry).

(JSC::LLInt::getCodePtrImpl):
(JSC::LLInt::getCodePtr):
(JSC::LLInt::getWide16CodePtr):
(JSC::LLInt::getWide32CodePtr):
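
  The retagging itself (see getCodePtrImpl() in the diff below) is a two-step
  auth-then-sign.  Roughly, assuming Opcode is the computed-goto label pointer:

    // Authenticate the map entry using BytecodePtrTag diversified by the
    // slot address (&entry), then re-sign the raw pointer with the
    // caller's tag.
    const Opcode& entry = getOpcode(opcodeID);
    void* raw = untagAddressDiversifiedCodePtr<BytecodePtrTag>(
        reinterpret_cast<void*>(entry), &entry);
    void* retagged = tagCodePtr<tag>(raw);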

* llint/LowLevelInterpreter.asm:
* llint/WebAssembly.asm:
- Changed the bytecode dispatch `jmp`s to use address diversification when
  authenticating the Opcode pointer.
- Changed llint_entry and wasm_entry to also tag the Opcode pointers for ARM64E.
- Changed llint_entry and wasm_entry to validate that they are only called during
  system initialization.

* offlineasm/arm64.rb:
- Optimized `leap` code generation to elide an add instruction if it's only adding
  0 to a global address.

* offlineasm/arm64e.rb:
* offlineasm/ast.rb:
* offlineasm/instructions.rb:
- Added support for jmp or call using address diversified pointers.
- Added a tagCodePtr instruction that also supports signing address diversified pointers.

* runtime/JSCConfig.h:
* runtime/JSCPtrTag.h:
(JSC::untagAddressDiversifiedCodePtr):
- Added untagAddressDiversifiedCodePtr() so that we can retag the Opcode pointers.

Modified Paths

trunk/Source/JavaScriptCore/ChangeLog
trunk/Source/JavaScriptCore/interpreter/InterpreterInlines.h
trunk/Source/JavaScriptCore/llint/LLIntData.cpp
trunk/Source/JavaScriptCore/llint/LLIntData.h
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
trunk/Source/JavaScriptCore/llint/WebAssembly.asm
trunk/Source/JavaScriptCore/offlineasm/arm64.rb
trunk/Source/JavaScriptCore/offlineasm/arm64e.rb
trunk/Source/JavaScriptCore/offlineasm/ast.rb
trunk/Source/JavaScriptCore/offlineasm/instructions.rb
trunk/Source/JavaScriptCore/runtime/JSCConfig.h
trunk/Source/JavaScriptCore/runtime/JSCPtrTag.h

Diff

Modified: trunk/Source/JavaScriptCore/ChangeLog (269510 => 269511)


--- trunk/Source/JavaScriptCore/ChangeLog	2020-11-06 14:33:10 UTC (rev 269510)
+++ trunk/Source/JavaScriptCore/ChangeLog	2020-11-06 15:50:49 UTC (rev 269511)
@@ -1,3 +1,75 @@
+2020-11-06  Mark Lam  <[email protected]>
+
+        Use address diversified PAC to ensure the integrity of opcode maps.
+        https://bugs.webkit.org/show_bug.cgi?id=218646
+
+        Reviewed by Yusuke Suzuki.
+
+        One reason for doing this is that space in the JSCConfig is limited, and
+        expanding it to accommodate new opcodes may hurt RAMification scores.
+        By putting the opcode maps in dirty global memory, we still use less memory
+        because dirty global memory does not incur internal fragmentation the way the
+        JSCConfig does.
+
+        In this patch, we move g_jscConfig.llint.opcodeMap, g_jscConfig.llint.opcodeMapWide16,
+        and g_jscConfig.llint.opcodeMapWide32 back to global arrays g_opcodeMap, g_opcodeMapWide16,
+        and g_opcodeMapWide32.
+
+        * interpreter/InterpreterInlines.h:
+        (JSC::Interpreter::getOpcodeID):
+        - Since this function is only used for debugging purposes during development,
+          and is currently unused, we can just strip the PAC bits from the opcode when
+          computing the opcodeID.  The alternative would require knowing how the
+          Opcode is signed by the client.  Since this function is currently unused,
+          we have no clients to study / fix up for now.
+
+        * llint/LLIntData.cpp:
+        (JSC::LLInt::initialize):
+        - Changed an ASSERT for llint_throw_from_slow_path_trampoline to a static_assert,
+          and added a second one for wasm_throw_from_slow_path_trampoline.
+        - Moved the signing of the Opcode pointers into llint_entry() and wasm_entry()
+          instead.  Now, non-ARM64E ports don't need to execute this no-op assignment loop
+          (assuming it wasn't already elided by the compiler).
+
+        * llint/LLIntData.h:
+        (JSC::LLInt::opcodeMap):
+        (JSC::LLInt::opcodeMapWide16):
+        (JSC::LLInt::opcodeMapWide32):
+        (JSC::LLInt::getOpcode):
+        (JSC::LLInt::getOpcodeWide16):
+        (JSC::LLInt::getOpcodeWide32):
+        - Changed getOpcode(), getOpcodeWide16(), and getOpcodeWide32() to return a reference
+          to the entry in the corresponding opcode map.  This is necessary because we must
+          be able to compute the address of the Opcode entry in order to retag the Opcode.
+
+        (JSC::LLInt::getCodePtrImpl):
+        (JSC::LLInt::getCodePtr):
+        (JSC::LLInt::getWide16CodePtr):
+        (JSC::LLInt::getWide32CodePtr):
+
+        * llint/LowLevelInterpreter.asm:
+        * llint/WebAssembly.asm:
+        - Changed the bytecode dispatch `jmp`s to use address diversification when
+          authenticating the Opcode pointer.
+        - Changed llint_entry and wasm_entry to also tag the Opcode pointers for ARM64E.
+        - Changed llint_entry and wasm_entry to validate that they are only called during
+          system initialization.
+
+        * offlineasm/arm64.rb:
+        - Optimized `leap` code generation to elide an add instruction if it's only adding
+          0 to a global address.
+
+        * offlineasm/arm64e.rb:
+        * offlineasm/ast.rb:
+        * offlineasm/instructions.rb:
+        - Added support for jmp or call using address diversified pointers.
+        - Added a tagCodePtr instruction that also supports signing address diversified pointers.
+
+        * runtime/JSCConfig.h:
+        * runtime/JSCPtrTag.h:
+        (JSC::untagAddressDiversifiedCodePtr):
+        - Added untagAddressDiversifiedCodePtr() so that we can retag the Opcode pointers.
+
 2020-11-05  Don Olmstead  <[email protected]>
 
         Non-unified build fixes, early November 2020 edition

Modified: trunk/Source/JavaScriptCore/interpreter/InterpreterInlines.h (269510 => 269511)


--- trunk/Source/JavaScriptCore/interpreter/InterpreterInlines.h	2020-11-06 14:33:10 UTC (rev 269510)
+++ trunk/Source/JavaScriptCore/interpreter/InterpreterInlines.h	2020-11-06 15:50:49 UTC (rev 269511)
@@ -1,6 +1,6 @@
 /*
  * Copyright (C) 2016 Yusuke Suzuki <[email protected]>
- * Copyright (C) 2016-2019 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2020 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -45,6 +45,9 @@
     return LLInt::getOpcode(id);
 }
 
+// This function is only available as a debugging tool for development work.
+// It is not currently used except in a RELEASE_ASSERT to ensure that it is
+// working properly.
 inline OpcodeID Interpreter::getOpcodeID(Opcode opcode)
 {
 #if ENABLE(COMPUTED_GOTO_OPCODES)
@@ -53,8 +56,8 @@
     // The OpcodeID is embedded in the int32_t word preceding the location of
     // the LLInt code for the opcode (see the EMBED_OPCODE_ID_IF_NEEDED macro
     // in LowLevelInterpreter.cpp).
-    auto codePtr = MacroAssemblerCodePtr<BytecodePtrTag>::createFromExecutableAddress(opcode);
-    int32_t* opcodeIDAddress = codePtr.dataLocation<int32_t*>() - 1;
+    const void* opcodeAddress = removeCodePtrTag(bitwise_cast<const void*>(opcode));
+    const int32_t* opcodeIDAddress = bitwise_cast<int32_t*>(opcodeAddress) - 1;
     OpcodeID opcodeID = static_cast<OpcodeID>(WTF::unalignedLoad<int32_t>(opcodeIDAddress));
     ASSERT(opcodeID < NUMBER_OF_BYTECODE_IDS);
     return opcodeID;
@@ -61,7 +64,7 @@
 #else
     return opcodeIDTable().get(opcode);
 #endif // ENABLE(LLINT_EMBEDDED_OPCODE_ID)
-    
+
 #else // not ENABLE(COMPUTED_GOTO_OPCODES)
     return opcode;
 #endif

Modified: trunk/Source/JavaScriptCore/llint/LLIntData.cpp (269510 => 269511)


--- trunk/Source/JavaScriptCore/llint/LLIntData.cpp	2020-11-06 14:33:10 UTC (rev 269510)
+++ trunk/Source/JavaScriptCore/llint/LLIntData.cpp	2020-11-06 15:50:49 UTC (rev 269511)
@@ -41,6 +41,10 @@
 
 namespace LLInt {
 
+Opcode g_opcodeMap[numOpcodeIDs + numWasmOpcodeIDs] = { };
+Opcode g_opcodeMapWide16[numOpcodeIDs + numWasmOpcodeIDs] = { };
+Opcode g_opcodeMapWide32[numOpcodeIDs + numWasmOpcodeIDs] = { };
+
 #if !ENABLE(C_LOOP)
 extern "C" void llint_entry(void*, void*, void*);
 
@@ -71,19 +75,14 @@
 
 #else // !ENABLE(C_LOOP)
 
-    llint_entry(&g_jscConfig.llint.opcodeMap, &g_jscConfig.llint.opcodeMapWide16, &g_jscConfig.llint.opcodeMapWide32);
+    llint_entry(&g_opcodeMap, &g_opcodeMapWide16, &g_opcodeMapWide32);
 
 #if ENABLE(WEBASSEMBLY)
-    wasm_entry(&g_jscConfig.llint.opcodeMap[numOpcodeIDs], &g_jscConfig.llint.opcodeMapWide16[numOpcodeIDs], &g_jscConfig.llint.opcodeMapWide32[numOpcodeIDs]);
+    wasm_entry(&g_opcodeMap[numOpcodeIDs], &g_opcodeMapWide16[numOpcodeIDs], &g_opcodeMapWide32[numOpcodeIDs]);
 #endif // ENABLE(WEBASSEMBLY)
 
-    for (int i = 0; i < numOpcodeIDs + numWasmOpcodeIDs; ++i) {
-        g_jscConfig.llint.opcodeMap[i] = tagCodePtr<BytecodePtrTag>(g_jscConfig.llint.opcodeMap[i]);
-        g_jscConfig.llint.opcodeMapWide16[i] = tagCodePtr<BytecodePtrTag>(g_jscConfig.llint.opcodeMapWide16[i]);
-        g_jscConfig.llint.opcodeMapWide32[i] = tagCodePtr<BytecodePtrTag>(g_jscConfig.llint.opcodeMapWide32[i]);
-    }
-
-    ASSERT(llint_throw_from_slow_path_trampoline < UINT8_MAX);
+    static_assert(llint_throw_from_slow_path_trampoline < UINT8_MAX);
+    static_assert(wasm_throw_from_slow_path_trampoline < UINT8_MAX);
     for (unsigned i = 0; i < maxOpcodeLength + 1; ++i) {
         g_jscConfig.llint.exceptionInstructions[i] = llint_throw_from_slow_path_trampoline;
         g_jscConfig.llint.wasmExceptionInstructions[i] = wasm_throw_from_slow_path_trampoline;

Modified: trunk/Source/JavaScriptCore/llint/LLIntData.h (269510 => 269511)


--- trunk/Source/JavaScriptCore/llint/LLIntData.h	2020-11-06 14:33:10 UTC (rev 269510)
+++ trunk/Source/JavaScriptCore/llint/LLIntData.h	2020-11-06 15:50:49 UTC (rev 269511)
@@ -43,6 +43,10 @@
 
 namespace LLInt {
 
+extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMap[numOpcodeIDs + numWasmOpcodeIDs];
+extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide16[numOpcodeIDs + numWasmOpcodeIDs];
+extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide32[numOpcodeIDs + numWasmOpcodeIDs];
+
 class Data {
 
 public:
@@ -56,9 +60,9 @@
     friend Opcode* opcodeMap();
     friend Opcode* opcodeMapWide16();
     friend Opcode* opcodeMapWide32();
-    friend Opcode getOpcode(OpcodeID);
-    friend Opcode getOpcodeWide16(OpcodeID);
-    friend Opcode getOpcodeWide32(OpcodeID);
+    friend const Opcode& getOpcode(OpcodeID);
+    friend const Opcode& getOpcodeWide16(OpcodeID);
+    friend const Opcode& getOpcodeWide32(OpcodeID);
     template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getCodePtr(OpcodeID);
     template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getWide16CodePtr(OpcodeID);
     template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getWide32CodePtr(OpcodeID);
@@ -79,32 +83,32 @@
 
 inline Opcode* opcodeMap()
 {
-    return g_jscConfig.llint.opcodeMap;
+    return g_opcodeMap;
 }
 
 inline Opcode* opcodeMapWide16()
 {
-    return g_jscConfig.llint.opcodeMapWide16;
+    return g_opcodeMapWide16;
 }
 
 inline Opcode* opcodeMapWide32()
 {
-    return g_jscConfig.llint.opcodeMapWide32;
+    return g_opcodeMapWide32;
 }
 
-inline Opcode getOpcode(OpcodeID id)
+inline const Opcode& getOpcode(OpcodeID id)
 {
 #if ENABLE(COMPUTED_GOTO_OPCODES)
-    return g_jscConfig.llint.opcodeMap[id];
+    return g_opcodeMap[id];
 #else
     return static_cast<Opcode>(id);
 #endif
 }
 
-inline Opcode getOpcodeWide16(OpcodeID id)
+inline const Opcode& getOpcodeWide16(OpcodeID id)
 {
 #if ENABLE(COMPUTED_GOTO_OPCODES)
-    return g_jscConfig.llint.opcodeMapWide16[id];
+    return g_opcodeMapWide16[id];
 #else
     UNUSED_PARAM(id);
     RELEASE_ASSERT_NOT_REACHED();
@@ -111,10 +115,10 @@
 #endif
 }
 
-inline Opcode getOpcodeWide32(OpcodeID id)
+inline const Opcode& getOpcodeWide32(OpcodeID id)
 {
 #if ENABLE(COMPUTED_GOTO_OPCODES)
-    return g_jscConfig.llint.opcodeMapWide32[id];
+    return g_opcodeMapWide32[id];
 #else
     UNUSED_PARAM(id);
     RELEASE_ASSERT_NOT_REACHED();
@@ -122,37 +126,33 @@
 }
 
 template<PtrTag tag>
-ALWAYS_INLINE MacroAssemblerCodePtr<tag> getCodePtr(OpcodeID opcodeID)
+ALWAYS_INLINE MacroAssemblerCodePtr<tag> getCodePtrImpl(const Opcode opcode, const void* opcodeAddress)
 {
-    void* address = reinterpret_cast<void*>(getOpcode(opcodeID));
-    address = retagCodePtr<BytecodePtrTag, tag>(address);
-    return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address);
+    void* opcodeValue = reinterpret_cast<void*>(opcode);
+    void* untaggedOpcode = untagAddressDiversifiedCodePtr<BytecodePtrTag>(opcodeValue, opcodeAddress);
+    void* retaggedOpcode = tagCodePtr<tag>(untaggedOpcode);
+    return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(retaggedOpcode);
 }
 
 template<PtrTag tag>
-ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide16CodePtr(OpcodeID opcodeID)
+ALWAYS_INLINE MacroAssemblerCodePtr<tag> getCodePtr(OpcodeID opcodeID)
 {
-    void* address = reinterpret_cast<void*>(getOpcodeWide16(opcodeID));
-    address = retagCodePtr<BytecodePtrTag, tag>(address);
-    return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address);
+    const Opcode& opcode = getOpcode(opcodeID);
+    return getCodePtrImpl<tag>(opcode, &opcode);
 }
 
 template<PtrTag tag>
-ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide32CodePtr(OpcodeID opcodeID)
+ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide16CodePtr(OpcodeID opcodeID)
 {
-    void* address = reinterpret_cast<void*>(getOpcodeWide32(opcodeID));
-    address = retagCodePtr<BytecodePtrTag, tag>(address);
-    return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address);
+    const Opcode& opcode = getOpcodeWide16(opcodeID);
+    return getCodePtrImpl<tag>(opcode, &opcode);
 }
 
 template<PtrTag tag>
-ALWAYS_INLINE MacroAssemblerCodePtr<tag> getCodePtr(const Instruction& instruction)
+ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide32CodePtr(OpcodeID opcodeID)
 {
-    if (instruction.isWide16())
-        return getWide16CodePtr<tag>(instruction.opcodeID());
-    if (instruction.isWide32())
-        return getWide32CodePtr<tag>(instruction.opcodeID());
-    return getCodePtr<tag>(instruction.opcodeID());
+    const Opcode& opcode = getOpcodeWide32(opcodeID);
+    return getCodePtrImpl<tag>(opcode, &opcode);
 }
 
 template<PtrTag tag>
@@ -206,19 +206,19 @@
 
 #if ENABLE(WEBASSEMBLY)
 
-inline Opcode getOpcode(WasmOpcodeID id)
+inline const Opcode& getOpcode(WasmOpcodeID id)
 {
 #if ENABLE(COMPUTED_GOTO_OPCODES)
-    return g_jscConfig.llint.opcodeMap[numOpcodeIDs + id];
+    return g_opcodeMap[numOpcodeIDs + id];
 #else
     return static_cast<Opcode>(id);
 #endif
 }
 
-inline Opcode getOpcodeWide16(WasmOpcodeID id)
+inline const Opcode& getOpcodeWide16(WasmOpcodeID id)
 {
 #if ENABLE(COMPUTED_GOTO_OPCODES)
-    return g_jscConfig.llint.opcodeMapWide16[numOpcodeIDs + id];
+    return g_opcodeMapWide16[numOpcodeIDs + id];
 #else
     UNUSED_PARAM(id);
     RELEASE_ASSERT_NOT_REACHED();
@@ -225,10 +225,10 @@
 #endif
 }
 
-inline Opcode getOpcodeWide32(WasmOpcodeID id)
+inline const Opcode& getOpcodeWide32(WasmOpcodeID id)
 {
 #if ENABLE(COMPUTED_GOTO_OPCODES)
-    return g_jscConfig.llint.opcodeMapWide32[numOpcodeIDs + id];
+    return g_opcodeMapWide32[numOpcodeIDs + id];
 #else
     UNUSED_PARAM(id);
     RELEASE_ASSERT_NOT_REACHED();
@@ -238,25 +238,22 @@
 template<PtrTag tag>
 ALWAYS_INLINE MacroAssemblerCodePtr<tag> getCodePtr(WasmOpcodeID opcodeID)
 {
-    void* address = reinterpret_cast<void*>(getOpcode(opcodeID));
-    address = retagCodePtr<BytecodePtrTag, tag>(address);
-    return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address);
+    const Opcode& opcode = getOpcode(opcodeID);
+    return getCodePtrImpl<tag>(opcode, &opcode);
 }
 
 template<PtrTag tag>
 ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide16CodePtr(WasmOpcodeID opcodeID)
 {
-    void* address = reinterpret_cast<void*>(getOpcodeWide16(opcodeID));
-    address = retagCodePtr<BytecodePtrTag, tag>(address);
-    return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address);
+    const Opcode& opcode = getOpcodeWide16(opcodeID);
+    return getCodePtrImpl<tag>(opcode, &opcode);
 }
 
 template<PtrTag tag>
 ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide32CodePtr(WasmOpcodeID opcodeID)
 {
-    void* address = reinterpret_cast<void*>(getOpcodeWide32(opcodeID));
-    address = retagCodePtr<BytecodePtrTag, tag>(address);
-    return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address);
+    const Opcode& opcode = getOpcodeWide32(opcodeID);
+    return getCodePtrImpl<tag>(opcode, &opcode);
 }
 
 template<PtrTag tag>
@@ -289,9 +286,9 @@
 #endif
 }
 
-#endif
+#endif // ENABLE(WEBASSEMBLY)
 
-#else
+#else // not ENABLE(JIT)
 ALWAYS_INLINE void* getCodePtr(OpcodeID id)
 {
     return reinterpret_cast<void*>(getOpcode(id));
@@ -306,13 +303,8 @@
 {
     return reinterpret_cast<void*>(getOpcodeWide32(id));
 }
-#endif
+#endif // ENABLE(JIT)
 
-ALWAYS_INLINE void* getCodePtr(JSC::EncodedJSValue glueHelper())
-{
-    return bitwise_cast<void*>(glueHelper);
-}
-
 #if ENABLE(JIT)
 struct Registers {
     static constexpr GPRReg pcGPR = GPRInfo::regT4;

Modified: trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm (269510 => 269511)


--- trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm	2020-11-06 14:33:10 UTC (rev 269510)
+++ trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm	2020-11-06 15:50:49 UTC (rev 269511)
@@ -253,6 +253,7 @@
 const ArithProfileNumberNumber = constexpr (BinaryArithProfile::observedNumberNumberBits())
 
 # Pointer Tags
+const AddressDiversified = 1
 const BytecodePtrTag = constexpr BytecodePtrTag
 const JSEntryPtrTag = constexpr JSEntryPtrTag
 const HostFunctionPtrTag = constexpr HostFunctionPtrTag
@@ -330,20 +331,20 @@
 
 macro nextInstruction()
     loadb [PB, PC, 1], t0
-    leap JSCConfig + constexpr JSC::offsetOfJSCConfigOpcodeMap, t1
-    jmp [t1, t0, PtrSize], BytecodePtrTag
+    leap _g_opcodeMap, t1
+    jmp [t1, t0, PtrSize], BytecodePtrTag, AddressDiversified
 end
 
 macro nextInstructionWide16()
     loadb OpcodeIDNarrowSize[PB, PC, 1], t0
-    leap JSCConfig + constexpr JSC::offsetOfJSCConfigOpcodeMapWide16, t1
-    jmp [t1, t0, PtrSize], BytecodePtrTag
+    leap _g_opcodeMapWide16, t1
+    jmp [t1, t0, PtrSize], BytecodePtrTag, AddressDiversified
 end
 
 macro nextInstructionWide32()
     loadb OpcodeIDNarrowSize[PB, PC, 1], t0
-    leap JSCConfig + constexpr JSC::offsetOfJSCConfigOpcodeMapWide32, t1
-    jmp [t1, t0, PtrSize], BytecodePtrTag
+    leap _g_opcodeMapWide32, t1
+    jmp [t1, t0, PtrSize], BytecodePtrTag, AddressDiversified
 end
 
 macro dispatch(advanceReg)
@@ -1769,10 +1770,16 @@
             leap (label - _%kind%_relativePCBase)[t3], t4
             move index, t5
             storep t4, [map, t5, 4]
-        elsif ARM64 or ARM64E
+        elsif ARM64
             pcrtoaddr label, t3
             move index, t4
             storep t3, [map, t4, PtrSize]
+        elsif ARM64E
+            pcrtoaddr label, t3
+            move index, t4
+            leap [map, t4, PtrSize], t4
+            tagCodePtr t3, BytecodePtrTag, AddressDiversified, t4
+            storep t3, [t4]
         elsif ARMv7
             mvlbl (label - _%kind%_relativePCBase), t4
             addp t4, t3, t4
@@ -1822,6 +1829,12 @@
 
         # Include generated bytecode initialization file.
         includeEntriesAtOffset(kind, initialize)
+
+        leap JSCConfig + constexpr JSC::offsetOfJSCConfigInitializeHasBeenCalled, t3
+        bbeq [t3], 0, .notFrozen
+        crash()
+    .notFrozen:
+
         popCalleeSaves()
         functionEpilogue()
         ret

Modified: trunk/Source/JavaScriptCore/llint/WebAssembly.asm (269510 => 269511)


--- trunk/Source/JavaScriptCore/llint/WebAssembly.asm	2020-11-06 14:33:10 UTC (rev 269510)
+++ trunk/Source/JavaScriptCore/llint/WebAssembly.asm	2020-11-06 15:50:49 UTC (rev 269511)
@@ -95,20 +95,20 @@
 
 macro wasmNextInstruction()
     loadb [PB, PC, 1], t0
-    leap JSCConfig + constexpr JSC::offsetOfJSCConfigOpcodeMap, t1
-    jmp NumberOfJSOpcodeIDs * PtrSize[t1, t0, PtrSize], BytecodePtrTag
+    leap _g_opcodeMap, t1
+    jmp NumberOfJSOpcodeIDs * PtrSize[t1, t0, PtrSize], BytecodePtrTag, AddressDiversified
 end
 
 macro wasmNextInstructionWide16()
     loadb OpcodeIDNarrowSize[PB, PC, 1], t0
-    leap JSCConfig + constexpr JSC::offsetOfJSCConfigOpcodeMapWide16, t1
-    jmp NumberOfJSOpcodeIDs * PtrSize[t1, t0, PtrSize], BytecodePtrTag
+    leap _g_opcodeMapWide16, t1
+    jmp NumberOfJSOpcodeIDs * PtrSize[t1, t0, PtrSize], BytecodePtrTag, AddressDiversified
 end
 
 macro wasmNextInstructionWide32()
     loadb OpcodeIDNarrowSize[PB, PC, 1], t0
-    leap JSCConfig + constexpr JSC::offsetOfJSCConfigOpcodeMapWide32, t1
-    jmp NumberOfJSOpcodeIDs * PtrSize[t1, t0, PtrSize], BytecodePtrTag
+    leap _g_opcodeMapWide32, t1
+    jmp NumberOfJSOpcodeIDs * PtrSize[t1, t0, PtrSize], BytecodePtrTag, AddressDiversified
 end
 
 macro checkSwitchToJIT(increment, action)

Modified: trunk/Source/JavaScriptCore/offlineasm/arm64.rb (269510 => 269511)


--- trunk/Source/JavaScriptCore/offlineasm/arm64.rb	2020-11-06 14:33:10 UTC (rev 269510)
+++ trunk/Source/JavaScriptCore/offlineasm/arm64.rb	2020-11-06 15:50:49 UTC (rev 269511)
@@ -1,4 +1,4 @@
-# Copyright (C) 2011-2019 Apple Inc. All rights reserved.
+# Copyright (C) 2011-2020 Apple Inc. All rights reserved.
 # Copyright (C) 2014 University of Szeged. All rights reserved.
 #
 # Redistribution and use in source and binary forms, with or without
@@ -305,9 +305,11 @@
             when "loadi", "loadis", "loadp", "loadq", "loadb", "loadbsi", "loadbsq", "loadh", "loadhsi", "loadhsq", "leap"
                 labelRef = node.operands[0]
                 if labelRef.is_a? LabelReference
-                    tmp = Tmp.new(node.codeOrigin, :gpr)
-                    newList << Instruction.new(codeOrigin, "globaladdr", [LabelReference.new(node.codeOrigin, labelRef.label), tmp])
-                    newList << Instruction.new(codeOrigin, node.opcode, [Address.new(node.codeOrigin, tmp, Immediate.new(node.codeOrigin, labelRef.offset)), node.operands[1]])
+                    dest = node.operands[1]
+                    newList << Instruction.new(codeOrigin, "globaladdr", [LabelReference.new(node.codeOrigin, labelRef.label), dest])
+                    if node.opcode != "leap" or labelRef.offset != 0
+                        newList << Instruction.new(codeOrigin, node.opcode, [Address.new(node.codeOrigin, dest, Immediate.new(node.codeOrigin, labelRef.offset)), dest])
+                    end
                 else
                     newList << node
                 end

Modified: trunk/Source/JavaScriptCore/offlineasm/arm64e.rb (269510 => 269511)


--- trunk/Source/JavaScriptCore/offlineasm/arm64e.rb	2020-11-06 14:33:10 UTC (rev 269510)
+++ trunk/Source/JavaScriptCore/offlineasm/arm64e.rb	2020-11-06 15:50:49 UTC (rev 269511)
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2019 Apple Inc. All rights reserved.
+# Copyright (C) 2018-2020 Apple Inc. All rights reserved.
 #
 # Redistribution and use in source and binary forms, with or without
 # modification, are permitted provided that the following conditions
@@ -42,8 +42,21 @@
             codeOrigin = node.codeOrigin
             case node.opcode
             when "jmp", "call"
-                if node.operands.size > 1
-                    if node.operands[1].is_a? RegisterID
+                if node.operands.size == 3
+                    raise unless node.operands[2].value == 1
+                    raise unless node.operands[1].immediate? and node.operands[1].value <= 0xffff
+                    raise unless node.operands[0].address?
+                    address = Tmp.new(codeOrigin, :gpr)
+                    target = Tmp.new(codeOrigin, :gpr)
+                    newList << Instruction.new(codeOrigin, "leap", [node.operands[0], address], annotation)
+                    newList << Instruction.new(codeOrigin, "loadp", [Address.new(codeOrigin, address, Immediate.new(codeOrigin, 0)), target], annotation)
+                    tag = Tmp.new(codeOrigin, :gpr)
+                    newList << Instruction.new(codeOrigin, "move", [Immediate.new(codeOrigin, node.operands[1].value << 48), tag], annotation)
+                    newList << Instruction.new(codeOrigin, "xorp", [address, tag], annotation)
+                    newList << node.cloneWithNewOperands([target, tag])
+                    wasHandled = true
+                elsif node.operands.size > 1
+                    if node.operands[1].is_a? RegisterID or node.operands[1].is_a? Tmp
                         tag = riscLowerOperandToRegister(node, newList, postInstructions, 1, "p", false)
                     else
                         tag = Tmp.new(codeOrigin, :gpr)
@@ -56,6 +69,23 @@
                     newList << node.cloneWithNewOperands(operands)
                     wasHandled = true
                 end
+            when "tagCodePtr"
+                raise if node.operands.size < 1 or not node.operands[0].is_a? RegisterID
+                if node.operands.size == 4
+                    raise unless node.operands[3].register?
+                    raise unless node.operands[2].immediate? and node.operands[2].value == 1
+                    raise unless node.operands[1].immediate? and node.operands[1].value <= 0xffff
+                    address = node.operands[3]
+                    if node.operands[1].immediate?
+                        tag = Tmp.new(codeOrigin, :gpr)
+                        newList << Instruction.new(codeOrigin, "move", [Immediate.new(codeOrigin, node.operands[1].value << 48), tag], annotation)
+                    elsif operands[1].register?
+                        tag = node.operands[1]
+                    end
+                    newList << Instruction.new(codeOrigin, "xorp", [address, tag], annotation)
+                    newList << node.cloneWithNewOperands([node.operands[0], tag])
+                    wasHandled = true
+                end
             when "untagArrayPtr"
                 newOperands = node.operands.map {
                     | operand |
@@ -91,6 +121,11 @@
             else
                 emitARM64Unflipped("brab", operands, :ptr)
             end
+        when "tagCodePtr"
+            raise if operands.size > 2
+            raise unless operands[0].register?
+            raise unless operands[1].register?
+            emitARM64Unflipped("pacib", operands, :ptr)
         when "tagReturnAddress"
             raise if operands.size < 1 or not operands[0].is_a? RegisterID
             if operands[0].is_a? RegisterID and operands[0].name == "sp"

Modified: trunk/Source/JavaScriptCore/offlineasm/ast.rb (269510 => 269511)


--- trunk/Source/JavaScriptCore/offlineasm/ast.rb	2020-11-06 14:33:10 UTC (rev 269510)
+++ trunk/Source/JavaScriptCore/offlineasm/ast.rb	2020-11-06 15:50:49 UTC (rev 269511)
@@ -1,4 +1,4 @@
-# Copyright (C) 2011-2018 Apple Inc. All rights reserved.
+# Copyright (C) 2011-2020 Apple Inc. All rights reserved.
 #
 # Redistribution and use in source and binary forms, with or without
 # modification, are permitted provided that the following conditions
@@ -699,7 +699,10 @@
 end
 
 class SpecialRegister < NoChildren
+    attr_reader :name
+
     def initialize(name)
+        super(codeOrigin)
         @name = name
     end
     
@@ -942,7 +945,7 @@
             $asm.putGlobalAnnotation
         when "emit"
             $asm.puts "#{operands[0].dump}"
-        when "tagReturnAddress", "untagReturnAddress", "removeCodePtrTag", "untagArrayPtr"
+        when "tagCodePtr", "tagReturnAddress", "untagReturnAddress", "removeCodePtrTag", "untagArrayPtr"
         else
             raise "Unhandled opcode #{opcode} at #{codeOriginString}"
         end

Modified: trunk/Source/JavaScriptCore/offlineasm/instructions.rb (269510 => 269511)


--- trunk/Source/JavaScriptCore/offlineasm/instructions.rb	2020-11-06 14:33:10 UTC (rev 269510)
+++ trunk/Source/JavaScriptCore/offlineasm/instructions.rb	2020-11-06 15:50:49 UTC (rev 269511)
@@ -1,4 +1,4 @@
-# Copyright (C) 2011-2018 Apple Inc. All rights reserved.
+# Copyright (C) 2011-2020 Apple Inc. All rights reserved.
 #
 # Redistribution and use in source and binary forms, with or without
 # modification, are permitted provided that the following conditions
@@ -299,6 +299,7 @@
      "leai",
      "leap",
      "memfence",
+     "tagCodePtr",
      "tagReturnAddress",
      "untagReturnAddress",
      "removeCodePtrTag",

Modified: trunk/Source/JavaScriptCore/runtime/JSCConfig.h (269510 => 269511)


--- trunk/Source/JavaScriptCore/runtime/JSCConfig.h	2020-11-06 14:33:10 UTC (rev 269510)
+++ trunk/Source/JavaScriptCore/runtime/JSCConfig.h	2020-11-06 15:50:49 UTC (rev 269511)
@@ -90,9 +90,6 @@
     struct {
         uint8_t exceptionInstructions[maxOpcodeLength + 1];
         uint8_t wasmExceptionInstructions[maxOpcodeLength + 1];
-        Opcode opcodeMap[numOpcodeIDs + numWasmOpcodeIDs];
-        Opcode opcodeMapWide16[numOpcodeIDs + numWasmOpcodeIDs];
-        Opcode opcodeMapWide32[numOpcodeIDs + numWasmOpcodeIDs];
         const void* gateMap[numberOfGates];
     } llint;
 
@@ -116,9 +113,7 @@
 
 #endif // ENABLE(UNIFIED_AND_FREEZABLE_CONFIG_RECORD)
 
-constexpr size_t offsetOfJSCConfigOpcodeMap = offsetof(JSC::Config, llint.opcodeMap);
-constexpr size_t offsetOfJSCConfigOpcodeMapWide16 = offsetof(JSC::Config, llint.opcodeMapWide16);
-constexpr size_t offsetOfJSCConfigOpcodeMapWide32 = offsetof(JSC::Config, llint.opcodeMapWide32);
+constexpr size_t offsetOfJSCConfigInitializeHasBeenCalled = offsetof(JSC::Config, initializeHasBeenCalled);
 constexpr size_t offsetOfJSCConfigGateMap = offsetof(JSC::Config, llint.gateMap);
 
 } // namespace JSC

Modified: trunk/Source/JavaScriptCore/runtime/JSCPtrTag.h (269510 => 269511)


--- trunk/Source/JavaScriptCore/runtime/JSCPtrTag.h	2020-11-06 14:33:10 UTC (rev 269510)
+++ trunk/Source/JavaScriptCore/runtime/JSCPtrTag.h	2020-11-06 15:50:49 UTC (rev 269511)
@@ -201,6 +201,20 @@
 #endif
 }
 
+template<PtrTag tag, typename PtrType>
+inline PtrType untagAddressDiversifiedCodePtr(PtrType ptr, const void* ptrAddress)
+{
+    UNUSED_PARAM(ptrAddress);
+#if CPU(ARM64E)
+    uint64_t address = bitwise_cast<uint64_t>(ptrAddress);
+    uint64_t tagBits = static_cast<uint64_t>(tag) << 48;
+    uint64_t addressDiversifiedTag = tagBits ^ address;
+    return __builtin_ptrauth_auth(ptr, ptrauth_key_process_dependent_code, addressDiversifiedTag);
+#else
+    return ptr;
+#endif
+}
+
 #if CPU(ARM64E) && ENABLE(PTRTAG_DEBUGGING)
 void initializePtrTagLookup();
 #else