Title: [292540] trunk
Revision: 292540
Author: commit-qu...@webkit.org
Date: 2022-04-07 10:03:19 -0700 (Thu, 07 Apr 2022)

Log Message

[JSC][ARMv7] Support proper near calls and JUMP_ISLANDS
https://bugs.webkit.org/show_bug.cgi?id=238143

Patch by Geza Lore <gl...@igalia.com> on 2022-04-07
Reviewed by Yusuke Suzuki.

JSTests:

* microbenchmarks/let-const-tdz-environment-parsing-and-hash-consing-speed.js:

Source/JavaScriptCore:

Implement nearCall and nearTailCall as single-instruction direct
branches on ARMv7/Thumb-2. (We will need these for the Wasm JITs, to
implement threadSafePatchableNearCall.) To make this possible while
also having an executable pool size larger than the branch range, I
also ported JUMP_ISLANDS.
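
For context, the Thumb-2 B.W/BL encodings used for these branches carry a
24-bit halfword offset plus sign, i.e. a 25-bit signed byte offset, giving
roughly +/-16 MB of reach; this is where the isInt<25>() checks and the
16 MB nearJumpRange in the diff below come from. Here is a minimal, hedged
sketch of the reachability test (the helper name canReachDirectly is made
up for illustration; the real check is Assembler::canEmitJump in the diff):

    // Sketch: can a single Thumb-2 B.W/BL at 'from' reach 'to' directly?
    // In Thumb state the branch offset is relative to the PC, which reads
    // as the address of the branch instruction plus 4.
    #include <cstddef>
    #include <cstdint>
    #include <climits>

    template<size_t bits, typename Type>
    constexpr bool isInt(Type t) // mirrors JSC's isInt in AssemblerCommon.h
    {
        constexpr size_t shift = sizeof(Type) * CHAR_BIT - bits;
        return ((t << shift) >> shift) == t;
    }

    inline bool canReachDirectly(const void* from, const void* to)
    {
        intptr_t offset = reinterpret_cast<intptr_t>(to)
            - (reinterpret_cast<intptr_t>(from) + 4);
        return isInt<25>(offset); // roughly +/-16 MB; otherwise a jump island is needed
    }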

Porting JUMP_ISLANDS required reformulating the region allocation,
which is now expressed in terms of the range of the
nearCall/nearTailCall macro assembler operations. For ARM64, the
behaviour should be identical.

The jump island reservation on ARMv7 is set to 5% of the executable
memory size, which is approximately the same as the baseline JIT code
size saving gained by using short branches for near calls, so the
change should be roughly neutral overall with respect to executable
memory consumption.
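
To make the sizing concrete, here is a small, hedged worked example using
the ARMv7 constants added in this patch (nearJumpRange = 16 MB,
islandRegionSizeFraction = 0.05, a 32 MB pool); the 4 KB page size is an
assumption, and the program is illustrative rather than the actual
allocator code:

    // Sketch of the region layout math in FixedVMPoolExecutableAllocator:
    // each region reserves a page-rounded 5% slice of the near-jump range
    // for islands at its top, and consecutive regions are regionSize bytes
    // apart so a branch anywhere in a region can reach an adjacent island.
    #include <cstddef>
    #include <cstdio>

    int main()
    {
        const size_t MB = 1024 * 1024;
        const size_t pageSize = 4096;            // assumption; platform dependent
        const size_t nearJumpRange = 16 * MB;    // MacroAssemblerARMv7::nearJumpRange
        const double islandFraction = 0.05;      // islandRegionSizeFraction on ARMv7
        const size_t poolSize = 32 * MB;         // ARMv7 pool size with JUMP_ISLANDS

        size_t islandRegionSize = static_cast<size_t>(nearJumpRange * islandFraction);
        islandRegionSize = (islandRegionSize + pageSize - 1) / pageSize * pageSize;
        size_t regionSize = nearJumpRange - islandRegionSize;
        size_t numRegions = (poolSize + regionSize - 1) / regionSize;

        // Prints approximately: islands=839680 region=15937536 regions=3
        std::printf("islands=%zu region=%zu regions=%zu\n",
                    islandRegionSize, regionSize, numRegions);
        return 0;
    }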

Also made it possible for the --jitMemoryReservationSize option to
request JIT memory larger than the default hardcoded size while
using JUMP_ISLANDS (we need this for testing on ARMv7, which has a
smaller default executable pool size). To do this, the region
allocators are no longer statically allocated but are held in a
FixedVector.

Also removed the unused repatchCompact methods from assemblers.

* assembler/ARM64Assembler.h:
* assembler/ARMv7Assembler.h:
(JSC::ARMv7Assembler::isEven):
(JSC::ARMv7Assembler::makeEven):
(JSC::ARMv7Assembler::bl):
(JSC::ARMv7Assembler::link):
(JSC::ARMv7Assembler::linkTailCall):
(JSC::ARMv7Assembler::linkCall):
(JSC::ARMv7Assembler::relinkCall):
(JSC::ARMv7Assembler::relinkTailCall):
(JSC::ARMv7Assembler::prepareForAtomicRelinkJumpConcurrently):
(JSC::ARMv7Assembler::prepareForAtomicRelinkCallConcurrently):
(JSC::ARMv7Assembler::replaceWithJump):
(JSC::ARMv7Assembler::canEmitJump):
(JSC::ARMv7Assembler::isBL):
(JSC::ARMv7Assembler::linkJumpT4):
(JSC::ARMv7Assembler::linkConditionalJumpT4):
(JSC::ARMv7Assembler::linkJumpAbsolute):
(JSC::ARMv7Assembler::linkBranch):
* assembler/AbstractMacroAssembler.h:
(JSC::AbstractMacroAssembler::repatchNearCall):
* assembler/AssemblerCommon.h:
(JSC::isInt):
* assembler/MIPSAssembler.h:
* assembler/MacroAssemblerARM64.h:
* assembler/MacroAssemblerARMv7.h:
(JSC::MacroAssemblerARMv7::nearCall):
(JSC::MacroAssemblerARMv7::nearTailCall):
(JSC::MacroAssemblerARMv7::linkCall):
* assembler/MacroAssemblerMIPS.h:
* assembler/MacroAssemblerRISCV64.h:
* assembler/MacroAssemblerX86Common.h:
* assembler/X86Assembler.h:
* bytecode/Repatch.cpp:
(JSC::linkPolymorphicCall):
* jit/ExecutableAllocator.cpp:
(JSC::initializeJITPageReservation):

Source/WTF:

Support constructor arguments for FixedVector element initialization.
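
As a rough usage sketch (assuming a WTF build environment; the Region type
below is made up for illustration): every element is constructed with the
same forwarded arguments, via the new
VectorTypeOperations<T>::initializeWithArgs helper.

    // Sketch: a fixed-size vector whose elements all receive the same
    // constructor arguments at creation time.
    #include <wtf/FixedVector.h>

    struct Region {
        explicit Region(unsigned poolId)
            : poolId(poolId)
        { }
        unsigned poolId;
    };

    // Constructs 8 elements, each as Region(42); size 0 yields an empty vector.
    auto regions = WTF::FixedVector<Region>::createWithSizeAndConstructorArguments(8, 42u);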

* wtf/EmbeddedFixedVector.h:
* wtf/FixedVector.h:
(WTF::FixedVector::FixedVector):
* wtf/PlatformEnable.h:
* wtf/TrailingArray.h:
(WTF::TrailingArray::TrailingArray):
* wtf/Vector.h:
(WTF::VectorTypeOperations::initializeWithArgs):


Diff

Modified: trunk/JSTests/ChangeLog (292539 => 292540)


--- trunk/JSTests/ChangeLog	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/JSTests/ChangeLog	2022-04-07 17:03:19 UTC (rev 292540)
@@ -1,3 +1,12 @@
+2022-04-07  Geza Lore  <gl...@igalia.com>
+
+        [JSC][ARMv7] Support proper near calls and JUMP_ISLANDS
+        https://bugs.webkit.org/show_bug.cgi?id=238143
+
+        Reviewed by Yusuke Suzuki.
+
+        * microbenchmarks/let-const-tdz-environment-parsing-and-hash-consing-speed.js:
+
 2022-04-06  Yusuke Suzuki  <ysuz...@apple.com>
 
         [JSC] Substring resolving should check 8bit / 16bit again

Modified: trunk/JSTests/microbenchmarks/let-const-tdz-environment-parsing-and-hash-consing-speed.js (292539 => 292540)


--- trunk/JSTests/microbenchmarks/let-const-tdz-environment-parsing-and-hash-consing-speed.js	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/JSTests/microbenchmarks/let-const-tdz-environment-parsing-and-hash-consing-speed.js	2022-04-07 17:03:19 UTC (rev 292540)
@@ -1,4 +1,4 @@
-//@ defaultNoNoLLIntRun if $architecture == "mips"
+//@ defaultNoNoLLIntRun if $architecture == "mips" || $architecture == "arm"
 /*
  * Copyright jQuery Foundation and other contributors, https://jquery.org/
  *

Modified: trunk/Source/JavaScriptCore/ChangeLog (292539 => 292540)


--- trunk/Source/JavaScriptCore/ChangeLog	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/ChangeLog	2022-04-07 17:03:19 UTC (rev 292540)
@@ -1,5 +1,76 @@
 2022-04-07  Geza Lore  <gl...@igalia.com>
 
+        [JSC][ARMv7] Support proper near calls and JUMP_ISLANDS
+        https://bugs.webkit.org/show_bug.cgi?id=238143
+
+        Reviewed by Yusuke Suzuki.
+
+        Implement nearCall and nearTailCall as single-instruction direct
+        branches on ARMv7/Thumb-2. (We will need these for the Wasm JITs, to
+        implement threadSafePatchableNearCall.) To make this possible while
+        also having an executable pool size larger than the branch range, I
+        also ported JUMP_ISLANDS.
+
+        Porting JUMP_ISLANDS required reformulating the region allocation,
+        which is now expressed in terms of the range of the
+        nearCall/nearTailCall macro assembler operations. For ARM64, the
+        behaviour should be identical.
+
+        The jump island reservation on ARMv7 is set to 5% of the executable
+        memory size, which is approximately the same as the baseline JIT code
+        size saving gained by using short branches for near calls, so the
+        change should be roughly neutral overall with respect to executable
+        memory consumption.
+
+        Also made it possible for the --jitMemoryReservationSize option to
+        request JIT memory larger than the default hardcoded size while
+        using JUMP_ISLANDS (we need this for testing on ARMv7, which has a
+        smaller default executable pool size). To do this, the region
+        allocators are no longer statically allocated but are held in a
+        FixedVector.
+
+        Also removed the unused repatchCompact methods from assemblers.
+
+        * assembler/ARM64Assembler.h:
+        * assembler/ARMv7Assembler.h:
+        (JSC::ARMv7Assembler::isEven):
+        (JSC::ARMv7Assembler::makeEven):
+        (JSC::ARMv7Assembler::bl):
+        (JSC::ARMv7Assembler::link):
+        (JSC::ARMv7Assembler::linkTailCall):
+        (JSC::ARMv7Assembler::linkCall):
+        (JSC::ARMv7Assembler::relinkCall):
+        (JSC::ARMv7Assembler::relinkTailCall):
+        (JSC::ARMv7Assembler::prepareForAtomicRelinkJumpConcurrently):
+        (JSC::ARMv7Assembler::prepareForAtomicRelinkCallConcurrently):
+        (JSC::ARMv7Assembler::replaceWithJump):
+        (JSC::ARMv7Assembler::canEmitJump):
+        (JSC::ARMv7Assembler::isBL):
+        (JSC::ARMv7Assembler::linkJumpT4):
+        (JSC::ARMv7Assembler::linkConditionalJumpT4):
+        (JSC::ARMv7Assembler::linkJumpAbsolute):
+        (JSC::ARMv7Assembler::linkBranch):
+        * assembler/AbstractMacroAssembler.h:
+        (JSC::AbstractMacroAssembler::repatchNearCall):
+        * assembler/AssemblerCommon.h:
+        (JSC::isInt):
+        * assembler/MIPSAssembler.h:
+        * assembler/MacroAssemblerARM64.h:
+        * assembler/MacroAssemblerARMv7.h:
+        (JSC::MacroAssemblerARMv7::nearCall):
+        (JSC::MacroAssemblerARMv7::nearTailCall):
+        (JSC::MacroAssemblerARMv7::linkCall):
+        * assembler/MacroAssemblerMIPS.h:
+        * assembler/MacroAssemblerRISCV64.h:
+        * assembler/MacroAssemblerX86Common.h:
+        * assembler/X86Assembler.h:
+        * bytecode/Repatch.cpp:
+        (JSC::linkPolymorphicCall):
+        * jit/ExecutableAllocator.cpp:
+        (JSC::initializeJITPageReservation):
+
+2022-04-07  Geza Lore  <gl...@igalia.com>
+
         [JSC][32bit] Use constexpr tags instead of enums
         https://bugs.webkit.org/show_bug.cgi?id=238926
 

Modified: trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h (292539 => 292540)


--- trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -56,14 +56,6 @@
 
 namespace JSC {
 
-template<size_t bits, typename Type>
-ALWAYS_INLINE constexpr bool isInt(Type t)
-{
-    constexpr size_t shift = sizeof(Type) * CHAR_BIT - bits;
-    static_assert(sizeof(Type) * CHAR_BIT > shift, "shift is larger than the size of the value");
-    return ((t << shift) >> shift) == t;
-}
-
 static ALWAYS_INLINE bool is4ByteAligned(const void* ptr)
 {
     return !(reinterpret_cast<intptr_t>(ptr) & 0x3);
@@ -2889,6 +2881,11 @@
         cacheFlush(reinterpret_cast<int*>(from) - 1, sizeof(int));
     }
 
+    static void relinkTailCall(void* from, void* to)
+    {
+        relinkJump(from, to);
+    }
+
 #if ENABLE(JUMP_ISLANDS)
     static void* prepareForAtomicRelinkJumpConcurrently(void* from, void* to)
     {
@@ -2908,30 +2905,6 @@
     }
 #endif
     
-    static void repatchCompact(void* where, int32_t value)
-    {
-        ASSERT(!(value & ~0x3ff8));
-
-        MemOpSize size;
-        bool V;
-        MemOp opc;
-        int imm12;
-        RegisterID rn;
-        RegisterID rt;
-        bool expected = disassembleLoadStoreRegisterUnsignedImmediate(where, size, V, opc, imm12, rn, rt);
-        ASSERT_UNUSED(expected, expected && size >= MemOpSize_32 && !V && opc == MemOp_LOAD); // expect 32/64 bit load to GPR.
-
-        if (size == MemOpSize_32)
-            imm12 = encodePositiveImmediate<32>(value);
-        else
-            imm12 = encodePositiveImmediate<64>(value);
-        int insn = loadStoreRegisterUnsignedImmediate(size, V, opc, imm12, rn, rt);
-        RELEASE_ASSERT(roundUpToMultipleOf<instructionSize>(where) == where);
-        performJITMemcpy(where, &insn, sizeof(int));
-
-        cacheFlush(where, sizeof(int));
-    }
-
     unsigned debugOffset() { return m_buffer.debugOffset(); }
 
 #if OS(LINUX) && COMPILER(GCC_COMPATIBLE)

Modified: trunk/Source/JavaScriptCore/assembler/ARMv7Assembler.h (292539 => 292540)


--- trunk/Source/JavaScriptCore/assembler/ARMv7Assembler.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/assembler/ARMv7Assembler.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -491,6 +491,16 @@
 
 private:
 
+    // In the ARMv7 ISA, the LSB of a code pointer indicates whether the target uses Thumb vs ARM
+    // encoding. These utility functions are there for when we need to deal with this.
+    static bool isEven(const void* ptr) { return !(reinterpret_cast<uintptr_t>(ptr) & 1); }
+    static bool isEven(AssemblerLabel &label) { return !(label.offset() & 1); }
+    static void* makeEven(const void* ptr)
+    {
+        ASSERT(!isEven(ptr));
+        return reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(ptr) & ~1);
+    }
+
     // ARMv7, Appx-A.6.3
     static bool BadReg(RegisterID reg)
     {
@@ -609,6 +619,7 @@
         OP_VORR_T1      = 0xEF20,
         OP_B_T3a        = 0xF000,
         OP_B_T4a        = 0xF000,
+        OP_BL_T4a       = 0xF000,
         OP_AND_imm_T1   = 0xF000,
         OP_TST_imm      = 0xF010,
         OP_BIC_imm_T1   = 0xF020,
@@ -697,6 +708,7 @@
         OP_DMB_ISHST_T1b = 0x8F5A,
         OP_B_T3b         = 0x8000,
         OP_B_T4b         = 0x9000,
+        OP_BL_T4b        = 0xD000,
     } OpcodeID2;
 
     struct FourFours {
@@ -719,6 +731,11 @@
         } m_u;
     };
 
+    enum class BranchWithLink : bool {
+        No = false,
+        Yes = true
+    };
+
     class ARMInstructionFormatter;
 
     // false means else!
@@ -921,6 +938,13 @@
         m_formatter.twoWordOp16Op16(OP_B_T4a, OP_B_T4b);
         return m_formatter.label();
     }
+
+    // Only allowed in IT (if then) block if last instruction.
+    ALWAYS_INLINE AssemblerLabel bl()
+    {
+        m_formatter.twoWordOp16Op16(OP_BL_T4a, OP_BL_T4b);
+        return m_formatter.label();
+    }
     
     // Only allowed in IT (if then) block if last instruction.
     ALWAYS_INLINE AssemblerLabel blx(RegisterID rm)
@@ -2273,7 +2297,7 @@
             linkJumpT3<copy>(record.condition(), reinterpret_cast_ptr<uint16_t*>(from), fromInstruction, to);
             break;
         case LinkJumpT4:
-            linkJumpT4<copy>(reinterpret_cast_ptr<uint16_t*>(from), fromInstruction, to);
+            linkJumpT4<copy>(reinterpret_cast_ptr<uint16_t*>(from), fromInstruction, to, BranchWithLink::No);
             break;
         case LinkConditionalJumpT4:
             linkConditionalJumpT4<copy>(record.condition(), reinterpret_cast_ptr<uint16_t*>(from), fromInstruction, to);
@@ -2321,12 +2345,20 @@
         linkJumpAbsolute(location, location, to);
     }
 
+    static void linkTailCall(void* code, AssemblerLabel from, void* to)
+    {
+        ASSERT(from.isSet());
+
+        uint16_t* location = reinterpret_cast<uint16_t*>(reinterpret_cast<intptr_t>(code) + from.offset());
+        linkBranch(location, location, makeEven(to), BranchWithLink::No);
+    }
+
     static void linkCall(void* code, AssemblerLabel from, void* to)
     {
-        ASSERT(!(reinterpret_cast<intptr_t>(code) & 1));
         ASSERT(from.isSet());
 
-        setPointer(reinterpret_cast<uint16_t*>(reinterpret_cast<intptr_t>(code) + from.offset()) - 1, to, false);
+        uint16_t* location = reinterpret_cast<uint16_t*>(reinterpret_cast<intptr_t>(code) + from.offset());
+        linkBranch(location, location, makeEven(to), BranchWithLink::Yes);
     }
 
     static void linkPointer(void* code, AssemblerLabel where, void* value)
@@ -2350,11 +2382,50 @@
 
     static void relinkCall(void* from, void* to)
     {
-        ASSERT(!(reinterpret_cast<intptr_t>(from) & 1));
+        ASSERT(isEven(from));
 
-        setPointer(reinterpret_cast<uint16_t*>(from) - 1, to, true);
+        uint16_t* location = reinterpret_cast<uint16_t*>(from);
+        if (isBL(location - 2)) {
+            linkBranch(location, location, makeEven(to), BranchWithLink::Yes);
+            cacheFlush(location - 2, 2 * sizeof(uint16_t));
+            return;
+        }
+
+        setPointer(location - 1, to, true);
     }
-    
+
+    static void relinkTailCall(void* from, void* to)
+    {
+        ASSERT(isEven(from));
+
+        uint16_t* location = reinterpret_cast<uint16_t*>(from);
+        linkBranch(location, location, to, BranchWithLink::No);
+        cacheFlush(location - 2, 2 * sizeof(uint16_t));
+    }
+
+#if ENABLE(JUMP_ISLANDS)
+    static void* prepareForAtomicRelinkJumpConcurrently(void* from, void* to)
+    {
+        ASSERT(isEven(from));
+        ASSERT(isEven(to));
+
+        intptr_t offset = bitwise_cast<intptr_t>(to) - bitwise_cast<intptr_t>(from);
+        ASSERT(static_cast<int>(offset) == offset);
+
+        if (isInt<25>(offset))
+            return to;
+
+        return ExecutableAllocator::singleton().getJumpIslandToConcurrently(from, to);
+    }
+
+    static void* prepareForAtomicRelinkCallConcurrently(void* from, void* to)
+    {
+        ASSERT(isEven(from));
+
+        return prepareForAtomicRelinkJumpConcurrently(from, makeEven(to));
+    }
+#endif
+
     static void* readCallTarget(void* from)
     {
         return readPointer(reinterpret_cast<uint16_t*>(from) - 1);
@@ -2367,27 +2438,6 @@
         setInt32(where, value, true);
     }
     
-    static void repatchCompact(void* where, int32_t offset)
-    {
-        ASSERT(offset >= -255 && offset <= 255);
-
-        bool add = true;
-        if (offset < 0) {
-            add = false;
-            offset = -offset;
-        }
-        
-        offset |= (add << 9);
-        offset |= (1 << 10);
-        offset |= (1 << 11);
-
-        uint16_t* location = reinterpret_cast<uint16_t*>(where);
-        uint16_t instruction = location[1] & ~((1 << 12) - 1);
-        instruction |= offset;
-        performJITMemcpy(location + 1, &instruction, sizeof(uint16_t));
-        cacheFlush(location, sizeof(uint16_t) * 2);
-    }
-
     static void repatchPointer(void* where, void* value)
     {
         ASSERT(!(reinterpret_cast<intptr_t>(where) & 1));
@@ -2408,7 +2458,7 @@
 #if OS(LINUX)
         if (canBeJumpT4(reinterpret_cast<uint16_t*>(instructionStart), to)) {
             uint16_t* ptr = reinterpret_cast<uint16_t*>(instructionStart) + 2;
-            linkJumpT4(ptr, ptr, to);
+            linkJumpT4(ptr, ptr, to, BranchWithLink::No);
             cacheFlush(ptr - 2, sizeof(uint16_t) * 2);
         } else {
             uint16_t* ptr = reinterpret_cast<uint16_t*>(instructionStart) + 5;
@@ -2417,7 +2467,7 @@
         }
 #else
         uint16_t* ptr = reinterpret_cast<uint16_t*>(instructionStart) + 2;
-        linkJumpT4(ptr, ptr, to);
+        linkJumpT4(ptr, ptr, to, BranchWithLink::No);
         cacheFlush(ptr - 2, sizeof(uint16_t) * 2);
 #endif
     }
@@ -2528,10 +2578,18 @@
 #endif
     }
 
+    static ALWAYS_INLINE bool canEmitJump(void* from, void* to)
+    {
+        // 'from' holds the address of the branch instruction. The branch range however is relative
+        // to the architectural value of the PC which is 4 larger than the address of the branch.
+        intptr_t offset = bitwise_cast<intptr_t>(to) - (bitwise_cast<intptr_t>(from) + 4);
+        return isInt<25>(offset);
+    }
+
 private:
     // VFP operations commonly take one or more 5-bit operands, typically representing a
-    // floating point register number.  This will commonly be encoded in the instruction
-    // in two parts, with one single bit field, and one 4-bit field.  In the case of
+    // floating point register number. This will commonly be encoded in the instruction
+    // in two parts, with one single bit field, and one 4-bit field. In the case of
     // double precision operands the high bit of the register number will be encoded
     // separately, and for single precision operands the high bit of the register number
     // will be encoded individually.
@@ -2654,6 +2712,12 @@
         return ((instruction[0] & 0xf800) == OP_B_T4a) && ((instruction[1] & 0xd000) == OP_B_T4b);
     }
 
+    static bool isBL(const void* address)
+    {
+        const uint16_t* instruction = static_cast<const uint16_t*>(address);
+        return ((instruction[0] & 0xf800) == OP_BL_T4a) && ((instruction[1] & 0xd000) == OP_BL_T4b);
+    }
+
     static bool isBX(const void* address)
     {
         const uint16_t* instruction = static_cast<const uint16_t*>(address);
@@ -2787,7 +2851,7 @@
     }
     
     template<CopyFunction copy = performJITMemcpy>
-    static void linkJumpT4(uint16_t* writeTarget, const uint16_t* instruction, void* target)
+    static void linkJumpT4(uint16_t* writeTarget, const uint16_t* instruction, void* target, BranchWithLink link)
     {
         // FIMXE: this should be up in the MacroAssembler layer. :-(        
         ASSERT(!(reinterpret_cast<intptr_t>(instruction) & 1));
@@ -2803,7 +2867,7 @@
         ASSERT(!(relative & 1));
         uint16_t instructions[2];
         instructions[0] = OP_B_T4a | ((relative & 0x1000000) >> 14) | ((relative & 0x3ff000) >> 12);
-        instructions[1] = OP_B_T4b | ((relative & 0x800000) >> 10) | ((relative & 0x400000) >> 11) | ((relative & 0xffe) >> 1);
+        instructions[1] = OP_B_T4b | (static_cast<uint16_t>(link) << 14) | ((relative & 0x800000) >> 10) | ((relative & 0x400000) >> 11) | ((relative & 0xffe) >> 1);
         copy(writeTarget - 2, instructions, 2 * sizeof(uint16_t));
     }
 
@@ -2816,7 +2880,7 @@
         
         uint16_t newInstruction = ifThenElse(cond) | OP_IT;
         copy(writeTarget - 3, &newInstruction, sizeof(uint16_t));
-        linkJumpT4<copy>(writeTarget, instruction, target);
+        linkJumpT4<copy>(writeTarget, instruction, target, BranchWithLink::No);
     }
 
     template<CopyFunction copy = performJITMemcpy>
@@ -2872,7 +2936,7 @@
             instructions[1] = OP_NOP_T2a;
             instructions[2] = OP_NOP_T2b;
             performJITMemcpy(writeTarget - 5, instructions, 3 * sizeof(uint16_t));
-            linkJumpT4(writeTarget, instruction, target);
+            linkJumpT4(writeTarget, instruction, target, BranchWithLink::No);
         } else {
             const uint16_t JUMP_TEMPORARY_REGISTER = ARMRegisters::ip;
             ARMThumbImmediate lo16 = ARMThumbImmediate::makeUInt16(static_cast<uint16_t>(reinterpret_cast<uint32_t>(target) + 1));
@@ -2887,7 +2951,26 @@
             performJITMemcpy(writeTarget - 5, instructions, 5 * sizeof(uint16_t));
         }
     }
-    
+
+    static void linkBranch(uint16_t* from, const uint16_t* fromInstruction, void* to, BranchWithLink link)
+    {
+        ASSERT(isEven(fromInstruction));
+        ASSERT(isEven(from));
+        ASSERT(isEven(to));
+        ASSERT(link == BranchWithLink::Yes ? isBL(from - 2) : isB(from - 2));
+
+        intptr_t offset = bitwise_cast<intptr_t>(to) - bitwise_cast<intptr_t>(fromInstruction);
+#if ENABLE(JUMP_ISLANDS)
+        if (!isInt<25>(offset)) {
+            to = ExecutableAllocator::singleton().getJumpIslandTo(bitwise_cast<void*>(fromInstruction), to);
+            offset = bitwise_cast<intptr_t>(to) - bitwise_cast<intptr_t>(fromInstruction);
+        }
+#endif
+        RELEASE_ASSERT(isInt<25>(offset));
+
+        linkJumpT4(from, fromInstruction, to, link);
+    }
+
     static uint16_t twoWordOp5i6Imm4Reg4EncodedImmFirst(uint16_t op, ARMThumbImmediate imm)
     {
         return op | (imm.m_value.i << 10) | imm.m_value.imm4;

Modified: trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h (292539 => 292540)


--- trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -545,8 +545,8 @@
             Linkable = 0x1,
             Near = 0x2,
             Tail = 0x4,
-            LinkableNear = 0x3,
-            LinkableNearTail = 0x7,
+            LinkableNear = Linkable | Near,
+            LinkableNearTail = Linkable | Near | Tail,
         };
 
         Call()
@@ -904,7 +904,7 @@
     {
         switch (nearCall.callMode()) {
         case NearCallMode::Tail:
-            AssemblerType::relinkJump(nearCall.dataLocation(), destination.dataLocation());
+            AssemblerType::relinkTailCall(nearCall.dataLocation(), destination.dataLocation());
             return;
         case NearCallMode::Regular:
             AssemblerType::relinkCall(nearCall.dataLocation(), destination.untaggedExecutableAddress());
@@ -930,12 +930,6 @@
     }
 
     template<PtrTag tag>
-    static void repatchCompact(CodeLocationDataLabelCompact<tag> dataLabelCompact, int32_t value)
-    {
-        AssemblerType::repatchCompact(dataLabelCompact.template dataLocation(), value);
-    }
-
-    template<PtrTag tag>
     static void repatchInt32(CodeLocationDataLabel32<tag> dataLabel32, int32_t value)
     {
         AssemblerType::repatchInt32(dataLabel32.dataLocation(), value);

Modified: trunk/Source/JavaScriptCore/assembler/AssemblerCommon.h (292539 => 292540)


--- trunk/Source/JavaScriptCore/assembler/AssemblerCommon.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/assembler/AssemblerCommon.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -45,6 +45,14 @@
 #endif
 }
 
+template<size_t bits, typename Type>
+ALWAYS_INLINE constexpr bool isInt(Type t)
+{
+    constexpr size_t shift = sizeof(Type) * CHAR_BIT - bits;
+    static_assert(sizeof(Type) * CHAR_BIT > shift, "shift is larger than the size of the value");
+    return ((t << shift) >> shift) == t;
+}
+
 ALWAYS_INLINE bool isInt9(int32_t value)
 {
     return value == ((value << 23) >> 23);

Modified: trunk/Source/JavaScriptCore/assembler/MIPSAssembler.h (292539 => 292540)


--- trunk/Source/JavaScriptCore/assembler/MIPSAssembler.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/assembler/MIPSAssembler.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -845,6 +845,11 @@
         cacheFlush(start, size);
     }
 
+    static void relinkTailCall(void* from, void* to)
+    {
+        relinkJump(from, to);
+    }
+
     static void repatchInt32(void* from, int32_t to)
     {
         MIPSWord* insn = reinterpret_cast<MIPSWord*>(from);
@@ -867,11 +872,6 @@
         return result;
     }
     
-    static void repatchCompact(void* where, int32_t value)
-    {
-        repatchInt32(where, value);
-    }
-
     static void repatchPointer(void* from, void* to)
     {
         repatchInt32(from, reinterpret_cast<int32_t>(to));

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h (292539 => 292540)


--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -41,7 +41,9 @@
 public:
     static constexpr unsigned numGPRs = 32;
     static constexpr unsigned numFPRs = 32;
-    
+
+    static constexpr size_t nearJumpRange = 128 * MB;
+
     static constexpr RegisterID dataTempRegister = ARM64Registers::ip0;
     static constexpr RegisterID memoryTempRegister = ARM64Registers::ip1;
 

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h (292539 => 292540)


--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -38,6 +38,10 @@
 using Assembler = TARGET_ASSEMBLER;
 
 class MacroAssemblerARMv7 : public AbstractMacroAssembler<Assembler> {
+public:
+    static constexpr size_t nearJumpRange = 16 * MB;
+
+private:
     static constexpr RegisterID dataTempRegister = ARMRegisters::ip;
     static constexpr RegisterID addressTempRegister = ARMRegisters::r6;
 
@@ -2237,16 +2241,14 @@
 
     ALWAYS_INLINE Call nearCall()
     {
-        moveFixedWidthEncoding(TrustedImm32(0), dataTempRegister);
         invalidateAllTempRegisters();
-        return Call(m_assembler.blx(dataTempRegister), Call::LinkableNear);
+        return Call(m_assembler.bl(), Call::LinkableNear);
     }
 
     ALWAYS_INLINE Call nearTailCall()
     {
-        moveFixedWidthEncoding(TrustedImm32(0), dataTempRegister);
         invalidateAllTempRegisters();
-        return Call(m_assembler.bx(dataTempRegister), Call::LinkableNearTail);
+        return Call(m_assembler.b(), Call::LinkableNearTail);
     }
 
     ALWAYS_INLINE Call call(PtrTag)
@@ -2654,10 +2656,12 @@
     template<PtrTag tag>
     static void linkCall(void* code, Call call, FunctionPtr<tag> function)
     {
-        if (call.isFlagSet(Call::Tail))
-            ARMv7Assembler::linkJump(code, call.m_label, function.executableAddress());
+        if (!call.isFlagSet(Call::Near))
+            Assembler::linkPointer(code, call.m_label.labelAtOffset(-2), function.executableAddress());
+        else if (call.isFlagSet(Call::Tail))
+            Assembler::linkTailCall(code, call.m_label, function.executableAddress());
         else
-            ARMv7Assembler::linkCall(code, call.m_label, function.executableAddress());
+            Assembler::linkCall(code, call.m_label, function.executableAddress());
     }
 
     bool m_makeJumpPatchable;

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerMIPS.h (292539 => 292540)


--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerMIPS.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerMIPS.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -41,6 +41,8 @@
     static constexpr unsigned numGPRs = 32;
     static constexpr unsigned numFPRs = 32;
 
+    static constexpr size_t nearJumpRange = 2 * GB;
+
     MacroAssemblerMIPS()
         : m_fixedWidth(false)
     {

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerRISCV64.h (292539 => 292540)


--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerRISCV64.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerRISCV64.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -44,6 +44,8 @@
     static constexpr unsigned numGPRs = 32;
     static constexpr unsigned numFPRs = 32;
 
+    static constexpr size_t nearJumpRange = 2 * GB;
+
     static constexpr RegisterID dataTempRegister = RISCV64Registers::x30;
     static constexpr RegisterID memoryTempRegister = RISCV64Registers::x31;
 

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h (292539 => 292540)


--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -37,6 +37,8 @@
 
 class MacroAssemblerX86Common : public AbstractMacroAssembler<Assembler> {
 public:
+    static constexpr size_t nearJumpRange = 2 * GB;
+
 #if CPU(X86_64)
     // Use this directly only if you're not generating code with it.
     static constexpr X86Registers::RegisterID s_scratchRegister = X86Registers::r11;
@@ -1235,13 +1237,6 @@
         load16(address, dest);
     }
 
-    template<PtrTag tag>
-    static void repatchCompact(CodeLocationDataLabelCompact<tag> dataLabelCompact, int32_t value)
-    {
-        ASSERT(isCompactPtrAlignedAddressOffset(value));
-        AssemblerType_T::repatchCompact(dataLabelCompact.dataLocation(), value);
-    }
-    
     DataLabelCompact loadCompactWithAddressOffsetPatch(Address address, RegisterID dest)
     {
         padBeforePatch();

Modified: trunk/Source/JavaScriptCore/assembler/RISCV64Assembler.h (292539 => 292540)


--- trunk/Source/JavaScriptCore/assembler/RISCV64Assembler.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/assembler/RISCV64Assembler.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -1642,6 +1642,11 @@
         cacheFlush(location, sizeof(uint32_t) * 2);
     }
 
+    static void relinkTailCall(void* from, void* to)
+    {
+        relinkJump(from, to);
+    }
+
     static void replaceWithVMHalt(void* where)
     {
         uint32_t* location = reinterpret_cast<uint32_t*>(where);

Modified: trunk/Source/JavaScriptCore/assembler/X86Assembler.h (292539 => 292540)


--- trunk/Source/JavaScriptCore/assembler/X86Assembler.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/assembler/X86Assembler.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -3798,12 +3798,10 @@
     {
         setRel32(from, to);
     }
-    
-    static void repatchCompact(void* where, int32_t value)
+
+    static void relinkTailCall(void* from, void* to)
     {
-        ASSERT(value >= std::numeric_limits<int8_t>::min());
-        ASSERT(value <= std::numeric_limits<int8_t>::max());
-        setInt8(where, value);
+        relinkJump(from, to);
     }
 
     static void repatchInt32(void* where, int32_t value)

Modified: trunk/Source/JavaScriptCore/bytecode/Repatch.cpp (292539 => 292540)


--- trunk/Source/JavaScriptCore/bytecode/Repatch.cpp	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/bytecode/Repatch.cpp	2022-04-07 17:03:19 UTC (rev 292540)
@@ -1926,17 +1926,8 @@
     }
     
     RELEASE_ASSERT(callCases.size() == calls.size());
-    for (CallToCodePtr callToCodePtr : calls) {
-#if CPU(ARM_THUMB2)
-        // Tail call special-casing ensures proper linking on ARM Thumb2, where a tail call jumps to an address
-        // with a non-decorated bottom bit but a normal call calls an address with a decorated bottom bit.
-        bool isTailCall = callToCodePtr.call.isFlagSet(CCallHelpers::Call::Tail);
-        void* target = isTailCall ? callToCodePtr.codePtr.dataLocation() : callToCodePtr.codePtr.executableAddress();
-        patchBuffer.link(callToCodePtr.call, FunctionPtr<JSEntryPtrTag>(MacroAssemblerCodePtr<JSEntryPtrTag>::createFromExecutableAddress(target)));
-#else
+    for (CallToCodePtr callToCodePtr : calls)
         patchBuffer.link(callToCodePtr.call, FunctionPtr<JSEntryPtrTag>(callToCodePtr.codePtr));
-#endif
-    }
 
     if (!done.empty()) {
         ASSERT(!isDataIC);

Modified: trunk/Source/JavaScriptCore/jit/ExecutableAllocator.cpp (292539 => 292540)


--- trunk/Source/JavaScriptCore/jit/ExecutableAllocator.cpp	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/JavaScriptCore/jit/ExecutableAllocator.cpp	2022-04-07 17:03:19 UTC (rev 292540)
@@ -33,6 +33,7 @@
 #include "LinkBuffer.h"
 #include <wtf/FastBitVector.h>
 #include <wtf/FileSystem.h>
+#include <wtf/FixedVector.h>
 #include <wtf/IterationStatus.h>
 #include <wtf/PageReservation.h>
 #include <wtf/ProcessID.h>
@@ -94,16 +95,15 @@
 #elif CPU(ARM64)
 #if ENABLE(JUMP_ISLANDS)
 static constexpr size_t fixedExecutableMemoryPoolSize = 512 * MB;
-// These sizes guarantee that any jump within an island can jump forwards or backwards
-// to the adjacent island in a single instruction.
-static constexpr size_t regionSize = 112 * MB;
-static constexpr size_t islandRegionSize = 16 * MB;
-static constexpr size_t maxNumberOfRegions = fixedExecutableMemoryPoolSize / regionSize;
-static constexpr size_t islandSizeInBytes = 4;
-static constexpr size_t maxIslandsPerRegion = islandRegionSize / islandSizeInBytes;
 #else
 static constexpr size_t fixedExecutableMemoryPoolSize = 128 * MB;
 #endif
+#elif CPU(ARM_THUMB2)
+#if ENABLE(JUMP_ISLANDS)
+static constexpr size_t fixedExecutableMemoryPoolSize = 32 * MB;
+#else
+static constexpr size_t fixedExecutableMemoryPoolSize = 16 * MB;
+#endif
 #elif CPU(X86_64)
 static constexpr size_t fixedExecutableMemoryPoolSize = 1 * GB;
 #else
@@ -110,6 +110,21 @@
 static constexpr size_t fixedExecutableMemoryPoolSize = 32 * MB;
 #endif
 
+#if ENABLE(JUMP_ISLANDS)
+#if CPU(ARM64)
+static constexpr double islandRegionSizeFraction = 0.125;
+static constexpr size_t islandSizeInBytes = 4;
+#elif CPU(ARM_THUMB2)
+static constexpr double islandRegionSizeFraction = 0.05;
+static constexpr size_t islandSizeInBytes = 4;
+#endif
+#endif
+
+// Quick sanity check, in case FIXED_EXECUTABLE_MEMORY_POOL_SIZE_IN_MB was set.
+#if !ENABLE(JUMP_ISLANDS)
+static_assert(fixedExecutableMemoryPoolSize <= MacroAssembler::nearJumpRange, "Executable pool size is too large for near jump/call without JUMP_ISLANDS");
+#endif
+
 #if CPU(ARM)
 static constexpr double executablePoolReservationFraction = 0.15;
 #else
@@ -343,20 +358,13 @@
         if (reservation.size * executablePoolReservationFraction < minimumExecutablePoolReservationSize)
             reservation.size += minimumExecutablePoolReservationSize;
 #endif
-
-#if ENABLE(JUMP_ISLANDS)
-        // If asked for a reservation smaller than island size, assume that we want that size allocation
-        // plus an island. The alternative would be to turn off jump islands, but since we only use
-        // this for testing, this is probably the easier way to do it.
-        //
-        // The main reason for this is that some JSC stress tests run with a 50KB pool. This hack means
-        // we don't have to change anything about those tests.
-        if (reservation.size < islandRegionSize)
-            reservation.size += islandRegionSize;
-#endif // ENABLE(JUMP_ISLANDS)
     }
     reservation.size = std::max(roundUpToMultipleOf(pageSize(), reservation.size), pageSize() * 2);
 
+#if !ENABLE(JUMP_ISLANDS)
+    RELEASE_ASSERT(reservation.size <= MacroAssembler::nearJumpRange, "Executable pool size is too large for near jump/call without JUMP_ISLANDS");
+#endif
+
 #if USE(LIBPAS_JIT_HEAP)
     if (reservation.size < minimumPoolSizeForSegregatedHeap)
         jit_heap_runtime_config.max_segregated_object_size = 0;
@@ -430,10 +438,7 @@
 
 public:
     FixedVMPoolExecutableAllocator()
-#if ENABLE(JUMP_ISLANDS)
-        : m_allocators(constructFixedSizeArrayWithArguments<RegionAllocator, maxNumberOfRegions>(*this))
-        , m_numAllocators(maxNumberOfRegions)
-#else
+#if !ENABLE(JUMP_ISLANDS)
         : m_allocator(*this)
 #endif
     {
@@ -441,31 +446,28 @@
         m_reservation = WTFMove(reservation.pageReservation);
         if (m_reservation) {
 #if ENABLE(JUMP_ISLANDS)
+            // These sizes guarantee that any jump within an island can jump forwards or backwards
+            // to the adjacent island in a single instruction.
+            const size_t islandRegionSize = roundUpToMultipleOf(pageSize(), static_cast<size_t>(MacroAssembler::nearJumpRange * islandRegionSizeFraction));
+            m_regionSize = MacroAssembler::nearJumpRange - islandRegionSize;
+            RELEASE_ASSERT(isPageAligned(islandRegionSize));
+            RELEASE_ASSERT(isPageAligned(m_regionSize));
+            const unsigned numAllocators = (reservation.size + m_regionSize - 1) / m_regionSize;
+            m_allocators = FixedVector<RegionAllocator>::createWithSizeAndConstructorArguments(numAllocators, *this);
+
             uintptr_t start = bitwise_cast<uintptr_t>(memoryStart());
             uintptr_t reservationEnd = bitwise_cast<uintptr_t>(memoryEnd());
-            for (size_t i = 0; i < maxNumberOfRegions; ++i) {
-                RELEASE_ASSERT(start < reservationEnd || Options::jitMemoryReservationSize());
-                if (start >= reservationEnd) {
-                    m_numAllocators = i;
-                    break;
-                }
-                m_allocators[i].m_start = tagCodePtr<ExecutableMemoryPtrTag>(bitwise_cast<void*>(start));
-                m_allocators[i].m_end = tagCodePtr<ExecutableMemoryPtrTag>(bitwise_cast<void*>(start + regionSize));
-                if (m_allocators[i].end() > reservationEnd) {
-                    // We may have taken a page for the executable only copy thunk.
-                    RELEASE_ASSERT(i == maxNumberOfRegions - 1 || Options::jitMemoryReservationSize());
-                    m_allocators[i].m_end = tagCodePtr<ExecutableMemoryPtrTag>(bitwise_cast<void*>(reservationEnd));
-                }
-
-                size_t sizeInBytes = m_allocators[i].allocatorSize();
-                m_allocators[i].addFreshFreeSpace(bitwise_cast<void*>(m_allocators[i].start()), sizeInBytes);
-                m_bytesReserved += sizeInBytes;
-
-                RELEASE_ASSERT(m_allocators[i].allocatorSize() < regionSize);
-                RELEASE_ASSERT(m_allocators[i].islandBegin() > m_allocators[i].start());
-                RELEASE_ASSERT(m_allocators[i].islandBegin() < m_allocators[i].end());
-
-                start += regionSize;
+            for (size_t i = 0; i < numAllocators; ++i) {
+                uintptr_t end = start + m_regionSize;
+                uintptr_t islandBegin = end - islandRegionSize;
+                // The island in the very last region is never actually used (everything goes backwards), but we
+                // can't put code there in case they do need to use a backward jump island, so set up accordingly.
+                if (i == numAllocators - 1)
+                    islandBegin = end = std::min(islandBegin, reservationEnd);
+                RELEASE_ASSERT(end <= reservationEnd);
+                m_allocators[i].configure(start, islandBegin, end);
+                m_bytesReserved += m_allocators[i].allocatorSize();
+                start += m_regionSize;
             }
 #else
             m_allocator.addFreshFreeSpace(reservation.base, reservation.size);
@@ -495,7 +497,7 @@
 
         unsigned start = 0;
         if (Options::useRandomizingExecutableIslandAllocation())
-            start = cryptographicallyRandomNumber() % m_numAllocators;
+            start = cryptographicallyRandomNumber() % m_allocators.size();
 
         unsigned i = start;
         while (true) {
@@ -502,7 +504,7 @@
             RegionAllocator& allocator = m_allocators[i];
             if (RefPtr<ExecutableMemoryHandle> result = allocator.allocate(locker, sizeInBytes))
                 return result;
-            i = (i + 1) % m_numAllocators;
+            i = (i + 1) % m_allocators.size();
             if (i == start)
                 break;
         }
@@ -700,7 +702,7 @@
             m_islandsForJumpSourceLocation.insert(islands);
         }
 
-        RegionAllocator* allocator = findRegion(jumpLocation > target ? jumpLocation - regionSize : jumpLocation);
+        RegionAllocator* allocator = findRegion(jumpLocation > target ? jumpLocation - m_regionSize : jumpLocation);
         RELEASE_ASSERT(allocator);
         void* result = allocator->allocateIsland();
         void* currentIsland = result;
@@ -709,10 +711,10 @@
             islands->jumpIslands.append(CodeLocationLabel<ExecutableMemoryPtrTag>(tagCodePtr<ExecutableMemoryPtrTag>(currentIsland)));
 
             auto emitJumpTo = [&] (void* target) {
-                RELEASE_ASSERT(ARM64Assembler::canEmitJump(bitwise_cast<void*>(jumpLocation), target));
+                RELEASE_ASSERT(Assembler::canEmitJump(bitwise_cast<void*>(jumpLocation), target));
 
                 MacroAssembler jit;
-                auto jump = jit.jump();
+                auto nearTailCall = jit.nearTailCall();
                 LinkBuffer linkBuffer(jit, MacroAssemblerCodePtr<NoPtrTag>(currentIsland), islandSizeInBytes, LinkBuffer::Profile::JumpIsland, JITCompilationMustSucceed, false);
                 RELEASE_ASSERT(linkBuffer.isValid());
 
@@ -724,11 +726,11 @@
                 // has a jump linked to this island hasn't finalized yet, they're guaranteed to finalize there code and run an isb.
                 linkBuffer.setIsJumpIsland();
 
-                linkBuffer.link(jump, CodeLocationLabel<NoPtrTag>(target));
+                linkBuffer.link(nearTailCall, CodeLocationLabel<NoPtrTag>(target));
                 FINALIZE_CODE(linkBuffer, NoPtrTag, "Jump Island: %lu", jumpLocation);
             };
 
-            if (ARM64Assembler::canEmitJump(bitwise_cast<void*>(jumpLocation), bitwise_cast<void*>(target))) {
+            if (Assembler::canEmitJump(bitwise_cast<void*>(jumpLocation), bitwise_cast<void*>(target))) {
                 emitJumpTo(bitwise_cast<void*>(target));
                 break;
             }
@@ -735,9 +737,9 @@
 
             uintptr_t nextIslandRegion;
             if (jumpLocation > target)
-                nextIslandRegion = jumpLocation - regionSize;
+                nextIslandRegion = jumpLocation - m_regionSize;
             else
-                nextIslandRegion = jumpLocation + regionSize;
+                nextIslandRegion = jumpLocation + m_regionSize;
 
             RegionAllocator* allocator = findRegion(nextIslandRegion);
             RELEASE_ASSERT(allocator);
@@ -821,20 +823,30 @@
         RegionAllocator(FixedVMPoolExecutableAllocator& allocator)
             : Base(allocator)
         {
+            RELEASE_ASSERT(!(pageSize() % islandSizeInBytes), "Current implementation relies on this");
         }
 
+        void configure(uintptr_t start, uintptr_t islandBegin, uintptr_t end)
+        {
+            RELEASE_ASSERT(start < islandBegin);
+            RELEASE_ASSERT(islandBegin <= end);
+            m_start = tagCodePtr<ExecutableMemoryPtrTag>(bitwise_cast<void*>(start));
+            m_islandBegin = tagCodePtr<ExecutableMemoryPtrTag>(bitwise_cast<void*>(islandBegin));
+            m_end = tagCodePtr<ExecutableMemoryPtrTag>(bitwise_cast<void*>(end));
+            RELEASE_ASSERT(!((this->end() - this->start()) % pageSize()));
+            RELEASE_ASSERT(!((this->end() - this->islandBegin()) % pageSize()));
+            addFreshFreeSpace(bitwise_cast<void*>(this->start()), allocatorSize());
+        }
+
         //  ------------------------------------
         //  | jit allocations -->   <-- islands |
         //  -------------------------------------
 
         uintptr_t start() { return bitwise_cast<uintptr_t>(untagCodePtr<ExecutableMemoryPtrTag>(m_start)); }
+        uintptr_t islandBegin() { return bitwise_cast<uintptr_t>(untagCodePtr<ExecutableMemoryPtrTag>(m_islandBegin)); }
         uintptr_t end() { return bitwise_cast<uintptr_t>(untagCodePtr<ExecutableMemoryPtrTag>(m_end)); }
 
-        uintptr_t islandBegin()
-        {
-            // [start, allocatorEnd)
-            return end() - islandRegionSize;
-        }
+        size_t maxIslandsInThisRegion() { return (end() - islandBegin()) / islandSizeInBytes; }
 
         uintptr_t allocatorSize()
         {
@@ -872,13 +884,19 @@
             if (void* result = findResult())
                 return result;
 
-            islandBits.resize(islandBits.size() + islandsPerPage());
-            if (UNLIKELY(islandBits.size() > maxIslandsPerRegion))
+            const size_t oldSize = islandBits.size();
+            const size_t maxIslandsInThisRegion = this->maxIslandsInThisRegion();
+
+            RELEASE_ASSERT(oldSize <= maxIslandsInThisRegion);
+            if (UNLIKELY(oldSize == maxIslandsInThisRegion))
                 crashOnJumpIslandExhaustion();
 
-            uintptr_t pageBegin = end - (islandBits.size() * islandSizeInBytes); // [islandBegin, end)
-            m_fixedAllocator.m_reservation.commit(bitwise_cast<void*>(pageBegin), pageSize());
+            const size_t newSize = std::min(oldSize + islandsPerPage(), maxIslandsInThisRegion);
+            islandBits.resize(newSize);
 
+            uintptr_t islandsBegin = end - (newSize * islandSizeInBytes); // [islandsBegin, end)
+            m_fixedAllocator.m_reservation.commit(bitwise_cast<void*>(islandsBegin), (newSize - oldSize) * islandSizeInBytes);
+
             void* result = findResult();
             RELEASE_ASSERT(result);
             return result;
@@ -916,8 +934,10 @@
             return false;
         }
 
+    private:
         // Range: [start, end)
         void* m_start;
+        void* m_islandBegin;
         void* m_end;
         FastBitVector islandBits;
     };
@@ -955,8 +975,8 @@
     Lock m_lock;
     PageReservation m_reservation;
 #if ENABLE(JUMP_ISLANDS)
-    std::array<RegionAllocator, maxNumberOfRegions> m_allocators;
-    unsigned m_numAllocators;
+    size_t m_regionSize;
+    FixedVector<RegionAllocator> m_allocators;
     RedBlackTree<Islands, void*> m_islandsForJumpSourceLocation;
 #else
     Allocator m_allocator;

Modified: trunk/Source/WTF/ChangeLog (292539 => 292540)


--- trunk/Source/WTF/ChangeLog	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/WTF/ChangeLog	2022-04-07 17:03:19 UTC (rev 292540)
@@ -1,3 +1,21 @@
+2022-04-07  Geza Lore  <gl...@igalia.com>
+
+        [JSC][ARMv7] Support proper near calls and JUMP_ISLANDS
+        https://bugs.webkit.org/show_bug.cgi?id=238143
+
+        Reviewed by Yusuke Suzuki.
+
+        Support constructor arguments for FixedVector element initialization.
+
+        * wtf/EmbeddedFixedVector.h:
+        * wtf/FixedVector.h:
+        (WTF::FixedVector::FixedVector):
+        * wtf/PlatformEnable.h:
+        * wtf/TrailingArray.h:
+        (WTF::TrailingArray::TrailingArray):
+        * wtf/Vector.h:
+        (WTF::VectorTypeOperations::initializeWithArgs):
+
 2022-04-06  Chris Dumez  <cdu...@apple.com>
 
         Start replacing String(const char*) constructor with a String::fromLatin1(const char*) function

Modified: trunk/Source/WTF/wtf/EmbeddedFixedVector.h (292539 => 292540)


--- trunk/Source/WTF/wtf/EmbeddedFixedVector.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/WTF/wtf/EmbeddedFixedVector.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -67,6 +67,12 @@
         return UniqueRef { *new (NotNull, fastMalloc(Base::allocationSize(size))) EmbeddedFixedVector(size, std::move_iterator { container.begin() }, std::move_iterator { container.end() }) };
     }
 
+    template<typename... Args>
+    static UniqueRef<EmbeddedFixedVector> createWithSizeAndConstructorArguments(unsigned size, Args&&... args)
+    {
+        return UniqueRef { *new (NotNull, fastMalloc(Base::allocationSize(size))) EmbeddedFixedVector(size, std::forward<Args>(args)...) };
+    }
+
     UniqueRef<EmbeddedFixedVector> clone() const
     {
         return create(Base::begin(), Base::end());
@@ -94,11 +100,18 @@
     {
     }
 
+
     template<typename InputIterator>
     EmbeddedFixedVector(unsigned size, InputIterator first, InputIterator last)
         : Base(size, first, last)
     {
     }
+
+    template<typename... Args>
+    explicit EmbeddedFixedVector(unsigned size, Args&&... args) // create with given size and constructor arguments for all elements
+        : Base(size, std::forward<Args>(args)...)
+    {
+    }
 };
 
 } // namespace WTF

Modified: trunk/Source/WTF/wtf/FixedVector.h (292539 => 292540)


--- trunk/Source/WTF/wtf/FixedVector.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/WTF/wtf/FixedVector.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -99,6 +99,18 @@
         return *this;
     }
 
+private:
+    FixedVector(std::unique_ptr<Storage>&& storage)
+        :  m_storage { WTFMove(storage) }
+    { }
+
+public:
+    template<typename... Args>
+    static FixedVector createWithSizeAndConstructorArguments(size_t size, Args&&... args)
+    {
+        return FixedVector<T> { size ? Storage::createWithSizeAndConstructorArguments(size, std::forward<Args>(args)...).moveToUniquePtr() : std::unique_ptr<Storage> { nullptr } };
+    }
+
     size_t size() const { return m_storage ? m_storage->size() : 0; }
     bool isEmpty() const { return m_storage ? m_storage->isEmpty() : true; }
     size_t byteSize() const { return m_storage ? m_storage->byteSize() : 0; }

Modified: trunk/Source/WTF/wtf/PlatformEnable.h (292539 => 292540)


--- trunk/Source/WTF/wtf/PlatformEnable.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/WTF/wtf/PlatformEnable.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -611,9 +611,11 @@
 #endif
 #endif
 
-#if !defined(ENABLE_JUMP_ISLANDS) && CPU(ARM64) && CPU(ADDRESS64) && ENABLE(JIT)
+#if !defined(ENABLE_JUMP_ISLANDS) && ENABLE(JIT)
+#if (CPU(ARM64) && CPU(ADDRESS64)) || CPU(ARM_THUMB2)
 #define ENABLE_JUMP_ISLANDS 1
 #endif
+#endif
 
 /* FIXME: This should be turned into an #error invariant */
 /* The FTL *does not* work on 32-bit platforms. Disable it even if someone asked us to enable it. */

Modified: trunk/Source/WTF/wtf/TrailingArray.h (292539 => 292540)


--- trunk/Source/WTF/wtf/TrailingArray.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/WTF/wtf/TrailingArray.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -54,6 +54,7 @@
     using reverse_iterator = std::reverse_iterator<iterator>;
     using const_reverse_iterator = std::reverse_iterator<const_iterator>;
 
+protected:
     explicit TrailingArray(unsigned size)
         : m_size(size)
     {
@@ -70,11 +71,20 @@
         std::uninitialized_copy(first, last, begin());
     }
 
+    template<typename... Args>
+    TrailingArray(unsigned size, Args&&... args) // create with given size and constructor arguments for all elements
+        : m_size(size)
+    {
+        static_assert(std::is_final_v<Derived>);
+        VectorTypeOperations<T>::initializeWithArgs(begin(), end(), std::forward<Args>(args)...);
+    }
+
     ~TrailingArray()
     {
         VectorTypeOperations<T>::destruct(begin(), end());
     }
 
+public:
     static constexpr size_t allocationSize(unsigned size)
     {
         return offsetOfData() + size * sizeof(T);

Modified: trunk/Source/WTF/wtf/Vector.h (292539 => 292540)


--- trunk/Source/WTF/wtf/Vector.h	2022-04-07 16:10:36 UTC (rev 292539)
+++ trunk/Source/WTF/wtf/Vector.h	2022-04-07 17:03:19 UTC (rev 292540)
@@ -97,6 +97,13 @@
     {
         initializeIfNonPOD(begin, end);
     }
+
+    template<typename... Args>
+    static void initializeWithArgs(T* begin, T* end, Args&&... args)
+    {
+        for (T *cur = begin; cur != end; ++cur)
+            new (NotNull, cur) T(args...);
+    }
 };
 
 template<typename T>
@@ -255,6 +262,12 @@
         VectorInitializer<VectorTraits<T>::needsInitialization, VectorTraits<T>::canInitializeWithMemset, T>::initialize(begin, end);
     }
 
+    template<typename ... Args>
+    static void initializeWithArgs(T* begin, T* end, Args&&... args)
+    {
+        VectorInitializer<VectorTraits<T>::needsInitialization, VectorTraits<T>::canInitializeWithMemset, T>::initializeWithArgs(begin, end, std::forward<Args>(args)...);
+    }
+
     static void move(T* src, T* srcEnd, T* dst)
     {
         VectorMover<VectorTraits<T>::canMoveWithMemcpy, T>::move(src, srcEnd, dst);