Title: [194316] trunk/Source/JavaScriptCore
Revision: 194316
Author: [email protected]
Date: 2015-12-19 13:04:55 -0800 (Sat, 19 Dec 2015)

Log Message

[JSC] Streamline Tmp indexing inside the register allocator
https://bugs.webkit.org/show_bug.cgi?id=152420

Patch by Benjamin Poulain <[email protected]> on 2015-12-19
Reviewed by Filip Pizlo.

AirIteratedRegisterCoalescing has been accumulating a bit of a mess over time.

When it started, every map addressed by Tmp used Tmp hashing.
That caused massive performance problems. Everything perf-sensitive was moved
to direct array addressing by the absolute Tmp index. This left the code
with half of the functions using Tmp and the other half using indices.
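
To make that concrete, here is a minimal sketch of the indexing idea (simplified types and an assumed register count; the real mapping is AbsoluteTmpMapper in AirTmpInlines.h): machine registers take the low absolute indices, ordinary Tmps follow, and per-Tmp state becomes a flat array lookup instead of a hash probe.

    #include <cassert>
    #include <vector>

    // Assumption for the sketch: 16 machine registers mapped to absolute
    // indices 1..16; index 0 is reserved so "0" can mean "no Tmp" in tables
    // such as m_coalescedTmps.
    constexpr unsigned lastMachineRegisterIndex = 16;

    struct Tmp {
        bool isReg;
        unsigned id; // register number if isReg, otherwise 0-based temporary number
    };

    // Dense, collision-free index: every Tmp of a given bank gets a unique slot.
    unsigned absoluteIndex(const Tmp& tmp)
    {
        return tmp.isReg ? 1 + tmp.id : lastMachineRegisterIndex + 1 + tmp.id;
    }

    int main()
    {
        // Per-Tmp data (here, interference degrees) addressed directly by
        // absolute index: O(1), no hashing, no Tmp equality comparisons.
        std::vector<unsigned> degrees(lastMachineRegisterIndex + 1 + 64, 0);
        Tmp tmp { false, 3 };
        degrees[absoluteIndex(tmp)]++;
        assert(degrees[absoluteIndex(tmp)] == 1);
        return 0;
    }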

With this patch, almost everything is moved to absolute indexing.
There are a few advantages to this:
-No more conversion churn for Floating Point registers.
-Most of the functions can now be shared between GP and FP.
-A bit of cleanup since the core algorithm only deals with integers now.

This patch also changes the index type to be a template argument.
That will allow future specialization of "m_interferenceEdges" based
on the expected problem size.
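
As a rough illustration of why the index type matters (a hedged sketch with made-up names, not the real HashSet<InterferenceEdge> code): the core packs an interference edge into one integer built from the two indices, and a specialization for a small index type could later swap the packing or the container without touching the algorithm.

    #include <cstdint>
    #include <unordered_set>
    #include <utility>

    // "EdgeSetSketch" is a hypothetical stand-in for the interference edge set.
    template<typename IndexType>
    class EdgeSetSketch {
    public:
        void addEdge(IndexType a, IndexType b)
        {
            if (a == b)
                return;
            m_edges.insert(pack(a, b));
        }

        bool contains(IndexType a, IndexType b) const
        {
            return m_edges.find(pack(a, b)) != m_edges.end();
        }

    private:
        static uint64_t pack(IndexType a, IndexType b)
        {
            // Same idea as InterferenceEdge: order the pair, then pack it.
            if (b < a)
                std::swap(a, b);
            return (static_cast<uint64_t>(a) << 32) | b;
        }

        // The part a specialization could replace (e.g. a denser structure
        // when indices are known to fit in 16 bits).
        std::unordered_set<uint64_t> m_edges;
    };

    int main()
    {
        EdgeSetSketch<unsigned> edges;
        edges.addEdge(2, 5);
        return edges.contains(5, 2) ? 0 : 1; // order-insensitive lookup
    }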

Finally, the code related to program modification (register assignment
and spilling) was moved to the wrapper "IteratedRegisterCoalescing".

The current split is (see the sketch after this list):
-AbstractColoringAllocator: common core. Share as much as possible between
 GP and FP.
-ColoringAllocator: the remaining parts of the algorithm, everything that
 is specific to GP or FP.
-IteratedRegisterCoalescing: the "iterated" part of the algorithm.
 Try to allocate and modify the code as needed.
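
A rough structural skeleton of that split, with placeholder types standing in for Code, UseCounts and Arg::Type (the real classes and driver live in AirIteratedRegisterCoalescing.cpp):

    #include <utility>
    #include <vector>

    // Type-agnostic core: works purely on integer indices.
    template<typename IndexType>
    class AbstractColoringAllocator {
    protected:
        void addEdge(IndexType a, IndexType b)
        {
            if (a != b)
                m_edges.push_back({ a, b });
        }
        std::vector<std::pair<IndexType, IndexType>> m_edges;
        std::vector<IndexType> m_spilledTmps;
    };

    enum class Bank { GP, FP }; // stand-in for Arg::Type in this sketch

    // Per-bank layer: everything that must know whether GP or FP registers are
    // being allocated (Tmp <-> index mapping, coalescable-move detection, ...).
    template<Bank bank>
    class ColoringAllocator : public AbstractColoringAllocator<unsigned> {
    public:
        bool requiresSpilling() const { return !m_spilledTmps.empty(); }
    };

    // The "iterated" driver: run an allocator; if it reports spills, rewrite
    // the program with spill/fill code and retry until everything is colorable.
    template<Bank bank>
    void iteratedRegisterCoalescingOnBank(/* Code& code, ... */)
    {
        while (true) {
            ColoringAllocator<bank> allocator;
            if (!allocator.requiresSpilling())
                return; // assign registers to the program and stop
            // otherwise: insert spill/fill code, mark new Tmps unspillable, retry
        }
    }

    int main()
    {
        iteratedRegisterCoalescingOnBank<Bank::GP>();
        return 0;
    }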

The long-term plan is:
-Move selectSpill() and the coloring loop to AbstractColoringAllocator.
-Specialize m_interferenceEdges to make it faster.

* b3/air/AirIteratedRegisterCoalescing.cpp:
* b3/air/AirTmpInlines.h:
(JSC::B3::Air::AbsoluteTmpMapper<Arg::GP>::lastMachineRegisterIndex):
(JSC::B3::Air::AbsoluteTmpMapper<Arg::FP>::lastMachineRegisterIndex):

Modified Paths

trunk/Source/JavaScriptCore/ChangeLog
trunk/Source/JavaScriptCore/b3/air/AirIteratedRegisterCoalescing.cpp
trunk/Source/JavaScriptCore/b3/air/AirTmpInlines.h

Diff

Modified: trunk/Source/JavaScriptCore/ChangeLog (194315 => 194316)


--- trunk/Source/JavaScriptCore/ChangeLog	2015-12-19 19:01:50 UTC (rev 194315)
+++ trunk/Source/JavaScriptCore/ChangeLog	2015-12-19 21:04:55 UTC (rev 194316)
@@ -1,5 +1,49 @@
 2015-12-19  Benjamin Poulain  <[email protected]>
 
+        [JSC] Streamline Tmp indexing inside the register allocator
+        https://bugs.webkit.org/show_bug.cgi?id=152420
+
+        Reviewed by Filip Pizlo.
+
+        AirIteratedRegisterCoalescing has been accumulating a bit of a mess over time.
+
+        When it started, every map addressed by Tmp used Tmp hashing.
+        That caused massive performance problems. Everything perf-sensitive was moved
+        to direct array addressing by the absolute Tmp index. This left the code
+        with half of the functions using Tmp and the other half using indices.
+
+        With this patch, almost everything is moved to absolute indexing.
+        There are a few advantages to this:
+        -No more conversion churn for Floating Point registers.
+        -Most of the functions can now be shared between GP and FP.
+        -A bit of cleanup since the core algorithm only deals with integers now.
+
+        This patch also changes the index type to be a template argument.
+        That will allow future specialization of "m_interferenceEdges" based
+        on the expected problem size.
+
+        Finally, the code related to program modification (register assignment
+        and spilling) was moved to the wrapper "IteratedRegisterCoalescing".
+
+        The current split is:
+        -AbstractColoringAllocator: common core. Share as much as possible between
+         GP and FP.
+        -ColoringAllocator: the remaining parts of the algorithm, everything that
+         is specific to GP or FP.
+        -IteratedRegisterCoalescing: the "iterated" part of the algorithm.
+         Try to allocate and modify the code as needed.
+
+        The long-term plan is:
+        -Move selectSpill() and the coloring loop to AbstractColoringAllocator.
+        -Specialize m_interferenceEdges to make it faster.
+
+        * b3/air/AirIteratedRegisterCoalescing.cpp:
+        * b3/air/AirTmpInlines.h:
+        (JSC::B3::Air::AbsoluteTmpMapper<Arg::GP>::lastMachineRegisterIndex):
+        (JSC::B3::Air::AbsoluteTmpMapper<Arg::FP>::lastMachineRegisterIndex):
+
+2015-12-19  Benjamin Poulain  <[email protected]>
+
         [JSC] FTLB3Output generates some invalid ZExt32
         https://bugs.webkit.org/show_bug.cgi?id=151905
 

Modified: trunk/Source/JavaScriptCore/b3/air/AirIteratedRegisterCoalescing.cpp (194315 => 194316)


--- trunk/Source/JavaScriptCore/b3/air/AirIteratedRegisterCoalescing.cpp	2015-12-19 19:01:50 UTC (rev 194315)
+++ trunk/Source/JavaScriptCore/b3/air/AirIteratedRegisterCoalescing.cpp	2015-12-19 21:04:55 UTC (rev 194316)
@@ -47,336 +47,97 @@
 bool traceDebug = false;
 bool reportStats = false;
 
-template<Arg::Type type>
-struct MoveInstHelper;
-
-template<>
-struct MoveInstHelper<Arg::GP> {
-    static bool mayBeCoalescable(const Inst& inst)
-    {
-        bool isMove = inst.opcode == Move;
-        if (!isMove)
-            return false;
-
-        ASSERT_WITH_MESSAGE(inst.args.size() == 2, "We assume coalecable moves only have two arguments in a few places.");
-        ASSERT(inst.args[0].isType(Arg::GP));
-        ASSERT(inst.args[1].isType(Arg::GP));
-
-        return inst.args[0].isTmp() && inst.args[1].isTmp();
-    }
-};
-
-template<>
-struct MoveInstHelper<Arg::FP> {
-    static bool mayBeCoalescable(const Inst& inst)
-    {
-        if (inst.opcode != MoveDouble)
-            return false;
-
-        ASSERT_WITH_MESSAGE(inst.args.size() == 2, "We assume coalecable moves only have two arguments in a few places.");
-        ASSERT(inst.args[0].isType(Arg::FP));
-        ASSERT(inst.args[1].isType(Arg::FP));
-
-        return inst.args[0].isTmp() && inst.args[1].isTmp();
-    }
-};
-
-template<Arg::Type type>
-class IteratedRegisterCoalescingAllocator {
+// The AbstractColoringAllocator defines all the code that is independent
+// of the type of register and can be shared when allocating registers.
+template<typename IndexType>
+class AbstractColoringAllocator {
 public:
-    IteratedRegisterCoalescingAllocator(
-        Code& code, const UseCounts<Tmp>& useCounts, const HashSet<Tmp>& unspillableTmp)
-        : m_code(code)
-        , m_useCounts(useCounts)
-        , m_unspillableTmp(unspillableTmp)
-        , m_numberOfRegisters(regsInPriorityOrder(type).size())
+    AbstractColoringAllocator(const Vector<Reg>& regsInPriorityOrder, IndexType lastPrecoloredRegisterIndex, unsigned tmpArraySize)
+        : m_regsInPriorityOrder(regsInPriorityOrder)
+        , m_lastPrecoloredRegisterIndex(lastPrecoloredRegisterIndex)
     {
-        initializeDegrees();
+        initializeDegrees(tmpArraySize);
 
-        unsigned tmpArraySize = this->tmpArraySize();
         m_adjacencyList.resize(tmpArraySize);
         m_moveList.resize(tmpArraySize);
-        m_coalescedTmps.resize(tmpArraySize);
+        m_coalescedTmps.fill(0, tmpArraySize);
         m_isOnSelectStack.ensureSize(tmpArraySize);
-
-        build();
-        allocate();
     }
 
-    Tmp getAlias(Tmp tmp) const
+protected:
+    IndexType getAlias(IndexType tmpIndex) const
     {
-        Tmp alias = tmp;
-        while (Tmp nextAlias = m_coalescedTmps[AbsoluteTmpMapper<type>::absoluteIndex(alias)])
+        IndexType alias = tmpIndex;
+        while (IndexType nextAlias = m_coalescedTmps[alias])
             alias = nextAlias;
         return alias;
     }
 
-    Tmp getAliasWhenSpilling(Tmp tmp) const
+    void addEdge(IndexType a, IndexType b)
     {
-        ASSERT_WITH_MESSAGE(!m_spilledTmp.isEmpty(), "This function is only valid for coalescing during spilling.");
-
-        if (m_coalescedTmpsAtSpill.isEmpty())
-            return tmp;
-
-        Tmp alias = tmp;
-        while (Tmp nextAlias = m_coalescedTmpsAtSpill[AbsoluteTmpMapper<type>::absoluteIndex(alias)])
-            alias = nextAlias;
-
-        ASSERT_WITH_MESSAGE(!m_spilledTmp.contains(tmp) || alias == tmp, "The aliases at spill should always be colorable. Something went horribly wrong.");
-
-        return alias;
-    }
-
-    const HashSet<Tmp>& spilledTmp() const { return m_spilledTmp; }
-    Reg allocatedReg(Tmp tmp) const
-    {
-        ASSERT(!tmp.isReg());
-        ASSERT(m_coloredTmp.size());
-        ASSERT(tmp.isGP() == (type == Arg::GP));
-
-        Reg reg = m_coloredTmp[AbsoluteTmpMapper<type>::absoluteIndex(tmp)];
-        if (!reg) {
-            // We only care about Tmps that interfere. A Tmp that never interfere with anything
-            // can take any register.
-            reg = regsInPriorityOrder(type).first();
-        }
-        return reg;
-    }
-
-private:
-    unsigned tmpArraySize()
-    {
-        unsigned numTmps = m_code.numTmps(type);
-        return AbsoluteTmpMapper<type>::absoluteIndex(numTmps);
-    }
-
-    void initializeDegrees()
-    {
-        unsigned tmpArraySize = this->tmpArraySize();
-        m_degrees.resize(tmpArraySize);
-
-        // All precolored registers have  an "infinite" degree.
-        unsigned firstNonRegIndex = AbsoluteTmpMapper<type>::absoluteIndex(0);
-        for (unsigned i = 0; i < firstNonRegIndex; ++i)
-            m_degrees[i] = std::numeric_limits<unsigned>::max();
-
-        bzero(m_degrees.data() + firstNonRegIndex, (tmpArraySize - firstNonRegIndex) * sizeof(unsigned));
-    }
-
-    void build()
-    {
-        TmpLiveness<type> liveness(m_code);
-        for (BasicBlock* block : m_code) {
-            typename TmpLiveness<type>::LocalCalc localCalc(liveness, block);
-            for (unsigned instIndex = block->size(); instIndex--;) {
-                Inst& inst = block->at(instIndex);
-                Inst* nextInst = instIndex + 1 < block->size() ? &block->at(instIndex + 1) : nullptr;
-                build(inst, nextInst, localCalc);
-                localCalc.execute(instIndex);
-            }
-        }
-    }
-
-    void build(Inst& inst, Inst* nextInst, const typename TmpLiveness<type>::LocalCalc& localCalc)
-    {
-        inst.forEachTmpWithExtraClobberedRegs(
-            nextInst,
-            [&] (const Tmp& arg, Arg::Role role, Arg::Type argType) {
-                if (!Arg::isDef(role) || argType != type)
-                    return;
-                
-                // All the Def()s interfere with each other and with all the extra clobbered Tmps.
-                // We should not use forEachDefAndExtraClobberedTmp() here since colored Tmps
-                // do not need interference edges in our implementation.
-                inst.forEachTmp([&] (Tmp& otherArg, Arg::Role role, Arg::Type argType) {
-                        if (!Arg::isDef(role) || argType != type)
-                            return;
-                        
-                        addEdge(arg, otherArg);
-                    });
-            });
-
-        if (MoveInstHelper<type>::mayBeCoalescable(inst)) {
-            // We do not want the Use() of this move to interfere with the Def(), even if it is live
-            // after the Move. If we were to add the interference edge, it would be impossible to
-            // coalesce the Move even if the two Tmp never interfere anywhere.
-            Tmp defTmp;
-            Tmp useTmp;
-            inst.forEachTmp([&defTmp, &useTmp] (Tmp& argTmp, Arg::Role role, Arg::Type) {
-                if (Arg::isDef(role))
-                    defTmp = argTmp;
-                else {
-                    ASSERT(Arg::isEarlyUse(role));
-                    useTmp = argTmp;
-                }
-            });
-            ASSERT(defTmp);
-            ASSERT(useTmp);
-
-            unsigned nextMoveIndex = m_coalescingCandidates.size();
-            m_coalescingCandidates.append({ useTmp, defTmp });
-
-            unsigned newIndexInWorklist = m_worklistMoves.addMove();
-            ASSERT_UNUSED(newIndexInWorklist, newIndexInWorklist == nextMoveIndex);
-
-            ASSERT(nextMoveIndex <= m_activeMoves.size());
-            m_activeMoves.ensureSize(nextMoveIndex + 1);
-
-            for (const Arg& arg : inst.args) {
-                auto& list = m_moveList[AbsoluteTmpMapper<type>::absoluteIndex(arg.tmp())];
-                list.add(nextMoveIndex);
-            }
-
-            for (const Tmp& liveTmp : localCalc.live()) {
-                if (liveTmp != useTmp)
-                    addEdge(defTmp, liveTmp);
-            }
-
-            // The next instruction could have early clobbers. We need to consider those now.
-            if (nextInst && nextInst->hasSpecial()) {
-                nextInst->extraEarlyClobberedRegs().forEach(
-                    [&] (Reg reg) {
-                        if (reg.isGPR() == (type == Arg::GP)) {
-                            for (const Tmp& liveTmp : localCalc.live()) {
-                                addEdge(Tmp(reg), liveTmp);
-                            }
-                        }
-                    });
-            }
-        } else
-            addEdges(inst, nextInst, localCalc.live());
-    }
-
-    void addEdges(Inst& inst, Inst* nextInst, typename TmpLiveness<type>::LocalCalc::Iterable liveTmps)
-    {
-        // All the Def()s interfere with everthing live.
-        inst.forEachTmpWithExtraClobberedRegs(
-            nextInst,
-            [&] (const Tmp& arg, Arg::Role role, Arg::Type argType) {
-                if (!Arg::isDef(role) || argType != type)
-                    return;
-                
-                for (const Tmp& liveTmp : liveTmps) {
-                    if (liveTmp.isGP() == (type == Arg::GP))
-                        addEdge(arg, liveTmp);
-                }
-            });
-    }
-
-    void addEdge(const Tmp& a, const Tmp& b)
-    {
         if (a == b)
             return;
-
-        if (m_interferenceEdges.add(InterferenceEdge(a, b)).isNewEntry) {
-            if (!a.isReg()) {
-                ASSERT(!m_adjacencyList[AbsoluteTmpMapper<type>::absoluteIndex(a)].contains(b));
-                m_adjacencyList[AbsoluteTmpMapper<type>::absoluteIndex(a)].append(b);
-                m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(a)]++;
-            }
-
-            if (!b.isReg()) {
-                ASSERT(!m_adjacencyList[AbsoluteTmpMapper<type>::absoluteIndex(b)].contains(a));
-                m_adjacencyList[AbsoluteTmpMapper<type>::absoluteIndex(b)].append(a);
-                m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(b)]++;
-            }
-        }
+        addEdgeDistinct(a, b);
     }
 
     void makeWorkList()
     {
-        unsigned firstNonRegIndex = AbsoluteTmpMapper<type>::absoluteIndex(0);
-        for (unsigned i = firstNonRegIndex; i < m_degrees.size(); ++i) {
+        IndexType firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
+        for (IndexType i = firstNonRegIndex; i < m_degrees.size(); ++i) {
             unsigned degree = m_degrees[i];
             if (!degree)
                 continue;
 
-            Tmp tmp = AbsoluteTmpMapper<type>::tmpFromAbsoluteIndex(i);
-
-            if (degree >= m_numberOfRegisters)
-                m_spillWorklist.add(tmp);
-            else if (!m_moveList[AbsoluteTmpMapper<type>::absoluteIndex(tmp)].isEmpty())
-                m_freezeWorklist.add(tmp);
+            if (degree >= m_regsInPriorityOrder.size())
+                m_spillWorklist.add(i);
+            else if (!m_moveList[i].isEmpty())
+                m_freezeWorklist.add(i);
             else
-                m_simplifyWorklist.append(tmp);
+                m_simplifyWorklist.append(i);
         }
     }
 
+    // A low-degree vertex can always be colored: just pick any color not taken by its
+    // adjacent vertices.
+    // The "Simplify" phase takes a low-degree vertex out of the interference graph to simplify it.
     void simplify()
     {
-        Tmp last = m_simplifyWorklist.takeLast();
+        IndexType lastIndex = m_simplifyWorklist.takeLast();
 
-        ASSERT(!m_selectStack.contains(last));
-        ASSERT(!m_isOnSelectStack.get(AbsoluteTmpMapper<type>::absoluteIndex(last)));
-        m_selectStack.append(last);
-        m_isOnSelectStack.quickSet(AbsoluteTmpMapper<type>::absoluteIndex(last));
+        ASSERT(!m_selectStack.contains(lastIndex));
+        ASSERT(!m_isOnSelectStack.get(lastIndex));
+        m_selectStack.append(lastIndex);
+        m_isOnSelectStack.quickSet(lastIndex);
 
-        forEachAdjacent(last, [this](Tmp adjacentTmp) {
-            decrementDegree(adjacentTmp);
+        forEachAdjacent(lastIndex, [this](IndexType adjacentTmpIndex) {
+            decrementDegree(adjacentTmpIndex);
         });
     }
 
-    template<typename Function>
-    void forEachAdjacent(Tmp tmp, Function function)
+    void freeze()
     {
-        for (Tmp adjacentTmp : m_adjacencyList[AbsoluteTmpMapper<type>::absoluteIndex(tmp)]) {
-            if (!hasBeenSimplified(adjacentTmp))
-                function(adjacentTmp);
-        }
+        IndexType victimIndex = m_freezeWorklist.takeAny();
+        ASSERT_WITH_MESSAGE(getAlias(victimIndex) == victimIndex, "coalesce() should not leave aliased Tmp in the worklist.");
+        m_simplifyWorklist.append(victimIndex);
+        freezeMoves(victimIndex);
     }
 
-    bool hasBeenSimplified(Tmp tmp)
+    void freezeMoves(IndexType tmpIndex)
     {
-        return m_isOnSelectStack.quickGet(AbsoluteTmpMapper<type>::absoluteIndex(tmp)) || !!m_coalescedTmps[AbsoluteTmpMapper<type>::absoluteIndex(tmp)];
-    }
+        forEachNodeMoves(tmpIndex, [this, tmpIndex] (IndexType moveIndex) {
+            if (!m_activeMoves.quickClear(moveIndex))
+                m_worklistMoves.takeMove(moveIndex);
 
-    void decrementDegree(Tmp tmp)
-    {
-        ASSERT(m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(tmp)]);
+            const MoveOperands& moveOperands = m_coalescingCandidates[moveIndex];
+            IndexType srcTmpIndex = moveOperands.srcIndex;
+            IndexType dstTmpIndex = moveOperands.dstIndex;
 
-        unsigned oldDegree = m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(tmp)]--;
-        if (oldDegree == m_numberOfRegisters) {
-            enableMovesOnValueAndAdjacents(tmp);
-            m_spillWorklist.remove(tmp);
-            if (isMoveRelated(tmp))
-                m_freezeWorklist.add(tmp);
-            else
-                m_simplifyWorklist.append(tmp);
-        }
-    }
-
-    template<typename Function>
-    void forEachNodeMoves(Tmp tmp, Function function)
-    {
-        for (unsigned moveIndex : m_moveList[AbsoluteTmpMapper<type>::absoluteIndex(tmp)]) {
-            if (m_activeMoves.quickGet(moveIndex) || m_worklistMoves.contains(moveIndex))
-                function(moveIndex);
-        }
-    }
-
-    bool isMoveRelated(Tmp tmp)
-    {
-        for (unsigned moveIndex : m_moveList[AbsoluteTmpMapper<type>::absoluteIndex(tmp)]) {
-            if (m_activeMoves.quickGet(moveIndex) || m_worklistMoves.contains(moveIndex))
-                return true;
-        }
-        return false;
-    }
-
-    void enableMovesOnValue(Tmp tmp)
-    {
-        for (unsigned moveIndex : m_moveList[AbsoluteTmpMapper<type>::absoluteIndex(tmp)]) {
-            if (m_activeMoves.quickClear(moveIndex))
-                m_worklistMoves.returnMove(moveIndex);
-        }
-    }
-
-    void enableMovesOnValueAndAdjacents(Tmp tmp)
-    {
-        enableMovesOnValue(tmp);
-
-        forEachAdjacent(tmp, [this] (Tmp adjacentTmp) {
-            enableMovesOnValue(adjacentTmp);
+            IndexType originalOtherTmp = srcTmpIndex != tmpIndex ? srcTmpIndex : dstTmpIndex;
+            IndexType otherTmpIndex = getAlias(originalOtherTmp);
+            if (m_degrees[otherTmpIndex] < m_regsInPriorityOrder.size() && !isMoveRelated(otherTmpIndex)) {
+                if (m_freezeWorklist.remove(otherTmpIndex))
+                    m_simplifyWorklist.append(otherTmpIndex);
+            }
         });
     }
 
@@ -384,12 +145,10 @@
     {
         unsigned moveIndex = m_worklistMoves.takeLastMove();
         const MoveOperands& moveOperands = m_coalescingCandidates[moveIndex];
-        Tmp u = moveOperands.src;
-        u = getAlias(u);
-        Tmp v = moveOperands.dst;
-        v = getAlias(v);
+        IndexType u = getAlias(moveOperands.srcIndex);
+        IndexType v = getAlias(moveOperands.dstIndex);
 
-        if (v.isReg())
+        if (isPrecolored(v))
             std::swap(u, v);
 
         if (traceDebug)
@@ -400,7 +159,7 @@
 
             if (traceDebug)
                 dataLog("    Coalesced\n");
-        } else if (v.isReg() || m_interferenceEdges.contains(InterferenceEdge(u, v))) {
+        } else if (isPrecolored(v) || m_interferenceEdges.contains(InterferenceEdge(u, v))) {
             addWorkList(u);
             addWorkList(v);
 
@@ -421,316 +180,261 @@
         }
     }
 
-    bool canBeSafelyCoalesced(Tmp u, Tmp v)
+    void assignColors()
     {
-        ASSERT(!v.isReg());
-        if (u.isReg())
-            return precoloredCoalescingHeuristic(u, v);
-        return conservativeHeuristic(u, v);
-    }
+        ASSERT(m_simplifyWorklist.isEmpty());
+        ASSERT(m_worklistMoves.isEmpty());
+        ASSERT(m_freezeWorklist.isEmpty());
+        ASSERT(m_spillWorklist.isEmpty());
 
-    bool precoloredCoalescingHeuristic(Tmp u, Tmp v)
-    {
-        ASSERT(u.isReg());
-        ASSERT(!v.isReg());
+        // Reclaim as much memory as possible.
+        m_interferenceEdges.clear();
+        m_degrees.clear();
+        m_moveList.clear();
+        m_worklistMoves.clear();
+        m_simplifyWorklist.clear();
+        m_spillWorklist.clear();
+        m_freezeWorklist.clear();
 
-        // If any adjacent of the non-colored node is not an adjacent of the colored node AND has a degree >= K
-        // there is a risk that this node needs to have the same color as our precolored node. If we coalesce such
-        // move, we may create an uncolorable graph.
-        const auto& adjacentsOfV = m_adjacencyList[AbsoluteTmpMapper<type>::absoluteIndex(v)];
-        for (Tmp adjacentTmp : adjacentsOfV) {
-            if (!adjacentTmp.isReg()
-                && !hasBeenSimplified(adjacentTmp)
-                && m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(adjacentTmp)] >= m_numberOfRegisters
-                && !m_interferenceEdges.contains(InterferenceEdge(u, adjacentTmp)))
-                return false;
-        }
-        return true;
-    }
+        // Try to color the Tmp on the stack.
+        m_coloredTmp.resize(m_adjacencyList.size());
 
-    bool conservativeHeuristic(Tmp u, Tmp v)
-    {
-        // This is using the Briggs' conservative coalescing rule:
-        // If the number of combined adjacent node with a degree >= K is less than K,
-        // it is safe to combine the two nodes. The reason is that we know that if the graph
-        // is colorable, we have fewer than K adjacents with high order and there is a color
-        // for the current node.
-        ASSERT(u != v);
-        ASSERT(!u.isReg());
-        ASSERT(!v.isReg());
+        while (!m_selectStack.isEmpty()) {
+            unsigned tmpIndex = m_selectStack.takeLast();
+            ASSERT(!isPrecolored(tmpIndex));
+            ASSERT(!m_coloredTmp[tmpIndex]);
 
-        const auto& adjacentsOfU = m_adjacencyList[AbsoluteTmpMapper<type>::absoluteIndex(u)];
-        const auto& adjacentsOfV = m_adjacencyList[AbsoluteTmpMapper<type>::absoluteIndex(v)];
+            RegisterSet coloredRegisters;
+            for (IndexType adjacentTmpIndex : m_adjacencyList[tmpIndex]) {
+                IndexType aliasTmpIndex = getAlias(adjacentTmpIndex);
+                Reg reg = m_coloredTmp[aliasTmpIndex];
 
-        if (adjacentsOfU.size() + adjacentsOfV.size() < m_numberOfRegisters) {
-            // Shortcut: if the total number of adjacents is less than the number of register, the condition is always met.
-            return true;
-        }
+                ASSERT(!isPrecolored(aliasTmpIndex) || (isPrecolored(aliasTmpIndex) && reg));
 
-        HashSet<Tmp> highOrderAdjacents;
+                if (reg)
+                    coloredRegisters.set(reg);
+            }
 
-        for (Tmp adjacentTmp : adjacentsOfU) {
-            ASSERT(adjacentTmp != v);
-            ASSERT(adjacentTmp != u);
-            if (!hasBeenSimplified(adjacentTmp) && m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(adjacentTmp)] >= m_numberOfRegisters) {
-                auto addResult = highOrderAdjacents.add(adjacentTmp);
-                if (addResult.isNewEntry && highOrderAdjacents.size() >= m_numberOfRegisters)
-                    return false;
+            bool colorAssigned = false;
+            for (Reg reg : m_regsInPriorityOrder) {
+                if (!coloredRegisters.get(reg)) {
+                    m_coloredTmp[tmpIndex] = reg;
+                    colorAssigned = true;
+                    break;
+                }
             }
+
+            if (!colorAssigned)
+                m_spilledTmps.append(tmpIndex);
         }
-        for (Tmp adjacentTmp : adjacentsOfV) {
-            ASSERT(adjacentTmp != u);
-            ASSERT(adjacentTmp != v);
-            if (!hasBeenSimplified(adjacentTmp) && m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(adjacentTmp)] >= m_numberOfRegisters) {
-                auto addResult = highOrderAdjacents.add(adjacentTmp);
-                if (addResult.isNewEntry && highOrderAdjacents.size() >= m_numberOfRegisters)
-                    return false;
-            }
-        }
+        m_selectStack.clear();
 
-        ASSERT(highOrderAdjacents.size() < m_numberOfRegisters);
-        return true;
+        if (m_spilledTmps.isEmpty())
+            m_coalescedTmpsAtSpill.clear();
+        else
+            m_coloredTmp.clear();
     }
 
-    void addWorkList(Tmp tmp)
+private:
+    void initializeDegrees(unsigned tmpArraySize)
     {
-        if (!tmp.isReg() && m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(tmp)] < m_numberOfRegisters && !isMoveRelated(tmp)) {
-            m_freezeWorklist.remove(tmp);
-            m_simplifyWorklist.append(tmp);
-        }
+        m_degrees.resize(tmpArraySize);
+
+        // All precolored registers have an "infinite" degree.
+        unsigned firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
+        for (unsigned i = 0; i < firstNonRegIndex; ++i)
+            m_degrees[i] = std::numeric_limits<unsigned>::max();
+
+        bzero(m_degrees.data() + firstNonRegIndex, (tmpArraySize - firstNonRegIndex) * sizeof(unsigned));
     }
 
-    void combine(Tmp u, Tmp v)
+    void addEdgeDistinct(IndexType a, IndexType b)
     {
-        if (!m_freezeWorklist.remove(v))
-            m_spillWorklist.remove(v);
+        ASSERT(a != b);
+        if (m_interferenceEdges.add(InterferenceEdge(a, b)).isNewEntry) {
+            if (!isPrecolored(a)) {
+                ASSERT(!m_adjacencyList[a].contains(b));
+                m_adjacencyList[a].append(b);
+                m_degrees[a]++;
+            }
 
-        ASSERT(!m_coalescedTmps[AbsoluteTmpMapper<type>::absoluteIndex(v)]);
-        m_coalescedTmps[AbsoluteTmpMapper<type>::absoluteIndex(v)] = u;
+            if (!isPrecolored(b)) {
+                ASSERT(!m_adjacencyList[b].contains(a));
+                m_adjacencyList[b].append(a);
+                m_degrees[b]++;
+            }
+        }
+    }
 
-        auto& vMoves = m_moveList[AbsoluteTmpMapper<type>::absoluteIndex(v)];
-        m_moveList[AbsoluteTmpMapper<type>::absoluteIndex(u)].add(vMoves.begin(), vMoves.end());
+    void decrementDegree(IndexType tmpIndex)
+    {
+        ASSERT(m_degrees[tmpIndex]);
 
-        forEachAdjacent(v, [this, u] (Tmp adjacentTmp) {
-            addEdge(adjacentTmp, u);
-            decrementDegree(adjacentTmp);
-        });
+        unsigned oldDegree = m_degrees[tmpIndex]--;
+        if (oldDegree == m_regsInPriorityOrder.size()) {
+            enableMovesOnValueAndAdjacents(tmpIndex);
+            m_spillWorklist.remove(tmpIndex);
+            if (isMoveRelated(tmpIndex))
+                m_freezeWorklist.add(tmpIndex);
+            else
+                m_simplifyWorklist.append(tmpIndex);
+        }
+    }
 
-        if (m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(u)] >= m_numberOfRegisters && m_freezeWorklist.remove(u))
-            m_spillWorklist.add(u);
+    bool isMoveRelated(IndexType tmpIndex)
+    {
+        for (unsigned moveIndex : m_moveList[tmpIndex]) {
+            if (m_activeMoves.quickGet(moveIndex) || m_worklistMoves.contains(moveIndex))
+                return true;
+        }
+        return false;
     }
 
-    void freeze()
+    template<typename Function>
+    void forEachAdjacent(IndexType tmpIndex, Function function)
     {
-        Tmp victim = m_freezeWorklist.takeAny();
-        ASSERT_WITH_MESSAGE(getAlias(victim) == victim, "coalesce() should not leave aliased Tmp in the worklist.");
-        m_simplifyWorklist.append(victim);
-        freezeMoves(victim);
+        for (IndexType adjacentTmpIndex : m_adjacencyList[tmpIndex]) {
+            if (!hasBeenSimplified(adjacentTmpIndex))
+                function(adjacentTmpIndex);
+        }
     }
 
-    void freezeMoves(Tmp tmp)
+    bool hasBeenSimplified(IndexType tmpIndex)
     {
-        forEachNodeMoves(tmp, [this, tmp] (unsigned moveIndex) {
-            if (!m_activeMoves.quickClear(moveIndex))
-                m_worklistMoves.takeMove(moveIndex);
+        return m_isOnSelectStack.quickGet(tmpIndex) || !!m_coalescedTmps[tmpIndex];
+    }
 
-            const MoveOperands& moveOperands = m_coalescingCandidates[moveIndex];
-            Tmp originalOtherTmp = moveOperands.src != tmp ? moveOperands.src : moveOperands.dst;
-            Tmp otherTmp = getAlias(originalOtherTmp);
-            if (m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(otherTmp)] < m_numberOfRegisters && !isMoveRelated(otherTmp)) {
-                if (m_freezeWorklist.remove(otherTmp))
-                    m_simplifyWorklist.append(otherTmp);
-            }
-        });
+    template<typename Function>
+    void forEachNodeMoves(IndexType tmpIndex, Function function)
+    {
+        for (unsigned moveIndex : m_moveList[tmpIndex]) {
+            if (m_activeMoves.quickGet(moveIndex) || m_worklistMoves.contains(moveIndex))
+                function(moveIndex);
+        }
     }
 
-    void selectSpill()
+    void enableMovesOnValue(IndexType tmpIndex)
     {
-        if (!m_hasSelectedSpill) {
-            m_hasSelectedSpill = true;
-
-            if (m_hasCoalescedNonTrivialMove)
-                m_coalescedTmpsAtSpill = m_coalescedTmps;
+        for (unsigned moveIndex : m_moveList[tmpIndex]) {
+            if (m_activeMoves.quickClear(moveIndex))
+                m_worklistMoves.returnMove(moveIndex);
         }
+    }
 
-        auto iterator = m_spillWorklist.begin();
+    void enableMovesOnValueAndAdjacents(IndexType tmpIndex)
+    {
+        enableMovesOnValue(tmpIndex);
 
-        while (iterator != m_spillWorklist.end() && m_unspillableTmp.contains(*iterator))
-            ++iterator;
+        forEachAdjacent(tmpIndex, [this] (IndexType adjacentTmpIndex) {
+            enableMovesOnValue(adjacentTmpIndex);
+        });
+    }
 
-        RELEASE_ASSERT_WITH_MESSAGE(iterator != m_spillWorklist.end(), "It is not possible to color the Air graph with the number of available registers.");
+    bool isPrecolored(IndexType tmpIndex)
+    {
+        return tmpIndex <= m_lastPrecoloredRegisterIndex;
+    }
 
-        // Higher score means more desirable to spill. Lower scores maximize the likelihood that a tmp
-        // gets a register.
-        auto score = [&] (Tmp tmp) -> double {
-            // Air exposes the concept of "fast tmps", and we interpret that to mean that the tmp
-            // should always be in a register.
-            if (m_code.isFastTmp(tmp))
-                return 0;
-            
-            // All else being equal, the score should be directly related to the degree.
-            double degree = static_cast<double>(m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(tmp)]);
-
-            // All else being equal, the score should be inversely related to the number of warm uses and
-            // defs.
-            const UseCounts<Tmp>::Counts& counts = m_useCounts[tmp];
-            double uses = counts.numWarmUses + counts.numDefs;
-
-            return degree / uses;
-        };
-
-        auto victimIterator = iterator;
-        double maxScore = score(*iterator);
-
-        ++iterator;
-        for (;iterator != m_spillWorklist.end(); ++iterator) {
-            double tmpScore = score(*iterator);
-            if (tmpScore > maxScore) {
-                if (m_unspillableTmp.contains(*iterator))
-                    continue;
-
-                victimIterator = iterator;
-                maxScore = tmpScore;
-            }
+    void addWorkList(IndexType tmpIndex)
+    {
+        if (!isPrecolored(tmpIndex) && m_degrees[tmpIndex] < m_regsInPriorityOrder.size() && !isMoveRelated(tmpIndex)) {
+            m_freezeWorklist.remove(tmpIndex);
+            m_simplifyWorklist.append(tmpIndex);
         }
-
-        Tmp victimTmp = *victimIterator;
-        m_spillWorklist.remove(victimIterator);
-        m_simplifyWorklist.append(victimTmp);
-        freezeMoves(victimTmp);
     }
 
-    void allocate()
+    void combine(IndexType u, IndexType v)
     {
-        ASSERT_WITH_MESSAGE(m_activeMoves.size() >= m_coalescingCandidates.size(), "The activeMove set should be big enough for the quick operations of BitVector.");
+        if (!m_freezeWorklist.remove(v))
+            m_spillWorklist.remove(v);
 
-        makeWorkList();
+        ASSERT(!m_coalescedTmps[v]);
+        m_coalescedTmps[v] = u;
 
-        if (debug) {
-            dataLog("Interference: ", listDump(m_interferenceEdges), "\n");
-            dumpInterferenceGraphInDot(WTF::dataFile());
-            dataLog("Initial work list\n");
-            dumpWorkLists(WTF::dataFile());
-        }
+        auto& vMoves = m_moveList[v];
+        m_moveList[u].add(vMoves.begin(), vMoves.end());
 
-        do {
-            if (traceDebug) {
-                dataLog("Before Graph simplification iteration\n");
-                dumpWorkLists(WTF::dataFile());
-            }
+        forEachAdjacent(v, [this, u] (IndexType adjacentTmpIndex) {
+            addEdgeDistinct(adjacentTmpIndex, u);
+            decrementDegree(adjacentTmpIndex);
+        });
 
-            if (!m_simplifyWorklist.isEmpty())
-                simplify();
-            else if (!m_worklistMoves.isEmpty())
-                coalesce();
-            else if (!m_freezeWorklist.isEmpty())
-                freeze();
-            else if (!m_spillWorklist.isEmpty())
-                selectSpill();
+        if (m_degrees[u] >= m_regsInPriorityOrder.size() && m_freezeWorklist.remove(u))
+            m_spillWorklist.add(u);
+    }
 
-            if (traceDebug) {
-                dataLog("After Graph simplification iteration\n");
-                dumpWorkLists(WTF::dataFile());
-            }
-        } while (!m_simplifyWorklist.isEmpty() || !m_worklistMoves.isEmpty() || !m_freezeWorklist.isEmpty() || !m_spillWorklist.isEmpty());
-
-        assignColors();
+    bool canBeSafelyCoalesced(IndexType u, IndexType v)
+    {
+        ASSERT(!isPrecolored(v));
+        if (isPrecolored(u))
+            return precoloredCoalescingHeuristic(u, v);
+        return conservativeHeuristic(u, v);
     }
 
-    void assignColors()
+    bool conservativeHeuristic(IndexType u, IndexType v)
     {
-        ASSERT(m_simplifyWorklist.isEmpty());
-        ASSERT(m_worklistMoves.isEmpty());
-        ASSERT(m_freezeWorklist.isEmpty());
-        ASSERT(m_spillWorklist.isEmpty());
+        // This is using Briggs' conservative coalescing rule:
+        // If the number of combined adjacent nodes with a degree >= K is less than K,
+        // it is safe to combine the two nodes. The reason is that we know that if the graph
+        // is colorable, we have fewer than K adjacents with high order and there is a color
+        // for the current node.
+        ASSERT(u != v);
+        ASSERT(!isPrecolored(u));
+        ASSERT(!isPrecolored(v));
 
-        // Reclaim as much memory as possible.
-        m_interferenceEdges.clear();
-        m_degrees.clear();
-        m_moveList.clear();
-        m_worklistMoves.clear();
-        m_simplifyWorklist.clear();
-        m_spillWorklist.clear();
-        m_freezeWorklist.clear();
+        const auto& adjacentsOfU = m_adjacencyList[u];
+        const auto& adjacentsOfV = m_adjacencyList[v];
 
-        // Try to color the Tmp on the stack.
-        m_coloredTmp.resize(m_adjacencyList.size());
-        const auto& registersInPriorityOrder = regsInPriorityOrder(type);
+        if (adjacentsOfU.size() + adjacentsOfV.size() < m_regsInPriorityOrder.size()) {
+            // Shortcut: if the total number of adjacents is less than the number of registers, the condition is always met.
+            return true;
+        }
 
-        while (!m_selectStack.isEmpty()) {
-            Tmp tmp = m_selectStack.takeLast();
-            ASSERT(!tmp.isReg());
-            ASSERT(!m_coloredTmp[AbsoluteTmpMapper<type>::absoluteIndex(tmp)]);
+        HashSet<IndexType> highOrderAdjacents;
 
-            RegisterSet coloredRegisters;
-            for (Tmp adjacentTmp : m_adjacencyList[AbsoluteTmpMapper<type>::absoluteIndex(tmp)]) {
-                Tmp aliasTmp = getAlias(adjacentTmp);
-                if (aliasTmp.isReg()) {
-                    coloredRegisters.set(aliasTmp.reg());
-                    continue;
-                }
-
-                Reg reg = m_coloredTmp[AbsoluteTmpMapper<type>::absoluteIndex(aliasTmp)];
-                if (reg)
-                    coloredRegisters.set(reg);
+        for (IndexType adjacentTmpIndex : adjacentsOfU) {
+            ASSERT(adjacentTmpIndex != v);
+            ASSERT(adjacentTmpIndex != u);
+            if (!hasBeenSimplified(adjacentTmpIndex) && m_degrees[adjacentTmpIndex] >= m_regsInPriorityOrder.size()) {
+                auto addResult = highOrderAdjacents.add(adjacentTmpIndex);
+                if (addResult.isNewEntry && highOrderAdjacents.size() >= m_regsInPriorityOrder.size())
+                    return false;
             }
-
-            bool colorAssigned = false;
-            for (Reg reg : registersInPriorityOrder) {
-                if (!coloredRegisters.get(reg)) {
-                    m_coloredTmp[AbsoluteTmpMapper<type>::absoluteIndex(tmp)] = reg;
-                    colorAssigned = true;
-                    break;
-                }
+        }
+        for (IndexType adjacentTmpIndex : adjacentsOfV) {
+            ASSERT(adjacentTmpIndex != u);
+            ASSERT(adjacentTmpIndex != v);
+            if (!hasBeenSimplified(adjacentTmpIndex) && m_degrees[adjacentTmpIndex] >= m_regsInPriorityOrder.size()) {
+                auto addResult = highOrderAdjacents.add(adjacentTmpIndex);
+                if (addResult.isNewEntry && highOrderAdjacents.size() >= m_regsInPriorityOrder.size())
+                    return false;
             }
-
-            if (!colorAssigned)
-                m_spilledTmp.add(tmp);
         }
-        m_selectStack.clear();
 
-        if (m_spilledTmp.isEmpty())
-            m_coalescedTmpsAtSpill.clear();
-        else
-            m_coloredTmp.clear();
+        ASSERT(highOrderAdjacents.size() < m_regsInPriorityOrder.size());
+        return true;
     }
 
-#if PLATFORM(COCOA)
-#pragma mark - Debugging helpers.
-#endif
-
-    void dumpInterferenceGraphInDot(PrintStream& out)
+    bool precoloredCoalescingHeuristic(IndexType u, IndexType v)
     {
-        out.print("graph InterferenceGraph { \n");
+        ASSERT(isPrecolored(u));
+        ASSERT(!isPrecolored(v));
 
-        HashSet<Tmp> tmpsWithInterferences;
-        for (const auto& edge : m_interferenceEdges) {
-            tmpsWithInterferences.add(edge.first());
-            tmpsWithInterferences.add(edge.second());
+        // If any adjacent of the non-colored node is not an adjacent of the colored node AND has a degree >= K,
+        // there is a risk that this node needs to have the same color as our precolored node. If we coalesce such
+        // a move, we may create an uncolorable graph.
+        const auto& adjacentsOfV = m_adjacencyList[v];
+        for (unsigned adjacentTmpIndex : adjacentsOfV) {
+            if (!isPrecolored(adjacentTmpIndex)
+                && !hasBeenSimplified(adjacentTmpIndex)
+                && m_degrees[adjacentTmpIndex] >= m_regsInPriorityOrder.size()
+                && !m_interferenceEdges.contains(InterferenceEdge(u, adjacentTmpIndex)))
+                return false;
         }
-
-        for (const auto& tmp : tmpsWithInterferences)
-            out.print("    ", tmp.internalValue(), " [label=\"", tmp, " (", m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(tmp)], ")\"];\n");
-
-        for (const auto& edge : m_interferenceEdges)
-            out.print("    ", edge.first().internalValue(), " -- ", edge.second().internalValue(), ";\n");
-        out.print("}\n");
+        return true;
     }
 
-    void dumpWorkLists(PrintStream& out)
-    {
-        out.print("Simplify work list:\n");
-        for (Tmp tmp : m_simplifyWorklist)
-            out.print("    ", tmp, "\n");
-        out.printf("Moves work list is empty? %d\n", m_worklistMoves.isEmpty());
-        out.print("Freeze work list:\n");
-        for (Tmp tmp : m_freezeWorklist)
-            out.print("    ", tmp, "\n");
-        out.print("Spill work list:\n");
-        for (Tmp tmp : m_spillWorklist)
-            out.print("    ", tmp, "\n");
-    }
-
+protected:
 #if PLATFORM(COCOA)
 #pragma mark -
 #endif
@@ -743,17 +447,15 @@
         {
         }
 
-        InterferenceEdge(Tmp a, Tmp b)
+        InterferenceEdge(IndexType a, IndexType b)
         {
-            ASSERT(a.internalValue());
-            ASSERT(b.internalValue());
+            ASSERT(a);
+            ASSERT(b);
             ASSERT_WITH_MESSAGE(a != b, "A Tmp can never interfere with itself. Doing so would force it to be the superposition of two registers.");
 
-            unsigned aInternal = a.internalValue();
-            unsigned bInternal = b.internalValue();
-            if (bInternal < aInternal)
-                std::swap(aInternal, bInternal);
-            m_value = static_cast<uint64_t>(aInternal) << 32 | bInternal;
+            if (b < a)
+                std::swap(a, b);
+            m_value = static_cast<uint64_t>(a) << 32 | b;
         }
 
         InterferenceEdge(WTF::HashTableDeletedValueType)
@@ -761,14 +463,14 @@
         {
         }
 
-        Tmp first() const
+        IndexType first() const
         {
-            return Tmp::tmpForInternalValue(m_value >> 32);
+            return m_value >> 32 & 0xffffffff;
         }
 
-        Tmp second() const
+        IndexType second() const
         {
-            return Tmp::tmpForInternalValue(m_value & 0xffffffff);
+            return m_value & 0xffffffff;
         }
 
         bool operator==(const InterferenceEdge other) const
@@ -802,38 +504,35 @@
     };
     typedef SimpleClassHashTraits<InterferenceEdge> InterferenceEdgeHashTraits;
 
-    Code& m_code;
-    const UseCounts<Tmp>& m_useCounts;
+    const Vector<Reg>& m_regsInPriorityOrder;
+    IndexType m_lastPrecoloredRegisterIndex { 0 };
 
-    const HashSet<Tmp>& m_unspillableTmp;
-    unsigned m_numberOfRegisters { 0 };
-
     // The interference graph.
     HashSet<InterferenceEdge, InterferenceEdgeHash, InterferenceEdgeHashTraits> m_interferenceEdges;
-    Vector<Vector<Tmp, 0, UnsafeVectorOverflow, 4>, 0, UnsafeVectorOverflow> m_adjacencyList;
-    Vector<unsigned, 0, UnsafeVectorOverflow> m_degrees;
+    Vector<Vector<IndexType, 0, UnsafeVectorOverflow, 4>, 0, UnsafeVectorOverflow> m_adjacencyList;
+    Vector<IndexType, 0, UnsafeVectorOverflow> m_degrees;
 
     // Instead of keeping track of the move instructions, we just keep their operands around and use the index
     // in the vector as the "identifier" for the move.
     struct MoveOperands {
-        Tmp src;
-        Tmp dst;
+        IndexType srcIndex;
+        IndexType dstIndex;
     };
     Vector<MoveOperands, 0, UnsafeVectorOverflow> m_coalescingCandidates;
 
     // List of every move instruction associated with a Tmp.
-    Vector<HashSet<unsigned, DefaultHash<unsigned>::Hash, WTF::UnsignedWithZeroKeyHashTraits<unsigned>>> m_moveList;
+    Vector<HashSet<IndexType, typename DefaultHash<IndexType>::Hash, WTF::UnsignedWithZeroKeyHashTraits<IndexType>>> m_moveList;
 
     // Colors.
     Vector<Reg, 0, UnsafeVectorOverflow> m_coloredTmp;
-    HashSet<Tmp> m_spilledTmp;
+    Vector<IndexType> m_spilledTmps;
 
     // Values that have been coalesced with an other value.
-    Vector<Tmp, 0, UnsafeVectorOverflow> m_coalescedTmps;
+    Vector<IndexType, 0, UnsafeVectorOverflow> m_coalescedTmps;
 
     // The stack of Tmp removed from the graph and ready for coloring.
     BitVector m_isOnSelectStack;
-    Vector<Tmp> m_selectStack;
+    Vector<IndexType> m_selectStack;
 
     struct OrderedMoveSet {
         unsigned addMove()
@@ -914,155 +613,571 @@
     // Set of "move" not yet ready for coalescing.
     BitVector m_activeMoves;
     // Low-degree, non-Move related.
-    Vector<Tmp> m_simplifyWorklist;
+    Vector<IndexType> m_simplifyWorklist;
     // High-degree Tmp.
-    HashSet<Tmp> m_spillWorklist;
+    HashSet<IndexType> m_spillWorklist;
     // Low-degree, Move related.
-    HashSet<Tmp> m_freezeWorklist;
+    HashSet<IndexType> m_freezeWorklist;
 
     bool m_hasSelectedSpill { false };
     bool m_hasCoalescedNonTrivialMove { false };
 
     // The mapping of Tmp to their alias for Moves that are always coalescing regardless of spilling.
-    Vector<Tmp, 0, UnsafeVectorOverflow> m_coalescedTmpsAtSpill;
+    Vector<IndexType, 0, UnsafeVectorOverflow> m_coalescedTmpsAtSpill;
 };
 
+// This performs all the tasks that are specific to a certain register type.
 template<Arg::Type type>
-bool isUselessMoveInst(const Inst& inst)
-{
-    return MoveInstHelper<type>::mayBeCoalescable(inst) && inst.args[0].tmp() == inst.args[1].tmp();
-}
+class ColoringAllocator : public AbstractColoringAllocator<unsigned> {
+public:
+    ColoringAllocator(Code& code, const UseCounts<Tmp>& useCounts, const HashSet<unsigned>& unspillableTmp)
+        : AbstractColoringAllocator<unsigned>(regsInPriorityOrder(type), AbsoluteTmpMapper<type>::lastMachineRegisterIndex(), tmpArraySize(code))
+        , m_code(code)
+        , m_useCounts(useCounts)
+        , m_unspillableTmps(unspillableTmp)
+    {
+        initializePrecoloredTmp();
+        build();
+        allocate();
+    }
 
-template<Arg::Type type>
-void assignRegisterToTmpInProgram(Code& code, const IteratedRegisterCoalescingAllocator<type>& allocator)
-{
-    for (BasicBlock* block : code) {
-        // Give Tmp a valid register.
-        for (unsigned instIndex = 0; instIndex < block->size(); ++instIndex) {
-            Inst& inst = block->at(instIndex);
-            inst.forEachTmpFast([&] (Tmp& tmp) {
-                if (tmp.isReg() || tmp.isGP() == (type != Arg::GP))
-                    return;
+    Tmp getAlias(Tmp tmp) const
+    {
+        return AbsoluteTmpMapper<type>::tmpFromAbsoluteIndex(getAlias(AbsoluteTmpMapper<type>::absoluteIndex(tmp)));
+    }
 
-                Tmp aliasTmp = allocator.getAlias(tmp);
-                Tmp assignedTmp;
-                if (aliasTmp.isReg())
-                    assignedTmp = Tmp(aliasTmp.reg());
-                else {
-                    auto reg = allocator.allocatedReg(aliasTmp);
-                    ASSERT(reg);
-                    assignedTmp = Tmp(reg);
-                }
-                ASSERT(assignedTmp.isReg());
-                tmp = assignedTmp;
-            });
+    bool isUselessMove(const Inst& inst) const
+    {
+        return mayBeCoalescable(inst) && inst.args[0].tmp() == inst.args[1].tmp();
+    }
+
+    Tmp getAliasWhenSpilling(Tmp tmp) const
+    {
+        ASSERT_WITH_MESSAGE(!m_spilledTmps.isEmpty(), "This function is only valid for coalescing during spilling.");
+
+        if (m_coalescedTmpsAtSpill.isEmpty())
+            return tmp;
+
+        unsigned aliasIndex = AbsoluteTmpMapper<type>::absoluteIndex(tmp);
+        while (unsigned nextAliasIndex = m_coalescedTmpsAtSpill[aliasIndex])
+            aliasIndex = nextAliasIndex;
+
+        Tmp alias = AbsoluteTmpMapper<type>::tmpFromAbsoluteIndex(aliasIndex);
+
+        ASSERT_WITH_MESSAGE(!m_spilledTmps.contains(aliasIndex) || alias == tmp, "The aliases at spill should always be colorable. Something went horribly wrong.");
+
+        return alias;
+    }
+
+    template<typename IndexIterator>
+    class IndexToTmpIteratorAdaptor {
+    public:
+        IndexToTmpIteratorAdaptor(IndexIterator&& indexIterator)
+            : m_indexIterator(WTF::move(indexIterator))
+        {
         }
 
-        // Remove all the useless moves we created in this block.
-        block->insts().removeAllMatching(isUselessMoveInst<type>);
+        Tmp operator*() const { return AbsoluteTmpMapper<type>::tmpFromAbsoluteIndex(*m_indexIterator); }
+        IndexToTmpIteratorAdaptor& operator++() { ++m_indexIterator; return *this; }
+
+        bool operator==(const IndexToTmpIteratorAdaptor& other) const
+        {
+            return m_indexIterator == other.m_indexIterator;
+        }
+
+        bool operator!=(const IndexToTmpIteratorAdaptor& other) const
+        {
+            return !(*this == other);
+        }
+
+    private:
+        IndexIterator m_indexIterator;
+    };
+
+    template<typename Collection>
+    class IndexToTmpIterableAdaptor {
+    public:
+        IndexToTmpIterableAdaptor(const Collection& collection)
+            : m_collection(collection)
+        {
+        }
+
+        IndexToTmpIteratorAdaptor<typename Collection::const_iterator> begin() const
+        {
+            return m_collection.begin();
+        }
+
+        IndexToTmpIteratorAdaptor<typename Collection::const_iterator> end() const
+        {
+            return m_collection.end();
+        }
+
+    private:
+        const Collection& m_collection;
+    };
+
+    IndexToTmpIterableAdaptor<Vector<unsigned>> spilledTmps() const { return m_spilledTmps; }
+
+    bool requiresSpilling() const { return !m_spilledTmps.isEmpty(); }
+
+    Reg allocatedReg(Tmp tmp) const
+    {
+        ASSERT(!tmp.isReg());
+        ASSERT(m_coloredTmp.size());
+        ASSERT(tmp.isGP() == (type == Arg::GP));
+
+        Reg reg = m_coloredTmp[AbsoluteTmpMapper<type>::absoluteIndex(tmp)];
+        if (!reg) {
+            // We only care about Tmps that interfere. A Tmp that never interferes with anything
+            // can take any register.
+            reg = regsInPriorityOrder(type).first();
+        }
+        return reg;
     }
-}
 
-template<Arg::Type type>
-void addSpillAndFillToProgram(Code& code, const IteratedRegisterCoalescingAllocator<type>& allocator, HashSet<Tmp>& unspillableTmp)
-{
-    const HashSet<Tmp>& spilledTmp = allocator.spilledTmp();
+private:
+    static unsigned tmpArraySize(Code& code)
+    {
+        unsigned numTmps = code.numTmps(type);
+        return AbsoluteTmpMapper<type>::absoluteIndex(numTmps);
+    }
 
-    // All the spilled values become unspillable.
-    unspillableTmp.add(spilledTmp.begin(), spilledTmp.end());
+    void initializePrecoloredTmp()
+    {
+        m_coloredTmp.resize(m_lastPrecoloredRegisterIndex + 1);
+        for (unsigned i = 1; i <= m_lastPrecoloredRegisterIndex; ++i) {
+            Tmp tmp = AbsoluteTmpMapper<type>::tmpFromAbsoluteIndex(i);
+            ASSERT(tmp.isReg());
+            m_coloredTmp[i] = tmp.reg();
+        }
+    }
 
-    // Allocate stack slot for each spilled value.
-    HashMap<Tmp, StackSlot*> stackSlots;
-    for (Tmp tmp : spilledTmp) {
-        bool isNewTmp = stackSlots.add(tmp, code.addStackSlot(8, StackSlotKind::Anonymous)).isNewEntry;
-        ASSERT_UNUSED(isNewTmp, isNewTmp);
+    void build()
+    {
+        TmpLiveness<type> liveness(m_code);
+        for (BasicBlock* block : m_code) {
+            typename TmpLiveness<type>::LocalCalc localCalc(liveness, block);
+            for (unsigned instIndex = block->size(); instIndex--;) {
+                Inst& inst = block->at(instIndex);
+                Inst* nextInst = instIndex + 1 < block->size() ? &block->at(instIndex + 1) : nullptr;
+                build(inst, nextInst, localCalc);
+                localCalc.execute(instIndex);
+            }
+        }
     }
 
-    // Rewrite the program to get rid of the spilled Tmp.
-    InsertionSet insertionSet(code);
-    for (BasicBlock* block : code) {
-        bool hasAliasedTmps = false;
+    void build(Inst& inst, Inst* nextInst, const typename TmpLiveness<type>::LocalCalc& localCalc)
+    {
+        inst.forEachTmpWithExtraClobberedRegs(
+            nextInst,
+            [&] (const Tmp& arg, Arg::Role role, Arg::Type argType) {
+                if (!Arg::isDef(role) || argType != type)
+                    return;
+                
+                // All the Def()s interfere with each other and with all the extra clobbered Tmps.
+                // We should not use forEachDefAndExtraClobberedTmp() here since colored Tmps
+                // do not need interference edges in our implementation.
+                inst.forEachTmp([&] (Tmp& otherArg, Arg::Role role, Arg::Type argType) {
+                    if (!Arg::isDef(role) || argType != type)
+                        return;
 
-        for (unsigned instIndex = 0; instIndex < block->size(); ++instIndex) {
-            Inst& inst = block->at(instIndex);
+                    addEdge(arg, otherArg);
+                });
+            });
 
-            // Try to replace the register use by memory use when possible.
-            for (unsigned i = 0; i < inst.args.size(); ++i) {
-                Arg& arg = inst.args[i];
-                if (arg.isTmp() && arg.type() == type && !arg.isReg()) {
-                    auto stackSlotEntry = stackSlots.find(arg.tmp());
-                    if (stackSlotEntry != stackSlots.end() && inst.admitsStack(i))
-                        arg = Arg::stack(stackSlotEntry->value);
+        if (mayBeCoalescable(inst)) {
+            // We do not want the Use() of this move to interfere with the Def(), even if it is live
+            // after the Move. If we were to add the interference edge, it would be impossible to
+            // coalesce the Move even if the two Tmp never interfere anywhere.
+            Tmp defTmp;
+            Tmp useTmp;
+            inst.forEachTmp([&defTmp, &useTmp] (Tmp& argTmp, Arg::Role role, Arg::Type) {
+                if (Arg::isDef(role))
+                    defTmp = argTmp;
+                else {
+                    ASSERT(Arg::isEarlyUse(role));
+                    useTmp = argTmp;
                 }
+            });
+            ASSERT(defTmp);
+            ASSERT(useTmp);
+
+            unsigned nextMoveIndex = m_coalescingCandidates.size();
+            m_coalescingCandidates.append({ AbsoluteTmpMapper<type>::absoluteIndex(useTmp), AbsoluteTmpMapper<type>::absoluteIndex(defTmp) });
+
+            unsigned newIndexInWorklist = m_worklistMoves.addMove();
+            ASSERT_UNUSED(newIndexInWorklist, newIndexInWorklist == nextMoveIndex);
+
+            ASSERT(nextMoveIndex <= m_activeMoves.size());
+            m_activeMoves.ensureSize(nextMoveIndex + 1);
+
+            for (const Arg& arg : inst.args) {
+                auto& list = m_moveList[AbsoluteTmpMapper<type>::absoluteIndex(arg.tmp())];
+                list.add(nextMoveIndex);
             }
 
-            // For every other case, add Load/Store as needed.
-            inst.forEachTmp([&] (Tmp& tmp, Arg::Role role, Arg::Type argType) {
-                if (tmp.isReg() || argType != type)
-                    return;
+            for (const Tmp& liveTmp : localCalc.live()) {
+                if (liveTmp != useTmp)
+                    addEdge(defTmp, liveTmp);
+            }
 
-                auto stackSlotEntry = stackSlots.find(tmp);
-                if (stackSlotEntry == stackSlots.end()) {
-                    Tmp alias = allocator.getAliasWhenSpilling(tmp);
-                    if (alias != tmp) {
-                        tmp = alias;
-                        hasAliasedTmps = true;
+            // The next instruction could have early clobbers. We need to consider those now.
+            if (nextInst && nextInst->hasSpecial()) {
+                nextInst->extraEarlyClobberedRegs().forEach([&] (Reg reg) {
+                    if (reg.isGPR() == (type == Arg::GP)) {
+                        for (const Tmp& liveTmp : localCalc.live())
+                            addEdge(Tmp(reg), liveTmp);
                     }
+                });
+            }
+        } else
+            addEdges(inst, nextInst, localCalc.live());
+    }
+
+    void addEdges(Inst& inst, Inst* nextInst, typename TmpLiveness<type>::LocalCalc::Iterable liveTmps)
+    {
+        // All the Def()s interfere with everything live.
+        inst.forEachTmpWithExtraClobberedRegs(
+            nextInst,
+            [&] (const Tmp& arg, Arg::Role role, Arg::Type argType) {
+                if (!Arg::isDef(role) || argType != type)
                     return;
+                
+                for (const Tmp& liveTmp : liveTmps) {
+                    if (liveTmp.isGP() == (type == Arg::GP))
+                        addEdge(arg, liveTmp);
                 }
+            });
+    }
 
-                Arg arg = Arg::stack(stackSlotEntry->value);
-                Opcode move = type == Arg::GP ? Move : MoveDouble;
+    void addEdge(Tmp a, Tmp b)
+    {
+        ASSERT_WITH_MESSAGE(a.isGP() == b.isGP(), "An interference between registers of different types does not make sense; it can lead to non-colorable graphs.");
 
-                if (Arg::isAnyUse(role)) {
-                    Tmp newTmp = code.newTmp(type);
-                    insertionSet.insert(instIndex, move, inst.origin, arg, newTmp);
-                    tmp = newTmp;
+        addEdge(AbsoluteTmpMapper<type>::absoluteIndex(a), AbsoluteTmpMapper<type>::absoluteIndex(b));
+    }
 
-                    // Any new Fill() should never be spilled.
-                    unspillableTmp.add(tmp);
-                }
-                if (Arg::isDef(role))
-                    insertionSet.insert(instIndex + 1, move, inst.origin, tmp, arg);
-            });
+    bool mayBeCoalescable(const Inst& inst) const
+    {
+        switch (type) {
+        case Arg::GP:
+            switch (inst.opcode) {
+            case Move:
+                break;
+            default:
+                return false;
+            }
+            break;
+        case Arg::FP:
+            switch (inst.opcode) {
+            case MoveFloat:
+            case MoveDouble:
+                break;
+            default:
+                return false;
+            }
+            break;
         }
-        insertionSet.execute(block);
 
-        if (hasAliasedTmps)
-            block->insts().removeAllMatching(isUselessMoveInst<type>);
+        ASSERT_WITH_MESSAGE(inst.args.size() == 2, "We assume in a few places that coalescable moves only have two arguments.");
+
+        if (!inst.args[0].isTmp() || !inst.args[1].isTmp())
+            return false;
+
+        ASSERT(inst.args[0].type() == type);
+        ASSERT(inst.args[1].type() == type);
+
+        return true;
     }
-}
 
-template<Arg::Type type>
-void iteratedRegisterCoalescingOnType(
-    Code& code, const UseCounts<Tmp>& useCounts, unsigned& numIterations)
-{
-    HashSet<Tmp> unspillableTmps;
-    while (true) {
-        numIterations++;
-        IteratedRegisterCoalescingAllocator<type> allocator(code, useCounts, unspillableTmps);
-        if (allocator.spilledTmp().isEmpty()) {
-            assignRegisterToTmpInProgram(code, allocator);
-            return;
+    void selectSpill()
+    {
+        if (!m_hasSelectedSpill) {
+            m_hasSelectedSpill = true;
+
+            if (m_hasCoalescedNonTrivialMove)
+                m_coalescedTmpsAtSpill = m_coalescedTmps;
         }
-        addSpillAndFillToProgram<type>(code, allocator, unspillableTmps);
+
+        auto iterator = m_spillWorklist.begin();
+
+        while (iterator != m_spillWorklist.end() && m_unspillableTmps.contains(*iterator))
+            ++iterator;
+
+        RELEASE_ASSERT_WITH_MESSAGE(iterator != m_spillWorklist.end(), "It is not possible to color the Air graph with the number of available registers.");
+
+        // Higher score means more desirable to spill. Lower scores maximize the likelihood that a tmp
+        // gets a register.
+        auto score = [&] (Tmp tmp) -> double {
+            // Air exposes the concept of "fast tmps", and we interpret that to mean that the tmp
+            // should always be in a register.
+            if (m_code.isFastTmp(tmp))
+                return 0;
+            
+            // All else being equal, the score should be directly related to the degree.
+            double degree = static_cast<double>(m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(tmp)]);
+
+            // All else being equal, the score should be inversely related to the number of warm uses and
+            // defs.
+            const UseCounts<Tmp>::Counts& counts = m_useCounts[tmp];
+            double uses = counts.numWarmUses + counts.numDefs;
+
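+            // Illustrative numbers (not from this patch): degree 30 with 3 warm uses/defs
+            // scores 10, while degree 8 with 4 warm uses/defs scores 2, so the former is
+            // the more attractive spill candidate.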
+            return degree / uses;
+        };
+
+        auto victimIterator = iterator;
+        double maxScore = score(AbsoluteTmpMapper<type>::tmpFromAbsoluteIndex(*iterator));
+
+        ++iterator;
+        for (; iterator != m_spillWorklist.end(); ++iterator) {
+            double tmpScore = score(AbsoluteTmpMapper<type>::tmpFromAbsoluteIndex(*iterator));
+            if (tmpScore > maxScore) {
+                if (m_unspillableTmps.contains(*iterator))
+                    continue;
+
+                victimIterator = iterator;
+                maxScore = tmpScore;
+            }
+        }
+
+        unsigned victimIndex = *victimIterator;
+        m_spillWorklist.remove(victimIterator);
+        m_simplifyWorklist.append(victimIndex);
+
+        freezeMoves(victimIndex);
     }
-}
 
+    void allocate()
+    {
+        ASSERT_WITH_MESSAGE(m_activeMoves.size() >= m_coalescingCandidates.size(), "The activeMove set should be big enough for the quick operations of BitVector.");
+
+        makeWorkList();
+
+        if (debug) {
+            dataLog("Interference: ", listDump(m_interferenceEdges), "\n");
+            dumpInterferenceGraphInDot(WTF::dataFile());
+            dataLog("Initial work list\n");
+            dumpWorkLists(WTF::dataFile());
+        }
+
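+        // Classic iterated-register-coalescing driver: simplify whenever possible, then try to
+        // coalesce a move, then freeze a move-related Tmp, and only select a spill candidate
+        // when nothing else applies.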
+        do {
+            if (traceDebug) {
+                dataLog("Before Graph simplification iteration\n");
+                dumpWorkLists(WTF::dataFile());
+            }
+
+            if (!m_simplifyWorklist.isEmpty())
+                simplify();
+            else if (!m_worklistMoves.isEmpty())
+                coalesce();
+            else if (!m_freezeWorklist.isEmpty())
+                freeze();
+            else if (!m_spillWorklist.isEmpty())
+                selectSpill();
+
+            if (traceDebug) {
+                dataLog("After Graph simplification iteration\n");
+                dumpWorkLists(WTF::dataFile());
+            }
+        } while (!m_simplifyWorklist.isEmpty() || !m_worklistMoves.isEmpty() || !m_freezeWorklist.isEmpty() || !m_spillWorklist.isEmpty());
+
+        assignColors();
+    }
+
+#if PLATFORM(COCOA)
+#pragma mark - Debugging helpers.
+#endif
+
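+    // Dumps the interference graph in Graphviz DOT syntax; the output can be rendered with a
+    // command such as "dot -Tpdf" (assuming Graphviz is available).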
+    void dumpInterferenceGraphInDot(PrintStream& out)
+    {
+        out.print("graph InterferenceGraph { \n");
+
+        HashSet<Tmp> tmpsWithInterferences;
+        for (const auto& edge : m_interferenceEdges) {
+            tmpsWithInterferences.add(AbsoluteTmpMapper<type>::tmpFromAbsoluteIndex(edge.first()));
+            tmpsWithInterferences.add(AbsoluteTmpMapper<type>::tmpFromAbsoluteIndex(edge.second()));
+        }
+
+        for (const auto& tmp : tmpsWithInterferences)
+            out.print("    ", tmp.internalValue(), " [label=\"", tmp, " (", m_degrees[AbsoluteTmpMapper<type>::absoluteIndex(tmp)], ")\"];\n");
+
+        for (const auto& edge : m_interferenceEdges)
+            out.print("    ", edge.first(), " -- ", edge.second(), ";\n");
+        out.print("}\n");
+    }
+
+    void dumpWorkLists(PrintStream& out)
+    {
+        out.print("Simplify work list:\n");
+        for (unsigned tmpIndex : m_simplifyWorklist)
+            out.print("    ", AbsoluteTmpMapper<type>::tmpFromAbsoluteIndex(tmpIndex), "\n");
+        out.printf("Moves work list is empty? %d\n", m_worklistMoves.isEmpty());
+        out.print("Freeze work list:\n");
+        for (unsigned tmpIndex : m_freezeWorklist)
+            out.print("    ", AbsoluteTmpMapper<type>::tmpFromAbsoluteIndex(tmpIndex), "\n");
+        out.print("Spill work list:\n");
+        for (unsigned tmpIndex : m_spillWorklist)
+            out.print("    ", AbsoluteTmpMapper<type>::tmpFromAbsoluteIndex(tmpIndex), "\n");
+    }
+
+    using AbstractColoringAllocator<unsigned>::addEdge;
+    using AbstractColoringAllocator<unsigned>::getAlias;
+
+    Code& m_code;
+    // FIXME: spilling should not be type specific. It is only a side effect of using UseCounts.
+    const UseCounts<Tmp>& m_useCounts;
+    const HashSet<unsigned>& m_unspillableTmps;
+};
+
+class IteratedRegisterCoalescing {
+public:
+    IteratedRegisterCoalescing(Code& code)
+        : m_code(code)
+        , m_useCounts(code)
+    {
+    }
+
+    void run()
+    {
+        iteratedRegisterCoalescingOnType<Arg::GP>();
+        iteratedRegisterCoalescingOnType<Arg::FP>();
+
+        if (reportStats)
+            dataLog("Num iterations = ", m_numIterations, "\n");
+    }
+
+private:
+    template<Arg::Type type>
+    void iteratedRegisterCoalescingOnType()
+    {
+        HashSet<unsigned> unspillableTmps;
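+        // Keep re-running the coloring allocator until no Tmp has to be spilled: each failed
+        // attempt inserts spill/fill code, and the next attempt runs on the rewritten program.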
+        while (true) {
+            ++m_numIterations;
+            ColoringAllocator<type> allocator(m_code, m_useCounts, unspillableTmps);
+            if (!allocator.requiresSpilling()) {
+                assignRegistersToTmp(allocator);
+                return;
+            }
+            addSpillAndFill<type>(allocator, unspillableTmps);
+        }
+    }
+
+    template<Arg::Type type>
+    void assignRegistersToTmp(const ColoringAllocator<type>& allocator)
+    {
+        for (BasicBlock* block : m_code) {
+            // Give Tmp a valid register.
+            for (unsigned instIndex = 0; instIndex < block->size(); ++instIndex) {
+                Inst& inst = block->at(instIndex);
+                inst.forEachTmpFast([&] (Tmp& tmp) {
+                    if (tmp.isReg() || tmp.isGP() == (type != Arg::GP))
+                        return;
+
+                    Tmp aliasTmp = allocator.getAlias(tmp);
+                    Tmp assignedTmp;
+                    if (aliasTmp.isReg())
+                        assignedTmp = Tmp(aliasTmp.reg());
+                    else {
+                        auto reg = allocator.allocatedReg(aliasTmp);
+                        ASSERT(reg);
+                        assignedTmp = Tmp(reg);
+                    }
+                    ASSERT(assignedTmp.isReg());
+                    tmp = assignedTmp;
+                });
+            }
+
+            // Remove all the useless moves we created in this block.
+            block->insts().removeAllMatching([&] (const Inst& inst) {
+                return allocator.isUselessMove(inst);
+            });
+        }
+    }
+
+    template<Arg::Type type>
+    void addSpillAndFill(const ColoringAllocator<type>& allocator, HashSet<unsigned>& unspillableTmps)
+    {
+        HashMap<Tmp, StackSlot*> stackSlots;
+        for (Tmp tmp : allocator.spilledTmps()) {
+            // All the spilled values become unspillable.
+            unspillableTmps.add(AbsoluteTmpMapper<type>::absoluteIndex(tmp));
+
+            // Allocate stack slot for each spilled value.
+            bool isNewTmp = stackSlots.add(tmp, m_code.addStackSlot(8, StackSlotKind::Anonymous)).isNewEntry;
+            ASSERT_UNUSED(isNewTmp, isNewTmp);
+        }
+
+        // Rewrite the program to get rid of the spilled Tmps.
+        InsertionSet insertionSet(m_code);
+        for (BasicBlock* block : m_code) {
+            bool hasAliasedTmps = false;
+
+            for (unsigned instIndex = 0; instIndex < block->size(); ++instIndex) {
+                Inst& inst = block->at(instIndex);
+
+                // Try to replace the register use by memory use when possible.
+                for (unsigned i = 0; i < inst.args.size(); ++i) {
+                    Arg& arg = inst.args[i];
+                    if (arg.isTmp() && arg.type() == type && !arg.isReg()) {
+                        auto stackSlotEntry = stackSlots.find(arg.tmp());
+                        if (stackSlotEntry != stackSlots.end() && inst.admitsStack(i))
+                            arg = Arg::stack(stackSlotEntry->value);
+                    }
+                }
+
+                // For every other case, add Load/Store as needed.
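+                // A Use of a spilled Tmp gets a fill from its stack slot into a fresh Tmp just
+                // before the instruction; a Def gets a store back to the slot just after it.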
+                inst.forEachTmp([&] (Tmp& tmp, Arg::Role role, Arg::Type argType) {
+                    if (tmp.isReg() || argType != type)
+                        return;
+
+                    auto stackSlotEntry = stackSlots.find(tmp);
+                    if (stackSlotEntry == stackSlots.end()) {
+                        Tmp alias = allocator.getAliasWhenSpilling(tmp);
+                        if (alias != tmp) {
+                            tmp = alias;
+                            hasAliasedTmps = true;
+                        }
+                        return;
+                    }
+
+                    Arg arg = Arg::stack(stackSlotEntry->value);
+                    Opcode move = type == Arg::GP ? Move : MoveDouble;
+
+                    if (Arg::isAnyUse(role)) {
+                        Tmp newTmp = m_code.newTmp(type);
+                        insertionSet.insert(instIndex, move, inst.origin, arg, newTmp);
+                        tmp = newTmp;
+
+                        // Any new Fill() should never be spilled.
+                        unspillableTmps.add(AbsoluteTmpMapper<type>::absoluteIndex(tmp));
+                    }
+                    if (Arg::isDef(role))
+                        insertionSet.insert(instIndex + 1, move, inst.origin, tmp, arg);
+                });
+            }
+            insertionSet.execute(block);
+
+            if (hasAliasedTmps) {
+                block->insts().removeAllMatching([&] (const Inst& inst) {
+                    return allocator.isUselessMove(inst);
+                });
+            }
+        }
+    }
+
+    Code& m_code;
+    UseCounts<Tmp> m_useCounts;
+    unsigned m_numIterations { 0 };
+};
+
 } // anonymous namespace
 
 void iteratedRegisterCoalescing(Code& code)
 {
     PhaseScope phaseScope(code, "iteratedRegisterCoalescing");
 
-    unsigned numIterations = 0;
-
-    UseCounts<Tmp> useCounts(code);
-    iteratedRegisterCoalescingOnType<Arg::GP>(code, useCounts, numIterations);
-    iteratedRegisterCoalescingOnType<Arg::FP>(code, useCounts, numIterations);
-
-    if (reportStats)
-        dataLog("Num iterations = ", numIterations, "\n");
+    IteratedRegisterCoalescing iteratedRegisterCoalescing(code);
+    iteratedRegisterCoalescing.run();
 }
 
 } } } // namespace JSC::B3::Air

Modified: trunk/Source/_javascript_Core/b3/air/AirTmpInlines.h (194315 => 194316)


--- trunk/Source/_javascript_Core/b3/air/AirTmpInlines.h	2015-12-19 19:01:50 UTC (rev 194315)
+++ trunk/Source/_javascript_Core/b3/air/AirTmpInlines.h	2015-12-19 21:04:55 UTC (rev 194316)
@@ -57,6 +57,11 @@
         return absoluteIndex(Tmp::gpTmpForIndex(tmpIndex));
     }
 
+    static unsigned lastMachineRegisterIndex()
+    {
+        return absoluteIndex(Tmp(MacroAssembler::lastRegister()));
+    }
+
     static Tmp tmpFromAbsoluteIndex(unsigned tmpIndex)
     {
         return Tmp::tmpForInternalValue(tmpIndex);
@@ -77,6 +82,11 @@
         return absoluteIndex(Tmp::fpTmpForIndex(tmpIndex));
     }
 
+    static unsigned lastMachineRegisterIndex()
+    {
+        return absoluteIndex(Tmp(MacroAssembler::lastFPRegister()));
+    }
+
     static Tmp tmpFromAbsoluteIndex(unsigned tmpIndex)
     {
         return Tmp::tmpForInternalValue(-tmpIndex);
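
For context, the new lastMachineRegisterIndex() helpers expose the boundary between machine registers and ordinary Tmps in the absolute index space. A minimal sketch of how such a boundary can be used to recognize precolored nodes, assuming a helper named isPrecolored that is illustrative and not part of this patch:

    // Sketch only: absolute indices up to the boundary denote machine registers,
    // which act as precolored nodes in the interference graph.
    template<Arg::Type type>
    static bool isPrecolored(unsigned absoluteIndex)
    {
        return absoluteIndex <= AbsoluteTmpMapper<type>::lastMachineRegisterIndex();
    }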