Title: [256681] branches/safari-609-branch
Revision: 256681
Author: repst...@apple.com
Date: 2020-02-14 19:02:05 -0800 (Fri, 14 Feb 2020)

Log Message

Cherry-pick r255611. rdar://problem/59446995

    Reduce size of HashMap and HashSet
    https://bugs.webkit.org/show_bug.cgi?id=207138

    Reviewed by Yusuke Suzuki.

    Source/WTF:

    This reduces sizeof(HashMap) and sizeof(HashSet) from 24 to 8 on 64-bit systems.
    I measured that the overwhelming majority of HashMaps and HashSets remain empty for their
    entire lifetime, so I moved the table metadata (tableSize, tableSizeMask, keyCount, deletedCount)
    into the dynamically allocated memory.  This adds a branch in size() for non-empty tables
    and an additional read and write when rehashing, in exchange for fewer writes in the constructor
    and increased cache locality for everything that uses HashMap and HashSet, which is basically everything.
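    The core idea above can be sketched in a minimal standalone class (this is an
    illustration of the technique, not WTF's actual template: it stores unsigned
    buckets, uses plain calloc/free in place of HashTableMalloc, and the names
    mirror the patch's constants):

    ```cpp
    #include <cassert>
    #include <cstdlib>

    // Sketch: the four counters live immediately before the bucket array in a
    // single allocation, so the table object itself is one pointer wide.
    struct InlineMetadataTable {
        // Negative indices relative to m_table, as in the patch.
        static constexpr int tableSizeOffset = -1;
        static constexpr int tableSizeMaskOffset = -2;
        static constexpr int keyCountOffset = -3;
        static constexpr int deletedCountOffset = -4;
        static constexpr unsigned metadataSize = 4 * sizeof(unsigned);

        unsigned* m_table { nullptr }; // points past the metadata, at bucket 0

        // Getters tolerate a null table, so an empty table never allocates;
        // this is the extra branch size() pays for non-empty tables.
        unsigned tableSize() const { return m_table ? m_table[tableSizeOffset] : 0; }
        unsigned keyCount() const { return m_table ? m_table[keyCountOffset] : 0; }

        void allocate(unsigned size)
        {
            // One allocation: [metadata][buckets...]; calloc zero-fills both.
            char* memory = static_cast<char*>(std::calloc(1, metadataSize + size * sizeof(unsigned)));
            m_table = reinterpret_cast<unsigned*>(memory + metadataSize);
            m_table[tableSizeOffset] = size;
            m_table[tableSizeMaskOffset] = size - 1;
            m_table[keyCountOffset] = 0;
            m_table[deletedCountOffset] = 0;
        }

        ~InlineMetadataTable()
        {
            // Free from the true start of the allocation, as deallocateTable does.
            if (m_table)
                std::free(reinterpret_cast<char*>(m_table) - metadataSize);
        }
    };
    ```

    An empty instance is a single null pointer (matching the static_asserts the
    patch adds to the API tests), and all four counters become reachable only
    once a table is allocated.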

    * wtf/HashTable.h:
    (WTF::HashTable::~HashTable):
    (WTF::HashTable::end):
    (WTF::HashTable::end const):
    (WTF::HashTable::random):
    (WTF::HashTable::size const):
    (WTF::HashTable::capacity const):
    (WTF::HashTable::isEmpty const):
    (WTF::HashTable::reserveInitialCapacity):
    (WTF::HashTable::shouldExpand const):
    (WTF::HashTable::mustRehashInPlace const):
    (WTF::HashTable::shouldShrink const):
    (WTF::HashTable::shrink):
    (WTF::HashTable::makeIterator):
    (WTF::HashTable::makeConstIterator const):
    (WTF::HashTable::makeKnownGoodIterator):
    (WTF::HashTable::makeKnownGoodConstIterator const):
    (WTF::HashTable::tableSize const):
    (WTF::HashTable::setTableSize const):
    (WTF::HashTable::tableSizeMask const):
    (WTF::HashTable::setTableSizeMask):
    (WTF::HashTable::keyCount const):
    (WTF::HashTable::setKeyCount const):
    (WTF::HashTable::deletedCount const):
    (WTF::HashTable::setDeletedCount const):
    (WTF::KeyTraits>::HashTable):
    (WTF::KeyTraits>::inlineLookup):
    (WTF::KeyTraits>::lookupForWriting):
    (WTF::KeyTraits>::fullLookupForWriting):
    (WTF::KeyTraits>::addUniqueForInitialization):
    (WTF::KeyTraits>::add):
    (WTF::KeyTraits>::addPassingHashCode):
    (WTF::KeyTraits>::remove):
    (WTF::KeyTraits>::removeIf):
    (WTF::KeyTraits>::allocateTable):
    (WTF::KeyTraits>::deallocateTable):
    (WTF::KeyTraits>::expand):
    (WTF::KeyTraits>::shrinkToBestSize):
    (WTF::KeyTraits>::deleteReleasedWeakBuckets):
    (WTF::KeyTraits>::rehash):
    (WTF::KeyTraits>::clear):
    (WTF::KeyTraits>::swap):
    (WTF::KeyTraits>::checkTableConsistencyExceptSize const):

    Tools:

    * TestWebKitAPI/Tests/WTF/HashMap.cpp:
    (TestWebKitAPI::TEST):
    * TestWebKitAPI/Tests/WTF/HashSet.cpp:
    (TestWebKitAPI::TEST):

    git-svn-id: https://svn.webkit.org/repository/webkit/trunk@255611 268f45cc-cd09-0410-ab3c-d52691b4dbfc

Modified Paths

    branches/safari-609-branch/Source/WTF/ChangeLog
    branches/safari-609-branch/Source/WTF/wtf/HashTable.h
    branches/safari-609-branch/Tools/ChangeLog
    branches/safari-609-branch/Tools/TestWebKitAPI/Tests/WTF/HashMap.cpp
    branches/safari-609-branch/Tools/TestWebKitAPI/Tests/WTF/HashSet.cpp

Diff

Modified: branches/safari-609-branch/Source/WTF/ChangeLog (256680 => 256681)


--- branches/safari-609-branch/Source/WTF/ChangeLog	2020-02-15 03:02:02 UTC (rev 256680)
+++ branches/safari-609-branch/Source/WTF/ChangeLog	2020-02-15 03:02:05 UTC (rev 256681)
@@ -1,3 +1,134 @@
+2020-02-14  Russell Epstein  <repst...@apple.com>
+
+        Cherry-pick r255611. rdar://problem/59446995
+
+    Reduce size of HashMap and HashSet
+    https://bugs.webkit.org/show_bug.cgi?id=207138
+    
+    Reviewed by Yusuke Suzuki.
+    
+    Source/WTF:
+    
+    This reduces sizeof(HashMap) and sizeof(HashSet) from 24 to 8 on 64-bit systems.
+    I measured that the overwhelming majority of HashMaps and HashSets never see more than 0 elements,
+    so I moved the table metadata (tableSize, tableSizeMask, keyCount, deletedCount) to inside the
+    dynamically allocated memory.  This makes another branch in size() for non-empty tables
+    and an additional read and write when rehashing in exchange for fewer writes in the constructor
+    and increased cache locality of everything that uses HashMap and HashSet, which is basically everything.
+    
+    * wtf/HashTable.h:
+    (WTF::HashTable::~HashTable):
+    (WTF::HashTable::end):
+    (WTF::HashTable::end const):
+    (WTF::HashTable::random):
+    (WTF::HashTable::size const):
+    (WTF::HashTable::capacity const):
+    (WTF::HashTable::isEmpty const):
+    (WTF::HashTable::reserveInitialCapacity):
+    (WTF::HashTable::shouldExpand const):
+    (WTF::HashTable::mustRehashInPlace const):
+    (WTF::HashTable::shouldShrink const):
+    (WTF::HashTable::shrink):
+    (WTF::HashTable::makeIterator):
+    (WTF::HashTable::makeConstIterator const):
+    (WTF::HashTable::makeKnownGoodIterator):
+    (WTF::HashTable::makeKnownGoodConstIterator const):
+    (WTF::HashTable::tableSize const):
+    (WTF::HashTable::setTableSize const):
+    (WTF::HashTable::tableSizeMask const):
+    (WTF::HashTable::setTableSizeMask):
+    (WTF::HashTable::keyCount const):
+    (WTF::HashTable::setKeyCount const):
+    (WTF::HashTable::deletedCount const):
+    (WTF::HashTable::setDeletedCount const):
+    (WTF::KeyTraits>::HashTable):
+    (WTF::KeyTraits>::inlineLookup):
+    (WTF::KeyTraits>::lookupForWriting):
+    (WTF::KeyTraits>::fullLookupForWriting):
+    (WTF::KeyTraits>::addUniqueForInitialization):
+    (WTF::KeyTraits>::add):
+    (WTF::KeyTraits>::addPassingHashCode):
+    (WTF::KeyTraits>::remove):
+    (WTF::KeyTraits>::removeIf):
+    (WTF::KeyTraits>::allocateTable):
+    (WTF::KeyTraits>::deallocateTable):
+    (WTF::KeyTraits>::expand):
+    (WTF::KeyTraits>::shrinkToBestSize):
+    (WTF::KeyTraits>::deleteReleasedWeakBuckets):
+    (WTF::KeyTraits>::rehash):
+    (WTF::KeyTraits>::clear):
+    (WTF::KeyTraits>::swap):
+    (WTF::KeyTraits>::checkTableConsistencyExceptSize const):
+    
+    Tools:
+    
+    * TestWebKitAPI/Tests/WTF/HashMap.cpp:
+    (TestWebKitAPI::TEST):
+    * TestWebKitAPI/Tests/WTF/HashSet.cpp:
+    (TestWebKitAPI::TEST):
+    
+    
+    
+    git-svn-id: https://svn.webkit.org/repository/webkit/trunk@255611 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+
+    2020-02-03  Alex Christensen  <achristen...@webkit.org>
+
+            Reduce size of HashMap and HashSet
+            https://bugs.webkit.org/show_bug.cgi?id=207138
+
+            Reviewed by Yusuke Suzuki.
+
+            This reduces sizeof(HashMap) and sizeof(HashSet) from 24 to 8 on 64-bit systems.
+            I measured that the overwhelming majority of HashMaps and HashSets never see more than 0 elements,
+            so I moved the table metadata (tableSize, tableSizeMask, keyCount, deletedCount) to inside the
+            dynamically allocated memory.  This makes another branch in size() for non-empty tables
+            and an additional read and write when rehashing in exchange for fewer writes in the constructor
+            and increased cache locality of everything that uses HashMap and HashSet, which is basically everything.
+
+            * wtf/HashTable.h:
+            (WTF::HashTable::~HashTable):
+            (WTF::HashTable::end):
+            (WTF::HashTable::end const):
+            (WTF::HashTable::random):
+            (WTF::HashTable::size const):
+            (WTF::HashTable::capacity const):
+            (WTF::HashTable::isEmpty const):
+            (WTF::HashTable::reserveInitialCapacity):
+            (WTF::HashTable::shouldExpand const):
+            (WTF::HashTable::mustRehashInPlace const):
+            (WTF::HashTable::shouldShrink const):
+            (WTF::HashTable::shrink):
+            (WTF::HashTable::makeIterator):
+            (WTF::HashTable::makeConstIterator const):
+            (WTF::HashTable::makeKnownGoodIterator):
+            (WTF::HashTable::makeKnownGoodConstIterator const):
+            (WTF::HashTable::tableSize const):
+            (WTF::HashTable::setTableSize const):
+            (WTF::HashTable::tableSizeMask const):
+            (WTF::HashTable::setTableSizeMask):
+            (WTF::HashTable::keyCount const):
+            (WTF::HashTable::setKeyCount const):
+            (WTF::HashTable::deletedCount const):
+            (WTF::HashTable::setDeletedCount const):
+            (WTF::KeyTraits>::HashTable):
+            (WTF::KeyTraits>::inlineLookup):
+            (WTF::KeyTraits>::lookupForWriting):
+            (WTF::KeyTraits>::fullLookupForWriting):
+            (WTF::KeyTraits>::addUniqueForInitialization):
+            (WTF::KeyTraits>::add):
+            (WTF::KeyTraits>::addPassingHashCode):
+            (WTF::KeyTraits>::remove):
+            (WTF::KeyTraits>::removeIf):
+            (WTF::KeyTraits>::allocateTable):
+            (WTF::KeyTraits>::deallocateTable):
+            (WTF::KeyTraits>::expand):
+            (WTF::KeyTraits>::shrinkToBestSize):
+            (WTF::KeyTraits>::deleteReleasedWeakBuckets):
+            (WTF::KeyTraits>::rehash):
+            (WTF::KeyTraits>::clear):
+            (WTF::KeyTraits>::swap):
+            (WTF::KeyTraits>::checkTableConsistencyExceptSize const):
+
 2020-02-11  Alan Coon  <alanc...@apple.com>
 
         Cherry-pick r255846. rdar://problem/59299151

Modified: branches/safari-609-branch/Source/WTF/wtf/HashTable.h (256680 => 256681)


--- branches/safari-609-branch/Source/WTF/wtf/HashTable.h	2020-02-15 03:02:02 UTC (rev 256680)
+++ branches/safari-609-branch/Source/WTF/wtf/HashTable.h	2020-02-15 03:02:05 UTC (rev 256681)
@@ -366,7 +366,7 @@
         {
             invalidateIterators(); 
             if (m_table)
-                deallocateTable(m_table, m_tableSize);
+                deallocateTable(m_table);
 #if CHECK_HASHTABLE_USE_AFTER_DESTRUCTION
             m_table = (ValueType*)(uintptr_t)0xbbadbeef;
 #endif
@@ -383,9 +383,9 @@
         // This is more efficient because we don't have to skip all the empty and deleted
         // buckets, and iterating an empty table is a common case that's worth optimizing.
         iterator begin() { return isEmpty() ? end() : makeIterator(m_table); }
-        iterator end() { return makeKnownGoodIterator(m_table + m_tableSize); }
+        iterator end() { return makeKnownGoodIterator(m_table + tableSize()); }
         const_iterator begin() const { return isEmpty() ? end() : makeConstIterator(m_table); }
-        const_iterator end() const { return makeKnownGoodConstIterator(m_table + m_tableSize); }
+        const_iterator end() const { return makeKnownGoodConstIterator(m_table + tableSize()); }
 
         iterator random()
         {
@@ -393,7 +393,7 @@
                 return end();
 
             while (1) {
-                auto& bucket = m_table[weakRandomUint32() & m_tableSizeMask];
+                auto& bucket = m_table[weakRandomUint32() & tableSizeMask()];
                 if (!isEmptyOrDeletedBucket(bucket))
                     return makeKnownGoodIterator(&bucket);
             };
@@ -401,22 +401,23 @@
 
         const_iterator random() const { return static_cast<const_iterator>(const_cast<HashTable*>(this)->random()); }
 
-        unsigned size() const { return m_keyCount; }
-        unsigned capacity() const { return m_tableSize; }
-        bool isEmpty() const { return !m_keyCount; }
+        unsigned size() const { return keyCount(); }
+        unsigned capacity() const { return tableSize(); }
+        bool isEmpty() const { return !keyCount(); }
 
         void reserveInitialCapacity(unsigned keyCount)
         {
             ASSERT(!m_table);
-            ASSERT(!m_tableSize);
-            ASSERT(!m_deletedCount);
+            ASSERT(!tableSize());
 
             unsigned minimumTableSize = KeyTraits::minimumTableSize;
             unsigned newTableSize = std::max(minimumTableSize, computeBestTableSize(keyCount));
 
-            m_tableSize = newTableSize;
-            m_tableSizeMask = newTableSize - 1;
             m_table = allocateTable(newTableSize);
+            setTableSize(newTableSize);
+            setTableSizeMask(newTableSize - 1);
+            setDeletedCount(0);
+            setKeyCount(0);
         }
 
         AddResult add(const ValueType& value) { return add<IdentityTranslatorType>(Extractor::extract(value), value); }
@@ -468,7 +469,7 @@
 
     private:
         static ValueType* allocateTable(unsigned size);
-        static void deallocateTable(ValueType* table, unsigned size);
+        static void deallocateTable(ValueType* table);
 
         typedef std::pair<ValueType*, bool> LookupType;
         typedef std::pair<LookupType, unsigned> FullLookupType;
@@ -486,11 +487,11 @@
         void remove(ValueType*);
 
         static constexpr unsigned computeBestTableSize(unsigned keyCount);
-        bool shouldExpand() const { return (m_keyCount + m_deletedCount) * m_maxLoad >= m_tableSize; }
-        bool mustRehashInPlace() const { return m_keyCount * m_minLoad < m_tableSize * 2; }
-        bool shouldShrink() const { return m_keyCount * m_minLoad < m_tableSize && m_tableSize > KeyTraits::minimumTableSize; }
+        bool shouldExpand() const { return (keyCount() + deletedCount()) * maxLoad >= tableSize(); }
+        bool mustRehashInPlace() const { return keyCount() * minLoad < tableSize() * 2; }
+        bool shouldShrink() const { return keyCount() * minLoad < tableSize() && tableSize() > KeyTraits::minimumTableSize; }
         ValueType* expand(ValueType* entry = nullptr);
-        void shrink() { rehash(m_tableSize / 2, nullptr); }
+        void shrink() { rehash(tableSize() / 2, nullptr); }
         void shrinkToBestSize();
     
         void deleteReleasedWeakBuckets();
@@ -504,10 +505,10 @@
         FullLookupType makeLookupResult(ValueType* position, bool found, unsigned hash)
             { return FullLookupType(LookupType(position, found), hash); }
 
-        iterator makeIterator(ValueType* pos) { return iterator(this, pos, m_table + m_tableSize); }
-        const_iterator makeConstIterator(ValueType* pos) const { return const_iterator(this, pos, m_table + m_tableSize); }
-        iterator makeKnownGoodIterator(ValueType* pos) { return iterator(this, pos, m_table + m_tableSize, HashItemKnownGood); }
-        const_iterator makeKnownGoodConstIterator(ValueType* pos) const { return const_iterator(this, pos, m_table + m_tableSize, HashItemKnownGood); }
+        iterator makeIterator(ValueType* pos) { return iterator(this, pos, m_table + tableSize()); }
+        const_iterator makeConstIterator(ValueType* pos) const { return const_iterator(this, pos, m_table + tableSize()); }
+        iterator makeKnownGoodIterator(ValueType* pos) { return iterator(this, pos, m_table + tableSize(), HashItemKnownGood); }
+        const_iterator makeKnownGoodConstIterator(ValueType* pos) const { return const_iterator(this, pos, m_table + tableSize(), HashItemKnownGood); }
 
 #if !ASSERT_DISABLED
         void checkTableConsistencyExceptSize() const;
@@ -521,15 +522,26 @@
         static void invalidateIterators() { }
 #endif
 
-        static constexpr unsigned m_maxLoad = 2;
-        static constexpr unsigned m_minLoad = 6;
+        static constexpr unsigned maxLoad = 2;
+        static constexpr unsigned minLoad = 6;
 
-        ValueType* m_table;
-        unsigned m_tableSize;
-        unsigned m_tableSizeMask;
-        unsigned m_keyCount;
-        unsigned m_deletedCount;
+        static constexpr int tableSizeOffset = -1;
+        static constexpr int tableSizeMaskOffset = -2;
+        static constexpr int keyCountOffset = -3;
+        static constexpr int deletedCountOffset = -4;
+        static constexpr unsigned metadataSize = 4 * sizeof(unsigned);
 
+        unsigned tableSize() const { return m_table ? reinterpret_cast<unsigned*>(m_table)[tableSizeOffset] : 0; }
+        void setTableSize(unsigned size) const { ASSERT(m_table); reinterpret_cast<unsigned*>(m_table)[tableSizeOffset] = size; }
+        unsigned tableSizeMask() const { ASSERT(m_table); return m_table ? reinterpret_cast<unsigned*>(m_table)[tableSizeMaskOffset] : 0; }
+        void setTableSizeMask(unsigned mask) { ASSERT(m_table); reinterpret_cast<unsigned*>(m_table)[tableSizeMaskOffset] = mask; }
+        unsigned keyCount() const { return m_table ? reinterpret_cast<unsigned*>(m_table)[keyCountOffset] : 0; }
+        void setKeyCount(unsigned count) const { ASSERT(m_table); reinterpret_cast<unsigned*>(m_table)[keyCountOffset] = count; }
+        unsigned deletedCount() const { ASSERT(m_table); return reinterpret_cast<unsigned*>(m_table)[deletedCountOffset]; }
+        void setDeletedCount(unsigned count) const { ASSERT(m_table); reinterpret_cast<unsigned*>(m_table)[deletedCountOffset] = count; }
+
+        ValueType* m_table { nullptr };
+
 #if CHECK_HASHTABLE_ITERATORS
     public:
         // All access to m_iterators should be guarded with m_mutex.
@@ -584,11 +596,7 @@
 
     template<typename Key, typename Value, typename Extractor, typename HashFunctions, typename Traits, typename KeyTraits>
     inline HashTable<Key, Value, Extractor, HashFunctions, Traits, KeyTraits>::HashTable()
-        : m_table(0)
-        , m_tableSize(0)
-        , m_tableSizeMask(0)
-        , m_keyCount(0)
-        , m_deletedCount(0)
+        : m_table(nullptr)
 #if CHECK_HASHTABLE_ITERATORS
         , m_iterators(0)
         , m_mutex(makeUnique<Lock>())
@@ -649,14 +657,14 @@
         checkKey<HashTranslator>(key);
 
         unsigned k = 0;
-        unsigned sizeMask = m_tableSizeMask;
         ValueType* table = m_table;
+        if (!table)
+            return nullptr;
+
+        unsigned sizeMask = tableSizeMask();
         unsigned h = HashTranslator::hash(key);
         unsigned i = h & sizeMask;
 
-        if (!table)
-            return 0;
-
 #if DUMP_HASHTABLE_STATS
         ++HashTableStats::numAccesses;
         unsigned probeCount = 0;
@@ -707,7 +715,7 @@
 
         unsigned k = 0;
         ValueType* table = m_table;
-        unsigned sizeMask = m_tableSizeMask;
+        unsigned sizeMask = tableSizeMask();
         unsigned h = HashTranslator::hash(key);
         unsigned i = h & sizeMask;
 
@@ -768,7 +776,7 @@
 
         unsigned k = 0;
         ValueType* table = m_table;
-        unsigned sizeMask = m_tableSizeMask;
+        unsigned sizeMask = tableSizeMask();
         unsigned h = HashTranslator::hash(key);
         unsigned i = h & sizeMask;
 
@@ -834,7 +842,7 @@
 
         unsigned k = 0;
         ValueType* table = m_table;
-        unsigned sizeMask = m_tableSizeMask;
+        unsigned sizeMask = tableSizeMask();
         unsigned h = HashTranslator::hash(key);
         unsigned i = h & sizeMask;
 
@@ -915,7 +923,7 @@
 
         unsigned k = 0;
         ValueType* table = m_table;
-        unsigned sizeMask = m_tableSizeMask;
+        unsigned sizeMask = tableSizeMask();
         unsigned h = HashTranslator::hash(key);
         unsigned i = h & sizeMask;
 
@@ -969,11 +977,11 @@
         if (deletedEntry) {
             initializeBucket(*deletedEntry);
             entry = deletedEntry;
-            --m_deletedCount; 
+            setDeletedCount(deletedCount() - 1);
         }
 
         HashTranslator::translate(*entry, std::forward<T>(key), std::forward<Extra>(extra));
-        ++m_keyCount;
+        setKeyCount(keyCount() + 1);
         
         if (shouldExpand())
             entry = expand(entry);
@@ -1007,11 +1015,11 @@
         
         if (isDeletedBucket(*entry)) {
             initializeBucket(*entry);
-            --m_deletedCount;
+            setDeletedCount(deletedCount() - 1);
         }
 
         HashTranslator::translate(*entry, std::forward<T>(key), std::forward<Extra>(extra), h);
-        ++m_keyCount;
+        setKeyCount(keyCount() + 1);
 
         if (shouldExpand())
             entry = expand(entry);
@@ -1105,8 +1113,8 @@
 #endif
 
         deleteBucket(*pos);
-        ++m_deletedCount;
-        --m_keyCount;
+        setDeletedCount(deletedCount() + 1);
+        setKeyCount(keyCount() - 1);
 
         if (shouldShrink())
             shrink();
@@ -1157,7 +1165,7 @@
         unsigned removedBucketCount = 0;
         ValueType* table = m_table;
 
-        for (unsigned i = m_tableSize; i--;) {
+        for (unsigned i = tableSize(); i--;) {
             ValueType& bucket = table[i];
             if (isEmptyOrDeletedBucket(bucket))
                 continue;
@@ -1168,8 +1176,10 @@
             deleteBucket(bucket);
             ++removedBucketCount;
         }
-        m_deletedCount += removedBucketCount;
-        m_keyCount -= removedBucketCount;
+        if (removedBucketCount) {
+            setDeletedCount(deletedCount() + removedBucketCount);
+            setKeyCount(keyCount() - removedBucketCount);
+        }
 
         if (shouldShrink())
             shrinkToBestSize();
@@ -1181,12 +1191,14 @@
     template<typename Key, typename Value, typename Extractor, typename HashFunctions, typename Traits, typename KeyTraits>
     auto HashTable<Key, Value, Extractor, HashFunctions, Traits, KeyTraits>::allocateTable(unsigned size) -> ValueType*
     {
+        static_assert(!(metadataSize % alignof(ValueType)));
+
         // would use a template member function with explicit specializations here, but
         // gcc doesn't appear to support that
         if (Traits::emptyValueIsZero)
-            return static_cast<ValueType*>(HashTableMalloc::zeroedMalloc(size * sizeof(ValueType)));
+            return reinterpret_cast<ValueType*>(static_cast<char*>(HashTableMalloc::zeroedMalloc(metadataSize + size * sizeof(ValueType))) + metadataSize);
 
-        ValueType* result = static_cast<ValueType*>(HashTableMalloc::malloc(size * sizeof(ValueType)));
+        ValueType* result = reinterpret_cast<ValueType*>(static_cast<char*>(HashTableMalloc::malloc(metadataSize + size * sizeof(ValueType))) + metadataSize);
         for (unsigned i = 0; i < size; i++)
             initializeBucket(result[i]);
         return result;
@@ -1193,13 +1205,14 @@
     }
 
     template<typename Key, typename Value, typename Extractor, typename HashFunctions, typename Traits, typename KeyTraits>
-    void HashTable<Key, Value, Extractor, HashFunctions, Traits, KeyTraits>::deallocateTable(ValueType* table, unsigned size)
+    void HashTable<Key, Value, Extractor, HashFunctions, Traits, KeyTraits>::deallocateTable(ValueType* table)
     {
+        unsigned size = reinterpret_cast<unsigned*>(table)[tableSizeOffset];
         for (unsigned i = 0; i < size; ++i) {
             if (!isDeletedBucket(table[i]))
                 table[i].~ValueType();
         }
-        HashTableMalloc::free(table);
+        HashTableMalloc::free(reinterpret_cast<char*>(table) - metadataSize);
     }
 
     template<typename Key, typename Value, typename Extractor, typename HashFunctions, typename Traits, typename KeyTraits>
@@ -1209,12 +1222,13 @@
             deleteReleasedWeakBuckets();
 
         unsigned newSize;
-        if (m_tableSize == 0)
+        unsigned oldSize = tableSize();
+        if (!oldSize)
             newSize = KeyTraits::minimumTableSize;
         else if (mustRehashInPlace())
-            newSize = m_tableSize;
+            newSize = oldSize;
         else
-            newSize = m_tableSize * 2;
+            newSize = oldSize * 2;
 
         return rehash(newSize, entry);
     }
@@ -1239,18 +1253,19 @@
     void HashTable<Key, Value, Extractor, HashFunctions, Traits, KeyTraits>::shrinkToBestSize()
     {
         unsigned minimumTableSize = KeyTraits::minimumTableSize;
-        rehash(std::max(minimumTableSize, computeBestTableSize(m_keyCount)), nullptr);
+        rehash(std::max(minimumTableSize, computeBestTableSize(keyCount())), nullptr);
     }
 
     template<typename Key, typename Value, typename Extractor, typename HashFunctions, typename Traits, typename KeyTraits>
     void HashTable<Key, Value, Extractor, HashFunctions, Traits, KeyTraits>::deleteReleasedWeakBuckets()
     {
-        for (unsigned i = 0; i < m_tableSize; ++i) {
+        unsigned tableSize = this->tableSize();
+        for (unsigned i = 0; i < tableSize; ++i) {
             auto& entry = m_table[i];
             if (isReleasedWeakBucket(entry)) {
                 deleteBucket(entry);
-                ++m_deletedCount;
-                --m_keyCount;
+                setDeletedCount(deletedCount() + 1);
+                setKeyCount(keyCount() - 1);
             }
         }
     }
@@ -1260,7 +1275,7 @@
     {
         internalCheckTableConsistencyExceptSize();
 
-        unsigned oldTableSize = m_tableSize;
+        unsigned oldTableSize = tableSize();
         ValueType* oldTable = m_table;
 
 #if DUMP_HASHTABLE_STATS
@@ -1273,9 +1288,12 @@
             ++m_stats->numRehashes;
 #endif
 
-        m_tableSize = newTableSize;
-        m_tableSizeMask = newTableSize - 1;
+        unsigned oldKeyCount = keyCount();
         m_table = allocateTable(newTableSize);
+        setTableSize(newTableSize);
+        setTableSizeMask(newTableSize - 1);
+        setDeletedCount(0);
+        setKeyCount(oldKeyCount);
 
         Value* newEntry = nullptr;
         for (unsigned i = 0; i != oldTableSize; ++i) {
@@ -1294,7 +1312,7 @@
             if (isReleasedWeakBucket(oldEntry)) {
                 ASSERT(std::addressof(oldEntry) != entry);
                 oldEntry.~ValueType();
-                --m_keyCount;
+                setKeyCount(keyCount() - 1);
                 continue;
             }
 
@@ -1306,10 +1324,9 @@
             }
         }
 
-        m_deletedCount = 0;
+        if (oldTable)
+            HashTableMalloc::free(reinterpret_cast<char*>(oldTable) - metadataSize);
 
-        HashTableMalloc::free(oldTable);
-
         internalCheckTableConsistency();
         return newEntry;
     }
@@ -1321,21 +1338,13 @@
         if (!m_table)
             return;
 
-        deallocateTable(m_table, m_tableSize);
-        m_table = 0;
-        m_tableSize = 0;
-        m_tableSizeMask = 0;
-        m_keyCount = 0;
-        m_deletedCount = 0;
+        deallocateTable(m_table);
+        m_table = nullptr;
     }
 
     template<typename Key, typename Value, typename Extractor, typename HashFunctions, typename Traits, typename KeyTraits>
     HashTable<Key, Value, Extractor, HashFunctions, Traits, KeyTraits>::HashTable(const HashTable& other)
         : m_table(nullptr)
-        , m_tableSize(0)
-        , m_tableSizeMask(0)
-        , m_keyCount(0)
-        , m_deletedCount(0)
 #if CHECK_HASHTABLE_ITERATORS
         , m_iterators(nullptr)
         , m_mutex(makeUnique<Lock>())
@@ -1348,10 +1357,12 @@
         if (!otherKeyCount)
             return;
 
-        m_tableSize = computeBestTableSize(otherKeyCount);
-        m_tableSizeMask = m_tableSize - 1;
-        m_keyCount = otherKeyCount;
-        m_table = allocateTable(m_tableSize);
+        unsigned bestTableSize = computeBestTableSize(otherKeyCount);
+        m_table = allocateTable(bestTableSize);
+        setTableSize(bestTableSize);
+        setTableSizeMask(bestTableSize - 1);
+        setKeyCount(otherKeyCount);
+        setDeletedCount(0);
 
         for (const auto& otherValue : other)
             addUniqueForInitialization<IdentityTranslatorType>(Extractor::extract(otherValue), otherValue);
@@ -1364,10 +1375,6 @@
         other.invalidateIterators();
 
         std::swap(m_table, other.m_table);
-        std::swap(m_tableSize, other.m_tableSize);
-        std::swap(m_tableSizeMask, other.m_tableSizeMask);
-        std::swap(m_keyCount, other.m_keyCount);
-        std::swap(m_deletedCount, other.m_deletedCount);
 
 #if DUMP_HASHTABLE_STATS_PER_TABLE
         m_stats.swap(other.m_stats);
@@ -1391,18 +1398,8 @@
     {
         other.invalidateIterators();
 
-        m_table = other.m_table;
-        m_tableSize = other.m_tableSize;
-        m_tableSizeMask = other.m_tableSizeMask;
-        m_keyCount = other.m_keyCount;
-        m_deletedCount = other.m_deletedCount;
+        m_table = std::exchange(other.m_table, nullptr);
 
-        other.m_table = nullptr;
-        other.m_tableSize = 0;
-        other.m_tableSizeMask = 0;
-        other.m_keyCount = 0;
-        other.m_deletedCount = 0;
-
 #if DUMP_HASHTABLE_STATS_PER_TABLE
         m_stats = WTFMove(other.m_stats);
         other.m_stats = nullptr;
@@ -1435,7 +1432,8 @@
 
         unsigned count = 0;
         unsigned deletedCount = 0;
-        for (unsigned j = 0; j < m_tableSize; ++j) {
+        unsigned tableSize = this->tableSize();
+        for (unsigned j = 0; j < tableSize; ++j) {
             ValueType* entry = m_table + j;
             if (isEmptyBucket(*entry))
                 continue;
@@ -1453,11 +1451,11 @@
             ValueCheck<Key>::checkConsistency(key);
         }
 
-        ASSERT(count == m_keyCount);
-        ASSERT(deletedCount == m_deletedCount);
-        ASSERT(m_tableSize >= KeyTraits::minimumTableSize);
-        ASSERT(m_tableSizeMask);
-        ASSERT(m_tableSize == m_tableSizeMask + 1);
+        ASSERT(count == keyCount());
+        ASSERT(deletedCount == this->deletedCount());
+        ASSERT(this->tableSize() >= KeyTraits::minimumTableSize);
+        ASSERT(tableSizeMask());
+        ASSERT(this->tableSize() == tableSizeMask() + 1);
     }
 
 #endif // ASSERT_DISABLED
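One subtlety in the allocateTable() hunk above: the buckets now start metadataSize bytes into the allocation, so the new static_assert requires metadataSize to be a multiple of alignof(ValueType) or the bucket array would be misaligned. A standalone illustration of that arithmetic (not WTF code; Pair is a hypothetical bucket type, and the 16-byte prefix size assumes the usual 4-byte unsigned):

```cpp
#include <cassert>
#include <cstddef>

// Four unsigned counters give a 16-byte prefix on typical platforms, so any
// bucket type whose alignment divides the prefix size stays properly aligned.
constexpr unsigned metadataSize = 4 * sizeof(unsigned);

template<typename Bucket>
constexpr bool bucketsStayAligned()
{
    return metadataSize % alignof(Bucket) == 0;
}

struct Pair { void* key; void* value; }; // a typical two-pointer HashMap bucket

static_assert(bucketsStayAligned<Pair>());
static_assert(bucketsStayAligned<double>());
// An over-aligned bucket (e.g. alignas(32)) would trip allocateTable()'s
// static_assert and require a larger metadata prefix.
```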

Modified: branches/safari-609-branch/Tools/ChangeLog (256680 => 256681)


--- branches/safari-609-branch/Tools/ChangeLog	2020-02-15 03:02:02 UTC (rev 256680)
+++ branches/safari-609-branch/Tools/ChangeLog	2020-02-15 03:02:05 UTC (rev 256681)
@@ -1,5 +1,90 @@
 2020-02-14  Russell Epstein  <repst...@apple.com>
 
+        Cherry-pick r255611. rdar://problem/59446995
+
+    Reduce size of HashMap and HashSet
+    https://bugs.webkit.org/show_bug.cgi?id=207138
+    
+    Reviewed by Yusuke Suzuki.
+    
+    Source/WTF:
+    
+    This reduces sizeof(HashMap) and sizeof(HashSet) from 24 to 8 on 64-bit systems.
+    I measured that the overwhelming majority of HashMaps and HashSets never see more than 0 elements,
+    so I moved the table metadata (tableSize, tableSizeMask, keyCount, deletedCount) to inside the
+    dynamically allocated memory.  This makes another branch in size() for non-empty tables
+    and an additional read and write when rehashing in exchange for fewer writes in the constructor
+    and increased cache locality of everything that uses HashMap and HashSet, which is basically everything.
+    
+    * wtf/HashTable.h:
+    (WTF::HashTable::~HashTable):
+    (WTF::HashTable::end):
+    (WTF::HashTable::end const):
+    (WTF::HashTable::random):
+    (WTF::HashTable::size const):
+    (WTF::HashTable::capacity const):
+    (WTF::HashTable::isEmpty const):
+    (WTF::HashTable::reserveInitialCapacity):
+    (WTF::HashTable::shouldExpand const):
+    (WTF::HashTable::mustRehashInPlace const):
+    (WTF::HashTable::shouldShrink const):
+    (WTF::HashTable::shrink):
+    (WTF::HashTable::makeIterator):
+    (WTF::HashTable::makeConstIterator const):
+    (WTF::HashTable::makeKnownGoodIterator):
+    (WTF::HashTable::makeKnownGoodConstIterator const):
+    (WTF::HashTable::tableSize const):
+    (WTF::HashTable::setTableSize const):
+    (WTF::HashTable::tableSizeMask const):
+    (WTF::HashTable::setTableSizeMask):
+    (WTF::HashTable::keyCount const):
+    (WTF::HashTable::setKeyCount const):
+    (WTF::HashTable::deletedCount const):
+    (WTF::HashTable::setDeletedCount const):
+    (WTF::KeyTraits>::HashTable):
+    (WTF::KeyTraits>::inlineLookup):
+    (WTF::KeyTraits>::lookupForWriting):
+    (WTF::KeyTraits>::fullLookupForWriting):
+    (WTF::KeyTraits>::addUniqueForInitialization):
+    (WTF::KeyTraits>::add):
+    (WTF::KeyTraits>::addPassingHashCode):
+    (WTF::KeyTraits>::remove):
+    (WTF::KeyTraits>::removeIf):
+    (WTF::KeyTraits>::allocateTable):
+    (WTF::KeyTraits>::deallocateTable):
+    (WTF::KeyTraits>::expand):
+    (WTF::KeyTraits>::shrinkToBestSize):
+    (WTF::KeyTraits>::deleteReleasedWeakBuckets):
+    (WTF::KeyTraits>::rehash):
+    (WTF::KeyTraits>::clear):
+    (WTF::KeyTraits>::swap):
+    (WTF::KeyTraits>::checkTableConsistencyExceptSize const):
+    
+    Tools:
+    
+    * TestWebKitAPI/Tests/WTF/HashMap.cpp:
+    (TestWebKitAPI::TEST):
+    * TestWebKitAPI/Tests/WTF/HashSet.cpp:
+    (TestWebKitAPI::TEST):
+    
+    
+    
+    git-svn-id: https://svn.webkit.org/repository/webkit/trunk@255611 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+
+    2020-02-03  Alex Christensen  <achristen...@webkit.org>
+
+            Reduce size of HashMap and HashSet
+            https://bugs.webkit.org/show_bug.cgi?id=207138
+
+            Reviewed by Yusuke Suzuki.
+
+            * TestWebKitAPI/Tests/WTF/HashMap.cpp:
+            (TestWebKitAPI::TEST):
+            * TestWebKitAPI/Tests/WTF/HashSet.cpp:
+            (TestWebKitAPI::TEST):
+
+2020-02-14  Russell Epstein  <repst...@apple.com>
+
         Cherry-pick r256395. rdar://problem/59447024
 
     Bug 207424: Crash in WebCore::ParsedContentType::parseContentType when parsing invalid MIME type

Modified: branches/safari-609-branch/Tools/TestWebKitAPI/Tests/WTF/HashMap.cpp (256680 => 256681)


--- branches/safari-609-branch/Tools/TestWebKitAPI/Tests/WTF/HashMap.cpp	2020-02-15 03:02:02 UTC (rev 256680)
+++ branches/safari-609-branch/Tools/TestWebKitAPI/Tests/WTF/HashMap.cpp	2020-02-15 03:02:05 UTC (rev 256681)
@@ -88,6 +88,9 @@
     const double negativeZeroKey = -zeroKey;
 
     DoubleHashMap map;
+#if !CHECK_HASHTABLE_ITERATORS && !DUMP_HASHTABLE_STATS_PER_TABLE
+    static_assert(sizeof(map) == sizeof(void*));
+#endif
 
     map.add(clobberKey, 1);
     map.add(zeroKey, 2);

Modified: branches/safari-609-branch/Tools/TestWebKitAPI/Tests/WTF/HashSet.cpp (256680 => 256681)


--- branches/safari-609-branch/Tools/TestWebKitAPI/Tests/WTF/HashSet.cpp	2020-02-15 03:02:02 UTC (rev 256680)
+++ branches/safari-609-branch/Tools/TestWebKitAPI/Tests/WTF/HashSet.cpp	2020-02-15 03:02:05 UTC (rev 256681)
@@ -170,6 +170,9 @@
     ConstructorDestructorCounter::TestingScope scope;
 
     HashSet<std::unique_ptr<ConstructorDestructorCounter>> set;
+#if !CHECK_HASHTABLE_ITERATORS && !DUMP_HASHTABLE_STATS_PER_TABLE
+    static_assert(sizeof(set) == sizeof(void*));
+#endif
 
     auto uniquePtr = makeUnique<ConstructorDestructorCounter>();
     ConstructorDestructorCounter* ptr = uniquePtr.get();