Title: [288342] trunk/Source/bmalloc
Revision: 288342
Author: fpi...@apple.com
Date: 2022-01-20 18:43:34 -0800 (Thu, 20 Jan 2022)

Log Message

[libpas] medium directory lookup should bail if begin_index is zero to catch races with expendable memory decommit (cherry pick 434465bfb8e0c285d6763cf6aa0e04982199f824)
https://bugs.webkit.org/show_bug.cgi?id=235280

Reviewed by Yusuke Suzuki.

I've been seeing crashes in pas_segregated_heap_ensure_allocator_index where the directory passed to
the function doesn't match the size. The most likely reason is that the medium directory lookup raced
with expendable memory decommit and returned the wrong directory. To figure out how this happens, I
added a bunch of tests to ExpendableMemoryTests. This change includes various small fixes (like
removing assertions) found by that testing, and it also includes a test and a change that I think
exactly catch what is going on:

- Expendable memory is decommitted so that the medium lookup sees begin_index == 0, but end_index
  still has its original value. This will cause it to return a tuple that is for a too-large size
  class.
- Some other thread rematerializes the expendable memory right after the medium lookup finishes, but
  before it loads the directory.
- The medium lookup finally loads the directory from the tuple, and now sees a non-NULL directory, so
  it thinks that everything is fine.

This race barely "works" since:

- Any other field in the medium tuple being zero would cause the medium lookup to fail, which would
  then send it down a slow path that rematerializes expendable memory under a lock.
- Rematerialization of expendable memory adjusts the mutation count, so this race would only go
  undetected if the rematerialization happened after the medium lookup's search but before the
  medium lookup loaded the directory.

The solution is to just have the medium lookup fail if begin_index == 0. Begin_index can never
legitimately be zero, because there's no way that a size class would want to be responsible for both
index 0 (i.e. the zero-byte object) and objects big enough to require medium lookup.
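
Roughly, the guarded lookup now has this shape (a simplified sketch with illustrative names and
types, not the actual libpas code; the real logic lives in medium_directory_tuple_for_index_impl
and validates with pas_mutation_count_matches_with_dependency rather than a plain atomic counter):

    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct {
        unsigned begin_index; /* inclusive; never legitimately zero */
        unsigned end_index;   /* inclusive */
        void* directory;
    } medium_tuple;

    extern _Atomic unsigned mutation_count;                      /* bumped by rematerialization */
    extern void* lookup_slow_holding_heap_lock(unsigned index);  /* rematerializes, then searches */

    void* medium_lookup(medium_tuple* tuples, size_t num_tuples, unsigned index)
    {
        unsigned saved = atomic_load(&mutation_count);

        for (size_t i = 0; i < num_tuples; ++i) {
            unsigned begin_index = tuples[i].begin_index;

            /* If decommit zeroed this tuple out from under us, begin_index reads as zero while
               end_index (and, after a racing rematerialization, even the directory) can still look
               plausible. Bail to the slow path instead of trusting the rest of the tuple. */
            if (!begin_index)
                return lookup_slow_holding_heap_lock(index);

            if (index >= begin_index && index <= tuples[i].end_index) {
                void* directory = tuples[i].directory;

                /* Only trust the result if nothing mutated the lookup tables during the search;
                   otherwise retry under the heap lock. */
                if (atomic_load(&mutation_count) == saved)
                    return directory;
                return lookup_slow_holding_heap_lock(index);
            }
        }
        return NULL;
    }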

This adds new tests. While running those new tests, I found and fixed two other bugs:

- Recomputation of the index_to_small_allocator_index table subtly mishandled the cached_index case.
  Previously, it special-cased it only when the directory was not participating in lookup tables at
  all, but it actually needs to special-case it any time the directory doesn't otherwise think that
  it should set the entry at cached_index.

- Expendable memory commit/decommit was playing fast-and-loose with version numbers. This fixes it so
  that there is a global monotonically increasing version number.
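
The versioning scheme after this fix boils down to something like the following (a minimal sketch;
the real function is pas_expendable_memory_state_version_next, which also asserts that the heap lock
is held, and the version type here is illustrative):

    #include <stdint.h>

    typedef uint64_t expendable_version;   /* illustrative; libpas packs versions into a state word */

    static expendable_version version_counter = 1;

    /* Call only with the heap lock held. Every commit and decommit transition then stamps its
       state with a strictly increasing, globally consistent version. */
    static expendable_version version_next(void)
    {
        return ++version_counter;
    }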

* libpas/src/libpas/bmalloc_heap.c:
(bmalloc_flex_heap_ref_get_heap):
(bmalloc_auxiliary_heap_ref_get_heap):
(bmalloc_get_heap):
* libpas/src/libpas/bmalloc_heap.h:
* libpas/src/libpas/pas_expendable_memory.c:
(pas_expendable_memory_state_version_next):
(pas_expendable_memory_construct):
(pas_expendable_memory_commit_if_necessary):
(scavenge_impl):
(pas_expendable_memory_scavenge):
* libpas/src/libpas/pas_expendable_memory.h:
* libpas/src/libpas/pas_scavenger.c:
(handle_expendable_memory):
(scavenger_thread_main):
(pas_scavenger_decommit_expendable_memory):
(pas_scavenger_fake_decommit_expendable_memory):
* libpas/src/libpas/pas_scavenger.h:
* libpas/src/libpas/pas_segregated_heap.c:
(medium_directory_tuple_for_index_impl):
(pas_segregated_heap_medium_directory_tuple_for_index):
(pas_segregated_heap_medium_allocator_index_for_index):
(recompute_size_lookup):
(rematerialize_size_lookup_set_medium_directory_tuple):
(pas_segregated_heap_ensure_allocator_index):
(check_size_lookup_recomputation_set_medium_directory_tuple):
(check_size_lookup_recomputation_dump_directory):
(check_size_lookup_recomputation):
(check_size_lookup_recomputation_if_appropriate):
(pas_segregated_heap_ensure_size_directory_for_size):
* libpas/src/libpas/pas_segregated_heap.h:
* libpas/src/libpas/pas_segregated_size_directory.h:
(pas_segregated_size_directory_get_tlc_allocator_index):
* libpas/src/libpas/pas_try_allocate_primitive.h:
(pas_try_allocate_primitive_impl_casual_case):
(pas_try_allocate_primitive_impl_inline_only):
* libpas/src/test/ExpendableMemoryTests.cpp:
(std::testRage):
(std::testRematerializeAfterSearchOfDecommitted):
(std::testBasicSizeClass):
(addExpendableMemoryTests):
* libpas/src/test/TestHarness.cpp:
(RuntimeConfigTestScope::RuntimeConfigTestScope):

Modified Paths

Diff

Modified: trunk/Source/bmalloc/ChangeLog (288341 => 288342)


--- trunk/Source/bmalloc/ChangeLog	2022-01-21 02:39:53 UTC (rev 288341)
+++ trunk/Source/bmalloc/ChangeLog	2022-01-21 02:43:34 UTC (rev 288342)
@@ -1,3 +1,91 @@
+2022-01-20  Filip Pizlo  <fpi...@apple.com>
+
+        [libpas] medium directory lookup should bail if begin_index is zero to catch races with expendable memory decommit (cherry pick 434465bfb8e0c285d6763cf6aa0e04982199f824)
+        https://bugs.webkit.org/show_bug.cgi?id=235280
+
+        Reviewed by Yusuke Suzuki.
+
+        I've been seeing crashes in pas_segregated_heap_ensure_allocator_index where the directory passed to
+        the function doesn't match the size. The most likely reason is that the medium directory lookup raced
+        with expendable memory decommit and returned the wrong directory. To figure out how this happens, I
+        added a bunch of tests to ExpendableMemoryTests. This change includes various small fixes (like
+        removing assertions) found by that testing, and it also includes a test and a change that I think
+        exactly catch what is going on:
+
+        - Expendable memory is decommitted so that the medium lookup sees begin_index == 0, but end_index
+          still has its original value. This will cause it to return a tuple that is for a too-large size
+          class.
+        - Some other thread rematerializes the expendable memory right after the medium lookup finishes, but
+          before it loads the directory.
+        - The medium lookup finally loads the directory from the tuple, and now sees a non-NULL directory, so
+          it thinks that everything is fine.
+
+        This race barely "works" since:
+
+        - Any other field in the medium tuple being zero would cause the medium lookup to fail, which would
+          then send it down a slow path that rematerializes expendable memory under a lock.
+        - Rematerialization of expendable memory adjusts the mutation count, so this race would only go
+          undetected if the rematerialization happened after the medium lookup's search but before the
+          medium lookup loaded the directory.
+
+        The solution is to just have the medium lookup fail if begin_index == 0. Begin_index can never
+        legitimately be zero, because there's no way that a size class would want to be responsible for both
+        index 0 (i.e. the zero-byte object) and objects big enough to require medium lookup.
+
+        This adds new tests. While running those new tests, I found and fixed two other bugs:
+
+        - Recomputation of the index_to_small_allocator_index table subtly mishandled the cached_index case.
+          Previously, it special-cased it only when the directory was not participating in lookup tables at
+          all, but it actually needs to special-case it any time the directory doesn't otherwise think that
+          it should set the entry at cached_index.
+
+        - Expendable memory commit/decommit was playing fast-and-loose with version numbers. This fixes it so
+          that there is a global monotonically increasing version number.
+
+        * libpas/src/libpas/bmalloc_heap.c:
+        (bmalloc_flex_heap_ref_get_heap):
+        (bmalloc_auxiliary_heap_ref_get_heap):
+        (bmalloc_get_heap):
+        * libpas/src/libpas/bmalloc_heap.h:
+        * libpas/src/libpas/pas_expendable_memory.c:
+        (pas_expendable_memory_state_version_next):
+        (pas_expendable_memory_construct):
+        (pas_expendable_memory_commit_if_necessary):
+        (scavenge_impl):
+        (pas_expendable_memory_scavenge):
+        * libpas/src/libpas/pas_expendable_memory.h:
+        * libpas/src/libpas/pas_scavenger.c:
+        (handle_expendable_memory):
+        (scavenger_thread_main):
+        (pas_scavenger_decommit_expendable_memory):
+        (pas_scavenger_fake_decommit_expendable_memory):
+        * libpas/src/libpas/pas_scavenger.h:
+        * libpas/src/libpas/pas_segregated_heap.c:
+        (medium_directory_tuple_for_index_impl):
+        (pas_segregated_heap_medium_directory_tuple_for_index):
+        (pas_segregated_heap_medium_allocator_index_for_index):
+        (recompute_size_lookup):
+        (rematerialize_size_lookup_set_medium_directory_tuple):
+        (pas_segregated_heap_ensure_allocator_index):
+        (check_size_lookup_recomputation_set_medium_directory_tuple):
+        (check_size_lookup_recomputation_dump_directory):
+        (check_size_lookup_recomputation):
+        (check_size_lookup_recomputation_if_appropriate):
+        (pas_segregated_heap_ensure_size_directory_for_size):
+        * libpas/src/libpas/pas_segregated_heap.h:
+        * libpas/src/libpas/pas_segregated_size_directory.h:
+        (pas_segregated_size_directory_get_tlc_allocator_index):
+        * libpas/src/libpas/pas_try_allocate_primitive.h:
+        (pas_try_allocate_primitive_impl_casual_case):
+        (pas_try_allocate_primitive_impl_inline_only):
+        * libpas/src/test/ExpendableMemoryTests.cpp:
+        (std::testRage):
+        (std::testRematerializeAfterSearchOfDecommitted):
+        (std::testBasicSizeClass):
+        (addExpendableMemoryTests):
+        * libpas/src/test/TestHarness.cpp:
+        (RuntimeConfigTestScope::RuntimeConfigTestScope):
+
 2022-01-20  Ben Nham  <n...@apple.com>
 
         Make bmalloc work better with various MallocStackLogging modes

Modified: trunk/Source/bmalloc/libpas/src/libpas/bmalloc_heap.c (288341 => 288342)


--- trunk/Source/bmalloc/libpas/src/libpas/bmalloc_heap.c	2022-01-21 02:39:53 UTC (rev 288341)
+++ trunk/Source/bmalloc/libpas/src/libpas/bmalloc_heap.c	2022-01-21 02:43:34 UTC (rev 288342)
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2019-2021 Apple Inc. All rights reserved.
+ * Copyright (c) 2019-2022 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -35,6 +35,7 @@
 #include "pas_deallocate.h"
 #include "pas_ensure_heap_forced_into_reserved_memory.h"
 #include "pas_get_allocation_size.h"
+#include "pas_get_heap.h"
 
 PAS_BEGIN_EXTERN_C;
 
@@ -271,6 +272,12 @@
     return bmalloc_reallocate_flex_inline(heap_ref, old_ptr, new_size);
 }
 
+pas_heap* bmalloc_flex_heap_ref_get_heap(pas_primitive_heap_ref* heap_ref)
+{
+    return pas_ensure_heap(&heap_ref->base, pas_primitive_heap_ref_kind,
+                           &bmalloc_heap_config, &bmalloc_flex_runtime_config.base);
+}
+
 PAS_NEVER_INLINE void* bmalloc_try_allocate_auxiliary_with_alignment_casual(
     pas_primitive_heap_ref* heap_ref, size_t size, size_t alignment)
 {
@@ -337,6 +344,12 @@
     return bmalloc_reallocate_auxiliary_inline(old_ptr, heap_ref, new_size, free_mode);
 }
 
+pas_heap* bmalloc_auxiliary_heap_ref_get_heap(pas_primitive_heap_ref* heap_ref)
+{
+    return pas_ensure_heap(&heap_ref->base, pas_primitive_heap_ref_kind,
+                           &bmalloc_heap_config, &bmalloc_primitive_runtime_config.base);
+}
+
 void bmalloc_deallocate(void* ptr)
 {
     bmalloc_deallocate_inline(ptr);
@@ -361,6 +374,11 @@
     return pas_get_allocation_size(ptr, BMALLOC_HEAP_CONFIG);
 }
 
+pas_heap* bmalloc_get_heap(void* ptr)
+{
+    return pas_get_heap(ptr, BMALLOC_HEAP_CONFIG);
+}
+
 PAS_END_EXTERN_C;
 
 #endif /* PAS_ENABLE_BMALLOC */

Modified: trunk/Source/bmalloc/libpas/src/libpas/bmalloc_heap.h (288341 => 288342)


--- trunk/Source/bmalloc/libpas/src/libpas/bmalloc_heap.h	2022-01-21 02:39:53 UTC (rev 288341)
+++ trunk/Source/bmalloc/libpas/src/libpas/bmalloc_heap.h	2022-01-21 02:43:34 UTC (rev 288342)
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2021 Apple Inc. All rights reserved.
+ * Copyright (c) 2021-2022 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -96,6 +96,8 @@
 PAS_BAPI void* bmalloc_try_reallocate_flex(pas_primitive_heap_ref* heap_ref, void* old_ptr, size_t new_size);
 PAS_BAPI void* bmalloc_reallocate_flex(pas_primitive_heap_ref* heap_ref, void* old_ptr, size_t new_size);
 
+PAS_API pas_heap* bmalloc_flex_heap_ref_get_heap(pas_primitive_heap_ref* heap_ref);
+
 PAS_API void* bmalloc_try_allocate_auxiliary(pas_primitive_heap_ref* heap_ref,
                                              size_t size);
 PAS_API void* bmalloc_allocate_auxiliary(pas_primitive_heap_ref* heap_ref,
@@ -122,6 +124,8 @@
                                            size_t new_size,
                                            pas_reallocate_free_mode free_mode);
 
+PAS_API pas_heap* bmalloc_auxiliary_heap_ref_get_heap(pas_primitive_heap_ref* heap_ref);
+
 PAS_API void bmalloc_deallocate(void*);
 
 PAS_API pas_heap* bmalloc_force_auxiliary_heap_into_reserved_memory(pas_primitive_heap_ref* heap_ref,
@@ -129,6 +133,7 @@
                                                                     uintptr_t end);
 
 PAS_BAPI size_t bmalloc_heap_ref_get_type_size(pas_heap_ref* heap_ref);
+PAS_API pas_heap* bmalloc_get_heap(void* ptr);
 PAS_BAPI size_t bmalloc_get_allocation_size(void* ptr);
 
 PAS_END_EXTERN_C;

Modified: trunk/Source/bmalloc/libpas/src/libpas/pas_expendable_memory.c (288341 => 288342)


--- trunk/Source/bmalloc/libpas/src/libpas/pas_expendable_memory.c	2022-01-21 02:39:53 UTC (rev 288341)
+++ trunk/Source/bmalloc/libpas/src/libpas/pas_expendable_memory.c	2022-01-21 02:43:34 UTC (rev 288342)
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2021 Apple Inc. All rights reserved.
+ * Copyright (c) 2021-2022 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -33,6 +33,17 @@
 #include "pas_heap_lock.h"
 #include "pas_page_malloc.h"
 
+pas_expendable_memory_state_version pas_expendable_memory_version_counter = 1;
+
+pas_expendable_memory_state_version pas_expendable_memory_state_version_next(void)
+{
+    pas_expendable_memory_state_version result;
+    pas_heap_lock_assert_held();
+    result = ++pas_expendable_memory_version_counter;
+    PAS_ASSERT(result > 1);
+    return result;
+}
+
 void pas_expendable_memory_construct(pas_expendable_memory* memory,
                                      size_t size)
 {
@@ -43,7 +54,7 @@
     memory->size = (unsigned)size;
 
     PAS_ASSERT(pas_is_aligned(size, PAS_EXPENDABLE_MEMORY_PAGE_SIZE));
-    
+
     for (index = size / PAS_EXPENDABLE_MEMORY_PAGE_SIZE; index--;) {
         memory->states[index] =
             pas_expendable_memory_state_create(PAS_EXPENDABLE_MEMORY_STATE_KIND_DECOMMITTED, 1);
@@ -220,10 +231,12 @@
     }
 
     PAS_ASSERT(first_version >= header_version);
-    PAS_ASSERT(last_version >= header_version);
-    PAS_ASSERT(first_version > header_version || last_version > header_version);
 
-    new_version = PAS_MAX(first_version, last_version);
+    /* We'd like to assert that last_version >= header_version, except that it's possible for someone to
+       do a commit_if_necessary on the prefix of this object, and then not update last_version. So,
+       last_version could be stuck arbitrarily in the past. */
+
+    new_version = pas_expendable_memory_state_version_next();
     new_state = pas_expendable_memory_state_create(PAS_EXPENDABLE_MEMORY_STATE_KIND_JUST_USED, new_version);
 
     header->states[first] = new_state;
@@ -239,7 +252,7 @@
 
     PAS_ASSERT(first_version > header_version);
 
-    new_version = first_version;
+    new_version = pas_expendable_memory_state_version_next();
     new_state = pas_expendable_memory_state_create(PAS_EXPENDABLE_MEMORY_STATE_KIND_JUST_USED, new_version);
 
     header->states[first] = new_state;
@@ -249,16 +262,19 @@
     return true;
 }
 
-static bool scavenge_impl(pas_expendable_memory* header,
-                          void* payload,
-                          pas_expendable_memory_scavenge_kind scavenge_kind)
+static PAS_ALWAYS_INLINE bool scavenge_impl(pas_expendable_memory* header,
+                                            void* payload,
+                                            pas_expendable_memory_scavenge_kind scavenge_kind)
 {
     size_t index;
     size_t index_end;
     bool result;
+    pas_expendable_memory_state_version decommit_version;
     
     pas_heap_lock_assert_held();
 
+    decommit_version = pas_expendable_memory_state_version_next();
+
     PAS_ASSERT(header->size);
     PAS_ASSERT(pas_is_aligned(header->size, PAS_EXPENDABLE_MEMORY_PAGE_SIZE));
     PAS_ASSERT(header->bump < header->size);
@@ -312,7 +328,8 @@
                     break;
                 }
             } else {
-                PAS_ASSERT(scavenge_kind == pas_expendable_memory_scavenge_forced);
+                PAS_ASSERT(scavenge_kind == pas_expendable_memory_scavenge_forced
+                           || scavenge_kind == pas_expendable_memory_scavenge_forced_fake);
                 if (kind < PAS_EXPENDABLE_MEMORY_STATE_KIND_JUST_USED)
                     break;
                 PAS_TESTING_ASSERT(kind <= PAS_EXPENDABLE_MEMORY_STATE_KIND_MAX_JUST_USED);
@@ -319,7 +336,7 @@
             }
             header->states[other_index] = pas_expendable_memory_state_create(
                 PAS_EXPENDABLE_MEMORY_STATE_KIND_DECOMMITTED,
-                pas_expendable_memory_state_get_version(state) + 1);
+                decommit_version);
         }
 
         /* Make sure that by the time we decommit, nobody can lie about using the stuff we are decommitting.
@@ -327,8 +344,10 @@
            memory. So, it might happen after we have already decommitted, or decided to decommit. */
         pas_store_store_fence();
 
-        pas_page_malloc_decommit_asymmetric((char*)payload + index * PAS_EXPENDABLE_MEMORY_PAGE_SIZE,
-                                            (other_index - index) * PAS_EXPENDABLE_MEMORY_PAGE_SIZE);
+        if (scavenge_kind != pas_expendable_memory_scavenge_forced_fake) {
+            pas_page_malloc_decommit_asymmetric((char*)payload + index * PAS_EXPENDABLE_MEMORY_PAGE_SIZE,
+                                                (other_index - index) * PAS_EXPENDABLE_MEMORY_PAGE_SIZE);
+        }
 
         /* At this point, any of the pages in this range could get decommitted, but it won't necessarily
            happen immediately. Any write to these pages will cancel the decommit, or undo it if it's already
@@ -362,9 +381,6 @@
         index = other_index - 1;
     }
 
-    if (scavenge_kind == pas_expendable_memory_scavenge_forced)
-        PAS_ASSERT(!result);
-
     return result;
 }
 
@@ -372,13 +388,18 @@
                                     void* payload,
                                     pas_expendable_memory_scavenge_kind kind)
 {
-    switch (kind) {
-    case pas_expendable_memory_scavenge_periodic:
+    bool result;
+    
+    if (kind == pas_expendable_memory_scavenge_periodic) {
+        /* Specialize for this case. We want the scavenger to be fast. */
         return scavenge_impl(header, payload, pas_expendable_memory_scavenge_periodic);
-    case pas_expendable_memory_scavenge_forced:
-        return scavenge_impl(header, payload, pas_expendable_memory_scavenge_forced);
     }
-    PAS_ASSERT(!"Should not be reached");
+
+    PAS_ASSERT(kind == pas_expendable_memory_scavenge_forced
+               || kind == pas_expendable_memory_scavenge_forced_fake);
+
+    result = scavenge_impl(header, payload, kind);
+    PAS_ASSERT(!result);
     return false;
 }
 

Modified: trunk/Source/bmalloc/libpas/src/libpas/pas_expendable_memory.h (288341 => 288342)


--- trunk/Source/bmalloc/libpas/src/libpas/pas_expendable_memory.h	2022-01-21 02:39:53 UTC (rev 288341)
+++ trunk/Source/bmalloc/libpas/src/libpas/pas_expendable_memory.h	2022-01-21 02:43:34 UTC (rev 288342)
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2021 Apple Inc. All rights reserved.
+ * Copyright (c) 2021-2022 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -53,6 +53,8 @@
 
 #define PAS_EXPENDABLE_MEMORY_PAGE_SIZE 16384lu
 
+PAS_API extern pas_expendable_memory_state_version pas_expendable_memory_version_counter;
+
 enum pas_expendable_memory_touch_kind {
     pas_expendable_memory_touch_to_note_use,
     pas_expendable_memory_touch_to_commit_if_necessary
@@ -61,8 +63,15 @@
 typedef enum pas_expendable_memory_touch_kind pas_expendable_memory_touch_kind;
 
 enum pas_expendable_memory_scavenge_kind {
+    /* Decommits only things that haven't been used recently, and does the count increment that allows us
+       to tell that something hasn't been used recently. */
     pas_expendable_memory_scavenge_periodic,
-    pas_expendable_memory_scavenge_forced
+
+    /* Decommits everything that it can. */
+    pas_expendable_memory_scavenge_forced,
+
+    /* Pretends to decommit everything that it can without making any syscalls (useful for testing). */
+    pas_expendable_memory_scavenge_forced_fake,
 };
 
 typedef enum pas_expendable_memory_scavenge_kind pas_expendable_memory_scavenge_kind;
@@ -79,6 +88,8 @@
     return state >> PAS_EXPENDABLE_MEMORY_STATE_NUM_KIND_BITS;
 }
 
+PAS_API pas_expendable_memory_state_version pas_expendable_memory_state_version_next(void);
+
 static inline pas_expendable_memory_state pas_expendable_memory_state_create(
     pas_expendable_memory_state_kind kind,
     pas_expendable_memory_state_version version)

Modified: trunk/Source/bmalloc/libpas/src/libpas/pas_scavenger.c (288341 => 288342)


--- trunk/Source/bmalloc/libpas/src/libpas/pas_scavenger.c	2022-01-21 02:39:53 UTC (rev 288341)
+++ trunk/Source/bmalloc/libpas/src/libpas/pas_scavenger.c	2022-01-21 02:43:34 UTC (rev 288342)
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2019-2021 Apple Inc. All rights reserved.
+ * Copyright (c) 2019-2022 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -133,6 +133,16 @@
         printf("Woke up from timed wait at %.2lf.\n", get_time_in_milliseconds());
 }
 
+static bool handle_expendable_memory(pas_expendable_memory_scavenge_kind kind)
+{
+    bool should_go_again = false;
+    pas_heap_lock_lock();
+    should_go_again |= pas_compact_expendable_memory_scavenge(kind);
+    should_go_again |= pas_large_expendable_memory_scavenge(kind);
+    pas_heap_lock_unlock();
+    return should_go_again;
+}
+
 static void* scavenger_thread_main(void* arg)
 {
     pas_scavenger_data* data;
@@ -200,10 +210,7 @@
                                            pas_deallocator_scavenge_flush_log_if_clean_action,
                                            pas_lock_is_not_held);
 
-        pas_heap_lock_lock();
-        should_go_again |= pas_compact_expendable_memory_scavenge(pas_expendable_memory_scavenge_periodic);
-        should_go_again |= pas_large_expendable_memory_scavenge(pas_expendable_memory_scavenge_periodic);
-        pas_heap_lock_unlock();
+        should_go_again |= handle_expendable_memory(pas_expendable_memory_scavenge_periodic);
 
         /* For the purposes of performance tuning, as well as some of the scavenger tests, the epoch
            is time in nanoseconds.
@@ -478,12 +485,14 @@
 
 void pas_scavenger_decommit_expendable_memory(void)
 {
-    pas_heap_lock_lock();
-    pas_compact_expendable_memory_scavenge(pas_expendable_memory_scavenge_forced);
-    pas_large_expendable_memory_scavenge(pas_expendable_memory_scavenge_forced);
-    pas_heap_lock_unlock();
+    handle_expendable_memory(pas_expendable_memory_scavenge_forced);
 }
 
+void pas_scavenger_fake_decommit_expendable_memory(void)
+{
+    handle_expendable_memory(pas_expendable_memory_scavenge_forced_fake);
+}
+
 size_t pas_scavenger_decommit_free_memory(void)
 {
     pas_page_sharing_pool_scavenge_result result;

Modified: trunk/Source/bmalloc/libpas/src/libpas/pas_scavenger.h (288341 => 288342)


--- trunk/Source/bmalloc/libpas/src/libpas/pas_scavenger.h	2022-01-21 02:39:53 UTC (rev 288341)
+++ trunk/Source/bmalloc/libpas/src/libpas/pas_scavenger.h	2022-01-21 02:43:34 UTC (rev 288342)
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2019-2021 Apple Inc. All rights reserved.
+ * Copyright (c) 2019-2022 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -103,6 +103,7 @@
 PAS_API void pas_scavenger_clear_all_caches_except_remote_tlcs(void);
 PAS_API void pas_scavenger_clear_all_caches(void);
 PAS_API void pas_scavenger_decommit_expendable_memory(void);
+PAS_API void pas_scavenger_fake_decommit_expendable_memory(void); /* Useful for testing. */
 PAS_API size_t pas_scavenger_decommit_free_memory(void);
 
 PAS_API void pas_scavenger_run_synchronously_now(void);

Modified: trunk/Source/bmalloc/libpas/src/libpas/pas_segregated_heap.c (288341 => 288342)


--- trunk/Source/bmalloc/libpas/src/libpas/pas_segregated_heap.c	2022-01-21 02:39:53 UTC (rev 288341)
+++ trunk/Source/bmalloc/libpas/src/libpas/pas_segregated_heap.c	2022-01-21 02:43:34 UTC (rev 288342)
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2018-2021 Apple Inc. All rights reserved.
+ * Copyright (c) 2018-2022 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -47,6 +47,11 @@
 
 unsigned pas_segregated_heap_num_size_lookup_rematerializations;
 
+static void check_size_lookup_recomputation_if_appropriate(pas_segregated_heap* heap,
+                                                           pas_heap_config* config,
+                                                           unsigned *cached_index,
+                                                           const char* where);
+
 static size_t min_object_size_for_heap_config(pas_heap_config* config)
 {
     pas_segregated_page_config_variant segregated_variant;
@@ -378,6 +383,15 @@
         }
 
         begin_index = directory->begin_index;
+
+        /* This is necessary to guard against the medium tuple array being decommitted at the wrong time,
+           or the tuple straddling page boundary, leading to the begin index being zero and the end_index
+           having its original value. */
+        if (!begin_index) {
+            result.tuple = NULL;
+            return result;
+        }
+        
         end_index = directory->end_index;
         
         result.dependency += begin_index + end_index;
@@ -448,6 +462,8 @@
     pas_segregated_heap_medium_size_directory_search_mode search_mode,
     pas_lock_hold_mode heap_lock_hold_mode)
 {
+    static const bool verbose = false;
+    
     pas_segregated_heap_rare_data* rare_data;
     pas_mutation_count saved_count;
     pas_segregated_heap_medium_directory_tuple* medium_directories;
@@ -478,8 +494,11 @@
         rare_data, medium_directories, num_medium_directories, index, search_mode);
     
     if (pas_mutation_count_matches_with_dependency(
-            &rare_data->mutation_count, saved_count, result.dependency))
+            &rare_data->mutation_count, saved_count, result.dependency)) {
+        if (verbose && !result.tuple)
+            pas_log("did not find tuple\n");
         return result.tuple;
+    }
     
     return medium_directory_tuple_for_index_with_lock(
         heap, index, search_mode, pas_lock_is_not_held);
@@ -491,13 +510,20 @@
     pas_segregated_heap_medium_size_directory_search_mode search_mode,
     pas_lock_hold_mode heap_lock_hold_mode)
 {
+    static const bool verbose = false;
+    
     pas_segregated_heap_medium_directory_tuple* medium_directory;
     
     medium_directory = pas_segregated_heap_medium_directory_tuple_for_index(
         heap, index, search_mode, heap_lock_hold_mode);
     
-    if (medium_directory)
-        return medium_directory->allocator_index;
+    if (medium_directory) {
+        unsigned result;
+        result = medium_directory->allocator_index;
+        if (verbose && !result)
+            pas_log("found null allocator index\n");
+        return result;
+    }
     
     return 0;
 }
@@ -649,6 +675,8 @@
         size_t index;
         pas_allocator_index allocator_index;
         pas_segregated_size_directory_data* data;
+        size_t extra_index_for_allocator;
+        bool have_extra_index_for_allocator;
 
         data = ""
         if (data)
@@ -658,38 +686,50 @@
 
         PAS_ASSERT(allocator_index != (pas_allocator_index)UINT_MAX);
 
-        if (pas_segregated_size_directory_min_index(directory) == UINT_MAX) {
-            /* We now know that this directory is not in the size directory table, becaus the min_index is
-               UINT_MAX. But it's possible that we've put the basic size directory into the allocator_index
-               table even though we haven't put it into the size directory table because array allocation could
-               find the basic size directory by looking at the basic directory pointer and then
-               ensure_size_lookup and stash the allocator_index in the allocator_index table. */
-            if (directory->base.is_basic_size_directory
-                && pas_segregated_heap_cached_index_is_set(cached_index)) {
-                index = pas_segregated_heap_get_cached_index(heap, cached_index, config);
-                if (index < heap->small_index_upper_bound)
+        have_extra_index_for_allocator = false;
+        extra_index_for_allocator = 0;
+        if (allocator_index
+            && directory->base.is_basic_size_directory
+            && pas_segregated_heap_cached_index_is_set(cached_index)) {
+            index = pas_segregated_heap_get_cached_index(heap, cached_index, config);
+            if (index < heap->small_index_upper_bound) {
+                /* It's possible that we've put the basic size directory into the allocator_index table even
+                   though we haven't put it into the size directory table because array allocation could
+                   find the basic size directory by looking at the basic directory pointer and then
+                   ensure_size_lookup and stash the allocator_index in the allocator_index table. */
+                have_extra_index_for_allocator = true;
+                extra_index_for_allocator = index;
+            }
+        }
+        
+        if (pas_segregated_size_directory_min_index(directory) != UINT_MAX) {
+            PAS_ASSERT(pas_segregated_size_directory_min_index(directory)
+                       <= pas_segregated_heap_index_for_size(directory->object_size, *config));
+            
+            for (index = pas_segregated_size_directory_min_index(directory);
+                 index < PAS_MIN(pas_segregated_heap_index_for_size(directory->object_size, *config) + 1,
+                                 heap->small_index_upper_bound);
+                 index++) {
+                set_index_to_small_size_directory(index, directory, arg);
+                
+                if (allocator_index) {
                     set_index_to_small_allocator_index(index, allocator_index, arg);
+                    
+                    if (have_extra_index_for_allocator
+                        && extra_index_for_allocator == index)
+                        have_extra_index_for_allocator = false;
+                }
             }
 
-            continue;
+            if (pas_segregated_heap_index_for_size(directory->object_size, *config)
+                >= heap->small_index_upper_bound) {
+                size_directory_min_heap_add(
+                    &min_heap, directory, &pas_large_utility_free_heap_allocation_config);
+            }
         }
-
-        PAS_ASSERT(pas_segregated_size_directory_min_index(directory)
-                   <= pas_segregated_heap_index_for_size(directory->object_size, *config));
-
-        for (index = pas_segregated_size_directory_min_index(directory);
-             index < PAS_MIN(pas_segregated_heap_index_for_size(directory->object_size, *config) + 1,
-                             heap->small_index_upper_bound);
-             index++) {
-            set_index_to_small_size_directory(index, directory, arg);
-
-            if (allocator_index)
-                set_index_to_small_allocator_index(index, allocator_index, arg);
-        }
-
-        if (pas_segregated_heap_index_for_size(directory->object_size, *config)
-            >= heap->small_index_upper_bound)
-            size_directory_min_heap_add(&min_heap, directory, &pas_large_utility_free_heap_allocation_config);
+        
+        if (have_extra_index_for_allocator)
+            set_index_to_small_allocator_index(extra_index_for_allocator, allocator_index, arg);
     }
 
     for (medium_tuple_index = 0;
@@ -706,6 +746,7 @@
             tuple.allocator_index = 0;
         PAS_ASSERT(tuple.allocator_index != (pas_allocator_index)UINT_MAX);
         tuple.begin_index = pas_segregated_size_directory_min_index(directory);
+        PAS_ASSERT(tuple.begin_index);
         tuple.end_index = (unsigned)pas_segregated_heap_index_for_size(directory->object_size, *config);
         
         set_medium_directory_tuple(medium_tuple_index, &tuple, arg);
@@ -755,6 +796,7 @@
     
     tuples = pas_segregated_heap_medium_directory_tuple_ptr_load(&data->medium_directories);
     PAS_ASSERT(tuples);
+    PAS_ASSERT(tuple->begin_index);
     tuples[medium_tuple_index] = *tuple;
 }
 
@@ -809,18 +851,19 @@
 
     rematerialize_size_lookup_if_necessary(heap, config, cached_index);
 
+    check_size_lookup_recomputation_if_appropriate(
+        heap, config, cached_index, "start of pas_segregated_heap_ensure_allocator_index");
+
     parent_heap = pas_heap_for_segregated_heap(heap);
     
     PAS_ASSERT(size <= directory->object_size);
     PAS_ASSERT(!pas_heap_config_is_utility(config));
     
-    if (verbose) {
-        printf("In pas_segregated_heap_ensure_allocator_index\n");
-        printf("size = %zu\n", size);
-    }
+    if (verbose)
+        pas_log("%p: In pas_segregated_heap_ensure_allocator_index, size = %zu\n", pthread_self(), size);
     index = pas_segregated_heap_index_for_size(size, *config);
     if (verbose)
-        printf("index = %zu\n", index);
+        pas_log("index = %zu\n", index);
 
     allocator_index =
         pas_segregated_size_directory_data_ptr_load(&directory->data)->allocator_index;
@@ -829,7 +872,7 @@
     PAS_ASSERT(allocator_index < (unsigned)(pas_allocator_index)UINT_MAX);
     
     if (verbose)
-        printf("allocator_index = %u\n", allocator_index);
+        pas_log("allocator_index = %u\n", allocator_index);
     
     did_cache_allocator_index = false;
     
@@ -836,7 +879,7 @@
     if (pas_segregated_heap_index_is_cached_index_and_cached_index_is_set(heap, cached_index, index, config)
         && parent_heap && parent_heap->heap_ref) {
         if (verbose) {
-            printf("pas_segregated_heap_ensure_allocator_index_for_size_directory: "
+            pas_log("pas_segregated_heap_ensure_allocator_index_for_size_directory: "
                    "Caching as cached index!\n");
         }
         PAS_ASSERT(!parent_heap->heap_ref->allocator_index ||
@@ -875,6 +918,9 @@
         medium_directory->allocator_index = (pas_allocator_index)allocator_index;
     }
     
+    check_size_lookup_recomputation_if_appropriate(
+        heap, config, cached_index, "end of pas_segregated_heap_ensure_allocator_index");
+
     return allocator_index;
 }
 
@@ -1087,6 +1133,7 @@
 
     data = ""
 
+    PAS_ASSERT(value->begin_index);
     PAS_ASSERT(medium_tuple_index == data->num_medium_directories);
     PAS_ASSERT((unsigned)(medium_tuple_index + 1u) == medium_tuple_index + 1u);
     data->num_medium_directories = (unsigned)(medium_tuple_index + 1u);
@@ -1144,9 +1191,30 @@
     }
 }
 
+static bool check_size_lookup_recomputation_dump_directory(pas_segregated_heap* heap,
+                                                           pas_segregated_size_directory* directory,
+                                                           void* arg)
+{
+    PAS_UNUSED_PARAM(heap);
+    PAS_ASSERT(!arg);
+
+    pas_log("    ");
+    pas_segregated_size_directory_dump_reference(directory, &pas_log_stream.base);
+    pas_log(": min_index = %u, object_size = %u, allocator_index = %u",
+            pas_segregated_size_directory_min_index(directory),
+            directory->object_size,
+            pas_segregated_size_directory_get_tlc_allocator_index(directory));
+    if (directory->base.is_basic_size_directory)
+        pas_log(", is basic");
+    pas_log("\n");
+    
+    return true;
+}
+
 static void check_size_lookup_recomputation(pas_segregated_heap* heap,
                                             pas_heap_config* config,
-                                            unsigned *cached_index)
+                                            unsigned *cached_index,
+                                            const char* where)
 {
     check_size_lookup_recomputation_data data;
     size_t bitvector_size;
@@ -1209,6 +1277,21 @@
         check_size_lookup_recomputation_did_become_not_all_good(&data);
     }
 
+    if (!data.is_all_good) {
+        pas_heap* parent_heap;
+        pas_log("Encountered size recomputation failure for heap %p (%s, ",
+                heap, pas_heap_config_kind_get_string(config->kind));
+        parent_heap = pas_heap_for_segregated_heap(heap);
+        if (parent_heap)
+            config->dump_type(parent_heap->type, &pas_log_stream.base);
+        else
+            pas_log("no type");
+        pas_log(") at %s.\n", where);
+
+        pas_log("Directories:\n");
+        pas_segregated_heap_for_each_size_directory(heap, check_size_lookup_recomputation_dump_directory, NULL);
+    }
+
     PAS_ASSERT(data.is_all_good);
 
     pas_large_utility_free_heap_deallocate(data.seen_index_to_small_allocator_index, bitvector_size);
@@ -1215,6 +1298,18 @@
     pas_large_utility_free_heap_deallocate(data.seen_index_to_small_size_directory, bitvector_size);
 }
 
+static void check_size_lookup_recomputation_if_appropriate(pas_segregated_heap* heap,
+                                                           pas_heap_config* config,
+                                                           unsigned *cached_index,
+                                                           const char* where)
+{
+    if (!PAS_ENABLE_TESTING)
+        return;
+    if (pas_heap_config_is_utility(config))
+        return;
+    check_size_lookup_recomputation(heap, config, cached_index, where);
+}
+
 pas_segregated_size_directory*
 pas_segregated_heap_ensure_size_directory_for_size(
     pas_segregated_heap* heap,
@@ -1288,6 +1383,11 @@
 
     rematerialize_size_lookup_if_necessary(heap, config, cached_index);
 
+    is_utility = pas_heap_config_is_utility(config);
+
+    check_size_lookup_recomputation_if_appropriate(
+        heap, config, cached_index, "start of pas_segregated_heap_ensure_size_directory_for_size");
+
     parent_heap = pas_heap_for_segregated_heap(heap);
 
     index = pas_segregated_heap_index_for_size(size, *config);
@@ -1337,8 +1437,6 @@
         || pas_is_aligned(result->object_size, pas_segregated_size_directory_alignment(result))
         || pas_segregated_size_directory_is_bitfit(result));
 
-    is_utility = pas_heap_config_is_utility(config);
-
     if (verbose)
         pas_log("Small index upper bound = %u\n", heap->small_index_upper_bound);
     
@@ -1439,6 +1537,7 @@
                 begin_index = index + 1;
                 PAS_ASSERT((pas_segregated_heap_medium_directory_index)begin_index == begin_index);
                 medium_tuple->begin_index = (pas_segregated_heap_medium_directory_index)begin_index;
+                PAS_ASSERT(medium_tuple->begin_index);
             } else {
                 pas_segregated_heap_medium_directory_tuple* medium_directories;
                 size_t medium_tuple_index;
@@ -1612,8 +1711,8 @@
             double bytes_dirtied_per_object_by_candidate;
         
             if (verbose) {
-                printf("object_size = %lu\n", object_size);
-                printf("candidate->object_size = %u\n", candidate->object_size);
+                pas_log("object_size = %lu\n", object_size);
+                pas_log("candidate->object_size = %u\n", candidate->object_size);
             }
 
             if (candidate->base.page_config_kind != pas_segregated_page_config_kind_null) {
@@ -1709,7 +1808,7 @@
                 best_page_config,
                 creation_mode);
             if (verbose)
-                printf("Created size class = %p\n", result);
+                pas_log("Created size class = %p\n", result);
 
             basic_size_directory_and_head = pas_compact_atomic_segregated_size_directory_ptr_load(
                 &heap->basic_size_directory_and_head);
@@ -1734,7 +1833,7 @@
             pas_segregated_size_directory* directory;
             
             if (verbose) {
-                printf("pas_segregated_heap_ensure_size_directory_for_size: "
+                pas_log("pas_segregated_heap_ensure_size_directory_for_size: "
                        "Caching as basic size class!\n");
             }
             directory = pas_compact_atomic_segregated_size_directory_ptr_load(
@@ -1782,8 +1881,8 @@
             result->base.is_basic_size_directory = true;
         } else {
             if (verbose) {
-                printf("pas_segregated_heap_ensure_size_directory_for_count: "
-                       "NOT caching as basic size class!\n");
+                pas_log("pas_segregated_heap_ensure_size_directory_for_count: "
+                        "NOT caching as basic size class!\n");
             }
         }
 
@@ -1885,6 +1984,7 @@
                     &next_tuple->directory) == result) {
                 size_t begin_index;
                 begin_index = PAS_MIN(index, (size_t)next_tuple->begin_index);
+                PAS_ASSERT(begin_index);
                 PAS_ASSERT((pas_segregated_heap_medium_directory_index)begin_index == begin_index);
                 next_tuple->begin_index = (pas_segregated_heap_medium_directory_index)begin_index;
             } else {
@@ -1976,7 +2076,8 @@
                     pas_log("In rare_data = %p, Installing medium tuple %zu...%zu\n",
                             rare_data, index, medium_install_index);
                 }
-                
+
+                PAS_ASSERT(index);
                 medium_directory->begin_index = (pas_segregated_heap_medium_directory_index)index;
                 medium_directory->end_index =
                     (pas_segregated_heap_medium_directory_index)medium_install_index;
@@ -2014,8 +2115,8 @@
                    == result);
     }
 
-    if (PAS_ENABLE_TESTING && !is_utility)
-        check_size_lookup_recomputation(heap, config, cached_index);
+    check_size_lookup_recomputation_if_appropriate(
+        heap, config, cached_index, "end of pas_segregated_heap_ensure_size_directory_for_size");
 
     return result;
 }

Modified: trunk/Source/bmalloc/libpas/src/libpas/pas_segregated_heap.h (288341 => 288342)


--- trunk/Source/bmalloc/libpas/src/libpas/pas_segregated_heap.h	2022-01-21 02:39:53 UTC (rev 288341)
+++ trunk/Source/bmalloc/libpas/src/libpas/pas_segregated_heap.h	2022-01-21 02:43:34 UTC (rev 288342)
@@ -118,7 +118,10 @@
     
     /* This is the `index` we would use to do a lookup. A medium directory tuple represents a
        contiguous range of indices that map to a single directory. */
-    pas_segregated_heap_medium_directory_index begin_index; /* inclusive */
+    pas_segregated_heap_medium_directory_index begin_index; /* inclusive, but cannot be zero, so we can
+                                                               detect races (we could relax this if we just
+                                                               made this "begin_index_plus_one" or something
+                                                               like that) */
     pas_segregated_heap_medium_directory_index end_index; /* inclusive */
 };
 

Modified: trunk/Source/bmalloc/libpas/src/libpas/pas_segregated_size_directory.h (288341 => 288342)


--- trunk/Source/bmalloc/libpas/src/libpas/pas_segregated_size_directory.h	2022-01-21 02:39:53 UTC (rev 288341)
+++ trunk/Source/bmalloc/libpas/src/libpas/pas_segregated_size_directory.h	2022-01-21 02:43:34 UTC (rev 288342)
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2018-2021 Apple Inc. All rights reserved.
+ * Copyright (c) 2018-2022 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -305,6 +305,16 @@
     return data && data->allocator_index;
 }
 
+static inline pas_allocator_index pas_segregated_size_directory_get_tlc_allocator_index(
+    pas_segregated_size_directory* directory)
+{
+    pas_segregated_size_directory_data* data;
+    data = ""
+    if (data)
+        return data->allocator_index;
+    return 0;
+}
+
 /* Call with heap lock held. */
 PAS_API void pas_segregated_size_directory_create_tlc_view_cache(
     pas_segregated_size_directory* directory);

Modified: trunk/Source/bmalloc/libpas/src/libpas/pas_try_allocate_primitive.h (288341 => 288342)


--- trunk/Source/bmalloc/libpas/src/libpas/pas_try_allocate_primitive.h	2022-01-21 02:39:53 UTC (rev 288341)
+++ trunk/Source/bmalloc/libpas/src/libpas/pas_try_allocate_primitive.h	2022-01-21 02:43:34 UTC (rev 288342)
@@ -80,10 +80,13 @@
     /* Have a fast path for when you allocate some size all the time or
        most of the time. This saves both time and space. The space savings are
        the more interesting part, since */
+
+    if (verbose)
+        pas_log("%p: getting allocator index.\n", pthread_self());
     
     if (index == heap_ref->cached_index) {
         if (verbose)
-            printf("Found cached index.\n");
+            pas_log("Found cached index.\n");
         allocator_index = heap_ref->base.allocator_index;
     } else {
         pas_heap* heap;
@@ -90,7 +93,7 @@
         pas_segregated_heap* segregated_heap;
         
         if (verbose)
-            printf("Using full lookup.\n");
+            pas_log("Using full lookup.\n");
 
         heap = pas_ensure_heap(&heap_ref->base, pas_primitive_heap_ref_kind, config.config_ptr,
                                runtime_config);
@@ -102,10 +105,18 @@
     }
     
     if (verbose)
-        printf("allocator_index = %u\n", allocator_index);
+        pas_log("allocator_index = %u\n", allocator_index);
+
+    /* Cool fact: there could be a race where the allocator_index we get is 0 (i.e. unselected) even though
+       at this point, the directory has already initialized the allocator_index. That's because between when
+       we got the allocator_index above and now, some other thread could have already initialized the
+       directory's allocator_index. */
     
     allocator = pas_thread_local_cache_get_local_allocator_if_can_set_cache_for_possibly_uninitialized_index(
         allocator_index, config.config_ptr);
+
+    if (verbose && !allocator.did_succeed)
+        pas_log("%p: Failed to quickly get the allocator, allocator_index = %u.\n", pthread_self(), allocator_index);
     
     /* This should be specialized out in the non-alignment case because of ALWAYS_INLINE and
        alignment being the constant 1. */
@@ -145,7 +156,7 @@
     
     if (index == heap_ref->cached_index) {
         if (verbose)
-            printf("Found cached index.\n");
+            pas_log("Found cached index.\n");
         allocator_index = heap_ref->base.allocator_index;
     } else {
         pas_heap* heap;
@@ -152,7 +163,7 @@
         pas_segregated_heap* segregated_heap;
         
         if (verbose)
-            printf("Using full lookup.\n");
+            pas_log("Using full lookup.\n");
 
         heap = heap_ref->base.heap;
         if (!heap)
@@ -164,7 +175,7 @@
     }
     
     if (verbose)
-        printf("allocator_index = %u\n", allocator_index);
+        pas_log("allocator_index = %u\n", allocator_index);
 
     cache = pas_thread_local_cache_try_get();
     if (PAS_UNLIKELY(!cache))
@@ -173,9 +184,12 @@
     allocator =
         pas_thread_local_cache_try_get_local_allocator_or_unselected_for_uninitialized_index(
             cache, allocator_index);
-    if (PAS_UNLIKELY(!allocator.did_succeed))
+    if (PAS_UNLIKELY(!allocator.did_succeed)) {
+        if (verbose)
+            pas_log("Could not quickly get the allocator.\n");
         return pas_allocation_result_create_failure();
-    
+    }
+
     /* This should be specialized out in the non-alignment case because of ALWAYS_INLINE and
        alignment being the constant 1. */
     if (PAS_UNLIKELY(

Modified: trunk/Source/bmalloc/libpas/src/test/ExpendableMemoryTests.cpp (288341 => 288342)


--- trunk/Source/bmalloc/libpas/src/test/ExpendableMemoryTests.cpp	2022-01-21 02:39:53 UTC (rev 288341)
+++ trunk/Source/bmalloc/libpas/src/test/ExpendableMemoryTests.cpp	2022-01-21 02:43:34 UTC (rev 288342)
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2021 Apple Inc. All rights reserved.
+ * Copyright (c) 2021-2022 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -28,11 +28,14 @@
 #if PAS_ENABLE_BMALLOC
 
 #include "bmalloc_heap.h"
+#include "bmalloc_heap_config.h"
 #include <condition_variable>
 #include <mutex>
 #include "pas_compact_expendable_memory.h"
 #include "pas_large_expendable_memory.h"
 #include "pas_segregated_heap.h"
+#include "pas_segregated_size_directory_inlines.h"
+#include <thread>
 
 using namespace std;
 
@@ -212,6 +215,127 @@
     CHECK(pas_segregated_heap_num_size_lookup_rematerializations);
 }
 
+void testRage(unsigned numHeaps, function<unsigned(unsigned)> allocationSize, unsigned numThreads,
+              unsigned count, function<void()> sleep)
+{
+    thread* threads = new thread[numThreads];
+    pas_primitive_heap_ref* heaps = new pas_primitive_heap_ref[numHeaps];
+
+    for (unsigned i = numHeaps; i--;)
+        heaps[i] = BMALLOC_FLEX_HEAP_REF_INITIALIZER(new bmalloc_type(BMALLOC_TYPE_INITIALIZER(1, 1, "test")));
+
+    mutex lock;
+    unsigned numThreadsDone = 0;
+
+    for (unsigned i = numThreads; i--;) {
+        threads[i] = thread([&] () {
+            for (unsigned j = 0; j < count; ++j) {
+                pas_primitive_heap_ref* heap = heaps + deterministicRandomNumber(numHeaps);
+                size_t size = allocationSize(j);
+                void* ptr = bmalloc_allocate_flex(heap, size);
+                CHECK(ptr);
+                CHECK_GREATER_EQUAL(bmalloc_get_allocation_size(ptr), size);
+                CHECK_EQUAL(bmalloc_get_heap(ptr),
+                            bmalloc_flex_heap_ref_get_heap(heap));
+                bmalloc_deallocate(ptr);
+            }
+            lock_guard<mutex> locker(lock);
+            numThreadsDone++;
+        });
+    }
+
+    while (numThreadsDone < numThreads) {
+        pas_scavenger_decommit_expendable_memory();
+        sleep();
+    }
+}
+
+void testRematerializeAfterSearchOfDecommitted()
+{
+    static constexpr unsigned initialSize = 16;
+    static constexpr unsigned size = 10752;
+    static constexpr unsigned someOtherSize = 5000;
+    
+    pas_primitive_heap_ref heapRef = BMALLOC_FLEX_HEAP_REF_INITIALIZER(
+        new bmalloc_type(BMALLOC_TYPE_INITIALIZER(1, 1, "test")));
+    pas_heap* heap = bmalloc_flex_heap_ref_get_heap(&heapRef);
+
+    void* ptr = bmalloc_allocate_flex(&heapRef, initialSize);
+    CHECK_EQUAL(bmalloc_get_allocation_size(ptr), initialSize);
+    CHECK_EQUAL(bmalloc_get_heap(ptr), heap);
+    CHECK_EQUAL(heapRef.cached_index, pas_segregated_heap_index_for_size(initialSize, BMALLOC_HEAP_CONFIG));
+
+    ptr = bmalloc_allocate_flex(&heapRef, size);
+    CHECK_EQUAL(bmalloc_get_allocation_size(ptr), size);
+    CHECK_EQUAL(bmalloc_get_heap(ptr), heap);
+
+    pas_segregated_view view = pas_segregated_view_for_object(
+        reinterpret_cast<uintptr_t>(ptr), &bmalloc_heap_config);
+    pas_segregated_size_directory* directory = pas_segregated_view_get_size_directory(view);
+
+    pas_segregated_heap_medium_directory_tuple* tuple =
+        pas_segregated_heap_medium_directory_tuple_for_index(
+            &heap->segregated_heap,
+            pas_segregated_heap_index_for_size(size, BMALLOC_HEAP_CONFIG),
+            pas_segregated_heap_medium_size_directory_search_within_size_class_progression,
+            pas_lock_is_not_held);
+
+    CHECK(tuple);
+    CHECK_EQUAL(pas_compact_atomic_segregated_size_directory_ptr_load(&tuple->directory),
+                directory);
+
+    pas_scavenger_fake_decommit_expendable_memory();
+
+    tuple->begin_index = 0;
+
+    pas_segregated_heap_medium_directory_tuple* someOtherTuple =
+        pas_segregated_heap_medium_directory_tuple_for_index(
+            &heap->segregated_heap,
+            pas_segregated_heap_index_for_size(someOtherSize, BMALLOC_HEAP_CONFIG),
+            pas_segregated_heap_medium_size_directory_search_within_size_class_progression,
+            pas_lock_is_not_held);
+
+    if (someOtherTuple) {
+        cout << "Unexpectedly found a tuple: " << someOtherTuple << "\n";
+        cout << "It points at directory = "
+             << pas_compact_atomic_segregated_size_directory_ptr_load(&someOtherTuple->directory) << "\n";
+        cout << "Our original directory is = " << directory << "\n";
+    }
+    
+    CHECK(!someOtherTuple);
+}
+
+void testBasicSizeClass(unsigned firstSize, unsigned secondSize)
+{
+    static constexpr bool verbose = false;
+    
+    pas_primitive_heap_ref heapRef = BMALLOC_FLEX_HEAP_REF_INITIALIZER(
+        new bmalloc_type(BMALLOC_TYPE_INITIALIZER(1, 1, "test")));
+
+    if (verbose)
+        cout << "Allocating " << firstSize << "\n";
+    void* ptr = bmalloc_allocate_flex(&heapRef, firstSize);
+    if (verbose)
+        cout << "Allocating " << secondSize << "\n";
+    bmalloc_allocate_flex(&heapRef, secondSize);
+
+    if (verbose)
+        cout << "Doing some checks.\n";
+    CHECK(pas_thread_local_cache_try_get());
+    CHECK_EQUAL(heapRef.cached_index, pas_segregated_heap_index_for_size(firstSize, BMALLOC_HEAP_CONFIG));
+    CHECK(heapRef.base.allocator_index);
+    CHECK(pas_thread_local_cache_try_get_local_allocator_or_unselected_for_uninitialized_index(
+              pas_thread_local_cache_try_get(), heapRef.base.allocator_index).did_succeed);
+    if (verbose)
+        cout << "Did some checks.\n";
+
+    pas_segregated_view view = pas_segregated_view_for_object(
+        reinterpret_cast<uintptr_t>(ptr), &bmalloc_heap_config);
+    pas_segregated_size_directory* directory = pas_segregated_view_get_size_directory(view);
+    pas_segregated_size_directory_select_allocator(
+        directory, firstSize, pas_avoid_size_lookup, &bmalloc_heap_config, &heapRef.cached_index);
+}
+
 } // anonymous namespace
 
 #endif // PAS_ENABLE_BMALLOC
@@ -219,9 +343,26 @@
 void addExpendableMemoryTests()
 {
 #if PAS_ENABLE_BMALLOC
+    ForceTLAs forceTLAs;
+    
     ADD_TEST(testSynchronousScavengingExpendsExpendableMemory());
     ADD_TEST(testScavengerExpendsExpendableMemory());
     ADD_TEST(testSoManyHeaps());
+    ADD_TEST(testRage(10, [] (unsigned) { return deterministicRandomNumber(100000); }, 10, 100000, [] () { sched_yield(); }));
+    ADD_TEST(testRage(10, [] (unsigned j) { return deterministicRandomNumber(j); }, 2, 1000000, [] () { sched_yield(); }));
+    ADD_TEST(testRage(10, [] (unsigned j) { return j; }, 10, 100000, [] () { sched_yield(); }));
+    ADD_TEST(testRage(10, [] (unsigned j) { return j; }, 1, 1000000, [] () { sched_yield(); }));
+    ADD_TEST(testRage(1000, [] (unsigned) { return deterministicRandomNumber(100000); }, 10, 100000, [] () { sched_yield(); }));
+    ADD_TEST(testRage(1000, [] (unsigned j) { return deterministicRandomNumber(j); }, 2, 1000000, [] () { sched_yield(); }));
+    ADD_TEST(testRage(1000, [] (unsigned j) { return j; }, 10, 100000, [] () { sched_yield(); }));
+    ADD_TEST(testRage(10, [] (unsigned) { return deterministicRandomNumber(100000); }, 2, 1000000, [] () { sched_yield(); }));
+    ADD_TEST(testRage(10, [] (unsigned j) { return deterministicRandomNumber(j); }, 10, 100000, [] () { usleep(1); }));
+    ADD_TEST(testRage(10, [] (unsigned j) { return j; }, 1, 100000, [] () { usleep(1); }));
+    ADD_TEST(testRage(1000, [] (unsigned) { return deterministicRandomNumber(100000); }, 2, 1000000, [] () { usleep(1); }));
+    ADD_TEST(testRage(1000, [] (unsigned j) { return deterministicRandomNumber(j); }, 2, 1000000, [] () { usleep(1); }));
+    ADD_TEST(testRage(1000, [] (unsigned j) { return j; }, 10, 100000, [] () { usleep(1); }));
+    ADD_TEST(testRematerializeAfterSearchOfDecommitted());
+    ADD_TEST(testBasicSizeClass(0, 16));
 #endif // PAS_ENABLE_BMALLOC
 }
 

Modified: trunk/Source/bmalloc/libpas/src/test/TestHarness.cpp (288341 => 288342)


--- trunk/Source/bmalloc/libpas/src/test/TestHarness.cpp	2022-01-21 02:39:53 UTC (rev 288341)
+++ trunk/Source/bmalloc/libpas/src/test/TestHarness.cpp	2022-01-21 02:43:34 UTC (rev 288342)
@@ -27,6 +27,8 @@
 
 #include "Verifier.h"
 #include <atomic>
+#include "bmalloc_heap_config.h"
+#include "hotbit_heap_config.h"
 #include "iso_heap_config.h"
 #include "iso_test_heap_config.h"
 #include "jit_heap.h"
@@ -103,6 +105,8 @@
             FOR_EACH_RUNTIME_CONFIG(iso_test, setUp);
             FOR_EACH_RUNTIME_CONFIG(minalign32, setUp);
             FOR_EACH_RUNTIME_CONFIG(pagesize64k, setUp);
+            FOR_EACH_RUNTIME_CONFIG(bmalloc, setUp);
+            FOR_EACH_RUNTIME_CONFIG(hotbit, setUp);
             setUp(pas_utility_heap_runtime_config);
             setUp(jit_heap_runtime_config);
         })