On 07/06/22 9:39 am, Stuart Marks wrote:
Hi Jai,

The error

    java.lang.OutOfMemoryError: Java heap space

indicates that the VM really has run out of memory. Presumably, if you increased the heap size, it would be able to allocate that memory. You might have to use the /othervm test directive with JVM options that request a larger heap.
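
For illustration, a jtreg test description along these lines would run the test in its own VM with a bigger heap (the heap sizes and the test class name here are placeholders, not the actual test):

    /*
     * @test
     * @summary Test the capacity factory methods with very large numMappings
     * @run main/othervm -Xms9g -Xmx9g LargeNumMappingsTest
     */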

The table size must be a power of two, so the largest table size that will be allocated is 1 << 30 or 1073741824, as you noted. That will take about 8GB of heap (in the no-compressed-OOP case). That's not terribly large, but we might want to check whether there are other tests that require that much memory.
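
To spell out the arithmetic behind that figure (assuming 8-byte uncompressed object references):

    (1 << 30) table slots * 8 bytes per reference = 1 << 33 bytes = 8 GiB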

As you also noted, WeakHashMap eagerly allocates its table whereas LinkedHashMap and HashMap do not. I think this is an acceptable behavior variation. Note that we had to avoid this case in WhiteBoxResizeTest:

https://github.com/openjdk/jdk/blob/master/test/jdk/java/util/HashMap/WhiteBoxResizeTest.java#L167

We might have to make similar special cases here for WHM.
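
As a minimal sketch of the variation (based on the current mainline implementations; the class name is made up for illustration):

    import java.util.HashMap;
    import java.util.WeakHashMap;

    public class EagerAllocationDemo {
        public static void main(String[] args) {
            // HashMap (like LinkedHashMap) defers table allocation until
            // the first put, so this line by itself costs almost nothing.
            HashMap<Integer, Integer> hm = new HashMap<>(1 << 30);

            // WeakHashMap allocates its table in the constructor, so this
            // line needs roughly 8GB of heap (uncompressed OOPs) and can
            // throw OutOfMemoryError immediately on a default-sized heap.
            WeakHashMap<Integer, Integer> whm = new WeakHashMap<>(1 << 30);
        }
    }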

I don't think we need to document this behavior difference. More precisely: this kind of implementation variation doesn't need to be specified. In the future we might change WHM to allocate lazily.

The API should accommodate extremely large values of numMappings. Even if it's larger than 1 << 30 and the table size is capped at 1 << 30, it's still possible to add numMappings mappings without resizing. (Memory permitting, of course.) Doing so will violate the load factor invariant, and it might result in more collisions than one would like, but it should still work.
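
A hypothetical sketch of that sizing policy (tableCapacityFor is illustrative, not the actual JDK helper; it assumes the default 0.75 load factor):

    class CapacitySketch {
        static final int MAXIMUM_CAPACITY = 1 << 30;

        // Illustrative only: how a numMappings-based factory might size
        // its table. Requests beyond the cap still succeed; the map just
        // stops resizing and lets the bucket chains grow instead.
        static int tableCapacityFor(int numMappings) {
            long desired = (long) Math.ceil(numMappings / 0.75);
            if (desired > MAXIMUM_CAPACITY) {
                return MAXIMUM_CAPACITY;
            }
            int cap = 1;
            while (cap < desired) {
                cap <<= 1;  // round up to the next power of two
            }
            return cap;
        }
    }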

Thank you for those inputs, Stuart.
I think we just need to decide whether we want to have a test that allocates this much memory, and if so, to apply the necessary settings to make sure the JVM has enough heap.

As a start, I've removed the large-value testing from these basic tests and moved the PR out of the draft state.

-Jaikiran
