Hi Jai,
The error
java.lang.OutOfMemoryError: Java heap space
indicates that the VM really has run out of memory. Presumably if you increased the
heap size, it would actually be able to allocate that memory. You might have to add
the /othervm test directive with JVM options that request a larger heap.
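For example, the test's tag block could look something like this (just a sketch,
assuming a TestNG-based test; the -Xmx value is a placeholder that would need tuning
for the roughly 8GB table discussed below):

/*
 * @test
 * @bug 8285405
 * @summary basic tests for the newWeakHashMap factory
 * @run testng/othervm -Xmx10g NewWeakHashMap
 */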
The table size must be a power of two, so the largest table size that will be
allocated is 1 << 30 or 1073741824 as you noted. That will take about 8GB of heap
(in the no-compressed-OOP case). That's not terribly large, but we might want to
check to see if there are other tests that require that much memory.
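For reference, the 8GB figure is just the bare table array; a quick sketch of the
arithmetic (the entry objects themselves would add more on top):

public class TableSizeMath {
    public static void main(String[] args) {
        long slots = 1L << 30;          // largest power-of-two table size
        long bytesPerRef = 8;           // 64-bit references without compressed OOPs
        System.out.println(slots * bytesPerRef); // 8589934592 bytes, i.e. 8 GiB
    }
}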
As you also noted, WeakHashMap eagerly allocates its table whereas LinkedHashMap and
HashMap do not. I think this is an acceptable behavior variation. Note that we had
to avoid this case in WhiteBoxResizeTest:
https://github.com/openjdk/jdk/blob/master/test/jdk/java/util/HashMap/WhiteBoxResizeTest.java#L167
We might have to make similar special cases here for WHM.
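Something along these lines might work, assuming the test can estimate the table
size up front. (testNewWeakHashMap and tableSizeFor are hypothetical names here;
tableSizeFor mirrors HashMap's power-of-two rounding, which may not match WHM's
exact computation.)

import java.util.WeakHashMap;

public class WhmSpecialCase {
    // Hypothetical guard: skip sizes whose eagerly allocated table
    // would not fit comfortably in the configured heap.
    static void testNewWeakHashMap(int numMappings) {
        long tableBytes = (long) tableSizeFor(numMappings) * 8L;
        if (tableBytes > Runtime.getRuntime().maxMemory() / 2) {
            return; // or route this case to a separate /othervm run with a larger -Xmx
        }
        var map = WeakHashMap.newWeakHashMap(numMappings);
        // ... assertions on the new map ...
        System.out.println("created map of size " + map.size());
    }

    // Power-of-two rounding capped at 1 << 30, as HashMap does it.
    static int tableSizeFor(int cap) {
        int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
        return (n < 0) ? 1 : (n >= (1 << 30)) ? (1 << 30) : n + 1;
    }

    public static void main(String[] args) {
        testNewWeakHashMap(1_000);             // small case runs normally
        testNewWeakHashMap(Integer.MAX_VALUE); // skipped on a small heap
    }
}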
I don't think we need to document this behavior difference. More precisely: this
kind of implementation variation doesn't need to be specified. In the future we
might change WHM to allocate lazily.
The API should accommodate extremely large values of numMappings. Even if it's
larger than 1 << 30 and the table size is allocated at 1 << 30, it's still possible
to add numMappings mappings without resizing. (Memory permitting, of course.) Doing
so will violate the load factor invariant, and it might result in more collisions
than one would like, but it should still work.
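A toy illustration of that point (not how the real maps are implemented internally,
just a fixed bucket array with chaining): once the table can't grow, additional
mappings simply lengthen the chains.

import java.util.ArrayList;
import java.util.List;

public class FixedTableDemo {
    static final int TABLE_SIZE = 4;               // fixed, never grows
    static final List<List<Object[]>> buckets = new ArrayList<>();
    static { for (int i = 0; i < TABLE_SIZE; i++) buckets.add(new ArrayList<>()); }

    static void put(Object key, Object value) {
        // Power-of-two index mask, as the real implementations use.
        List<Object[]> bucket = buckets.get(key.hashCode() & (TABLE_SIZE - 1));
        for (Object[] e : bucket) {
            if (e[0].equals(key)) { e[1] = value; return; }
        }
        bucket.add(new Object[] { key, value });   // chain grows; table does not
    }

    public static void main(String[] args) {
        for (int k = 0; k < 100; k++) put(k, "v" + k); // far beyond 4 * loadFactor
        System.out.println("100 mappings stored in a " + TABLE_SIZE + "-bucket table");
    }
}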
I think we just need to decide whether we want to have a test that allocates this
much memory, and if so, to apply the necessary settings to make sure the JVM has
enough heap.
s'marks
On 6/6/22 12:01 AM, Jaikiran Pai wrote:
In a recent enhancement we added new APIs to construct LinkedHashMap, HashMap and
WeakHashMap instances as part of https://bugs.openjdk.java.net/browse/JDK-8186958.
Since we missed adding tests for that change, I have been working on some basic
tests for it as part of https://bugs.openjdk.java.net/browse/JDK-8285405. The draft
PR is here: https://github.com/openjdk/jdk/pull/9036.
It's in draft state because it has uncovered an aspect of this change that we might
have to address or document for these new APIs. Specifically, the tests I added
include one that does the equivalent of:
// numMappings = 2147483647
var w = WeakHashMap.newWeakHashMap(Integer.MAX_VALUE);
Similar tests have been added for HashMap and LinkedHashMap too, but for the sake of
this discussion I'll focus on WeakHashMap. Running this code/test fails with:
test NewWeakHashMap.testNewWeakHashMapNonNegative(2147483647): failure
java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.WeakHashMap.newTable(WeakHashMap.java:194)
at java.base/java.util.WeakHashMap.<init>(WeakHashMap.java:221)
at java.base/java.util.WeakHashMap.<init>(WeakHashMap.java:238)
at java.base/java.util.WeakHashMap.newWeakHashMap(WeakHashMap.java:1363)
at NewWeakHashMap.testNewWeakHashMapNonNegative(NewWeakHashMap.java:69)
This exception happens only with WeakHashMap; LinkedHashMap and HashMap don't show
this behaviour. It appears that WeakHashMap eagerly creates a large array (of length
1073741824 in this case) in its newTable method, which is called from the
constructor.
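A quick way to see the difference is to run something like the following with a
deliberately small heap (say -Xmx256m): the HashMap factory returns because the
table is allocated lazily, while the WeakHashMap factory throws from its constructor.

import java.util.HashMap;
import java.util.WeakHashMap;

public class EagerVsLazy {
    public static void main(String[] args) {
        // HashMap defers table allocation until the first put, so this returns.
        var h = HashMap.newHashMap(Integer.MAX_VALUE);
        System.out.println("HashMap created, empty = " + h.isEmpty());
        // WeakHashMap allocates its table in the constructor, so this throws
        // OutOfMemoryError on a small heap.
        var w = WeakHashMap.newWeakHashMap(Integer.MAX_VALUE);
        System.out.println("not reached with a small heap");
    }
}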
This raises a few questions about these new APIs. They take an integer, and the
documentation allows positive values, so Integer.MAX_VALUE is in theory a valid
value for this API. Should these APIs document what might happen when such a large
numMappings is passed? Should that documentation differ between classes (as seen
above, HashMap and LinkedHashMap behave differently from WeakHashMap)? Should this
"numMappings" be considered a hard value? In other words, the current documentation
of this new API states:
"Creates a new, empty WeakHashMap suitable for the expected number of mappings
....
and its initial capacity is generally large enough so that the expected number
of mappings can be added without resizing the map."
The documentation doesn't seem to guarantee that resizing won't occur. So in cases
like this, where numMappings is a very large value, should the implementation(s)
have logic that avoids triggering this OutOfMemoryError?
-Jaikiran