On 12/14/18 1:37 AM, Michal Vala wrote:
> Thanks Martin for finding this serious issue and the testcase.
Sorry that I wasn't paying attention to this, and so forced Martin to discover the hard way that, because of LinkedHashMap, you can't skip doubling steps (at least not without a lot of rework). Also, the documentation should have mentioned this. A simpler way to reduce overhead in the case at hand is just to loop in putMapEntries:

--- HashMap.java.~1.9.~	2018-11-11 15:43:24.982878495 -0500
+++ HashMap.java	2018-12-16 09:05:48.924727867 -0500
@@ -502,8 +502,13 @@
             if (t > threshold)
                 threshold = tableSizeFor(t);
         }
-        else if (s > threshold)
-            resize();
+        else {
+            // Because of LinkedHashMap constraints, we cannot
+            // expand all at once, but can reduce total resize
+            // effort by repeated doubling now vs later
+            while (table.length < MAXIMUM_CAPACITY && s > threshold)
+                resize();
+        }
         for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
             K key = e.getKey();
             V value = e.getValue();
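For anyone following along, here is a rough standalone sketch (not the JDK internals; class and method names are made up for illustration) of what the proposed loop does: before the bulk insertion starts, the table is doubled repeatedly until the incoming mapping count s no longer exceeds the threshold (capacity * load factor), instead of leaving each doubling to happen mid-insertion.

    // Hypothetical sketch of the pre-doubling loop's effect.
    public class PreDoubleDemo {
        static final int MAXIMUM_CAPACITY = 1 << 30;
        static final float LOAD_FACTOR = 0.75f;

        // Count how many doubling steps the proposed loop would perform
        // for a table of capacity 'cap' about to absorb 's' total mappings.
        static int doublingSteps(int cap, int s) {
            int steps = 0;
            while (cap < MAXIMUM_CAPACITY && s > (int) (cap * LOAD_FACTOR)) {
                cap <<= 1;   // each resize() doubles the table
                steps++;
            }
            return steps;
        }

        public static void main(String[] args) {
            // Default capacity 16 (threshold 12) absorbing 1000 entries:
            // 16 -> 32 -> 64 -> 128 -> 256 -> 512 -> 1024 -> 2048,
            // i.e. 7 doublings, all done before the put loop runs.
            System.out.println(doublingSteps(16, 1000)); // prints 7
        }
    }

The total number of doublings is the same either way; doing them all up front just means each resize rehashes only the entries already present, rather than also re-moving entries added earlier in the same putMapEntries loop.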