The randomized test is not deterministic now. Can we also keep the original, deterministic one?
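(An aside, not from the webrev: one common way to keep randomized coverage while staying reproducible is to log the seed and allow pinning it, roughly like this hypothetical sketch.)

    // Hypothetical sketch; none of these names are from the actual test.
    // Log the seed so a failing randomized run can be replayed by
    // setting -Dtest.seed=<value>.
    long seed = Long.getLong("test.seed", new java.util.Random().nextLong());
    System.err.println("test.seed=" + seed);
    java.util.Random rnd = new java.util.Random(seed);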

On 12/18/18 10:29 PM, Martin Buchholz wrote:
We have changes in jsr166 CVS ready to go into openjdk:
https://cr.openjdk.java.net/~martin/webrevs/jdk/jsr166-integration/HashMap-resize/index.html
(except for the microbenchmark, which doesn't fit into our pipeline at the
moment).
Pardon the fiddling ...
---
Let's test the conditions in this order:

+                while (s > threshold && table.length < MAXIMUM_CAPACITY)

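(Presumably the point of the reordering: s > threshold is usually false, so testing it first lets the common no-resize case short-circuit before reading table.length. As a sketch, not a quote of the webrev, the branch would then read:)

+            else {
+                while (s > threshold && table.length < MAXIMUM_CAPACITY)
+                    resize();
+            }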
---
Let's add more HashMap pseudo-methods to the test:

+
+    Object[] table(HashMap map) {
+        try {
+            return (Object[]) TABLE.get(map);
+        } catch (Throwable t) { throw new AssertionError(t); }
+    }
+
+    int capacity(HashMap map) {
+        return table(map).length;
+    }
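(For context: the TABLE handle isn't shown in this excerpt. A plausible definition, assuming a reflective Field; the actual test may obtain it differently:)

    // Hypothetical setup for TABLE: HashMap.table is package-private,
    // so reflective access is needed (on a modular JDK this also needs
    // --add-opens java.base/java.util=ALL-UNNAMED).
    static final java.lang.reflect.Field TABLE;
    static {
        try {
            TABLE = HashMap.class.getDeclaredField("table");
            TABLE.setAccessible(true);
        } catch (ReflectiveOperationException e) {
            throw new AssertionError(e);
        }
    }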


and some more assertions:

+    @Test
+    public void capacityTest() {
+        HashMap<Integer, Integer> map = new HashMap<>();
+        assertNull(table(map));
+
+        map.put(1, 1);
+        assertEquals(capacity(map), 16);
+
+        map.putAll(IntStream.range(0, 64).boxed().collect(toMap(i -> i, i -> i)));
+        assertEquals(capacity(map), 128);
+    }
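(The expected values follow from default sizing, as I read it: the first put lazily allocates the default table of 16 with threshold 12 at load factor 0.75; putAll then presents 64 mappings, so the loop doubles 16 -> 32 -> 64 -> 128 and stops once the threshold of 96 is at least 64.)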


I've done my own TODO to randomize testBug8210280.

On Mon, Dec 17, 2018 at 7:40 AM Michal Vala <mv...@redhat.com> wrote:

All tests I've run passed, and benchmarks show a ~15% performance boost for putAllWithBigMapToNonEmptyMap.
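(For reference, a hypothetical JMH shape for that kind of benchmark; all names and sizes here are made up, and the actual microbenchmark in the webrev may differ:)

    import java.util.HashMap;
    import java.util.Map;
    import org.openjdk.jmh.annotations.*;

    @State(Scope.Thread)
    public class PutAllBench {
        @Param({"1000000"})
        public int size;

        public Map<Integer, Integer> big;

        @Setup
        public void setup() {
            big = new HashMap<>();
            for (int i = 0; i < size; i++)
                big.put(i, i);
        }

        @Benchmark
        public Map<Integer, Integer> putAllWithBigMapToNonEmptyMap() {
            Map<Integer, Integer> target = new HashMap<>();
            target.put(-1, -1);   // non-empty target: putAll takes the resize path
            target.putAll(big);   // the repeated doubling under test happens here
            return target;
        }
    }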

On 12/17/18 7:32 AM, Michal Vala wrote:
Hi,

thanks Doug, this is a nice reduction.

Here's the new webrev:
http://cr.openjdk.java.net/~mvala/jdk/jdk/JDK-8210280/webrev.03/

Just a nitpick: the issue lies in the linked lists used in buckets. The same
mechanism is used for both HashMap and LinkedHashMap, so mentioning just
LinkedHashMap might be confusing. I've updated the comment:
s/LinkedHashMap/linked-list buckets/.

I'm just running tier1 tests and benchmarks.

On 12/16/18 3:23 PM, Doug Lea wrote:

On 12/14/18 1:37 AM, Michal Vala wrote:
Thanks, Martin, for finding this serious issue and for the testcase.


Sorry that I wasn't paying attention to this and so forced Martin to
discover the hard way that because of LinkedHashMap, you can't skip
doubling steps (at least not without a lot of rework). Also, the
documentation should have mentioned this. A simpler way to reduce
overhead in the case at hand is just to loop in putMapEntries:

--- HashMap.java.~1.9.~    2018-11-11 15:43:24.982878495 -0500
+++ HashMap.java    2018-12-16 09:05:48.924727867 -0500
@@ -502,8 +502,13 @@
                 if (t > threshold)
                     threshold = tableSizeFor(t);
             }
-            else if (s > threshold)
-                resize();
+            else {
+                // Because of LinkedHashMap constraints, we cannot
+                // expand all at once, but can reduce total resize
+                // effort by repeated doubling now vs later
+                while (table.length < MAXIMUM_CAPACITY && s > threshold)
+                    resize();
+            }
             for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
                 K key = e.getKey();
                 V value = e.getValue();
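(The saving, as I understand it: without the loop, later puts trigger the remaining doublings one at a time, rehashing already-inserted entries at each step; with the loop, the table reaches its final size before any entry of m is inserted, so each entry is placed once.)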



--
Michal Vala
OpenJDK QE
Red Hat Czech



