Hi Martin,

Thank you for looking into this!  Please see the comments inline.


On 7/15/18 9:14 AM, Martin Buchholz wrote:
Hi Ivan,

Here are some thoughts while looking at this:

---

WeakHashMap promises to have similar "capacity" handling to HashMap, but implementations (and doc?) seem more different than necessary (diverged over time?), and there don't seem to be any tests.

HashMap seems to deal with this problem by doing computations using float not int. Choose the best one and use it in both source files.

                float ft = ((float)s / loadFactor) + 1.0F;
                int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                         (int)ft : MAXIMUM_CAPACITY);

I compared the bytecode generated for
1)     (int) ((float)s / loadFactor + 1.0F)
2)     (int) (s / loadFactor + 1)

and the results are actually exactly the same:
         0: iload_0
         1: i2f
         2: ldc           #2                  // float 0.75f
         4: fdiv
         5: fconst_1
         6: fadd
         7: f2i

So, I just modified the webrev to make it look more like what we currently have in HashMap.
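
For illustration only (this is a standalone sketch, not the webrev itself; the constant names just mirror the ones quoted above), here is how the two shapes of the expression behave for a size close to Integer.MAX_VALUE:

    public class CapacityOverflowDemo {
        static final float DEFAULT_LOAD_FACTOR = 0.75f;
        static final int MAXIMUM_CAPACITY = 1 << 30;

        public static void main(String[] args) {
            int s = Integer.MAX_VALUE - 8;  // a huge Map.size()

            // Old WeakHashMap shape: the narrowing cast saturates at
            // Integer.MAX_VALUE, and the int "+ 1" then overflows to
            // Integer.MIN_VALUE.
            int oldCapacity = (int) (s / DEFAULT_LOAD_FACTOR) + 1;

            // HashMap-style shape: "+ 1.0F" happens in float arithmetic, and
            // the value is clamped to MAXIMUM_CAPACITY before the cast.
            float ft = ((float) s / DEFAULT_LOAD_FACTOR) + 1.0F;
            int newCapacity = (ft < (float) MAXIMUM_CAPACITY)
                    ? (int) ft : MAXIMUM_CAPACITY;

            System.out.println(oldCapacity);  // -2147483648
            System.out.println(newCapacity);  // 1073741824
        }
    }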


---

Consider adding a WhiteBox test. An existing one for ConcurrentHashMap could be modified to test internal table maintenance. PriorityBlockingQueue/WhiteBox.java is an example of a test that ensures two implementations stay in sync.

I cannot see how to verify this fix easily, and I don't really think such a trivial fix is worth a dedicated test.

If you disagree, I can try to write a test that passes a custom map as an argument to the WeakHashMap constructor. From the custom map's size() we could retrieve a reference to the WeakHashMap being constructed and make sure that its table is either null or doesn't have the default size.
That sounds fragile though, so I would prefer to just set the noreg-hard label :)

---

I see
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
but that NaN check doesn't have much value.  It can be removed using
        if (! (loadFactor > 0))

This pattern is used in several places (HashMap, HashSet, Hashtable, WeakHashMap).  If simplifying it in the way you suggest gives a performance gain, I wonder if the JVM could be taught to optimize NaN checks by combining them with nearby comparisons?
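
Just for reference, a minimal sketch of the two equivalent checks (the class and method names are made up for the example); the compact form works because any numeric comparison involving NaN evaluates to false:

    class LoadFactorCheckDemo {
        // Current pattern: an explicit NaN check next to the range check.
        static void checkExplicit(float loadFactor) {
            if (loadFactor <= 0 || Float.isNaN(loadFactor))
                throw new IllegalArgumentException("Illegal load factor: " + loadFactor);
        }

        // Suggested pattern: !(loadFactor > 0) is true for NaN as well as
        // for values <= 0, so the separate Float.isNaN call is not needed.
        static void checkCompact(float loadFactor) {
            if (!(loadFactor > 0))
                throw new IllegalArgumentException("Illegal load factor: " + loadFactor);
        }
    }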

---

HashMap's resize() doesn't take an arg, while WeakHashMap's does.  Why?
As a result, I see in HashMap.putMapEntries

            else if (s > threshold)
                resize();

which suggests that if you create a fresh HashMap and then putAll(hugeMap), it will repeatedly resize instead of resizing to the target capacity in one step.  Which seems like a HashMap bug.

I agree that adding a hint to HashMap.resize() may help to allocate the new table more optimally.
I think it should be a separate enhancement though.
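
As an aside (a caller-side illustration only, not something from the webrev), the repeated resizing can already be avoided today by pre-sizing the destination map before calling putAll:

    import java.util.HashMap;
    import java.util.Map;

    public class PresizeDemo {
        public static void main(String[] args) {
            Map<Integer, Integer> src = new HashMap<>();
            for (int i = 0; i < 1_000_000; i++)
                src.put(i, i);

            // The destination already has a table, so putAll goes through the
            // "else if (s > threshold) resize()" path and the table is then
            // grown step by step while the entries are inserted.
            Map<Integer, Integer> dst = new HashMap<>();
            dst.put(-1, -1);
            dst.putAll(src);

            // Pre-sizing with the expected number of mappings lets the table
            // reach (roughly) its final size with a single allocation.
            Map<Integer, Integer> presized =
                    new HashMap<>((int) (src.size() / 0.75f + 1.0f));
            presized.put(-1, -1);
            presized.putAll(src);
        }
    }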

Here's the updated webrev with only a stylistic change compared to the previous one:
http://cr.openjdk.java.net/~igerasim/8207314/01/webrev/

With kind regards,
Ivan


On Fri, Jul 13, 2018 at 10:22 PM, Ivan Gerasimov <ivan.gerasi...@oracle.com> wrote:

    Hello!

    When a WeakHashMap is constructed from another Map m, then the
    initial capacity is calculated as

    (int) (m.size() / DEFAULT_LOAD_FACTOR) + 1.

    For large values of m.size() this becomes negative due to integer
    overflow.

    The result is that the WeakHashMap is initially constructed with
    the default initial capacity 16, and then is immediately resized.

    Would you please help review a trivial patch?

    BUGURL: https://bugs.openjdk.java.net/browse/JDK-8207314
    WEBREV: http://cr.openjdk.java.net/~igerasim/8207314/00/webrev/

    Thanks in advance!

    --
    With kind regards,
    Ivan Gerasimov



--
With kind regards,
Ivan Gerasimov
