Hi Ivan,

Here are some thoughts while looking at this:
---

WeakHashMap promises to have similar "capacity" handling to HashMap, but the implementations (and docs?) seem more different than necessary (diverged over time?), and there don't seem to be any tests. HashMap deals with this problem by doing the computation in float rather than int:

    float ft = ((float) s / loadFactor) + 1.0F;
    int t = ((ft < (float) MAXIMUM_CAPACITY) ? (int) ft : MAXIMUM_CAPACITY);

Choose the better of the two approaches and use it in both source files.

---

Consider adding a WhiteBox test. An existing one for ConcurrentHashMap could be modified to test internal table maintenance. PriorityBlockingQueue/WhiteBox.java is an example of a test that ensures two implementations stay in sync.

---

I see

    if (loadFactor <= 0 || Float.isNaN(loadFactor))

but that NaN check doesn't add much value; it can be folded into a single comparison:

    if (!(loadFactor > 0))

(NaN fails every ordered comparison, so this rejects NaN as well as non-positive values.)

---

HashMap's resize() takes no argument, while WeakHashMap's does. Why? As a result, I see in HashMap.putMapEntries

    else if (s > threshold)
        resize();

which suggests that if you create a fresh HashMap and then putAll(hugeMap), it will resize repeatedly instead of growing to the target capacity in one step. That looks like a HashMap bug in its own right.

On Fri, Jul 13, 2018 at 10:22 PM, Ivan Gerasimov <ivan.gerasi...@oracle.com> wrote:
> Hello!
>
> When a WeakHashMap is constructed from another Map m, then the initial
> capacity is calculated as
>
> (int) (m.size() / DEFAULT_LOAD_FACTOR) + 1.
>
> For large values of m.size() this becomes negative due to integer overflow.
>
> The result is that the WeakHashMap is initially constructed with the
> default initial capacity 16, and then is immediately resized.
>
> Would you please help review a trivial patch?
>
> BUGURL: https://bugs.openjdk.java.net/browse/JDK-8207314
> WEBREV: http://cr.openjdk.java.net/~igerasim/8207314/00/webrev/
>
> Thanks in advance!
>
> --
> With kind regards,
> Ivan Gerasimov
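P.S. To make the overflow concrete, here is a small standalone sketch (class and method names are mine, not JDK code) comparing the int-based computation WeakHashMap uses today with the float-based computation from HashMap quoted above. Note that in Java a float-to-int narrowing conversion saturates at Integer.MAX_VALUE, so the overflow only happens once the `+ 1` is applied in int arithmetic:

```java
public class CapacityDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    // WeakHashMap-style computation: the (int) cast saturates at
    // Integer.MAX_VALUE, and the subsequent + 1 overflows to a
    // negative value for sufficiently large sizes.
    static int intCapacity(int size) {
        return (int) (size / DEFAULT_LOAD_FACTOR) + 1;
    }

    // HashMap-style computation: the + 1 is done in float, and the
    // result is clamped to MAXIMUM_CAPACITY before converting to int.
    static int floatCapacity(int s) {
        float ft = ((float) s / DEFAULT_LOAD_FACTOR) + 1.0F;
        return (ft < (float) MAXIMUM_CAPACITY) ? (int) ft : MAXIMUM_CAPACITY;
    }

    public static void main(String[] args) {
        int huge = Integer.MAX_VALUE - 8;        // a map size near the int limit
        System.out.println(intCapacity(huge));   // negative: -2147483648
        System.out.println(floatCapacity(huge)); // clamped: 1073741824
    }
}
```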
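P.P.S. And a quick sketch (again, my own demo class, not JDK code) showing that the single-comparison form of the load-factor check rejects exactly the same inputs as the two-condition form, including NaN:

```java
public class LoadFactorCheck {
    // Current check: two conditions.
    static boolean rejectOld(float lf) {
        return lf <= 0 || Float.isNaN(lf);
    }

    // Suggested check: one comparison. NaN compares false against
    // everything, so !(lf > 0) rejects NaN as well as non-positives.
    static boolean rejectNew(float lf) {
        return !(lf > 0);
    }

    public static void main(String[] args) {
        float[] samples = { 0.75f, 1.0f, 0f, -1f, Float.NaN };
        for (float lf : samples) {
            // The two columns always agree.
            System.out.println(lf + " -> " + rejectOld(lf) + " " + rejectNew(lf));
        }
    }
}
```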