On 09.03.2010 21:02, Osvaldo Doederlein wrote:
Should we really consider this a VM bug? I'm not sure that it's a good
idea to allocate a single object whose size exceeds 4 GB (for a byte[]
- due to the object header and array size field) - even on a 64-bit
VM. An array with 2^32 elements is impossible; the maximum allowed by
the size field is 2^32-1, which will be just as bad as 2^32-N for any
other tiny positive N, for algorithms that love arrays of [base-2-]
"round" sizes.
And even if this bug is fixed, it may come in slightly different
variations. For a long[] or double[] array, the allocation for the
maximum size would exceed 32 GB, so it exceeds the maximum heap size
for 64-bit HotSpot with CompressedOops. (Ok, this is an artificial
issue because we won't likely have a 100% free heap, so the only
impediment for "new long[2^32-1]" would be the array header.)
My suggestion: impose some fixed N (maybe 64, or 0x100, ...), limiting
arrays to 2^32-N (for ANY element type). The artificial restriction
should be large enough to fit the object header of any vendor's JVM,
plus the per-object overhead of any reasonable heap structure. This
limit could be added to the spec, so the implementation is not a bug
anymore :) and it would be a portable limit. Otherwise, some app may
work reliably on HotSpot if it relies on the fact that 2^32-5
positions are possible, but may break on some other vendor's JVM where
perhaps the implementation limit is 2^32-13 or something else.
Please allow me to correct: it's 2^31-N !
...but +1 for your arguments.
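A sketch of how such a fixed headroom could look in library code - purely
illustrative, with a made-up MAX_ARRAY_LENGTH name and a headroom of 64
elements, not anything taken from an actual JDK source:

    final class ArrayLimits {
        // Hypothetical portable cap: leave some headroom below 2^31-1.
        // Name and value are assumptions for illustration only.
        static final int MAX_ARRAY_LENGTH = Integer.MAX_VALUE - 64;

        // Clamp a desired capacity to the portable limit; callers decide
        // whether hitting the limit is acceptable or an error.
        static int clampCapacity(long desiredCapacity) {
            return (desiredCapacity > MAX_ARRAY_LENGTH)
                    ? MAX_ARRAY_LENGTH
                    : (int) desiredCapacity;
        }
    }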
In the [base-2-] "round" sense, why is there the "+1" in [1]?
I think [2] would look best. I'm sure HotSpot would optimize it to
(oldCapacity + (oldCapacity >> 1))
anyway. Look at the HotSpot disassembly for String#hashCode(): h*31
becomes (h<<5)-h.
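A throwaway check of both strength reductions (x*31 == (x<<5)-x for any int,
and x/2 == x>>1 for non-negative x) - just a scratch class, not part of any
patch:

    public class StrengthReductionCheck {
        public static void main(String[] args) {
            for (int x = 0; x <= 1000000; x++) {
                if (x * 31 != (x << 5) - x)
                    throw new AssertionError("x*31 != (x<<5)-x at " + x);
                if (x + x / 2 != x + (x >> 1))   // x/2 == x>>1 only for x >= 0
                    throw new AssertionError("x + x/2 != x + (x>>1) at " + x);
            }
            System.out.println("both identities hold for 0..1000000");
        }
    }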
-Ulf
[1] current PriorityQueue snippet:
...
int newCapacity = ((oldCapacity < 64)?
((oldCapacity + 1) * 2):
((oldCapacity / 2) * 3));
...
[2] new PriorityQueue snippet:
...
int newCapacity = oldCapacity +
    ((oldCapacity < 64) ? oldCapacity : oldCapacity / 2);
...
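For what it's worth, a small scratch comparison of [1] and [2] (the method
names are mine, and 11 is just an arbitrary starting capacity):

    public class GrowthComparison {
        // [1] current formula
        static int grow1(int oldCapacity) {
            return (oldCapacity < 64) ? (oldCapacity + 1) * 2
                                      : (oldCapacity / 2) * 3;
        }
        // [2] proposed formula
        static int grow2(int oldCapacity) {
            return oldCapacity + ((oldCapacity < 64) ? oldCapacity
                                                     : oldCapacity / 2);
        }
        public static void main(String[] args) {
            int a = 11, b = 11;
            for (int i = 0; i < 10; i++) {
                System.out.println(a + " -> " + grow1(a) + "   vs   "
                                   + b + " -> " + grow2(b));
                a = grow1(a);
                b = grow2(b);
            }
        }
    }

The two diverge only slightly: below 64 elements, [1] grows to 2*(old+1)
while [2] grows to 2*old; above that, both grow by roughly 1.5x, with the
odd cases rounded a little differently.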