Hi Jim,

This kind of code exists in other places too, trying to balance preferred growth
sizes with necessary ones and implementation limits.  The rationale and design
have been raised several times; the last time, that resulted in consolidating
similar code into jdk.internal.util.ArraysSupport.newLength().

If you have any reason to touch that code at all, you can probably replace all of it with a call to newLength().
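
For example, grow() would collapse to something like this (untested sketch;
newLength(oldLength, minGrowth, prefGrowth) does the overflow and max-length
handling itself):

    // requires: import jdk.internal.util.ArraysSupport;
    private void grow(int minCapacity) {
        int oldCapacity = buf.length;
        // minimum growth is whatever is still missing; preferred growth
        // doubles the capacity, matching the current oldCapacity << 1
        int newCapacity = ArraysSupport.newLength(oldCapacity,
                minCapacity - oldCapacity, oldCapacity);
        buf = Arrays.copyOf(buf, newCapacity);
    }

As I recall, newLength() throws OutOfMemoryError (with a message) on int
overflow and otherwise lets the caller attempt the huge allocation, which is
essentially what hugeCapacity() does today, so that helper goes away entirely.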

Regards, Roger


On 5/30/20 12:51 PM, James Laskey wrote:
Understood. Just trying to balance correctness with providing meaningful 
exceptions.

I suppose another approach is to let it all go deep and catch the error on the 
way back and provide the detail then. Not likely to win any fans.
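
Something like this, say (untested sketch; overflow checks elided):

    private void grow(int minCapacity) {
        int oldCapacity = buf.length;
        int newCapacity = Math.max(oldCapacity << 1, minCapacity);
        try {
            buf = Arrays.copyOf(buf, newCapacity);
        } catch (OutOfMemoryError oome) {
            // only attach the detail once the VM has actually refused the request
            OutOfMemoryError e = new OutOfMemoryError(
                "ByteArrayChannel exceeds maximum size: " + newCapacity);
            e.initCause(oome);
            throw e;
        }
    }

It would also mislabel a plain heap exhaustion as a size-limit problem, which is
part of why it won't win fans.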

📱

On May 30, 2020, at 12:30 PM, Martin Buchholz <marti...@google.com> wrote:

I wrote an earlier version of this grow logic, and then it was
transplanted into other classes.

We strongly suspect that the VM will throw OOME when we try to
allocate an array beyond MAX_ARRAY_SIZE, so are reluctant to do so,
but we also consider the VM behavior a bug that may eventually get
fixed (or is already a non-issue with a different VM).  We are trying
for good behavior with both sorts of VM.
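
Concretely, that is why hugeCapacity only clamps when it safely can and
otherwise hands the full request to the VM.  An annotated reading of the code
you quote below (the comments are my gloss, not from the source):

    private static int hugeCapacity(int minCapacity) {
        if (minCapacity < 0) // int overflow in the caller: no size can satisfy it
            throw new OutOfMemoryError();
        return (minCapacity > MAX_ARRAY_SIZE) ?
            Integer.MAX_VALUE :  // need more than the "safe" limit anyway, so let
                                 // the VM decide; a VM without the header-word
                                 // reservation can still satisfy the request
            MAX_ARRAY_SIZE;      // otherwise stay under the limit we suspect exists
    }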

On Sat, May 30, 2020 at 7:32 AM Jim Laskey <james.las...@oracle.com> wrote:

I'm working through https://bugs.openjdk.java.net/browse/JDK-8230744, "Several 
classes throw OutOfMemoryError without message".

I'm wondering why hugeCapacity in 
src/jdk.zipfs/share/classes/jdk/nio/zipfs/ByteArrayChannel.java is defined as

    /**
     * The maximum size of array to allocate.
     * Some VMs reserve some header words in an array.
     * Attempts to allocate larger arrays may result in
     * OutOfMemoryError: Requested array size exceeds VM limit
     */
    private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    /**
     * Increases the capacity to ensure that it can hold at least the
     * number of elements specified by the minimum capacity argument.
     *
     * @param minCapacity the desired minimum capacity
     */
    private void grow(int minCapacity) {
        // overflow-conscious code
        int oldCapacity = buf.length;
        int newCapacity = oldCapacity << 1;
        if (newCapacity - minCapacity < 0)
            newCapacity = minCapacity;
        if (newCapacity - MAX_ARRAY_SIZE > 0)
            newCapacity = hugeCapacity(minCapacity);
        buf = Arrays.copyOf(buf, newCapacity);
    }

    private static int hugeCapacity(int minCapacity) {
        if (minCapacity < 0) // overflow
            throw new OutOfMemoryError();
        return (minCapacity > MAX_ARRAY_SIZE) ?
            Integer.MAX_VALUE :
            MAX_ARRAY_SIZE;
    }

It just seems that it's pushing the inevitable off to Arrays.copyOf.  Shouldn't 
it be:

    private static int hugeCapacity(int minCapacity) {
        if (minCapacity < 0 || minCapacity > MAX_ARRAY_SIZE) {
            throw new OutOfMemoryError("ByteArrayChannel exceeds maximum size: " +
                                       MAX_ARRAY_SIZE);
        }

        return MAX_ARRAY_SIZE;
    }

Real question: is there some hidden purpose behind this kind of logic?


Cheers,

-- Jim


