On Sat, May 30, 2020 at 9:53 AM James Laskey <james.las...@oracle.com>
wrote:

> Understood. Just trying to balance correctness with providing meaningful
> exceptions.
>

The VM should be providing meaningful exceptions.  We might get OOME for
any int array size.

For attempts to allocate more than Integer.MAX_VALUE elements, there is no
way the grow code can succeed, so throwing at that point seems reasonable.
Generic grow code can't know why it was asked to allocate an impossibly large
amount; reworking the grow handling might be good, but few will appreciate
the effort.  (Testing is a particular challenge.)
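To make the trade-off concrete, here is a minimal sketch (not the JDK's actual code; the class name GrowSketch and the message wording are mine) of the same overflow-conscious grow pattern, with hugeCapacity throwing a messaged OutOfMemoryError for impossible requests instead of deferring the failure to Arrays.copyOf:

```java
import java.util.Arrays;

// Minimal sketch, not the JDK's code: overflow-conscious grow logic
// that attaches the requested size to the OutOfMemoryError instead of
// letting Arrays.copyOf fail with no message.
public class GrowSketch {
    // Same headroom constant the JDK classes use.
    static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    static byte[] buf = new byte[16];

    static void grow(int minCapacity) {
        // Comparisons are written as subtractions so they remain
        // correct even when doubling overflows int.
        int oldCapacity = buf.length;
        int newCapacity = oldCapacity << 1;
        if (newCapacity - minCapacity < 0)
            newCapacity = minCapacity;
        if (newCapacity - MAX_ARRAY_SIZE > 0)
            newCapacity = hugeCapacity(minCapacity);
        buf = Arrays.copyOf(buf, newCapacity);
    }

    static int hugeCapacity(int minCapacity) {
        // minCapacity < 0 means the caller's size computation overflowed.
        if (minCapacity < 0 || minCapacity > MAX_ARRAY_SIZE)
            throw new OutOfMemoryError(
                "Required array size exceeds limit: " + minCapacity);
        return MAX_ARRAY_SIZE;
    }

    public static void main(String[] args) {
        grow(100);                      // ordinary path: double, then copy
        System.out.println(buf.length); // 100

        try {
            grow(Integer.MAX_VALUE);    // impossible request: throws before copyOf
        } catch (OutOfMemoryError e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Note that the impossible request is rejected before any allocation is attempted, so no huge array is ever handed to the VM; the cost, as the thread notes, is that the generic grow code can only report the number, not why it was asked for it.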


>
> I suppose another approach is to let it all go deep, catch the error on
> the way back, and provide the detail then. Not likely to win any fans.
>
> > On May 30, 2020, at 12:30 PM, Martin Buchholz <marti...@google.com>
> wrote:
> >
> > I wrote an earlier version of this grow logic, and then it was
> > transplanted into other classes.
> >
> > We strongly suspect that the VM will throw OOME when we try to
> > allocate an array beyond MAX_ARRAY_SIZE, so we are reluctant to do so,
> > but we also consider that VM behavior a bug that may eventually get
> > fixed (or is already a non-issue with a different VM).  We are aiming
> > for good behavior on both sorts of VM.
> >
> >> On Sat, May 30, 2020 at 7:32 AM Jim Laskey <james.las...@oracle.com>
> wrote:
> >>
> >> I'm working through https://bugs.openjdk.java.net/browse/JDK-8230744
> >> "Several classes throw OutOfMemoryError without message".
> >>
> >> I'm wondering why hugeCapacity in
> src/jdk.zipfs/share/classes/jdk/nio/zipfs/ByteArrayChannel.java is defined
> as
> >>
> >>    /**
> >>     * The maximum size of array to allocate.
> >>     * Some VMs reserve some header words in an array.
> >>     * Attempts to allocate larger arrays may result in
> >>     * OutOfMemoryError: Requested array size exceeds VM limit
> >>     */
> >>    private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
> >>
> >>    /**
> >>     * Increases the capacity to ensure that it can hold at least the
> >>     * number of elements specified by the minimum capacity argument.
> >>     *
> >>     * @param minCapacity the desired minimum capacity
> >>     */
> >>    private void grow(int minCapacity) {
> >>        // overflow-conscious code
> >>        int oldCapacity = buf.length;
> >>        int newCapacity = oldCapacity << 1;
> >>        if (newCapacity - minCapacity < 0)
> >>            newCapacity = minCapacity;
> >>        if (newCapacity - MAX_ARRAY_SIZE > 0)
> >>            newCapacity = hugeCapacity(minCapacity);
> >>        buf = Arrays.copyOf(buf, newCapacity);
> >>    }
> >>
> >>    private static int hugeCapacity(int minCapacity) {
> >>        if (minCapacity < 0) // overflow
> >>            throw new OutOfMemoryError();
> >>        return (minCapacity > MAX_ARRAY_SIZE) ?
> >>            Integer.MAX_VALUE :
> >>            MAX_ARRAY_SIZE;
> >>    }
> >>
> >> It just seems that it's pushing the inevitable off to Arrays.copyOf.
> Shouldn't it be:
> >>
> >>    private static int hugeCapacity(int minCapacity) {
> >>        if (minCapacity < 0 || minCapacity > MAX_ARRAY_SIZE) {
> >>            throw new OutOfMemoryError(
> >>                "ByteArrayChannel exceeds maximum size: " + MAX_ARRAY_SIZE);
> >>        }
> >>
> >>        return MAX_ARRAY_SIZE;
> >>    }
> >>
> >> Real question: is there some hidden purpose behind this kind of logic?
> >>
> >>
> >> Cheers,
> >>
> >> -- Jim
> >>
> >>
>
>
