[
https://issues.apache.org/jira/browse/HADOOP-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17104735#comment-17104735
]
Nanda kumar commented on HADOOP-15524:
--------------------------------------
Thanks for the update, [~arp].
I'm +1 on the change. I've just retriggered Jenkins and will merge once the build passes.
https://builds.apache.org/job/hadoop-multibranch/job/PR-393/10/
> BytesWritable causes OOME when array size reaches Integer.MAX_VALUE
> -------------------------------------------------------------------
>
> Key: HADOOP-15524
> URL: https://issues.apache.org/jira/browse/HADOOP-15524
> Project: Hadoop Common
> Issue Type: Bug
> Components: io
> Reporter: Joseph Smith
> Assignee: Joseph Smith
> Priority: Major
>
> BytesWritable.setSize uses Integer.MAX_VALUE to initialize the internal
> array. In my environment, this causes an OOME:
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size
> exceeds VM limit
> {code}
> Allocating at most byte[Integer.MAX_VALUE - 2] prevents this error.
> Tested on OS X and CentOS 7 using Java version 1.8.0_131.
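>
> For reference, a minimal repro sketch (the class name is illustrative scaffolding, not from the report; it assumes a heap large enough to attempt the allocation, e.g. -Xmx4g, so the VM fails on the array-size limit rather than plain heap exhaustion):
> {code:java}
> import org.apache.hadoop.io.BytesWritable;
>
> public class BytesWritableOome {
>   public static void main(String[] args) {
>     BytesWritable bw = new BytesWritable();
>     // Growing to Integer.MAX_VALUE makes setSize request a
>     // byte[Integer.MAX_VALUE] backing array, which most VMs reject
>     // with "OutOfMemoryError: Requested array size exceeds VM limit".
>     bw.setSize(Integer.MAX_VALUE);
>   }
> }
> {code}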
> I noticed that java.util.ArrayList contains the following
> {code:java}
> /**
> * The maximum size of array to allocate.
> * Some VMs reserve some header words in an array.
> * Attempts to allocate larger arrays may result in
> * OutOfMemoryError: Requested array size exceeds VM limit
> */
> private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
> {code}
>
> BytesWritable.setSize should use something similar to prevent an OOME from
> occurring.
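>
> A sketch of what such a guard might look like (illustrative only; the constant name and growth arithmetic are assumptions, not the actual change merged from PR-393):
> {code:java}
> // Illustrative guard in the spirit of java.util.ArrayList; not the
> // actual HADOOP-15524 patch.
> private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
>
> public void setSize(int size) {
>   if (size > getCapacity()) {
>     // Compute the 1.5x growth in long arithmetic to avoid int
>     // overflow, then cap the request below the VM's array limit.
>     // Sizes beyond MAX_ARRAY_SIZE cannot be satisfied on most VMs
>     // anyway, so the capacity is clamped at that bound.
>     long grown = Math.min((3L * size) / 2L, (long) MAX_ARRAY_SIZE);
>     setCapacity((int) grown);
>   }
>   this.size = size;
> }
> {code}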
>