[ https://issues.apache.org/jira/browse/HADOOP-11901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014430#comment-15014430 ]
Hudson commented on HADOOP-11901:
---------------------------------
FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #686 (See
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/686/])
HADOOP-11901. BytesWritable fails to support 2G chunks due to integer
overflow (wheat9: rev 747455a13b710266e1084d2f5a3b18ba14b386e5)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/BytesWritable.java
> BytesWritable fails to support 2G chunks due to integer overflow
> ----------------------------------------------------------------
>
> Key: HADOOP-11901
> URL: https://issues.apache.org/jira/browse/HADOOP-11901
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Reynold Xin
> Assignee: Reynold Xin
> Fix For: 2.8.0
>
> Attachments: HADOOP-11901 (3).diff, HADOOP-11901.diff
>
>
> BytesWritable.setSize grows the backing buffer by a factor of 1.5 each
> time (size * 3 / 2). This is unsafe because the intermediate product
> size * 3 is computed in 32-bit int arithmetic, so it overflows once size
> exceeds Integer.MAX_VALUE / 3 (about 715 MB), effectively capping the
> buffer at roughly 700 MB (see the Java sketch after this description).
> I didn't write a test case for this because triggering the overflow
> requires allocating around 700 MB, which is too expensive for a unit
> test. Note that I didn't throw an exception in the integer-overflow
> case, as I didn't want to change the existing behavior (callers may
> expect a java.lang.NegativeArraySizeException).
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)