[ https://issues.apache.org/jira/browse/MAPREDUCE-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17748135#comment-17748135 ]
ASF GitHub Bot commented on MAPREDUCE-7446:
-------------------------------------------
tomicooler commented on code in PR #5895:
URL: https://github.com/apache/hadoop/pull/5895#discussion_r1276203099
##########
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java:
##########
@@ -433,8 +434,11 @@ public boolean nextRawKey(DataInputBuffer key) throws IOException {
   }
   public void nextRawValue(DataInputBuffer value) throws IOException {
+    long targetSizeLong = currentValueLength + (currentValueLength >> 1);
Review Comment:
This overflows: `currentValueLength` is an `int`, so the addition is evaluated in `int` arithmetic and wraps around before the result is widened to `long`. Example:
```
class HelloWorld {
  public static void main(String[] args) {
    int currentValueLength = Integer.MAX_VALUE - 20000;
    final int ARRAY_MAX_SIZE = Integer.MAX_VALUE - 8;
    for (int i = 0; i < 10; i++) {
      // int + int wraps around before the result is assigned to the long
      long targetSizeLong = currentValueLength + (currentValueLength >> 1);
      int targetSize = (int) Math.min(targetSizeLong, ARRAY_MAX_SIZE);
      System.out.println("targetSizeLong: " + targetSizeLong
          + " targetSize: " + targetSize);
      currentValueLength = targetSize;
    }
  }
}
```
```
targetSizeLong: -1073771826 targetSize: -1073771826
targetSizeLong: -1610657739 targetSize: -1610657739
targetSizeLong: 1878980687 targetSize: 1878980687
targetSizeLong: -1476496266 targetSize: -1476496266
targetSizeLong: 2080222897 targetSize: 2080222897
targetSizeLong: -1174632951 targetSize: -1174632951
targetSizeLong: -1761949427 targetSize: -1761949427
targetSizeLong: 1652043155 targetSize: 1652043155
targetSizeLong: -1816902564 targetSize: -1816902564
targetSizeLong: 1569613450 targetSize: 1569613450
```
Fix:
```
long targetSizeLong = currentValueLength + (long)(currentValueLength >> 1);
```
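As a sanity check, here is the same standalone demo with the widened addition applied; this is just the illustrative `HelloWorld` harness from above, not the actual IFile change:
```
class HelloWorldFixed {
  public static void main(String[] args) {
    int currentValueLength = Integer.MAX_VALUE - 20000;
    final int ARRAY_MAX_SIZE = Integer.MAX_VALUE - 8;
    for (int i = 0; i < 10; i++) {
      // the cast promotes the addition to long, so it can no longer wrap around
      long targetSizeLong = currentValueLength + (long) (currentValueLength >> 1);
      int targetSize = (int) Math.min(targetSizeLong, ARRAY_MAX_SIZE);
      System.out.println("targetSizeLong: " + targetSizeLong
          + " targetSize: " + targetSize);
      currentValueLength = targetSize;
    }
  }
}
```
With the cast, the first iteration produces targetSizeLong 3221195470, which is clamped to ARRAY_MAX_SIZE (2147483639), and every subsequent iteration stays at that cap instead of going negative.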
> NegativeArraySizeException when running MR jobs with large data size
> --------------------------------------------------------------------
>
> Key: MAPREDUCE-7446
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7446
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Reporter: Peter Szucs
> Assignee: Peter Szucs
> Priority: Major
> Labels: pull-request-available
>
> We are using bit shifting to double the byte array that stores the values in IFile's
> [nextRawValue|https://github.infra.cloudera.com/CDH/hadoop/blob/bef14a39c7616e3b9f437a6fb24fc7a55a676b57/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java#L437]
> method. With a large dataset it can easily happen that the shift overflows into
> the sign bit while the new array size is being calculated, which yields a
> negative array size and causes the NegativeArraySizeException.
> It would be safer to grow the backing array by a factor of 1.5 and add a check
> so that the new size never exceeds Integer's max value.
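A minimal sketch of the growth strategy described above (grow the backing array size by 1.5x, doing the arithmetic in long, and cap the result at a maximum array size). The class and method names and the ARRAY_MAX_SIZE constant are illustrative only; this is not the actual patch attached to this issue:
```
// Illustrative only: a capped 1.5x growth calculation, not the actual IFile change.
class ValueBufferGrowth {
  // Hypothetical cap; many JVMs refuse to allocate arrays larger than roughly this.
  private static final int ARRAY_MAX_SIZE = Integer.MAX_VALUE - 8;

  static int grownCapacity(int currentValueLength) {
    // Widen to long so 1.5x of a large int cannot wrap around to a negative value.
    long targetSizeLong = currentValueLength + (long) (currentValueLength >> 1);
    // Clamp to the maximum array size instead of overflowing.
    return (int) Math.min(targetSizeLong, ARRAY_MAX_SIZE);
  }

  public static void main(String[] args) {
    System.out.println(grownCapacity(1024));                      // 1536
    System.out.println(grownCapacity(Integer.MAX_VALUE - 20000)); // clamped to 2147483639
  }
}
```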