[ https://issues.apache.org/jira/browse/MAPREDUCE-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17748188#comment-17748188 ]
ASF GitHub Bot commented on MAPREDUCE-7446:
-------------------------------------------

tomicooler commented on code in PR #5895:
URL: https://github.com/apache/hadoop/pull/5895#discussion_r1276395283


##########
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java:
##########
@@ -433,8 +434,11 @@ public boolean nextRawKey(DataInputBuffer key) throws IOException {
   }
 
   public void nextRawValue(DataInputBuffer value) throws IOException {
+    long targetSizeLong = currentValueLength + (long) (currentValueLength >> 1);
+    int targetSize = (int) Math.min(targetSizeLong, ARRAY_MAX_SIZE);
+
     final byte[] valBytes = (value.getData().length < currentValueLength)
-        ? new byte[currentValueLength << 1]
+        ? new byte[targetSize]

Review Comment:
   just to be safe, let's keep the doubling but set a max value (Integer.MAX_VALUE - 8)


> NegativeArraySizeException when running MR jobs with large data size
> --------------------------------------------------------------------
>
>                 Key: MAPREDUCE-7446
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7446
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Peter Szucs
>            Assignee: Peter Szucs
>            Priority: Major
>              Labels: pull-request-available
>
> We use bit shifting to double the byte array in IFile's
> [nextRawValue|https://github.infra.cloudera.com/CDH/hadoop/blob/bef14a39c7616e3b9f437a6fb24fc7a55a676b57/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java#L437]
> method to store the byte values. With a large dataset it can easily happen
> that the left shift overflows into the sign bit while the new array size is
> being calculated, yielding a negative size and causing the
> NegativeArraySizeException.
> It would be safer to grow the backing array by a 1.5x factor, with a check
> that the new size does not exceed Integer's max value.
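For illustration, here is a minimal standalone sketch of the growth logic under
discussion. It is not the actual IFile patch: the class and the helper name
grow() are hypothetical, and ARRAY_MAX_SIZE is assumed to be
Integer.MAX_VALUE - 8, the cap the reviewer suggests (HotSpot-style JVMs reserve
a few header words per array, so allocations right at Integer.MAX_VALUE can fail
even when enough heap is available).

public class GrowthSketch {

  // Assumed cap, per the review comment above; kept slightly below
  // Integer.MAX_VALUE because some JVMs cannot allocate arrays of exactly
  // Integer.MAX_VALUE elements.
  private static final int ARRAY_MAX_SIZE = Integer.MAX_VALUE - 8;

  // Overflow-safe 1.5x growth, mirroring the diff: compute the target size
  // in long arithmetic, then clamp it into int range.
  static int grow(int currentLength) {
    long target = currentLength + (long) (currentLength >> 1);
    return (int) Math.min(target, ARRAY_MAX_SIZE);
  }

  public static void main(String[] args) {
    int len = 1_500_000_000; // a ~1.5 GB value, large enough to overflow on doubling

    // The old doubling wraps around: the sign bit gets set, so
    // `new byte[len << 1]` would throw NegativeArraySizeException.
    System.out.println(len << 1);  // prints -1294967296

    // The capped 1.5x growth stays in bounds.
    System.out.println(grow(len)); // prints 2147483639 (Integer.MAX_VALUE - 8)
  }
}

The same clamp also covers the reviewer's capped-doubling variant: replace the
1.5x computation with `long target = 2L * currentLength;` and the Math.min call
still bounds the result.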