[ https://issues.apache.org/jira/browse/YARN-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17747879#comment-17747879 ]

Peter Szucs commented on YARN-11542:
------------------------------------

Moving this to the MapReduce project.

> NegativeArraySizeException when running MR jobs with large data size
> --------------------------------------------------------------------
>
>                 Key: YARN-11542
>                 URL: https://issues.apache.org/jira/browse/YARN-11542
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn
>            Reporter: Peter Szucs
>            Assignee: Peter Szucs
>            Priority: Major
>              Labels: pull-request-available
>
> We are using bit shifting to double the byte array in IFile's 
> [nextRawValue|https://github.infra.cloudera.com/CDH/hadoop/blob/bef14a39c7616e3b9f437a6fb24fc7a55a676b57/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java#L437]
>  method to store the byte values in it. With large datasets it can easily 
> happen that the doubling shift pushes the leftmost bit into the sign bit while 
> the new array size is being calculated, which yields a negative number as the 
> array size and causes a NegativeArraySizeException.
> It would be safer to grow the backing array by a 1.5x factor and to add a 
> check so the new size never exceeds Integer.MAX_VALUE.
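> As an illustration, a minimal sketch of such a growth step could look like the 
> following (the method and parameter names are hypothetical, not taken from an 
> actual patch):
> {code:java}
> // Hypothetical helper: grow by ~1.5x and clamp so the size never goes negative.
> private static int grownCapacity(int currentLength, int requiredLength) {
>     // Do the arithmetic in long so the 1.5x step cannot wrap around.
>     long candidate = (long) currentLength + (currentLength >> 1);
>     // Never return less than what the caller actually needs.
>     if (candidate < requiredLength) {
>         candidate = requiredLength;
>     }
>     // Clamp before casting back to int, guaranteeing a non-negative size.
>     if (candidate > Integer.MAX_VALUE) {
>         candidate = Integer.MAX_VALUE;
>     }
>     return (int) candidate;
> }
> {code}
> Doing the intermediate arithmetic in long is what avoids the overflow that the 
> current doubling shift runs into.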


