[
https://issues.apache.org/jira/browse/MAPREDUCE-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14036910#comment-14036910
]
John commented on MAPREDUCE-5649:
---------------------------------
$ find . -name "*.java" | xargs grep -Eni "Runtime.*getRuntime.*maxMemory"
$ find . -name "*.java" | grep "mapreduce" | xargs grep -Eni -A16 -B16 "Integer.MAX_VALUE" > result
> Reduce cannot use more than 2G memory for the final merge
> ----------------------------------------------------------
>
> Key: MAPREDUCE-5649
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5649
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: mrv2
> Affects Versions: trunk
> Reporter: stanley shi
>
> In the org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.java file, in
> the finalMerge method:
> int maxInMemReduce = (int)Math.min(
> Runtime.getRuntime().maxMemory() * maxRedPer, Integer.MAX_VALUE);
>
> This means that no matter how much memory the user configures, the reducer
> will retain at most 2 GB (Integer.MAX_VALUE bytes) of data in memory before
> the reduce phase starts, because the result is cast down to an int.
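The clamp described above can be reproduced in isolation. The sketch below is a simplified stand-in for the cast in MergeManagerImpl.finalMerge, not the actual Hadoop code; the class name, method name, and the 64 GB heap figure are illustrative assumptions:

```java
// Demonstrates how the (int) cast in finalMerge caps the in-memory
// merge budget at Integer.MAX_VALUE (~2 GB), regardless of heap size.
// Simplified sketch; names are hypothetical, not Hadoop's own.
public class MaxInMemDemo {

    // Mirrors the expression from the report:
    // (int) Math.min(Runtime.getRuntime().maxMemory() * maxRedPer, Integer.MAX_VALUE)
    static int maxInMemReduce(long maxMemory, float maxRedPer) {
        return (int) Math.min(maxMemory * maxRedPer, Integer.MAX_VALUE);
    }

    public static void main(String[] args) {
        // Hypothetical 64 GB reducer heap, 90% retention fraction:
        long heap = 64L * 1024 * 1024 * 1024;
        // ~57.6 GB requested, but the int cast saturates at 2147483647:
        System.out.println(maxInMemReduce(heap, 0.9f)); // prints 2147483647
    }
}
```

Widening maxInMemReduce to a long (and updating its callers) would let the merge budget track the actual heap size instead of saturating at 2^31 - 1.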
--
This message was sent by Atlassian JIRA
(v6.2#6252)