[ 
https://issues.apache.org/jira/browse/YARN-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza resolved YARN-1476.
------------------------------

    Resolution: Duplicate

> Container out of memory
> -----------------------
>
>                 Key: YARN-1476
>                 URL: https://issues.apache.org/jira/browse/YARN-1476
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 2.2.0
>         Environment: mapreduce.reduce.java.opts=-Xmx4000m 
> mapreduce.reduce.shuffle.merge.percent=0.4
> mapreduce.reduce.shuffle.parallelcopies=5
> mapreduce.reduce.shuffle.input.buffer.percent=0.6
> mapreduce.reduce.shuffle.memory.limit.percent=0.17
>            Reporter: zhoujunkun
>
> When I input 60 GB of random words and run a wordcount job, the shuffle 
> stage fails when the reduce phase is at 13%.
> Container [pid=21073,containerID=container_1385657333160_0001_01_000073] is 
> running beyond physical memory limits. Current usage: 4.0 GB of 4 GB physical 
> memory used; 5.5 GB of 13 GB virtual memory used. Killing container. 
> Why does it need so much memory?
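
The settings in the Environment field go some way toward explaining the kill. A rough back-of-the-envelope sketch (illustrative only, not Hadoop's exact internal accounting): `mapreduce.reduce.shuffle.input.buffer.percent` is the fraction of the reducer heap reserved for the in-memory shuffle buffer, `mapreduce.reduce.shuffle.memory.limit.percent` caps how much of that buffer a single in-memory fetch may take, and `mapreduce.reduce.shuffle.parallelcopies` fetchers run concurrently. On top of the 4000 MB heap, the JVM also needs off-heap space (metaspace/permgen, thread stacks, direct buffers), so a container limit equal to the heap size leaves essentially no headroom:

```python
# Back-of-the-envelope shuffle memory math for the settings in this report.
# This is a sketch of how the parameters interact, not Hadoop's exact code path.

heap_mb = 4000               # mapreduce.reduce.java.opts=-Xmx4000m
input_buffer_percent = 0.6   # mapreduce.reduce.shuffle.input.buffer.percent
memory_limit_percent = 0.17  # mapreduce.reduce.shuffle.memory.limit.percent
parallel_copies = 5          # mapreduce.reduce.shuffle.parallelcopies

shuffle_buffer_mb = heap_mb * input_buffer_percent              # 2400 MB of heap for shuffle
per_fetch_limit_mb = shuffle_buffer_mb * memory_limit_percent   # ~408 MB per in-memory fetch
peak_in_flight_mb = per_fetch_limit_mb * parallel_copies        # up to ~2040 MB in flight

print(f"shuffle buffer: {shuffle_buffer_mb:.0f} MB")
print(f"per-fetch limit: {per_fetch_limit_mb:.0f} MB")
print(f"peak concurrent fetches: {peak_in_flight_mb:.0f} MB")
```

With a 4000 MB heap inside a 4 GB physical-memory container, heap plus JVM overhead can plausibly push resident memory past the NodeManager's pmem check. The usual remedies are to raise the container size (`mapreduce.reduce.memory.mb`) above the heap, or to shrink the heap / shuffle fractions.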



--
This message was sent by Atlassian JIRA
(v6.1#6144)
