Hey

The running Nutch job fails with the following error:


Container [pid=6179,containerID=container_1473334555047_0003_01_000015] is running beyond physical memory limits. Current usage: 4.1 GB of 4 GB physical memory used; 8.4 GB of 8.4 GB virtual memory used. Killing container.

The *mapred-site.xml* configuration used is:
<configuration>
  <property>
    <name>mapreduce.map.log.level</name>
    <value>ERROR</value>
  </property>
  <property>
    <name>mapreduce.reduce.log.level</name>
    <value>ERROR</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>3572</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx3765m</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx3265m</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1228</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx983m</value>
  </property>
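As a quick sanity check on the values above (my own arithmetic, not from any log), here is the non-heap headroom each container leaves between its YARN allocation and its -Xmx heap; a common rule of thumb, which is general guidance rather than anything specific to this job, is to keep -Xmx around 80% of the container size:

```python
# Headroom between container allocation and JVM heap,
# using the values from mapred-site.xml above.
containers = {
    "map":    (3572, 3265),   # (mapreduce.map.memory.mb,    map -Xmx in MB)
    "reduce": (4096, 3765),   # (mapreduce.reduce.memory.mb, reduce -Xmx in MB)
}
for name, (container_mb, heap_mb) in containers.items():
    # map → 307 MB, reduce → 331 MB of non-heap headroom
    print(name, container_mb - heap_mb, "MB headroom")
```

Roughly 300 MB (about 8%) of headroom per container has to cover all off-heap JVM usage (metaspace, thread stacks, direct buffers), which may be too tight.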


The *yarn-site.xml* properties are as follows:

<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>5120</value>
  <description>Maximum memory allocated to containers.</description>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>4</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>12288</value>
  <description>Maximum memory allocated to the NodeManager.</description>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>

The machine has 8 GB of RAM available.

What are the best configuration settings to avoid this error?

--

Shubham Gupta
