> is running beyond physical memory limits. Current usage: 2.0 GB of 2 GB
>physical memory used; 6.6 GB of 8 GB virtual memory used. Killing
>container.

You need to set yarn.nodemanager.vmem-check-enabled=false on
*every* machine in your cluster & restart all the NodeManagers.
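
As a sketch, that goes in yarn-site.xml on each NodeManager host
(exact file layout depends on your distro/config management):

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>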

The VMEM check made a lot of sense in the 32-bit days, when the CPU
forced a maximum of 4 GB of virtual memory per process (even with PAE).

It was also a way to punish processes which swap out to disk, since
the pmem check only tracks the actual RSS.

In the large-RAM 64-bit world, vmem is not a significant issue yet - I
think the addressing limit is 128 TB per process.

> <property>
> <name>mapreduce.reduce.memory.mb</name>
> <value>4096</value>
> </property>
...
 
> <property>
> <name>mapreduce.reduce.java.opts</name>
> <value>-Xmx6144m</value>
> </property>
 

That's the next failure point: a 4 GB container with a 6 GB heap
limit. The JVM heap (-Xmx6144m) is bigger than the container
(mapreduce.reduce.memory.mb=4096), so the task will be killed as soon
as the heap actually grows past the container size. To produce an
immediate failure when checking configs, add

-XX:+AlwaysPreTouch -XX:+UseNUMA

to the java.opts - AlwaysPreTouch commits all heap pages at JVM
startup, so a mis-sized heap fails at launch instead of mid-job.
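
A consistent pair would keep the heap inside the container, with some
headroom for off-heap use - the values below are illustrative, not a
recommendation for your workload:

<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx6553m -XX:+AlwaysPreTouch -XX:+UseNUMA</value>
</property>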

Cheers,
Gopal
 

