Hi all,


We have run into a problem enabling the UseNUMA flag for our Hadoop framework.

We've tried to specify JVM flags when the Hadoop daemons start, e.g.:

  export HADOOP_NAMENODE_OPTS="-XX:+UseNUMA -Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
  export HADOOP_SECONDARYNAMENODE_OPTS="-XX:+UseNUMA -Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"

and so on. But the ratio between local and remote memory accesses stays at 2:1, the same as before.
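In case it helps with diagnosis: one way to confirm whether the flag actually reached a given JVM is to query it at runtime. A minimal standalone sketch (not Hadoop-specific; class name is ours) using the HotSpot diagnostic MXBean:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class CheckNuma {
    public static void main(String[] args) throws Exception {
        // Ask the running HotSpot JVM for the current value of UseNUMA
        HotSpotDiagnosticMXBean bean =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Prints "UseNUMA = true" only if the flag really took effect here
        System.out.println("UseNUMA = " + bean.getVMOption("UseNUMA").getValue());
    }
}
```

Running this with and without -XX:+UseNUMA on the command line shows whether the option is being propagated at all.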

Then we found that Hadoop MapReduce starts child JVM processes to run tasks in containers, so we passed -XX:+UseNUMA to those JVMs by setting the configuration parameter child.java.opts. But Hadoop then started throwing ExitCodeException (exitCode=1); it seems that Hadoop does not accept this JVM parameter.
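Concretely, what we are trying amounts to something like the following in mapred-site.xml (property names assumed per Hadoop 2.x, where mapreduce.map.java.opts and mapreduce.reduce.java.opts replace the deprecated mapred.child.java.opts; the -Xmx value is just a placeholder):

```xml
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m -XX:+UseNUMA</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1024m -XX:+UseNUMA</value>
</property>
```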

What should we do to enable the UseNUMA flag for our Hadoop cluster? Or, more generally, what should we do to reduce the ratio of remote to local memory accesses on NUMA hardware? Should we just change the Hadoop scripts, or do we need to modify the source code? And how?

The Hadoop version is 2.6.0.

Best Regards.

Dacai
