Hi,

I am trying to run a Giraph application (computing betweenness centrality) on 
the XSEDE Comet cluster, but every time I get an error related to container 
launch: either the virtual memory or the physical memory limit is exceeded.

To avoid this, it looks like the following parameters have to be set (a rough 
sizing sketch follows the list).

i) The maximum memory yarn can utilize on every node

ii) How the total available resources are divided into containers

iii) Physical RAM limit for each Map and Reduce task

iv) The JVM heap size limit for each task

v) The amount of virtual memory each task will get
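
As a rough sizing sketch (these numbers are my assumptions, not verified 
settings): a standard Comet compute node has 24 cores and 128 GB of RAM, of 
which roughly 120 GB can be given to YARN once the OS and Hadoop daemons are 
accounted for. Giraph runs as a map-only job and, with the default 
split-master-worker setting, needs W + 1 containers (W workers plus one 
master), so the per-mapper container size works out to roughly

mapreduce.map.memory.mb ≈ (N × yarn.nodemanager.resource.memory-mb) / (W + 1)

capped by what a single node can hold, with the JVM heap (-Xmx in 
mapreduce.map.java.opts) at about 80% of the container size so the process 
stays under the physical memory limit.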

If I were to use **N nodes** for computation and want to run **W workers**, 
what should the following parameters be?

In mapred-site.xml

mapreduce.map.memory.mb

mapreduce.reduce.memory.mb

mapreduce.map.cpu.vcores

mapreduce.reduce.cpu.vcores
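
For illustration, here is a mapred-site.xml sketch. The values are assumptions 
sized for six 20 GB / 4-vcore containers per 24-core, 128 GB node (so up to 
W = 6N − 1 workers), not recommendations for Comet; mapreduce.map.java.opts is 
not in the list above, but it is where the heap limit from item iv goes:

<!-- mapred-site.xml: placeholder values; rescale using the formula above -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>20480</value>  <!-- container size per map task (one Giraph worker) -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx16384m</value>  <!-- heap ~80% of the container size -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>20480</value>  <!-- Giraph jobs are map-only; set for completeness -->
</property>
<property>
  <name>mapreduce.map.cpu.vcores</name>
  <value>4</value>
</property>
<property>
  <name>mapreduce.reduce.cpu.vcores</name>
  <value>4</value>
</property>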

In yarn-site.xml

yarn.nodemanager.resource.memory-mb

yarn.scheduler.minimum-allocation-mb

yarn.scheduler.minimum-allocation-vcores

yarn.scheduler.maximum-allocation-vcores

yarn.nodemanager.resource.cpu-vcores
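
And a matching yarn-site.xml sketch under the same assumptions. 
yarn.scheduler.maximum-allocation-mb is not in the list above but must be at 
least as large as the biggest container requested, and the two vmem properties 
at the end are often the actual fix for "running beyond virtual memory limits" 
kills, since JVMs reserve far more virtual than physical memory:

<!-- yarn-site.xml: placeholder values for a 24-core, 128 GB node -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>122880</value>  <!-- ~120 GB of the node left for containers -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>24</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>  <!-- allocation granularity -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>122880</value>  <!-- must cover the largest requested container -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>24</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>  <!-- default is 2.1; raise it if vmem kills persist... -->
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>  <!-- ...or disable the virtual memory check entirely -->
</property>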
