Hi Prabhu,
thanks for your explanation. It makes sense, but I wonder why YARN allows you
to define 'yarn.nodemanager.resource.memory-mb' higher than the node's physical
memory without logging any entry in the resourcemanager log.
Are you aware of any job syntax to tune the 'container physical memory usage'
to 'force' a job kill/log?
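(Something along the lines of the generic -D options below is what I have in
mind; the values are only illustrative.)

    # hypothetical per-job memory tuning via generic options
    yarn jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar pi \
        -D yarn.app.mapreduce.am.resource.mb=256 \
        -D mapreduce.map.memory.mb=256 \
        -D mapreduce.reduce.memory.mb=256 \
        3 10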
But he didn't say he had a "5120MB Available Node Size." He said he had
512MiB (i.e., half a GiB) of RAM per node.
On 8/15/19 7:50 AM, Prabhu Josephraj wrote:
YARN allocates based on the configuration
(yarn.nodemanager.resource.memory-mb) the user has configured. It has
allocated the AM Container of size 1536MB as it can fit in the 5120MB
Available Node Size.
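(For reference, that Available Node Size comes from the yarn-site.xml entry
below; 5120 reflects the ten-times-512MB value discussed in this thread, and
the NodeManager apparently does not cross-check it against actual RAM.)

    <!-- yarn-site.xml: memory, in MB, the NodeManager advertises to YARN -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>5120</value>
    </property>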
yarn.nodemanager.pmem-check-enabled will kill the container if the physical
memory usage of the container exceeds the allocated container size.
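(A minimal yarn-site.xml sketch of that check; true is already the default.)

    <!-- yarn-site.xml: enforce physical memory limits on containers -->
    <property>
      <name>yarn.nodemanager.pmem-check-enabled</name>
      <value>true</value>
    </property>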
Correct: I set 'yarn.nodemanager.resource.memory-mb' to ten times the node's
physical memory (512MB) and I was able to successfully execute a 'pi 1 10'
mapreduce job.
Since the default 'yarn.app.mapreduce.am.resource.mb' value is 1536MB, I
expected the job to never start / be allocated, yet I have no related entry
in the resourcemanager log.
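(Concretely, the successful run was along these lines; the jar path matches
the one quoted later in this thread.)

    # node has 512MB RAM; yarn.nodemanager.resource.memory-mb = 10 x 512 = 5120
    yarn jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar pi 1 10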
Jeff, the Available Node Size for YARN is the value of
yarn.nodemanager.resource.memory-mb, which was set to ten times 512MB
(i.e., 5120MB).
Guido, I did not get the below question; can you explain it?
Are you aware of any job syntax to tune the 'container physical
memory usage' to 'force' job kill/log?
Prabhu,
let me reformulate my question:
I successfully ran the following job:
yarn jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar pi 3 10
and noticed that the highest node physical memory usage was always <512MB for
the duration of the job; the job then completed (see details below).
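(One way to watch per-container usage, assuming a default install, is the
NodeManager's container-monitor log; the log path below is an assumption.)

    # the NodeManager periodically logs lines roughly like:
    #   Memory usage of ProcessTree <pid> for container-id <id>:
    #   250.5 MB of 1 GB physical memory used
    grep "Memory usage of ProcessTree" $HADOOP_HOME/logs/*nodemanager*.log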
1. An easy way to make a container exceed its configured physical memory
limit is to set the container's heap size (500MB) above the container size
(100MB), e.g. with the settings below:
yarn-site.xml: yarn.scheduler.minimum-allocation-mb 100
mapred-site.xml: yarn.app.mapreduce.am.resource.mb 100
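(Spelled out as config snippets; the yarn.app.mapreduce.am.command-opts value
is my assumption for how the 500MB heap would be set in this recipe.)

    <!-- yarn-site.xml: allow containers as small as 100MB -->
    <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>100</value>
    </property>

    <!-- mapred-site.xml: 100MB AM container, but a 500MB JVM heap -->
    <property>
      <name>yarn.app.mapreduce.am.resource.mb</name>
      <value>100</value>
    </property>
    <property>
      <name>yarn.app.mapreduce.am.command-opts</name>
      <value>-Xmx500m</value>
    </property>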