Prabhu,
You fully addressed my question and I'll follow your instructions.
Many thanks and have a nice day.
Guido
On Thu, Aug 15, 2019 at 8:19 PM Prabhu Josephraj wrote:
1. An easy way to make a container exceed its configured physical memory
limit is to set the container's Heap Size (500MB) above the
Container Size (100MB):
yarn-site.xml: yarn.scheduler.minimum-allocation-mb = 100
mapred-site.xml: yarn.app.mapreduce.am.resource.mb = 100
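The two settings above, written out as config snippets (a sketch assuming
otherwise-default yarn-site.xml and mapred-site.xml):

```xml
<!-- yarn-site.xml: smallest container the scheduler will allocate -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>100</value>
</property>

<!-- mapred-site.xml: memory requested for the MapReduce AM container -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>100</value>
</property>
```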
Prabhu,
let me reformulate my question:
I successfully ran the following job:
yarn jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar pi 3 10
and noticed that the highest node physical memory usage was always <512MB
for the job's duration; also, the job completed (see details below).
Jeff, the Available Node Size for YARN is the value of
yarn.nodemanager.resource.memory-mb, which is set to ten times 512MB.
Guido, I did not get the question below; can you explain it?
Are you aware of any job syntax to tune the 'container physical
memory usage' to 'force' a job kill/log entry?
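For context, one way to force a container past its physical memory limit
from the job command line is to request small containers but a large JVM
heap. A sketch, assuming the bundled pi example parses generic -D options
(it does, via ToolRunner) and that the heap actually gets filled during
the run:

```shell
# Request 100MB containers but allow the task JVM a 500MB heap; if a
# task's physical memory usage grows past 100MB, the NodeManager kills
# the container (with yarn.nodemanager.pmem-check-enabled, the default).
yarn jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar pi \
  -Dmapreduce.map.memory.mb=100 \
  -Dmapreduce.reduce.memory.mb=100 \
  -Dmapreduce.map.java.opts=-Xmx500m \
  3 10
```

The kill is then logged by the NodeManager with the container's measured
physical memory usage, which gives a log entry to inspect.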
Hi Prabhu,
thanks for your explanation. It makes sense, but I am surprised YARN allows
you to define 'yarn.nodemanager.resource.memory-mb' higher than the node's
physical memory without logging any entry in the ResourceManager log.
Are you aware of any job syntax to tune the 'container physical memory
usage' to 'force' a job kill/log entry?
But he didn't say he had a "5120MB Available Node Size." He said he had
512MiB (i.e., half a GiB) of RAM per node.
On 8/15/19 7:50 AM, Prabhu Josephraj wrote:
YARN allocates based on the configuration
(yarn.nodemanager.resource.memory-mb) the user has set. It allocated
the AM Container of size 1536MB because it fits in the 5120MB Available
Node Size.
yarn.nodemanager.pmem-check-enabled will kill the container if the physical
memory usage of the container exceeds its allocated size.
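The check described above can be sketched as follows. This is a minimal
model for illustration, not the actual NodeManager ContainersMonitor code;
the function name and signature are hypothetical:

```python
def should_kill(container_pmem_mb: int, allocated_mb: int,
                pmem_check_enabled: bool = True) -> bool:
    """Model of the NodeManager's physical memory check: a container is
    killed when pmem checking is enabled and its measured physical memory
    usage exceeds its allocated container size."""
    return pmem_check_enabled and container_pmem_mb > allocated_mb

# A 100MB container whose JVM heap grew to 500MB resident gets killed:
print(should_kill(500, 100))   # True
# A container staying under its allocation is left alone:
print(should_kill(90, 100))    # False
```

Note this explains the earlier observation: the scheduler admits containers
against the *configured* node size (5120MB), while the pmem check acts on
*measured* usage per container, so an over-committed node only fails once
containers actually touch more memory than exists.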
Correct: I set 'yarn.nodemanager.resource.memory-mb' to ten times the node
physical memory (512MB), and I was able to successfully execute a 'pi 1 10'
mapreduce job.
Since the default 'yarn.app.mapreduce.am.resource.mb' value is 1536MB, I
expected the job to never start / be allocated, and I have no
To make sure I understand...you've allocated /ten times/ your physical
RAM for containers? If so, I think that's your issue.
For reference, under Hadoop 3.x I didn't have a cluster that would
really do anything until its worker nodes had at least 8GiB.
On 8/14/19 12:10 PM, . . wrote:
Hi