Hi all,

I am running an EMR cluster with 1 master node and 10 core nodes.

When I go to the Hadoop cluster dashboard, I see that each container only
has 11.25 GB of memory available, whereas the instance I use for it
(r3.xlarge) has 30.5 GB of memory.

May I ask how this is possible and why? Also, is it possible to fully
utilise these resources?
I am able to change the settings to use the 11.25 GB that is available,
but I am wondering about the remainder of the 30.5 GB that r3.xlarge
offers. These are the memory settings I currently pass at job submission:
------------------------------
# per-task JVM heap in MB; the -D flags below are appended to the job
# submission command (mapred.* are the old MR1 property names; on
# MR2/YARN the equivalents are mapreduce.map.java.opts and
# mapreduce.map.memory.mb)
HEAP=9216
-Dmapred.child.java.opts=-Xmx${HEAP}m \
-Dmapred.job.map.memory.mb=${HEAP} \
-Dyarn.app.mapreduce.am.resource.mb=1024 \
-Dmapred.cluster.map.memory.mb=${HEAP} \
------------------------------
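
For reference, my understanding is that the total memory YARN offers to
containers on each node is capped by yarn.nodemanager.resource.memory-mb
(and the largest single container by yarn.scheduler.maximum-allocation-mb)
in yarn-site.xml. A minimal sketch of how I checked this on a node; the
/etc/hadoop/conf path is what I would expect on EMR, so treat it as an
assumption:
------------------------------
# Show how much memory YARN actually offers for containers on this node
# (property names are stock Hadoop YARN; /etc/hadoop/conf is the assumed
# conf dir on an EMR node)
grep -A1 'yarn.nodemanager.resource.memory-mb' /etc/hadoop/conf/yarn-site.xml
grep -A1 'yarn.scheduler.maximum-allocation-mb' /etc/hadoop/conf/yarn-site.xml
------------------------------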
Please see the linked cluster screenshot: http://imgur.com/a/zFvyw
