-------- Forwarded Message --------
Subject:        Re: WELCOME to [email protected]
Date:   Thu, 04 Jun 2015 15:48:19 +0200
From:   paco <[email protected]>
To:     [email protected]



Hello,

I have recently expanded my physical cluster. I now have two kinds of nodes:

Type 1:
    RAM: 24 GB
    Cores: 12

Type 2:
    RAM: 64 GB
    Cores: 12

These nodes are in the same physical rack. I would like to configure the
cluster to run 12 containers per node: on type 1 nodes each mapper would
get about 1.8 GB (22 GB / 12 cores ≈ 1.8 GB), and on type 2 nodes each
mapper would get about 5.3 GB (64 GB / 12 cores ≈ 5.3 GB). Is this possible?
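
From what I have read (please correct me if I am wrong), mixed node sizes
are fine as long as the ResourceManager's scheduler limits admit both
request sizes: container requests are rounded up to a multiple of
yarn.scheduler.minimum-allocation-mb (default 1024 MB) and capped by
yarn.scheduler.maximum-allocation-mb (default 8192 MB), so with the 1024 MB
minimum a 1800 MB request would become 2048 MB and 12 containers per type 1
node would no longer fit. A sketch of what I believe the scheduler bounds in
the ResourceManager's yarn-site.xml would need to look like (values are
illustrative, not my actual config):

<!-- ResourceManager yarn-site.xml (sketch, illustrative values).
     Requests are rounded up to a multiple of the minimum, so a
     smaller minimum keeps an 1800 MB request at exactly 1800 MB. -->
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>600</value>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
</property>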

I have configured it as follows:

Type 1 nodes (slaves):

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>22000</value>
</property>

<property>
    <name>mapreduce.map.memory.mb</name>
    <value>1800</value>
</property>

<property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx1800m</value>
</property>
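
Side note: I have also read that -Xmx should usually sit below the container
size (a rule of thumb I have seen is about 80% of mapreduce.map.memory.mb),
so that JVM native overhead does not push the process past the container
limit and get it killed by the NodeManager. Something like this (my
assumption, the 1450m value is illustrative):

<!-- Sketch (assumption, not tested): heap below the 1800 MB
     container to leave headroom for JVM native overhead. -->
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>1800</value>
</property>
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1450m</value>
</property>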



Type 2 nodes (slaves):

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>60000</value>
</property>

<property>
    <name>mapreduce.map.memory.mb</name>
    <value>5260</value>
</property>

<property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx5260m</value>
</property>
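
By the way, I am not sure about the property name I used:
mapred.map.child.java.opts is, as far as I know, the deprecated Hadoop 1
name, and the Hadoop 2 equivalent is mapreduce.map.java.opts. The deprecated
alias should still be translated automatically, but the current form would
be:

<!-- Hadoop 2 property name (the mapred.* form is deprecated). -->
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx5260m</value>
</property>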



However, Hadoop is creating mappers with 1 GB of memory, like this:

Type 1 nodes:
20 GB / 1 GB = 20 containers, each executing with -Xmx1800m

Type 2 nodes:
60 GB / 1 GB = 60 containers, each executing with -Xmx5260m
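
My current guess (please correct me if this is wrong): each NodeManager
reads yarn.nodemanager.resource.memory-mb from its own yarn-site.xml, but
mapreduce.map.memory.mb and the map JVM opts are per-job settings taken
from the configuration of the machine that submits the job. If my
submitting client still has the 1024 MB default, that would match the 1 GB
containers I am seeing even though the settings exist on the slaves. A
sketch of what I would then put in the client-side mapred-site.xml (values
illustrative; they could also be passed per job with -D at submission time):

<!-- Client/gateway mapred-site.xml (sketch). Job-side settings are
     read at submission time, not from each slave's configuration. -->
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>1800</value>
</property>
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1450m</value>
</property>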


Thanks!



