Hadoop will not use or hold on to memory unless it's needed.
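
Look at your own paste: of the 15907 MB shown as "used", 287 MB is buffers and
15064 MB is the Linux page cache, so applications actually hold only about
555 MB (that is what the "-/+ buffers/cache" line reports). The kernel is just
putting otherwise idle RAM to work caching file data, including your HDFS
blocks, and it hands that memory back the moment a JVM asks for it.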

Push load on the cluster and the memory stats will grow on their own.
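
If you want to watch it happen, the example jobs that ship with Hadoop are an
easy way to generate load (a sketch only: the jar name and path below vary by
version and distribution, so adjust them for your install):

$ hadoop jar $HADOOP_HOME/hadoop-examples-*.jar teragen 10000000 /tmp/tgen-in
$ hadoop jar $HADOOP_HOME/hadoop-examples-*.jar terasort /tmp/tgen-in /tmp/tsort-out

teragen writes 10000000 rows of 100 bytes each (roughly 1 GB) and terasort
then sorts them. Run "watch -n 5 free -m" on a worker while the jobs run and
you should see the used and cached figures climb.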

On Tue, Jul 24, 2012 at 2:52 PM, Kamil Rogoń
<kamil.ro...@cantstopgames.com> wrote:
> Hello,
>
> Reading best practices on the Internet for selecting Hadoop hardware, I
> noticed they always call for plenty of RAM. My Hadoop machines each have
> 16GB of memory, but I am worried about how little of it is utilized:
>
> $ free -m
>              total       used       free     shared    buffers     cached
> Mem:         15997      15907         90          0        287      15064
> -/+ buffers/cache:        555      15442
> Swap:        15258        150      15108
>
> $ free -m
>              total       used       free     shared    buffers     cached
> Mem:         16029      15937         92          0        228      14320
> -/+ buffers/cache:       1388      14641
> Swap:        15258       1017      14240
>
> As you can see, memory used excluding buffers/cache is below 10%. Which
> options should I look at more closely? I changed the cluster's "Heap Size",
> but utilization doesn't grow (Heap Size is 70.23 MB / 3.47 GB (1%)).
>
> Current config options that can affect memory:
>
> <name>fs.inmemory.size.mb</name>
> <value>200</value>
>
> <name>io.sort.mb</name>
> <value>200</value>
>
> <name>io.file.buffer.size</name>
> <value>131072</value>
>
> <name>dfs.block.size</name>
> <value>134217728</value>
>
> <name>mapred.child.java.opts</name>
> <value>-Xmx1024M</value>
>
> export HADOOP_HEAPSIZE=4000
>
>
> Thanks for your reply,
> K.R.
>



-- 
Nitin Pawar
