[ https://issues.apache.org/jira/browse/HADOOP-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12566177#action_12566177 ]

Joydeep Sen Sarma commented on HADOOP-2764:
-------------------------------------------

> The JVM doesn't generally use more heap unless it has to. Are you seeing 
> datanodes and tasktrackers that use a lot of memory?

No - I don't think the tasktrackers and datanodes consume a lot of memory; I am 
just being cautious. From what I have read and seen, resident memory is 
affected by the heap size setting: garbage collection is less aggressive when 
there is a lot of headroom in the heap. So in general it seems like a good 
idea to set the heap size close to what's actually required. 

> specify different heap size for namenode/jobtracker vs. tasktracker/datanodes
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-2764
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2764
>             Project: Hadoop Core
>          Issue Type: New Feature
>    Affects Versions: 0.15.3
>            Reporter: Joydeep Sen Sarma
>            Priority: Minor
>
> tasktrackers/datanodes should be run with low memory settings. There's a lot 
> of competition for memory on slave nodes, these daemons don't need much 
> memory anyway, and it's best to keep their heap settings low.
> The namenode needs more memory, and there's usually plenty to spare on its 
> separate box.
> hadoop-env.sh could provide different heap settings for central vs. slave 
> daemons.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
