[ https://issues.apache.org/jira/browse/HADOOP-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12715909#action_12715909 ]

Steve Loughran commented on HADOOP-5945:
----------------------------------------

>only one datanode server per node cannot fully utilize the node's resources.

The datanode shouldn't be using that much CPU/memory if you can help it, as 
those resources are for the tasks the task tracker starts. You are free to 
increase the number of task tracker slots to use up all the spare RAM and CPU 
time you have.
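For example (property names as on the 0.20-era releases; the values are purely 
illustrative and should be sized to the node's cores and RAM), the per-node 
slot counts live in mapred-site.xml:

  <!-- mapred-site.xml: illustrative slot counts, size to the node -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>8</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>
  </property>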

-If the server has spare storage, then you can add more directories to the list 
of storage dirs for the datanode to use.
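As a sketch (the mount points here are hypothetical), the extra disks go into 
dfs.data.dir in hdfs-site.xml as a comma-separated list, and the datanode 
spreads block storage across all of them:

  <!-- hdfs-site.xml: hypothetical mount points, one entry per disk -->
  <property>
    <name>dfs.data.dir</name>
    <value>/disk1/hdfs/data,/disk2/hdfs/data,/disk3/hdfs/data</value>
  </property>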

@Hairong -yes, separate VMs are best. There are a few places where System.exit() 
can be called, and unless you are running under a security manager, a single VM 
will shut down without warning.
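For illustration only (this is not Hadoop code), a minimal sketch of the kind 
of security manager it would take to stop one daemon's System.exit() from 
silently taking down everything else sharing the JVM:

  // Hypothetical sketch: turn System.exit() into an exception so one daemon's
  // exit path cannot terminate other daemons running in the same JVM.
  public class NoExitSecurityManager extends SecurityManager {
    @Override
    public void checkExit(int status) {
      throw new SecurityException("System.exit(" + status + ") blocked");
    }
    @Override
    public void checkPermission(java.security.Permission perm) {
      // permit everything else; only exit is intercepted
    }
    public static void install() {
      System.setSecurityManager(new NoExitSecurityManager());
    }
  }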


> Support running multiple DataNodes/TaskTrackers simultaneously in a single 
> node
> -------------------------------------------------------------------------------
>
>                 Key: HADOOP-5945
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5945
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: He Yongqiang
>
> We should support multiple datanodes/tasktrackers running on the same node, 
> as long as they do not share the same port/local fs dir, etc. I think Hadoop 
> can be easily adapted to meet this.  
> I guess the first and major step is that we should modify the scripts so they 
> support starting multiple datanode/tasktracker daemons on the same node.
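As a hypothetical sketch of what the scripts would need to allow, assuming each 
extra daemon gets its own conf directory whose hdfs-site.xml/mapred-site.xml 
override the ports and data/local dirs, and whose hadoop-env.sh sets distinct 
HADOOP_PID_DIR and HADOOP_LOG_DIR so the pid and log files do not collide:

  # conf.dn2 and conf.tt2 are hypothetical per-instance config directories
  bin/hadoop-daemon.sh --config conf.dn2 start datanode
  bin/hadoop-daemon.sh --config conf.tt2 start tasktracker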

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
