Thanks Ravi, it helped. BTW, only the first trick worked:
hadoop dfsadmin -report | grep "Name:" | cut -d":" -f2
The 2nd one may not be applicable, as I need to automate this (hence the
need for a command-line utility).
The 3rd approach didn't work, as the commands are getting executed only on
the local slave node.
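For automating the first trick, a small sketch of turning the report into a
plain list of slave IPs and looping over it (the report text below is a made-up
sample, assuming the usual 0.20-era format where each datanode block starts
with "Name: <ip>:<port>"; on a live cluster you would pipe
"hadoop dfsadmin -report" instead of reading a file):

```shell
# Fake report for illustration; replace with: hadoop dfsadmin -report
cat > report.txt <<'EOF'
Name: 10.0.0.1:50010
Decommission Status : Normal
Name: 10.0.0.2:50010
Decommission Status : Normal
EOF

# Keep only the datanode "Name:" lines, take the ip part, strip spaces
slaves=$(grep '^Name:' report.txt | cut -d':' -f2 | tr -d ' ')
echo "$slaves"

# Then e.g.: for ip in $slaves; do ssh "$ip" some-command; done
```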
There are several ways to get the slave IP addresses. (Not sure if you can
use all of these on EC2.)
1. hadoop dfsadmin -report shows you the list of nodes and their status.
2. The NameNode's slaves page displays information about live nodes.
3. You can execute commands on the slave nodes using bin/slaves.sh.
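To show what the 3rd option does: bin/slaves.sh ssh'es the given command to
every host listed in conf/slaves (one hostname/IP per line), e.g.
"bin/slaves.sh uptime". A minimal local stand-in for that loop, with echo in
place of ssh so it runs without a cluster (slaves.txt here is a hypothetical
stand-in for conf/slaves):

```shell
# Stand-in for $HADOOP_HOME/conf/slaves: one host per line
cat > slaves.txt <<'EOF'
10.0.0.1
10.0.0.2
EOF

# slaves.sh essentially does: ssh $host "$@" for each host; we echo instead
while read -r host; do
  echo "would run on $host: uptime"
done < slaves.txt
```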
I am using EC2 and don't see the slaves in the $HADOOP_HOME/conf/slaves file.
On Sat, Mar 6, 2010 at 9:33 PM, Ted Yu wrote:
> check conf/slaves file on master:
> http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_%28Multi-Node_Cluster%29#conf.2Fslaves_.28master_only.29
>
> On Fri, Mar 5,
check conf/slaves file on master:
http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_%28Multi-Node_Cluster%29#conf.2Fslaves_.28master_only.29
On Fri, Mar 5, 2010 at 7:13 PM, prasenjit mukherjee <
pmukher...@quattrowireless.com> wrote:
> Is there any way ( like hadoop-commandline or f