Hi,

      When I run the command *bin/hadoop dfsadmin -report* it shows that 2
datanodes are alive, but when I open http://hadoopmaster:50070/ it does not
load the http://hadoopmaster:50070/dfshealth.jsp page and instead throws an
*HTTP 404 error*. Why is this happening?
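
For reference, here is a minimal sketch of the web UI setting as I
understand it (assuming dfs.http.address is the relevant property in
0.18.3; I have left it at its default, so this is only illustrative):

<property>
  <!-- address:port the namenode web UI listens on (assumed default) -->
  <name>dfs.http.address</name>
  <value>0.0.0.0:50070</value>
</property>
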
Regards,
Ashish Pareek


 On Wed, Jun 17, 2009 at 10:06 AM, Sugandha Naolekar <
sugandha....@gmail.com> wrote:

> Well, you just have to specify the address in the browser's address bar as
> http://hadoopmaster:50070 and you'll be able to see the web UI!
>
>
> On Tue, Jun 16, 2009 at 7:17 PM, ashish pareek <pareek...@gmail.com>wrote:
>
>> Hi Sugandha,
>>                    Your suggestion helped, and now I am able to run two
>> datanodes: one on the same machine as the namenode and the other on a
>> different machine. Thanks a lot :)
>>
>>                  But the problem now is that I am not able to see the web
>> UI for either the datanodes or the namenode.
>> Do I have to set anything else in the site.xml? If so, please help.
>>
>> Thanking you again,
>> Regards,
>> Ashish Pareek
>>
>> On Tue, Jun 16, 2009 at 3:10 PM, Sugandha Naolekar <
>> sugandha....@gmail.com> wrote:
>>
>>> Hi!
>>>
>>>
>>> First of all, get your Hadoop concepts clear.
>>> You can refer to the following
>>>
>>> site::
>>> http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)
>>>
>>>
>>> I have a small doubt: in the master's and the slave's hadoop-site.xml, can
>>> we have the same port number for both of them, like
>>>
>>>
>>> for slave:::::
>>>
>>> <property>
>>>     <name>fs.default.name</name>
>>>     <value>hdfs://hadoopslave:9000</value>
>>>   </property>
>>>>
>>>>
>>>>  for master:::
>>>>
>>>> <property>
>>>>     <name>fs.default.name</name>
>>>>     <value>hdfs://hadoopmaster:9000</value>
>>>>   </property>
>>>>
>>>>
>>>
>>> Well, any two daemons or services can use the same port number as long as
>>> they run on different machines. If you wish to run the DN and NN on the
>>> same machine, their port numbers have to be different.
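>>>
>>> As a rough sketch (assuming the 0.18 property name dfs.datanode.address
>>> and example port numbers), the datanode's data-transfer port on such a
>>> machine simply has to differ from the 9000 used by fs.default.name:
>>>
>>> <property>
>>>   <!-- datanode data-transfer address: port must not clash with the NN -->
>>>   <name>dfs.datanode.address</name>
>>>   <value>0.0.0.0:50010</value>
>>> </property>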
>>>
>>>
>>>
>>>
>>> On Tue, Jun 16, 2009 at 2:55 PM, ashish pareek <pareek...@gmail.com>wrote:
>>>
>>>> HI sugandha,
>>>>
>>>>
>>>>
>>>> and one more thing: can we have, in the slave's config,
>>>>
>>>> <property>
>>>>   <name>dfs.datanode.address</name>
>>>>   <value>hadoopmaster:9000</value>
>>>>   <value>hadoopslave:9001</value>
>>>> </property>
>>>>
>>>>
>>>
>>> Also, fs.default.name is the tag which specifies the default filesystem,
>>> and generally it is run on the namenode. So its value has to be the
>>> namenode's address, not the slave's.
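>>>
>>> In other words (a sketch using the hostnames from this thread; adjust to
>>> your setup), both the master's and the slave's hadoop-site.xml would
>>> carry the same entry:
>>>
>>> <property>
>>>   <!-- identical on every node: always the namenode's address -->
>>>   <name>fs.default.name</name>
>>>   <value>hdfs://hadoopmaster:9000</value>
>>> </property>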
>>>
>>>
>>>>
>>>> Else, if you have a complete procedure for installing and running Hadoop
>>>> in a cluster, can you please send it to me? I need to set up Hadoop
>>>> within two days and show it to my guide. Currently I am doing my
>>>> master's.
>>>>
>>>> Thanks for spending your time.
>>>
>>>
>>> Try for the above, and this should work!
>>>
>>>>
>>>>
>>>> regards,
>>>> Ashish Pareek
>>>>
>>>>
>>>> On Tue, Jun 16, 2009 at 2:33 PM, Sugandha Naolekar <
>>>> sugandha....@gmail.com> wrote:
>>>>
>>>>> The following changes are to be done::
>>>>>
>>>>> Under the master folder::
>>>>>
>>>>> -> Put the slave's address as well under the values of the
>>>>> tag (dfs.datanode.address).
>>>>>
>>>>> -> You want to make the namenode a datanode as well; as per your config
>>>>> file, you have specified hadoopmaster in your slaves file. If you don't
>>>>> want that, remove it from the slaves file.
>>>>>
>>>>> Under the slave folder::
>>>>>
>>>>> -> Put only the slave's address (the m/c where you intend to run your
>>>>> datanode) under the datanode.address tag. Else
>>>>> it should go as such::
>>>>>
>>>>> <property>
>>>>>   <name>dfs.datanode.address</name>
>>>>>   <value>hadoopmaster:9000</value>
>>>>>   <value>hadoopslave:9001</value>
>>>>> </property>
>>>>>
>>>>> Also, your port numbers should be different. The daemons NN, DN, JT, and
>>>>> TT should run independently on different ports.
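>>>>>
>>>>> A rough sketch of how those daemons could be kept on separate ports
>>>>> (the port numbers are only examples, and I am assuming the 0.18
>>>>> property names here):
>>>>>
>>>>> <property>
>>>>>   <name>fs.default.name</name>            <!-- NN RPC -->
>>>>>   <value>hdfs://hadoopmaster:9000</value>
>>>>> </property>
>>>>> <property>
>>>>>   <name>mapred.job.tracker</name>         <!-- JT -->
>>>>>   <value>hadoopmaster:9001</value>
>>>>> </property>
>>>>> <property>
>>>>>   <name>dfs.datanode.address</name>       <!-- DN data transfer -->
>>>>>   <value>0.0.0.0:50010</value>
>>>>> </property>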
>>>>>
>>>>>
>>>>> On Tue, Jun 16, 2009 at 2:05 PM, Sugandha Naolekar <
>>>>> sugandha....@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> ---------- Forwarded message ----------
>>>>>> From: ashish pareek <pareek...@gmail.com>
>>>>>> Date: Tue, Jun 16, 2009 at 2:00 PM
>>>>>> Subject: Re: org.apache.hadoop.ipc.client : trying connect to server
>>>>>> failed
>>>>>> To: Sugandha Naolekar <sugandha....@gmail.com>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Jun 16, 2009 at 1:58 PM, ashish pareek 
>>>>>> <pareek...@gmail.com>wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>      I am sending a .tar.gz archive containing both the master and
>>>>>>> datanode config files.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Ashish Pareek
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Jun 16, 2009 at 1:47 PM, Sugandha Naolekar <
>>>>>>> sugandha....@gmail.com> wrote:
>>>>>>>
>>>>>>>> Can you please send me a zip or a tar file? I don't have Windows
>>>>>>>> systems, only Linux.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Jun 16, 2009 at 1:19 PM, ashish pareek <pareek...@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Hi Sugandha,
>>>>>>>>>                       Thanks for your reply. I am sending you the
>>>>>>>>> master and slave configuration files; if you can go through them and
>>>>>>>>> tell me where I am going wrong, it would be helpful.
>>>>>>>>>
>>>>>>>>>                         Hope to get a reply soon. Thanks again!
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Ashish Pareek
>>>>>>>>>
>>>>>>>>> On Tue, Jun 16, 2009 at 11:12 AM, Sugandha Naolekar <
>>>>>>>>> sugandha....@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Ashish!
>>>>>>>>>>
>>>>>>>>>> Try for the following things::
>>>>>>>>>>
>>>>>>>>>> -> Check the config file (hadoop-site.xml) of the namenode.
>>>>>>>>>> -> Make sure the tag (dfs.datanode.address)'s value is given
>>>>>>>>>> correctly: the IP and the name of that machine.
>>>>>>>>>> -> Also, check the names added in the /etc/hosts file (see the
>>>>>>>>>> sketch below).
>>>>>>>>>> -> Check that the ssh keys of the datanodes are present in the
>>>>>>>>>> namenode's known_hosts file.
>>>>>>>>>> -> Check the value of dfs.datanode.address in the datanode's config
>>>>>>>>>> file.
>>>>>>>>>>
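>>>>>>>>>> A minimal sketch of the /etc/hosts entries on every node (the IPs
>>>>>>>>>> and hostnames below are only placeholders for your machines):
>>>>>>>>>>
>>>>>>>>>> 192.168.1.10   hadoopmaster   # namenode
>>>>>>>>>> 192.168.1.11   hadoopslave    # datanode
>>>>>>>>>>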
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Tue, Jun 16, 2009 at 10:58 AM, ashish pareek <
>>>>>>>>>> pareek...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> > Hi,
>>>>>>>>>> >     I am trying to set up a Hadoop cluster on 3GB machines using
>>>>>>>>>> > Hadoop 0.18.3, and I have followed the procedure given on the
>>>>>>>>>> > Apache Hadoop site for a Hadoop cluster.
>>>>>>>>>> >     In conf/slaves I have added two datanodes, i.e. the namenode
>>>>>>>>>> > virtual machine and the other virtual machine (datanode), and I
>>>>>>>>>> > have set up passwordless ssh between both virtual machines. But
>>>>>>>>>> > now the problem is when I run the command:
>>>>>>>>>> >
>>>>>>>>>> > bin/hadoop start-all.sh
>>>>>>>>>> >
>>>>>>>>>> > It starts only one datanode, on the same namenode virtual machine,
>>>>>>>>>> > but it doesn't start the datanode on the other machine.
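>>>>>>>>>> >
>>>>>>>>>> > For reference, my conf/slaves just lists one hostname per line,
>>>>>>>>>> > roughly like this (the hostnames are placeholders for my two VMs):
>>>>>>>>>> >
>>>>>>>>>> > hadoopmaster
>>>>>>>>>> > hadoopslave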
>>>>>>>>>> >
>>>>>>>>>> > In logs/hadoop-datanode.log I get the message:
>>>>>>>>>> >
>>>>>>>>>> > INFO org.apache.hadoop.ipc.Client: Retrying connect to server:
>>>>>>>>>> > hadoop1/192.168.1.28:9000. Already tried 1 time(s).
>>>>>>>>>> >
>>>>>>>>>> > 2009-05-09 18:35:14,266 INFO org.apache.hadoop.ipc.Client: Retrying
>>>>>>>>>> > connect to server: hadoop1/192.168.1.28:9000. Already tried 2 time(s).
>>>>>>>>>> >
>>>>>>>>>> > 2009-05-09 18:35:14,266 INFO org.apache.hadoop.ipc.Client: Retrying
>>>>>>>>>> > connect to server: hadoop1/192.168.1.28:9000. Already tried 3 time(s).
>>>>>>>>>> >
>>>>>>>>>> >
>>>>>>>>>> > ...
>>>>>>>>>> >
>>>>>>>>>> > I have tried formatting and starting the cluster again, but I
>>>>>>>>>> > still get the same error.
>>>>>>>>>> >
>>>>>>>>>> > So can anyone help in solving this problem? :)
>>>>>>>>>> >
>>>>>>>>>> > Thanks
>>>>>>>>>> >
>>>>>>>>>> > Regards
>>>>>>>>>> >
>>>>>>>>>> > Ashish Pareek
>>>>>>>>>> >
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Regards!
>>>>>>>>>> Sugandha
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Regards!
>>>>>>>> Sugandha
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Regards!
>>>>>> Sugandha
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Regards!
>>>>> Sugandha
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Regards!
>>> Sugandha
>>>
>>
>>
>
>
> --
> Regards!
> Sugandha
>
