namenode is stopping automatically!!
On Tue, Jun 23, 2009 at 10:29 PM, bharath vissapragada
<bharathvissapragada1...@gmail.com> wrote:
It worked fine when I updated the /etc/hosts file (on all the slaves) and
wrote the fully qualified domain name in hadoop-site.xml.
It worked fine for some time, then started giving a new error:
09/06/23 22:21:49 INFO ipc.Client: Retrying connect to server: master/10.2.24.21:54310. Already tried 0 time(s).
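A quick way to check whether the NameNode is actually up and listening on
that port, assuming a standard single-master setup (the port below matches
the log above; adjust if yours differs):

    # run these on the master node
    jps                          # a live NameNode should appear in this list
    netstat -tln | grep 54310    # the RPC port should show a LISTEN entry

If jps shows no NameNode, its log under $HADOOP_HOME/logs usually records
why it shut down.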
Raghu Angadi wrote:
> This is at RPC client level and there is requirement for fully qualified

I meant to say "there is NO requirement ..."

> hostname. May be "." at the end of "10.2.24.21" causing the problem?
> btw, in 0.21 even fs.default.name does not need to be fully qualified

that fix is
I encountered this problem before. If you can ping the machine using its
name but cannot ping it using its IP address, then what you have to do is
add the mapping into /etc/hosts.
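As a minimal sketch of that mapping (the name and address below are taken
from this thread; use your cluster's real values):

    # /etc/hosts on every node: map the master's hostname to its address
    10.2.24.21    master

Afterwards both "ping master" and "ping 10.2.24.21" should succeed from the
slaves.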
Raghu Angadi wrote:
This is at RPC client level and there is requirement for fully qualified
hostname. May be "." at the end of "10.2.24.21" causing the problem?
btw, in 0.21 even fs.default.name does not need to be fully qualified
name.. anything that resolves to an IP address is fine (at least for
common/FS an
fs.default.name in your hadoop-site.xml needs to be set to a fully-
qualified domain name (instead of an IP address)
-Matt
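For reference, a minimal hadoop-site.xml entry along those lines;
master.example.com is a stand-in here, not a name from this cluster:

    <property>
      <name>fs.default.name</name>
      <!-- stand-in FQDN: substitute your master's actual DNS name -->
      <value>hdfs://master.example.com:54310</value>
    </property>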
On Jun 23, 2009, at 6:42 AM, bharath vissapragada wrote:

> when I try to execute the command bin/start-dfs.sh, I get the
> following error. I have checked the hadoop-site