I believe the debug log location is still specified in hadoop-env.sh (I just 
read the 0.19.0 docs). I think you have to shut down all the nodes first (stop-all), 
then format the NameNode, and then restart (start-all) and make sure that the 
NameNode comes up too. We are using a very old version, 0.12.3, and are 
upgrading.
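
Concretely, the sequence might look like the sketch below. The install path is
the one from the transcript further down (override HADOOP_HOME if yours differs),
and note that formatting the NameNode erases everything already stored in HDFS:

```shell
# Sketch of the full-restart sequence; /usr/local/hadoop is the install
# path from the transcript below -- override with HADOOP_HOME if different.
# WARNING: "namenode -format" wipes all data already stored in HDFS.
HADOOP_HOME="${HADOOP_HOME:-/usr/local/hadoop}"
if [ -x "$HADOOP_HOME/bin/stop-all.sh" ]; then
    "$HADOOP_HOME/bin/stop-all.sh"               # stop all daemons first
    "$HADOOP_HOME/bin/hadoop" namenode -format   # re-initialize the NameNode storage
    "$HADOOP_HOME/bin/start-all.sh"              # bring everything back up
    jps                                          # NameNode should now appear here
else
    echo "no Hadoop install found at $HADOOP_HOME"
fi
```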
-TCK



--- On Wed, 2/4/09, Mithila Nagendra <[email protected]> wrote:
From: Mithila Nagendra <[email protected]>
Subject: Re: Bad connection to FS.
To: [email protected], [email protected]
Date: Wednesday, February 4, 2009, 6:30 PM

@TCK: Which version of hadoop have you installed?
@Amandeep: I did try reformatting the NameNode, but it hasn't helped in any way.
Mithila


On Wed, Feb 4, 2009 at 4:18 PM, TCK <[email protected]> wrote:



Mithila, how come there is no NameNode java process listed by your jps command? 
I would check the Hadoop NameNode logs to see if there was some startup problem 
(the location of those logs is specified in hadoop-env.sh, at least in 
the version I'm using).
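
For example, something like this should surface a startup failure under the
default layout, where logs land in $HADOOP_HOME/logs (an assumption; the
HADOOP_LOG_DIR setting in conf/hadoop-env.sh is authoritative):

```shell
# Sketch, assuming logs live under $HADOOP_HOME/logs unless
# HADOOP_LOG_DIR is set in conf/hadoop-env.sh.
LOG_DIR="${HADOOP_LOG_DIR:-logs}"
ls "$LOG_DIR"/hadoop-*-namenode-*.log 2>/dev/null \
  || echo "no NameNode log under $LOG_DIR (also check the .out files)"
# Show the most recent errors from the NameNode log, if any exist.
grep -iE 'error|fatal|exception' "$LOG_DIR"/hadoop-*-namenode-*.log /dev/null 2>/dev/null | tail -n 20
```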


-TCK

--- On Wed, 2/4/09, Mithila Nagendra <[email protected]> wrote:
From: Mithila Nagendra <[email protected]>
Subject: Bad connection to FS.
To: "[email protected]" <[email protected]>, "[email protected]" <[email protected]>
Date: Wednesday, February 4, 2009, 6:06 PM

Hey all,

When I try to copy a folder from the local file system into HDFS using
the command hadoop dfs -copyFromLocal, the copy fails and it gives an error
which says "Bad connection to FS". How do I get past this? The following is
the output at the time of execution:

had...@renweiyu-desktop:/usr/local/hadoop$ jps
6873 Jps
6299 JobTracker
6029 DataNode
6430 TaskTracker
6189 SecondaryNameNode
had...@renweiyu-desktop:/usr/local/hadoop$ ls
bin          docs                        lib          README.txt
build.xml    hadoop-0.18.3-ant.jar       libhdfs      src
c++          hadoop-0.18.3-core.jar      librecordio  webapps
CHANGES.txt  hadoop-0.18.3-examples.jar  LICENSE.txt
conf         hadoop-0.18.3-test.jar      logs
contrib      hadoop-0.18.3-tools.jar     NOTICE.txt
had...@renweiyu-desktop:/usr/local/hadoop$ cd ..
had...@renweiyu-desktop:/usr/local$ ls
bin  etc  games  gutenberg  hadoop  hadoop-0.18.3.tar.gz  hadoop-datastore
include  lib  man  sbin  share  src
had...@renweiyu-desktop:/usr/local$ hadoop/bin/hadoop dfs -copyFromLocal gutenberg gutenberg
09/02/04 15:58:21 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
09/02/04 15:58:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
09/02/04 15:58:23 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s).
09/02/04 15:58:24 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s).
09/02/04 15:58:25 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s).
09/02/04 15:58:26 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s).
09/02/04 15:58:27 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s).
09/02/04 15:58:28 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s).
09/02/04 15:58:29 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s).
09/02/04 15:58:30 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s).
Bad connection to FS. command aborted.
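
(The retries above all target localhost:54310, which the client reads from
fs.default.name; in 0.18.x that setting normally lives in conf/hadoop-site.xml.
A quick hedged check, assuming that file layout, is to confirm the configured
address matches where the NameNode is supposed to run:

```shell
# Run from the Hadoop install directory. fs.default.name is what the
# DFS client connects to; if it is not in conf/hadoop-site.xml, the
# default from hadoop-default.xml applies (0.18.x layout assumed).
grep -A 1 fs.default.name conf/hadoop-site.xml 2>/dev/null \
  || echo "fs.default.name not set in conf/hadoop-site.xml (check hadoop-default.xml)"
```

If the address looks right, the remaining explanation is that nothing is
listening there, i.e. the NameNode never started.)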



The command jps shows that the Hadoop system is up and running, so I have no
idea what's wrong!

Thanks!
Mithila







      




      
