Yes, thank you very much. It is OK now.

$ ./bin/stop-all.sh
stopping jobtracker
localhost: no tasktracker to stop
stopping namenode
localhost: no datanode to stop
localhost: stopping secondarynamenode

$ /data/mahmood/openjdk6/build/linux-amd64/bin/jps
12576 Jps

$ rm -rf /data/mahmood/nutch-test/filesystem/name/*
$ rm -rf /data/mahmood/nutch-test/filesystem/data/*

$ ./bin/hadoop namenode -format
Re-format filesystem in /data/mahmood/nutch-test/filesystem/name ? (Y or N) Y

$ ./bin/start-all.sh
starting namenode, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-namenode-orca.out
localhost: starting datanode, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-datanode-orca.out
localhost: starting secondarynamenode, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-secondarynamenode-orca.out
starting jobtracker, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-jobtracker-orca.out
localhost: starting tasktracker, logging to /data/mahmood/nutch-test/search/logs/hadoop-mahmood-tasktracker-orca.out

$ /data/mahmood/openjdk6/build/linux-amd64/bin/jps
13490 JobTracker
13810 Jps
13074 DataNode
12801 NameNode
13396 SecondaryNameNode
13740 TaskTracker
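As a quick sanity check after start-all.sh, one can verify that all five daemons of a pseudo-distributed cluster appear in the jps listing. A minimal sketch in plain shell — the `check_daemons` helper is illustrative, not part of the Hadoop scripts, and assumes jps output in the "<pid> <name>" form shown above:

```shell
# Illustrative helper: check jps output for the five expected daemons.
check_daemons() {
    # $1: the output of `jps`, one "<pid> <name>" pair per line
    for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
        echo "$1" | grep -q " $d\$" || { echo "missing: $d"; return 1; }
    done
    echo "all daemons up"
}
```

Typical use would be `check_daemons "$(jps)"`; a non-zero return means at least one daemon failed to come up and its log file should be inspected.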

$ ./bin/hadoop dfs -put urlsdir/urllist.txt urlsdir

$ ./bin/hadoop dfs -ls
Found 1 items
-rw-r--r--   1 mahmood supergroup         25 2014-04-05 08:23 /user/mahmood/urlsdir
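One thing worth noting in the -ls output: the put created a *file* named /user/mahmood/urlsdir, because the target did not yet exist in HDFS. Had urlsdir already existed as a directory, the file would instead have landed at /user/mahmood/urlsdir/urllist.txt. A sketch of that resolution rule in plain shell — `put_target` and the `dst_is_dir` flag are illustrative stand-ins, not Hadoop commands:

```shell
# Illustrative only: mimics how `dfs -put src dst` resolves the final HDFS path.
put_target() {
    src=$1; dst=$2; dst_is_dir=$3   # dst_is_dir: "yes" if dst already exists as a directory
    if [ "$dst_is_dir" = "yes" ]; then
        echo "$dst/$(basename "$src")"   # file goes inside the existing directory
    else
        echo "$dst"                      # dst itself becomes the file (the case above)
    fi
}
```

So to keep the file under its own name, create the directory first (`./bin/hadoop dfs -mkdir urlsdir`) and then put into it.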


 
Regards,
Mahmood


On Saturday, April 5, 2014 8:14 AM, Jitendra Yadav <[email protected]> wrote:
>Shut down all the Hadoop processes, then remove everything from
>/data/mahmood/nutch-test/filesystem/name/ and
>/data/mahmood/nutch-test/filesystem/data/, and then format the namenode;
>now you can start the cluster as normal.
>
>Note: make sure you take a backup of any critical data before cleaning
>the directories (if any).
>
>Thanks
>Jitendra
