Hello:

I ran more tests, but now I notice that only 3 of the nodes have datanodes 
running while the others do not. I ran the admin report tool and the result 
is below. Where do I configure the capacity?

 bin/hadoop dfsadmin -report


Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 161.74.12.36:50010
Decommission Status : Normal
Configured Capacity: 0 (0 KB)
DFS Used: 0 (0 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 0(0 KB)
DFS Used%: 100%
DFS Remaining%: 0%
Last contact: Sat Aug 13 02:39:39 BST 2011
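
From what I have read, the capacity is not configured directly: it should 
reflect the disks behind dfs.data.dir on each datanode, so the zeros above 
presumably mean no usable data directory was registered. A sketch of the 
property as I understand it in conf/hdfs-site.xml (the path is a placeholder 
for a directory my user can write to on every node):

  <!-- conf/hdfs-site.xml (sketch): dfs.data.dir must exist and be
       writable by the user that runs the datanode -->
  <property>
    <name>dfs.data.dir</name>
    <value>/home/my-user/hadoop-data/dfs/data</value>
  </property>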

Thanks,
A Df



>________________________________
>From: A Df <[email protected]>
>To: "[email protected]" <[email protected]>; Harsh J 
><[email protected]>
>Sent: Saturday, 13 August 2011, 0:19
>Subject: Hadoop Cluster setup - no datanode
>
>Hello Mates:
>
>Thanks to everyone for their help so far. I have learnt a lot and have now 
>done single-node and pseudo-distributed mode. I have a Hadoop cluster, but I 
>ran jps on the master node and a slave node and not all processes are started:
>
>master:
>22160 NameNode
>22716 Jps
>22458 JobTracker
>
>slave:
>32195 Jps
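>
>To see why the datanode is missing, my understanding is that the daemon can 
>be started by hand on a slave so that any error shows up directly (assuming 
>the stock 0.20.2 scripts and log layout):
>
>  # on the slave: start the datanode alone, then read its log
>  bin/hadoop-daemon.sh start datanode
>  tail -n 50 logs/hadoop-*-datanode-*.log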
>
>I also checked the logs and I see files for all the datanodes, jobtracker, 
>namenode, secondarynamenode, and tasktracker, although one slave node's 
>tasktracker log is missing. The namenode formatted correctly. I set the 
>values below, so I'm not sure if I need more. My cluster is 11 nodes (1 
>master, 10 slaves). I do not have root access, only my own directory, so 
>Hadoop is installed there. I can ssh to the slaves properly.
>    * fs.default.name, dfs.name.dir, dfs.data.dir, mapred.job.tracker, 
>      mapred.system.dir (sketched below)
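>
>For reference, a sketch of how I understand these map onto the 0.20 conf 
>files (host names and paths are placeholders for my own):
>
>  <!-- conf/core-site.xml -->
>  <property>
>    <name>fs.default.name</name>
>    <value>hdfs://master-host:9000</value>
>  </property>
>
>  <!-- conf/hdfs-site.xml -->
>  <property>
>    <name>dfs.name.dir</name>
>    <value>/home/my-user/hadoop-data/dfs/name</value>
>  </property>
>  <property>
>    <name>dfs.data.dir</name>
>    <value>/home/my-user/hadoop-data/dfs/data</value>
>  </property>
>
>  <!-- conf/mapred-site.xml -->
>  <property>
>    <name>mapred.job.tracker</name>
>    <value>master-host:9001</value>
>  </property>
>  <property>
>    <name>mapred.system.dir</name>
>    <value>/hadoop/mapred/system</value>
>  </property>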
>
>
>It also gave errors regarding:
>    * it cannot find the hadoop-daemon.sh file, even though I can see it:
>
>/home/my-user/hadoop-0.20.2_cluster/bin/hadoop-daemon.sh: line 40: cd: 
>/home/my-user/hadoop-0.20.2_cluster/bin: No such file or directory
>
>    * it has the wrong path for hadoop-config.sh, so which parameter sets 
>      this field? (see the path check further below)
>
>/home/my-user/hadoop-0.20.2_cluster/bin/hadoop-daemon.sh: line 42: 
>/home/my-user/hadoop-0.20.2_cluster/hadoop-config.sh: No such file or directory
>
>    * not being able to create the log directory on the same slave node 
>      that is missing its tasktracker; which parameter sets the log 
>      directory? (see the hadoop-env.sh line below)
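>
>On the log directory, my understanding is that it is set by HADOOP_LOG_DIR 
>in conf/hadoop-env.sh rather than by an XML property (sketch; the path is a 
>placeholder for a directory my user can write to on every node):
>
>  # conf/hadoop-env.sh
>  export HADOOP_LOG_DIR=/home/my-user/hadoop-logs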
>
>The same slave node that is giving problems also prints:
> Usage: hadoop-daemon.sh [--config <conf-dir>] [--hosts hostlistfile] 
>(start|stop) <hadoop-command> <args...>
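>
>Since both errors point at paths under /home/my-user/hadoop-0.20.2_cluster 
>that the script cannot cd into, a quick check is whether the problem slave 
>really has the same install layout (the slave name is a placeholder):
>
>  # from the master: confirm the slave sees the same paths
>  ssh some-slave ls -ld /home/my-user/hadoop-0.20.2_cluster/bin
>  ssh some-slave ls /home/my-user/hadoop-0.20.2_cluster/bin/hadoop-config.sh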
>
>
>Thanks for your help.
>
>Cheers,
>Tamara
>
>
>
