Fixed! Yes, I had the wrong namenode location configured on all the slaves.
Thanks Harsh!
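For the record, the fix amounts to giving every slave the same NameNode address. On this Hadoop generation that typically lives in core-site.xml; a minimal sketch (hostname is a placeholder, port matches the 54310 seen in the logs below):

```xml
<!-- core-site.xml, identical on every node in the cluster.
     "master" is a placeholder for the NameNode's hostname. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>
```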

On 28/04/2011, at 19:23, Harsh J wrote:

> Some quick checks:
> - Are configuration files consistent across your DNs (w.r.t. NameNode
> location [fs.default.name], primarily)?
> - Do you have a firewall running that may be blocking out connections
> from a slave to the master over the specified IPC ports?
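Both of Harsh's checks can be verified mechanically from a slave. As a minimal sketch, a TCP reachability probe for the NameNode IPC port (the hostname "master" is a placeholder; 54310 matches the logs below):

```python
# Quick reachability check for the NameNode IPC endpoint from a slave.
# "master" and 54310 are placeholders; substitute the host:port from
# your fs.default.name setting.
import socket


def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unresolvable hosts.
        return False


if __name__ == "__main__":
    print("open" if port_open("master", 54310) else "blocked")
```

If this prints "blocked" on a slave, a firewall (or a NameNode bound only to localhost) is the likely culprit rather than HDFS itself.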
> 
> On Thu, Apr 28, 2011 at 10:14 PM, Fabio Souto <[email protected]> wrote:
>> Hello,
>> 
>> I'm having some problems setting up my datanodes. I have a 4-node cluster 
>> (all of them are datanodes). If I run
>> 
>> sudo -u hdfs hadoop dfsadmin -report
>> 
>> 
>> Configured Capacity: 112231907328 (104.52 GB)
>> Present Capacity: 54121451520 (50.4 GB)
>> DFS Remaining: 54120173568 (50.4 GB)
>> DFS Used: 1277952 (1.22 MB)
>> DFS Used%: 0%
>> Under replicated blocks: 1
>> Blocks with corrupt replicas: 0
>> Missing blocks: 0
>> 
>> -------------------------------------------------
>> Datanodes available: 1 (1 total, 0 dead)
>> 
>> Name: <my-ip>:50010
>> Decommission Status : Normal
>> Configured Capacity: 112231907328 (104.52 GB)
>> DFS Used: 1277952 (1.22 MB)
>> Non DFS Used: 58110455808 (54.12 GB)
>> DFS Remaining: 54120173568 (50.4 GB)
>> DFS Used%: 0%
>> DFS Remaining%: 48.22%
>> Last contact: Thu Apr 28 18:39:28 CEST 2011
>> 
>> 
>> The report only shows 1 datanode! Checking the logs of the slaves I found 
>> this:
>> 
>> 
>>  2011-04-28 18:26:09,587 INFO org.apache.hadoop.ipc.Client: Retrying connect 
>> to server: slave/<ip>:54310. Already tried 9 time(s).
>>  2011-04-28 18:26:09,588 INFO org.apache.hadoop.ipc.RPC: Server at 
>> slave/<ip>:54310 not available yet, Zzzzz...
>> 
>> 
>> I don't know what to do... Should I configure passwordless ssh between the 
>> servers?
>> 
>> Thanks
> 
> 
> 
> -- 
> Harsh J
