I did this several times, while tuning the configuration in all kinds of
ways... But still, nothing helped -
Even when I stop everything, reformat, and start it all back up again, I get
this error whenever I try to use dfs -put.
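
For reference, this is roughly the sequence I run each time (paths assume
the default hadoop.tmp.dir of /tmp/hadoop-<user>; adjust the rm if
dfs.name.dir / dfs.data.dir point elsewhere):

# bin/stop-all.sh
# rm -rf /tmp/hadoop-*/dfs        (on the master and on every slave - this wipes HDFS)
# bin/hadoop namenode -format
# bin/start-all.sh
# bin/hadoop dfs -put w.txt w.txt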


Jason Venner-2 wrote:
> 
> This happens to me, when the dfs has gotten into an inconsistent state.
> 
> NOTE: you will lose all of the contents of your HDFS filesystem.
> 
> What I have to do is stop dfs, remove the contents of the dfs 
> directories on all the machines, run hadoop namenode -format on the 
> controller, then restart dfs.
> That consistently fixes the problem for me. This may be serious overkill 
> but it works.
> 
> NOTE: you will lose all of the contents of your HDFS filesystem.
> 
> jerrro wrote:
>> I am trying to install/configure hadoop on a cluster with several
>> computers. I followed the instructions on the hadoop website for
>> configuring multiple slaves exactly, and when I run start-all.sh I get
>> no errors - both the datanode and the tasktracker are reported to be
>> running (doing ps awux | grep hadoop on the slave nodes returns two
>> java processes). Also, the log files are empty - nothing is printed
>> there. Still, when I try to use bin/hadoop dfs -put, I get the
>> following error:
>>
>> # bin/hadoop dfs -put w.txt w.txt
>> put: java.io.IOException: File /user/scohen/w4.txt could only be
>> replicated to 0 nodes, instead of 1
>>
>> and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).
>>
>> I couldn't find much information about this error, but I did manage to
>> see somewhere that it might mean there are no datanodes running. But as
>> I said, start-all does not give any errors. Any ideas what the problem
>> could be?
>>
>> Thanks.
>>
>> Jerr.
>>   
> 
> 
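
P.S. One check that might help narrow this down: bin/hadoop dfsadmin -report
shows how many datanodes the namenode actually knows about. If it reports 0
datanodes even though the datanode processes are running on the slaves, then
they never registered with the namenode (for example, because the slaves
cannot reach the host/port given in fs.default.name), which would explain a
-put failing with "replicated to 0 nodes".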

