I get the same error when doing a put, even though my cluster appears to be running fine, i.e. it has free capacity and all nodes are live.
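(I checked this with the dfsadmin report, something like the following; the exact invocation and output are from memory for this Hadoop version:

    bin/hadoop dfsadmin -report

It lists total and remaining capacity and reports each datanode as alive.)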
The error message is:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /test/test.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1127)
        at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:312)
        at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)

        at org.apache.hadoop.ipc.Client.call(Client.java:512)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2074)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1967)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1500(DFSClient.java:1487)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1601)
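For reference, the put that triggers this is just a small local file copied to the path shown in the trace; the command, reconstructed from memory, was along these lines:

    bin/hadoop dfs -put test.txt /test/test.txt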
I would appreciate any help or suggestions.

Thanks


jerrro wrote:
> 
> I am trying to install and configure Hadoop on a cluster with several
> computers. I followed the instructions on the Hadoop website exactly for
> configuring multiple slaves, and when I run start-all.sh I get no errors -
> both datanode and tasktracker are reported to be running (doing ps awux |
> grep hadoop on the slave nodes returns two java processes). Also, the log
> files are empty - nothing is printed there. Still, when I try to use
> bin/hadoop dfs -put,
> I get the following error:
> 
> # bin/hadoop dfs -put w.txt w.txt
> put: java.io.IOException: File /user/scohen/w4.txt could only be
> replicated to 0 nodes, instead of 1
> 
> and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).
> 
> I couldn't find much information about this error, but I did see somewhere
> that it might mean there are no datanodes running. But as I said,
> start-all does not give any errors. Any ideas what could be the problem?
> 
> Thanks.
> 
> Jerr.
> 

