[
https://issues.apache.org/jira/browse/HDFS-177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070778#comment-14070778
]
Allen Wittenauer commented on HDFS-177:
---------------------------------------
We should verify that this is fixed.
I've seen a similar issue under 2.2.0 and I'm beginning to wonder if
it isn't the same problem.
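For anyone triaging this, the failure mode is easy to reason about in isolation. A minimal sketch (not Hadoop's actual code; the function name and cluster dict are made up for illustration) of how a NameNode-style block placement step ends up throwing the error quoted below when every candidate DataNode is out of space:

```python
# Hypothetical sketch of NameNode target selection -- NOT Hadoop's real
# implementation. It only illustrates how "could only be replicated to
# 0 nodes, instead of 1" arises when no DataNode has room for a block.
def choose_targets(free_space_by_node, replication, block_size):
    """Return up to `replication` DataNodes with room for one more block."""
    targets = [node for node, free in free_space_by_node.items()
               if free >= block_size]
    if not targets:
        # Mirrors the error raised from FSNamesystem.getAdditionalBlock
        raise IOError("could only be replicated to 0 nodes, instead of 1")
    return targets[:replication]

BLOCK = 64 * 1024 * 1024  # 64 MB, the default HDFS block size at the time

# Two of three nodes full, one writable: the write still goes ahead,
# just under-replicated.
print(choose_targets({"dn1": 0, "dn2": 0, "dn3": 10 * BLOCK}, 2, BLOCK))

# All nodes full (or excluded): the client sees the exception below.
try:
    choose_targets({"dn1": 0, "dn2": 0, "dn3": 0}, 2, BLOCK)
except IOError as e:
    print(e)
```

The point of the sketch: the exception does not mean replication itself failed, only that zero placement targets were found, which is why a cluster where most DataNodes are full can still hit it even with one node nominally free (e.g. if that node is excluded for other reasons).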
> Error when 2 of 3 DataNodes are full: "Could only be replicated to 0 nodes, instead of 1"
> -----------------------------------------------------------------------------------------
>
> Key: HDFS-177
> URL: https://issues.apache.org/jira/browse/HDFS-177
> Project: Hadoop HDFS
> Issue Type: Bug
> Environment: * 3 machines, 2 of them with only 80GB of space, and 1 with 1.5GB
> * Two clients are copying files all the time (one of them is the 1.5GB machine)
> * The replication is set to 2
> Reporter: Stas Oskin
>
> I let the space on the 2 smaller machines run out, to test the behavior.
> Now, one of the clients (the one located on the 1.5GB machine) works fine, while
> the other one - the external client - is unable to copy and displays the error
> and the exception below:
> 10:51:03 WARN dfs.DFSClient: NotReplicatedYetException sleeping /test/test.bin retries left 1
> 09/05/21 10:51:06 WARN dfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /test/test.bin could only be replicated to 0 nodes, instead of 1
> at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1123)
> at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:330)
> at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:890)
>
> at org.apache.hadoop.ipc.Client.call(Client.java:716)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
> at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2450)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2333)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1800(DFSClient.java:1745)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1922)
>
> 09/05/21 10:51:06 WARN dfs.DFSClient: Error Recovery for block null bad datanode[0]
> java.io.IOException: Could not get block locations. Aborting...
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2153)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1745)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1899)
--
This message was sent by Atlassian JIRA
(v6.2#6252)