Arpit Agarwal created HDFS-12436:
------------------------------------
Summary: TestClientProtocolForPipelineRecovery fails in trunk
Key: HDFS-12436
URL: https://issues.apache.org/jira/browse/HDFS-12436
Project: Hadoop HDFS
Issue Type: Bug
Affects Versions: 3.0.0-beta1
Reporter: Arpit Agarwal
Priority: Blocker
Fails consistently in trunk with the following exception:
{code}
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 71.317 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
testZeroByteBlockRecovery(org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery)  Time elapsed: 11.422 sec  <<< ERROR!
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:63722,DS-9befc828-8ff7-4284-8fba-a6c55627ab3d,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:63722,DS-9befc828-8ff7-4284-8fba-a6c55627ab3d,DISK]]). The current failed datanode replacement policy is ALWAYS, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1321)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1387)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1586)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1487)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1469)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1273)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684)
{code}
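
For reference, the policy named in the exception is controlled by the client-side key 'dfs.client.block.write.replace-datanode-on-failure.policy'. Below is a minimal sketch of how a client (or test) could set that policy on its Configuration; the keys and values are the standard HDFS client config options, but whether the test should relax the policy rather than fix the recovery path itself is not implied here.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ReplaceDatanodePolicyExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Valid policy values are NEVER, DEFAULT, and ALWAYS; the failing run above used ALWAYS.
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
    // The replacement feature itself is toggled separately.
    conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
    // A FileSystem/DFSClient created from this conf would then keep writing on the
    // remaining pipeline instead of failing when no replacement datanode is available.
  }
}
{code}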