Yongjun Zhang created HDFS-10333:
------------------------------------

             Summary: Intermittent org.apache.hadoop.hdfs.TestFileAppend failure in trunk
                 Key: HDFS-10333
                 URL: https://issues.apache.org/jira/browse/HDFS-10333
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs
            Reporter: Yongjun Zhang


Java 8 (I used JAVA_HOME=/opt/toolchain/jdk1.8.0_25):

{code}
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 27.75 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileAppend
testMultipleAppends(org.apache.hadoop.hdfs.TestFileAppend)  Time elapsed: 3.674 sec  <<< ERROR!
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:43067,DS-cf80da41-3697-4afa-8f89-93693cd5035d,DISK], DatanodeInfoWithStorage[127.0.0.1:32946,DS-3b08422c-959e-42f0-a624-91b2524c4371,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:43067,DS-cf80da41-3697-4afa-8f89-93693cd5035d,DISK], DatanodeInfoWithStorage[127.0.0.1:32946,DS-3b08422c-959e-42f0-a624-91b2524c4371,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1166)
        at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
        at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)


{code}
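For context on the exception: a datanode in the append pipeline was marked bad, and the client's DEFAULT replace-datanode-on-failure policy could not find a spare datanode in the mini cluster to take its place, so the append failed. As a minimal sketch of my own (not code from TestFileAppend), a client or test that wants to tolerate this on a small cluster could relax the policy; the policy key is the one named in the error message, while the best-effort key is my assumption of the related knob:

{code}
// Sketch only (my illustration, not from the test): relax the client-side
// replace-datanode-on-failure policy so an append on a small cluster keeps
// writing to the remaining pipeline instead of failing.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class RelaxReplacementPolicySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Never attempt to replace a failed pipeline datanode (key is the one
    // named in the error message above; NEVER is one of its accepted values).
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
    // Alternative (assumed related knob): keep DEFAULT but continue after a
    // failed replacement attempt instead of throwing:
    // conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.best-effort", true);

    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
    try {
      cluster.waitActive();
      // Appends via cluster.getFileSystem() would now survive losing one
      // pipeline datanode without requiring a replacement node.
    } finally {
      cluster.shutdown();
    }
  }
}
{code}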

However, when I run with Java 1.7, the test sometimes succeeds, and it sometimes fails with:
{code}
Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 41.32 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileAppend
testMultipleAppends(org.apache.hadoop.hdfs.TestFileAppend)  Time elapsed: 9.099 sec  <<< ERROR!
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:49006,DS-498240fa-d1c7-4ba1-b97e-a1761cbbefa5,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-b83b49ce-fc14-4b9e-a3fc-7df2cd9fc753,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:49006,DS-498240fa-d1c7-4ba1-b97e-a1761cbbefa5,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-b83b49ce-fc14-4b9e-a3fc-7df2cd9fc753,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1162)
        at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
        at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)

{code}


The failure of this test is intermittent, but it happens fairly often.