[ https://issues.apache.org/jira/browse/HDFS-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251830#comment-13251830 ]
Hudson commented on HDFS-3179:
------------------------------

Integrated in Hadoop-Mapreduce-trunk-Commit #2067 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2067/])
HDFS-3179. Improve the exception message thrown by DataStreamer when it failed to add a datanode. (Revision 1324892)

Result = ABORTED
szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1324892
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java

> Improve the error message: DataStreamer throws an exception, "nodes.length != original.length + 1", on a single-datanode cluster
> ----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3179
>                 URL: https://issues.apache.org/jira/browse/HDFS-3179
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs client
>    Affects Versions: 0.23.2
>            Reporter: Zhanwei.Wang
>            Assignee: Tsz Wo (Nicholas), SZE
>             Fix For: 2.0.0
>
>         Attachments: h3179_20120403.patch
>
>
> Steps to reproduce:
> * Create a single-datanode cluster
> * Disable permissions
> * Enable webhdfs
> * Start HDFS
> * Run the test script
>
> Expected result:
> A file named "test" is created with the content "testtest".
>
> Actual result:
> HDFS throws an exception on the second append operation.
> {code}
> ./test.sh
> {"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]"}}
> {code}
>
> Log in datanode:
> {code}
> 2012-04-02 14:34:21,058 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
> java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> 2012-04-02 14:34:21,059 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /test
> java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> {code}
>
> test.sh
> {code}
> #!/bin/sh
> echo "test" > test.txt
> curl -L -X PUT "http://localhost:50070/webhdfs/v1/test?op=CREATE"
> curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND"
> curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND"
> {code}

--
This message is automatically generated by JIRA.
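Per the stack trace above, the failing check is in DFSOutputStream$DataStreamer.findNewDatanode: after pipeline recovery asks for a replacement datanode, the client expects the new pipeline to be exactly one node longer than the old one. On a single-datanode cluster no replacement exists, so the lengths match and the check fails. A minimal, hypothetical Java sketch of that invariant (class and method bodies here are illustrative, not the actual Hadoop source):

```java
// Hypothetical sketch of the pipeline-recovery invariant behind the
// "nodes.length != original.length + 1" exception. Not the real Hadoop code.
import java.io.IOException;
import java.util.Arrays;

public class FindNewDatanodeSketch {
    // After requesting an additional datanode from the NameNode, the client
    // expects exactly one node it has not seen before. On a single-datanode
    // cluster nodes.length == original.length, so this throws.
    static String findNewDatanode(String[] original, String[] nodes) throws IOException {
        if (nodes.length != original.length + 1) {
            throw new IOException("Failed to add a datanode: nodes.length != original.length + 1, "
                + "nodes=" + Arrays.toString(nodes) + ", original=" + Arrays.toString(original));
        }
        for (String n : nodes) {
            if (!Arrays.asList(original).contains(n)) {
                return n;  // the newly added datanode
            }
        }
        throw new IOException("Failed to find the new datanode");
    }

    public static void main(String[] args) {
        String[] original = {"127.0.0.1:50010"};
        try {
            // No replacement node is available on a single-datanode cluster,
            // so the "new" pipeline equals the old one.
            findNewDatanode(original, original);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

For small clusters, the client-side replace-datanode-on-failure settings (dfs.client.block.write.replace-datanode-on-failure.enable and dfs.client.block.write.replace-datanode-on-failure.policy) control whether this replacement is attempted at all, which is why the improved exception message points users at that configuration.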