[ https://issues.apache.org/jira/browse/HDFS-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE updated HDFS-3179:
-----------------------------------------

    Attachment: h3179_20120403.patch

h3179_20120403.patch:
- updates the error message, as shown in the log further below;
- adds Zhanwei's test (an illustrative sketch of the scenario follows).
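
For context only, here is a rough, hypothetical sketch of what a regression test for this scenario might look like: start a single-datanode MiniDFSCluster, create a file, and append to it twice. The class name and structure are illustrative; the attached patch contains the actual test.

{code}
// Hypothetical sketch only; not the test in the attached patch.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class AppendOnSingleDatanodeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // One datanode, as in the reported setup.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      Path p = new Path("/test");

      fs.create(p).close();                    // create an empty file

      FSDataOutputStream out = fs.append(p);   // first append
      out.writeBytes("test");
      out.close();

      out = fs.append(p);                      // second append: where the reported failure occurred
      out.writeBytes("test");
      out.close();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}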

----
2012-04-03 17:59:07,624 ERROR hdfs.DFSClient (DFSClient.java:closeAllFilesBeingWritten(586)) - Failed to close file /TestReplaceDatanodeOnFailure/testAppend
java.io.IOException: Failed to add a datanode.  User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT.  (Nodes: current=[127.0.0.1:51791], original=[127.0.0.1:51791])
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:838)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:934)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
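
The updated message points at the dfs.client.block.write.replace-datanode-on-failure.policy property. As a hedged sketch (assuming the usual NEVER/DEFAULT/ALWAYS values for this property; verify against the release in use), a client running against a very small cluster could relax the feature like this:

{code}
// Sketch under the assumption that "NEVER" is an accepted value for the
// replace-datanode-on-failure policy; not taken from the attached patch.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendWithoutDatanodeReplacement {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS already points at the HDFS cluster.
    Configuration conf = new Configuration();
    // Do not try to replace a datanode when the write pipeline shrinks.
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

    FileSystem fs = FileSystem.get(conf);
    // Append proceeds with the remaining datanode(s) instead of failing.
    fs.append(new Path("/test")).close();
  }
}
{code}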

                
> failed to append data, DataStreamer throw an exception, "nodes.length != 
> original.length + 1" on single datanode cluster
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3179
>                 URL: https://issues.apache.org/jira/browse/HDFS-3179
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.23.2
>            Reporter: Zhanwei.Wang
>            Priority: Critical
>         Attachments: h3179_20120403.patch
>
>
> Create a single-datanode cluster,
> disable permissions,
> enable webhdfs,
> start HDFS,
> and run the test script below.
> Expected result:
> a file named "test" is created and its content is "testtest".
> The result I got:
> HDFS throws an exception on the second append operation.
> {code}
> ./test.sh 
> {"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed
>  to add a datanode: nodes.length != original.length + 1, 
> nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]"}}
> {code}
> Log from the datanode:
> {code}
> 2012-04-02 14:34:21,058 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
> java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> 2012-04-02 14:34:21,059 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /test
> java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> {code}
> test.sh
> {code}
> #!/bin/sh
> echo "test" > test.txt
> curl -L -X PUT "http://localhost:50070/webhdfs/v1/test?op=CREATE";
> curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND";
> curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND";
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
