[ https://issues.apache.org/jira/browse/HDFS-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741206#comment-15741206 ]
Yuanbo Liu commented on HDFS-11195:
-----------------------------------
[~xiaochen] Thanks for your response.
{quote}
From a quick look I doubt using the correct......
{quote}
I'm afraid not. The data transfer pipeline is:
{code}
webhdfs-client ->(1) webhdfs server in datanode ->(2) hdfs block.
{code}
And {{exceptionCaught}} only takes effect when pipeline (1) encounters an
exception, whereas the exception in this issue happens in pipeline (2).
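(Here {{exceptionCaught}} refers to Netty's {{ChannelInboundHandler#exceptionCaught}} callback used on the datanode's webhdfs server side; below is a generic sketch, not Hadoop's actual handler.)
{code}
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Generic sketch: this callback only fires for errors raised on the channel
// between the webhdfs client and the datanode's webhdfs server, i.e.
// pipeline (1). Errors in pipeline (2) happen in DataStreamer's own thread
// and never reach this handler.
public class WebHdfsLikeHandler extends ChannelInboundHandlerAdapter {
  @Override
  public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
    ctx.close();  // e.g. map the error to an HTTP error response, then close
  }
}
{code}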
What's more, pipeline (2) relies on {{DataStreamer}}, which transfers buffered
data asynchronously in another thread. Only when closing the {{OutputStream}}
and flushing the data do we have a chance to catch the exception.
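To illustrate (a minimal sketch with placeholder names, not the actual patch):
{code}
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: DataStreamer failures (e.g. no datanode available to build the
// block pipeline) are raised in a background thread; write() only buffers,
// so the pending IOException surfaces on hflush()/close().
public class AppendSketch {
  static void append(FileSystem fs, Path path, byte[] data) throws Exception {
    FSDataOutputStream out = fs.append(path);  // returns immediately
    out.write(data);                           // buffered; no error yet
    out.close();                               // DataStreamer exception thrown here
  }
}
{code}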
By the way, it's not elegant that my code change ignores the exception; I'll
come up with a better way to fix it.
{quote}
For example, would MiniDFSCluster#shutdownDataNodes right..
{quote}
Good suggestion, it will certainly work in this test case. I will apply it in
my next patch.
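Roughly along these lines (a sketch only; the actual test setup and assertions may differ):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Sketch: stop the datanodes so the append pipeline cannot be built,
// then the webhdfs APPEND should report a failure instead of 200.
public class AppendFailureTestSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1).build();
    cluster.waitActive();
    cluster.shutdownDataNodes();  // force the append to fail
    // ... issue the webhdfs APPEND here and assert the HTTP status is not 200
    cluster.shutdown();
  }
}
{code}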
> When appending files by webhdfs rest api fails, it returns 200
> --------------------------------------------------------------
>
> Key: HDFS-11195
> URL: https://issues.apache.org/jira/browse/HDFS-11195
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Yuanbo Liu
> Assignee: Yuanbo Liu
> Attachments: HDFS-11195.001.patch
>
>
> Suppose a Hadoop cluster contains only one datanode, and
> dfs.replication=3. Run:
> {code}
> curl -i -X POST -T <LOCAL_FILE>
> "http://<DATANODE>:<PORT>/webhdfs/v1/<PATH>?op=APPEND"
> {code}
> it returns 200, even though the append operation fails.