[
https://issues.apache.org/jira/browse/HDFS-951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tsz Wo Nicholas Sze resolved HDFS-951.
--------------------------------------
Resolution: Not a Problem
I believe this is no longer a problem. Please feel free to reopen if
I am wrong. Resolving ...
> DFSClient should handle all nodes in a pipeline failed.
> -------------------------------------------------------
>
> Key: HDFS-951
> URL: https://issues.apache.org/jira/browse/HDFS-951
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: He Yongqiang
>
> processDatanodeError -> setupPipelineForAppendOrRecovery sets
> streamerClosed to true if all nodes in the pipeline have failed, and
> just returns.
> Back in run() in the DataStreamer, the logic
> if (streamerClosed || hasError || dataQueue.size() == 0 || !clientRunning) {
> continue;
> }
> will just set closed=true in closeInternal().
> The DataOutputStream never gets a chance to clean up, and will throw an
> exception or return null on subsequent write/close calls.
> This leaves the file being written in an incomplete state.
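To make the reported control flow easier to follow, here is a minimal, hypothetical sketch of the pattern described above. The class and field names mirror the report but are simplified stand-ins, not the actual HDFS implementation: the point is that when every node in the pipeline has failed, the streamer only flags itself closed and the output stream never runs any cleanup.

```java
// Hypothetical simplification of the DataStreamer control flow described
// in the report; names are illustrative, not the real HDFS code.
public class PipelineFailureSketch {
    static boolean streamerClosed = false;
    static boolean closed = false;

    // Stand-in for setupPipelineForAppendOrRecovery(): when every datanode
    // in the pipeline has failed, it merely flags the streamer as closed.
    static void setupPipelineForAppendOrRecovery(int liveNodes) {
        if (liveNodes == 0) {
            streamerClosed = true; // no output-stream cleanup happens here
        }
    }

    // Stand-in for DataStreamer.run(): the guard skips the normal send path,
    // so the loop exits via closeInternal() without cleaning up the writer.
    static void run() {
        while (!closed) {
            if (streamerClosed) {
                closeInternal();
                continue;
            }
            // ... normal packet-sending path elided ...
        }
    }

    static void closeInternal() {
        closed = true; // writer looks closed, but nothing was cleaned up
    }

    public static void main(String[] args) {
        setupPipelineForAppendOrRecovery(0); // all pipeline nodes failed
        run();
        // The streamer is "closed", yet no cleanup ran: subsequent
        // write/close on the stream would fail, leaving the file incomplete.
        System.out.println("streamerClosed=" + streamerClosed
                + " closed=" + closed);
    }
}
```

Running the sketch shows both flags end up true with no cleanup step in between, which is the gap the report points at.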
--
This message was sent by Atlassian JIRA
(v6.2#6252)