[
https://issues.apache.org/jira/browse/HDFS-1606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12991839#comment-12991839
]
Tsz Wo (Nicholas), SZE commented on HDFS-1606:
----------------------------------------------
A straightforward approach is to
{panel}
\(i) start \(*) right after #1 and stall #2 until \(*) is done.
{panel}
If we feel comfortable, we may
{panel}
(ii) start \(*) right after #1 in a separate thread and start #2 concurrently.
Once #3 is done, join the thread and then combine the old data with the new
data before #4.
{panel}
Depending on the block size, a partial block may be several hundred megabytes.
So \(*) is an expensive operation which may take a long time (on the order of
seconds). (ii) has lower latency, but \(i) is a simpler solution. How about we
have \(i) in the first implementation and (ii) as a future improvement?
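
To illustrate option (ii), here is a minimal, hypothetical sketch. The helper
names (transferPartialBlock, writeNewPackets) and the use of a plain Thread are
assumptions for illustration only, not the actual DFSClient/streamer code; the
point is just the ordering: start \(*) in a background thread right after #1,
keep writing concurrently, then join before combining old and new data ahead of #4.
{code:java}
import java.io.IOException;
import java.io.UncheckedIOException;

public class PipelineRecoverySketch {

  /** Hypothetical helper: copy the existing partial block to the new datanode (*). */
  static void transferPartialBlock() throws IOException {
    // ... read the partial block from a surviving datanode and
    //     write it to the newly added datanode ...
  }

  /** Hypothetical helper: continue streaming new packets (#2, #3). */
  static void writeNewPackets() throws IOException {
    // ... client keeps writing while the transfer runs ...
  }

  public static void main(String[] args) throws Exception {
    // #1: new datanode has been added to the pipeline (assumed done here).

    // Option (ii): start (*) in a separate thread right after #1.
    Thread transfer = new Thread(() -> {
      try {
        transferPartialBlock();
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
    });
    transfer.start();

    // #2, #3: write new data concurrently with the transfer.
    writeNewPackets();

    // Once #3 is done, join the thread ...
    transfer.join();

    // ... then combine the old data with the new data before #4.
  }
}
{code}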
> Provide a stronger data guarantee in the write pipeline
> -------------------------------------------------------
>
> Key: HDFS-1606
> URL: https://issues.apache.org/jira/browse/HDFS-1606
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: data-node, hdfs client
> Reporter: Tsz Wo (Nicholas), SZE
> Assignee: Tsz Wo (Nicholas), SZE
>
> In the current design, if there is a datanode/network failure in the write
> pipeline, DFSClient will try to remove the failed datanode from the pipeline
> and then continue writing with the remaining datanodes. As a result, the
> number of datanodes in the pipeline is decreased. Unfortunately, it is
> possible that DFSClient may incorrectly remove a healthy datanode but leave
> the failed datanode in the pipeline because failure detection may be
> inaccurate under erroneous conditions.
> We propose to have a new mechanism for adding new datanodes to the pipeline
> in order to provide a stronger data guarantee.