[ https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15355381#comment-15355381 ]

Yongjun Zhang commented on HDFS-6937:
-------------------------------------

Hi [~jojochuang],

Thanks for your continued work here.

{quote}
If there is indeed a checksum error at the middle (second) node of pipeline, 
the tail node will detect it, sending ERROR_CHECKSUM code back to client and 
terminate the connection. This should effectively remove the middle node in the 
pipeline.
{quote}
About the above statement: the initial issue I observed was that when the tail 
node detects corruption, the implementation finds a replacement DN and tries to 
copy the replica from the middle DN to the new DN again. The checksum error then 
happens again, and this process repeats.
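
To make the loop concrete, here is a toy sketch of my understanding of the 
recovery behavior (all names are hypothetical, not the actual DataStreamer 
code): the corrupt middle DN is never removed or re-verified, and keeps serving 
as the transfer source for every replacement tail.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Toy simulation of the recovery loop (hypothetical names, not real Hadoop code).
public class PipelineRecoveryLoop {
  public static void main(String[] args) {
    List<String> pipeline = new ArrayList<>(List.of("DN1", "DN2", "DN3"));
    boolean middleReplicaCorrupt = true;   // DN2's on-disk data is bad
    int nextDn = 4;

    for (int attempt = 1; attempt <= 3; attempt++) {
      pipeline.remove(pipeline.size() - 1);              // drop the tail that reported the checksum error
      String newTail = "DN" + nextDn++;                  // replacement DN from the NameNode
      String source = pipeline.get(pipeline.size() - 1); // still DN2

      // The existing replica is copied from DN2 to the new tail; the new tail
      // re-verifies checksums and fails again because the source is corrupt.
      boolean transferOk = !middleReplicaCorrupt;
      System.out.printf("attempt %d: %s -> %s transfer %s%n",
          attempt, source, newTail, transferOk ? "succeeded" : "failed (checksum error)");
      pipeline.add(newTail);                             // DN2 stays in the pipeline untouched
    }
  }
}
{code}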

Why do you think "This should effectively remove the middle node in the pipeline"?

On the other hand, I think HDFS-10587 is very relevant here; let's dig deeper 
there. My current thinking is that if HDFS-10587 is fixed, there is less chance 
for a replica to get corrupted; however, when the replica on the middle DN does 
get corrupted, HDFS-6937 would still help (waiting for the block scanner is not 
a real solution).
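
In case it helps the discussion, the kind of handling I have in mind on the 
upstream DN is roughly the following (a minimal sketch with hypothetical names, 
not a patch against the real BlockReceiver code): re-verify the bytes already 
on disk chunk by chunk, then truncate to MIN(correctDataSize, ACKedSize).

{code:java}
import java.util.zip.CRC32;

// Minimal sketch (hypothetical names, not real Hadoop code) of verify-and-truncate
// on the upstream DN after a downstream DN reports a checksum error.
public class UpstreamVerifyAndTruncate {

  /** Length of the longest prefix whose per-chunk CRCs still match the data on disk. */
  static long verifiedLength(byte[] onDiskData, long[] expectedChunkCrcs, int bytesPerChunk) {
    long good = 0;
    for (int i = 0; i < expectedChunkCrcs.length; i++) {
      int off = i * bytesPerChunk;
      int len = Math.min(bytesPerChunk, onDiskData.length - off);
      if (len <= 0) {
        break;
      }
      CRC32 crc = new CRC32();
      crc.update(onDiskData, off, len);
      if (crc.getValue() != expectedChunkCrcs[i]) {
        break;                                   // first bad chunk: stop counting here
      }
      good += len;
    }
    return good;
  }

  /** New replica length once a downstream checksum error has been reported. */
  static long truncatedLength(long verifiedLength, long ackedLength) {
    return Math.min(verifiedLength, ackedLength); // MIN(correctDataSize, ACKedSize)
  }
}
{code}

The point is only that the decision about how much data to keep is based on 
re-verified checksums of the on-disk replica, not just on the ACKed size.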

Thanks.


> Another issue in handling checksum errors in write pipeline
> -----------------------------------------------------------
>
>                 Key: HDFS-6937
>                 URL: https://issues.apache.org/jira/browse/HDFS-6937
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, hdfs-client
>    Affects Versions: 2.5.0
>            Reporter: Yongjun Zhang
>            Assignee: Wei-Chiu Chuang
>         Attachments: HDFS-6937.001.patch, HDFS-6937.002.patch, 
> HDFS-6937.003.patch
>
>
> Given a write pipeline:
> DN1 -> DN2 -> DN3
> DN3 detected a checksum error and terminated; DN2 truncates its replica to the 
> ACKed size. Then a new pipeline is attempted as
> DN1 -> DN2 -> DN4
> DN4 detects a checksum error again. Later, when DN4 was replaced with DN5 (and 
> so on), it failed for the same reason. This led to the observation that DN2's 
> data is corrupted. 
> Found that the software currently truncates DN2's replica to the ACKed size 
> after DN3 terminates, but it doesn't check the correctness of the data already 
> written to disk.
> So intuitively, a solution would be: when the downstream DN (DN3 here) finds a 
> checksum error, propagate this info back to the upstream DN (DN2 here); DN2 
> then checks the correctness of the data already written to disk and truncates 
> the replica to MIN(correctDataSize, ACKedSize).
> Found this issue is similar to what was reported by HDFS-3875, and the 
> truncation at DN2 was actually introduced as part of the HDFS-3875 solution. 
> Filing this jira for the issue reported here. HDFS-3875 was filed by 
> [~tlipcon], who proposed something similar there:
> {quote}
> if the tail node in the pipeline detects a checksum error, then it returns a 
> special error code back up the pipeline indicating this (rather than just 
> disconnecting)
> if a non-tail node receives this error code, then it immediately scans its 
> own block on disk (from the beginning up through the last acked length). If 
> it detects a corruption on its local copy, then it should assume that it is 
> the faulty one, rather than the downstream neighbor. If it detects no 
> corruption, then the faulty node is either the downstream mirror or the 
> network link between the two, and the current behavior is reasonable.
> {quote}
> Thanks.
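
To make the HDFS-3875 proposal quoted above concrete, the blame-assignment step 
on a non-tail DN could look roughly like this (a sketch under hypothetical 
names, not actual Hadoop code):

{code:java}
// Sketch of the decision a non-tail DN could make when the downstream node sends
// back a checksum-error code instead of just disconnecting (hypothetical names).
public class FaultyNodeDecision {

  enum Faulty { SELF, DOWNSTREAM_OR_LINK }

  interface ReplicaScanner {
    /** Re-reads the local replica from offset 0 up to 'length' and verifies checksums. */
    boolean isLocallyCorrupt(long length);
  }

  static Faulty onDownstreamChecksumError(ReplicaScanner scanner, long lastAckedLength) {
    if (scanner.isLocallyCorrupt(lastAckedLength)) {
      // Local data is bad: assume this node, not the downstream mirror, is faulty.
      return Faulty.SELF;
    }
    // Local data is fine: the fault is the downstream mirror or the network link,
    // so the current behavior (replace the downstream node) is reasonable.
    return Faulty.DOWNSTREAM_OR_LINK;
  }
}
{code}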


