[
https://issues.apache.org/jira/browse/HDFS-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14972209#comment-14972209
]
Zhe Zhang commented on HDFS-9289:
---------------------------------
[~lichangleo] Thanks for reporting the issue.
bq. but the file completed with the old block genStamp.
How did that happen? So the client somehow had an old GS? IIUC, the
{{updatePipeline}} protocol is as follows (using {{client_GS}}, {{DN_GS}}, and
{{NN_GS}} to denote the 3 copies of the GS):
# Client asks the NN for a new GS through {{updateBlockForPipeline}}. After
this, {{client_GS}} is new; both {{DN_GS}} and {{NN_GS}} are old.
# Client calls {{createBlockOutputStream}} to update the DNs' GS. After this,
both {{client_GS}} and {{DN_GS}} are new; {{NN_GS}} is old.
# Client calls {{updatePipeline}}. After this, all 3 GSes should be new.
Maybe step 3 failed, and then the client tried to complete the file? It'd be ideal
if you could extend the unit test to reproduce the error without the fix (or
paste the error log). Thanks!
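To make that failure mode concrete, here is a minimal, self-contained sketch
(not actual HDFS code; the GS values and variable names are illustrative) of
the three GS copies when step 3 fails before the file is completed:
{code:java}
/**
 * Self-contained model (not HDFS code) of the three GS copies during
 * pipeline recovery. GS values and names are illustrative.
 */
public class PipelineGsSketch {
    public static void main(String[] args) {
        long clientGS = 1001, dnGS = 1001, nnGS = 1001; // all start old

        // Step 1: updateBlockForPipeline -- NN issues a new GS to the client.
        clientGS = 1002;          // client_GS new; DN_GS, NN_GS still old

        // Step 2: createBlockOutputStream -- DNs in the new pipeline adopt it.
        dnGS = clientGS;          // client_GS, DN_GS new; NN_GS still old

        // Step 3: updatePipeline -- suppose this call fails, so the NN
        // never records the new GS:
        // nnGS = clientGS;       // (skipped)

        System.out.println("client_GS=" + clientGS
            + " DN_GS=" + dnGS + " NN_GS=" + nnGS);
        // If the client completes the file now, the block is committed with
        // the old GS (1001) while the DN replicas carry GS 1002, so those
        // replicas can later be treated as corrupt.
    }
}
{code}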
> check genStamp when complete file
> ---------------------------------
>
> Key: HDFS-9289
> URL: https://issues.apache.org/jira/browse/HDFS-9289
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Chang Li
> Assignee: Chang Li
> Priority: Critical
> Attachments: HDFS-9289.1.patch, HDFS-9289.2.patch
>
>
> We have seen a case of a corrupt block caused by a file being completed after
> a pipelineUpdate, but the file completed with the old block genStamp. This
> caused the replicas of two datanodes in the updated pipeline to be viewed as
> corrupt. Propose to check the genStamp when committing the block.
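A minimal sketch of the proposed commit-time check (stand-in types, not the
actual NameNode code; the exception message is illustrative):
{code:java}
import java.io.IOException;

/**
 * Sketch of the proposed genStamp check at block commit time. Stand-in
 * types; not the actual NameNode classes.
 */
public class CommitGsCheckSketch {
    static class Block {
        final long genStamp;
        Block(long genStamp) { this.genStamp = genStamp; }
    }

    /** Refuse to commit a block whose GS does not match the NN's copy. */
    static void commitBlock(Block stored, Block reported) throws IOException {
        if (stored.genStamp != reported.genStamp) {
            throw new IOException("Commit block with mismatching GS: NN has "
                + stored.genStamp + ", client submits " + reported.genStamp);
        }
        // ...otherwise proceed with the normal commit path.
    }

    public static void main(String[] args) throws IOException {
        commitBlock(new Block(1002), new Block(1002)); // matching GS: OK
        commitBlock(new Block(1002), new Block(1001)); // stale GS: throws
    }
}
{code}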