[
https://issues.apache.org/jira/browse/HDFS-29?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13018433#comment-13018433
]
Tsz Wo (Nicholas), SZE commented on HDFS-29:
--------------------------------------------
By 0.20, I guess you mean 0.20-append. Since 0.20-append and 0.21 have
different append designs and code, the patch here won't apply without
significant changes. We also have to carefully verify the design differences.
So, how about we create another JIRA for 0.20-append if a similar problem exists?
> In Datanode, update block may fail due to length inconsistency
> --------------------------------------------------------------
>
> Key: HDFS-29
> URL: https://issues.apache.org/jira/browse/HDFS-29
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: data-node
> Reporter: Tsz Wo (Nicholas), SZE
> Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 0.21.0
>
> Attachments: h29_20091012.patch, h29_20091012b.patch,
> h29_20091012c.patch
>
>
> When a primary datanode tries to recover a block, it calls
> getBlockMetaDataInfo(..) to obtain information, such as the block length, from
> each datanode. Then it calls updateBlock(..).
> The block length returned by getBlockMetaDataInfo(..) may be obtained from an
> unclosed local block file F. However, updateBlock(..) first closes F
> (if F is open) and then gets the length. These two lengths may differ, and
> in that case updateBlock(..) throws an exception.
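The underlying effect described above can be seen with plain java.io, without any
Hadoop code: bytes sitting in an open stream's buffer are not yet reflected in the
on-disk file length, so the length observed while the file is open can differ from
the length after close. A minimal sketch (the class and method names are made up
for illustration; this is not the datanode's actual code path):

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class LengthMismatchDemo {

    // Write 1000 bytes through a buffered stream and return the file length
    // observed while the stream is still open and again after close.
    static long[] observedLengths() throws IOException {
        File f = File.createTempFile("blk_", ".data");
        f.deleteOnExit();
        BufferedOutputStream out =
            new BufferedOutputStream(new FileOutputStream(f));
        out.write(new byte[1000]);    // bytes sit in the user-space buffer
        long whileOpen = f.length();  // nothing has reached the file yet
        out.close();                  // flushes the buffer, then closes
        long afterClose = f.length(); // now reflects all 1000 bytes
        return new long[] { whileOpen, afterClose };
    }

    public static void main(String[] args) throws IOException {
        long[] lens = observedLengths();
        System.out.println("while open: " + lens[0]
            + ", after close: " + lens[1]);
    }
}
```

This is the same shape of inconsistency as in the report: getBlockMetaDataInfo(..)
reads a length in the "while open" state, while updateBlock(..) reads it in the
"after close" state.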
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira