[ https://issues.apache.org/jira/browse/HADOOP-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12707769#action_12707769 ]
dhruba borthakur commented on HADOOP-5741:
------------------------------------------
getBlockMetaDataInfo() should first terminate all open connections to the
block and then return the size of the block. This guarantees that nobody can
change the length of the block between the getBlockMetaDataInfo() and
updateBlock() calls. This is documented in Step 5 in
https://issues.apache.org/jira/browse/HADOOP-4663?focusedCommentId=12674490&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12674490

Will this solve your problem?
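
For illustration, a minimal self-contained sketch of that ordering (terminate
the writer first, then measure). metaDataLength(..) is a hypothetical stand-in
for the datanode-side logic, not an actual Hadoop method:

    import java.io.*;

    // Toy model of the ordering proposed above: terminate the writer first,
    // then read the length. Names are illustrative, not real datanode code.
    public class FreezeThenMeasure {

      // Stand-in for getBlockMetaDataInfo(..): stop the writer, then report
      // the length, which can no longer change before updateBlock(..) runs.
      static long metaDataLength(OutputStream writer, File blockFile)
          throws IOException {
        writer.close();               // step 1: terminate open writers
        return blockFile.length();    // step 2: length is now stable
      }

      public static void main(String[] args) throws IOException {
        File f = File.createTempFile("blk_", ".data");
        f.deleteOnExit();
        OutputStream out = new BufferedOutputStream(new FileOutputStream(f));
        out.write(new byte[100]);     // bytes may still sit in the buffer

        long len = metaDataLength(out, f);
        // A later updateBlock(..)-style re-read sees the same value:
        System.out.println(len + " == " + f.length());   // 100 == 100
      }
    }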
> In Datanode, update block may fail due to length inconsistency
> --------------------------------------------------------------
>
>                 Key: HADOOP-5741
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5741
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Tsz Wo (Nicholas), SZE
>
> When a primary datanode tries to recover a block, it calls
> getBlockMetaDataInfo(..) to obtain information, such as the block length,
> from each datanode. Then, it calls updateBlock(..).
> The block length returned by getBlockMetaDataInfo(..) may be obtained from an
> unclosed local block file F. However, updateBlock(..) first closes F (if F is
> open) and then gets the length. These two lengths may differ, in which case
> updateBlock(..) throws an exception.
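
To make the quoted failure concrete, here is a small self-contained program
(not Hadoop code; the temp file merely stands in for a block file F) showing
that the visible length of an unclosed file can differ from its length after
close(), since close() flushes bytes still held in the writer's buffer:

    import java.io.*;

    // The visible length of an unclosed block file can differ from its
    // length after close(), because close() flushes buffered bytes.
    public class LengthRace {
      public static void main(String[] args) throws IOException {
        File f = File.createTempFile("blk_", ".data");   // stands in for F
        f.deleteOnExit();

        OutputStream out = new BufferedOutputStream(new FileOutputStream(f));
        out.write(new byte[100]);        // written, but still buffered

        // getBlockMetaDataInfo(..)-style read: F is still open for write.
        long lenWhileOpen = f.length();  // 0 here: nothing flushed yet

        // updateBlock(..)-style read: close F first, then get the length.
        out.close();                     // flushes the buffered 100 bytes
        long lenAfterClose = f.length(); // 100

        if (lenWhileOpen != lenAfterClose) {
          // This is the inconsistency that makes updateBlock(..) throw.
          System.out.println("length changed: " + lenWhileOpen
              + " -> " + lenAfterClose);
        }
      }
    }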