[ https://issues.apache.org/jira/browse/HDFS-29?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13018346#comment-13018346 ]

Uma Maheswara Rao G commented on HDFS-29:
-----------------------------------------

This scenario looks applicable to the 0.20 version as well.
Is a patch available for the 0.20 version?

> In Datanode, update block may fail due to length inconsistency
> --------------------------------------------------------------
>
>                 Key: HDFS-29
>                 URL: https://issues.apache.org/jira/browse/HDFS-29
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>            Reporter: Tsz Wo (Nicholas), SZE
>            Assignee: Tsz Wo (Nicholas), SZE
>             Fix For: 0.21.0
>
>         Attachments: h29_20091012.patch, h29_20091012b.patch, 
> h29_20091012c.patch
>
>
> When a primary datanode tries to recover a block, it calls 
> getBlockMetaDataInfo(..) to obtain information such as the block length from 
> each datanode.  Then, it calls updateBlock(..).
> The block length returned by getBlockMetaDataInfo(..) may be obtained from an 
> unclosed local block file F.  However, updateBlock(..) first closes F 
> (if F is open) and then gets the length.  These two lengths may differ. 
> In such a case, updateBlock(..) throws an exception.
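
To make the failure mode concrete, below is a minimal standalone Java sketch (not HDFS code; the class name, file handling, and buffer size are illustrative assumptions) of how the length of a block file observed while its writer is still open can differ from the length observed after the file is closed, which is the mismatch that makes updateBlock(..) throw.

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

/**
 * Illustrative sketch only: a block file still being written through a
 * buffered stream can report a smaller on-disk length than the length seen
 * after the stream is closed, analogous to the getBlockMetaDataInfo(..) vs.
 * updateBlock(..) inconsistency described above.
 */
public class BlockLengthSkew {
    public static void main(String[] args) throws IOException {
        File blockFile = File.createTempFile("blk_", ".data");
        blockFile.deleteOnExit();

        BufferedOutputStream out =
                new BufferedOutputStream(new FileOutputStream(blockFile));
        out.write(new byte[4096]);   // data still sits in the stream buffer

        // Length taken while the writer is open, i.e. from the unclosed file F.
        long openLength = blockFile.length();

        // Closing flushes the buffer to disk, as updateBlock(..) does before
        // reading the length.
        out.close();
        long closedLength = blockFile.length();

        // Typically prints open=0 closed=4096: the two lengths disagree.
        System.out.println("open=" + openLength + " closed=" + closedLength);
    }
}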

