[ https://issues.apache.org/jira/browse/HADOOP-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12599659#action_12599659 ]

dhruba borthakur commented on HADOOP-3310:
------------------------------------------

Hi Nicholas, the more I think about this, the more logical it sounds to make 
FSDataset.updateBlock work correctly whether the block is in the volumeMap or 
in ongoingCreates. 
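
A minimal sketch of that lookup order (hypothetical, heavily simplified field
and method names standing in for the real FSDataset internals): resolve the
block against the finalized map first, then fall back to the in-progress map,
so updateBlock succeeds in either case.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for FSDataset: volumeMap holds finalized replicas,
// ongoingCreates holds replicas still being written. A block lookup that
// consults both lets updateBlock operate on either kind of replica.
class BlockResolver {
    final Map<Long, String> volumeMap = new HashMap<>();      // blockId -> finalized block file
    final Map<Long, String> ongoingCreates = new HashMap<>(); // blockId -> tmp block file

    // Return the on-disk file for blockId, or null if the block is unknown.
    String findBlockFile(long blockId) {
        String f = volumeMap.get(blockId);
        if (f == null) {
            f = ongoingCreates.get(blockId); // fall back to in-progress blocks
        }
        return f;
    }
}
```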

Even when "append" is supported, it makes sense to keep blocks that are 
currently being written in the tmp dir. This ensures that a block report will 
not include these blocks, and that the periodic block scanner will not operate 
on them. It also serves as an indirect persistent record of blocks that need 
recovery if the datanode restarts. Can this be done?
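
The tmp-dir invariant above could be sketched like this (again with
hypothetical, simplified names, not the actual FSDataset code): a block report
enumerates only finalized replicas, so anything still in ongoingCreates is
invisible to the namenode and the block scanner until it is finalized.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Sketch of the reporting side of the invariant: in-progress replicas live in
// ongoingCreates (backed by files under tmp) and are deliberately omitted from
// the block report, so half-written blocks are never scanned or reported.
class BlockReporter {
    final Set<Long> volumeMap = new TreeSet<>();       // finalized block ids
    final Set<Long> ongoingCreates = new TreeSet<>();  // in-progress block ids

    // Only finalized blocks are reported; tmp blocks are excluded by design.
    List<Long> getBlockReport() {
        return new ArrayList<>(volumeMap);
    }

    // Finalizing a block moves it from ongoingCreates into volumeMap.
    void finalizeBlock(long blockId) {
        if (ongoingCreates.remove(blockId)) {
            volumeMap.add(blockId);
        }
    }
}
```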

> Lease recovery for append
> -------------------------
>
>                 Key: HADOOP-3310
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3310
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Tsz Wo (Nicholas), SZE
>            Assignee: Tsz Wo (Nicholas), SZE
>         Attachments: 3310_20080514.patch, 3310_20080516b.patch, 
> 3310_20080516c.patch, 3310_20080519.patch, 3310_20080519b.patch, 
> 3310_20080520.patch, 3310_20080521.patch, 3310_20080522b.patch, 
> 3310_20080522c.patch, 3310_20080523.patch, 3310_20080524_dhruba.patch
>
>
> In order to support file append, a GenerationStamp is associated with each 
> block.  Lease recovery will be performed when there is a possibility that the 
> replicas of a block in a lease may have different GenerationStamp values.
> For more details, see the documentation in HADOOP-1700.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.