[ 
https://issues.apache.org/jira/browse/HDFS-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15154002#comment-15154002
 ] 

GAO Rui commented on HDFS-7661:
-------------------------------

[~zhz] good idea to use {{FsDatasetImpl#truncateBlock}} to overwrite the parity 
block. While looking into keeping flushed data safe, I found that if the second 
flush in the same stripe fails, that means we have fewer than {{NumDataBlk}} 
live DNs. So both the second flush and the writing client fail. In that kind of 
scenario, we cannot guarantee the safety of the data flushed in the first flush 
anyway. Does that make sense to you, [~zhz], [~walter.k.su], [~liuml07]? If it 
makes sense, the key point becomes the data consistency issue between readers 
and writers, right?
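To illustrate the reasoning above: data in a striped group is only readable if at least {{NumDataBlk}} of its nodes survive, so a failed second flush already implies the first flush's data is at risk. A minimal sketch of that invariant (class and method names here are hypothetical, not from the patch):

```java
// Hypothetical sketch: with an RS-style layout, flushed data in a stripe
// is readable only if at least numDataBlks of the group's DNs are alive.
public class FlushSafetySketch {
    // True if flushed data can still be reconstructed, i.e. the number of
    // live datanodes in the block group is at least numDataBlks.
    static boolean flushedDataReadable(int liveDataNodes, int numDataBlks) {
        return liveDataNodes >= numDataBlks;
    }

    public static void main(String[] args) {
        int numDataBlks = 6; // e.g. an RS-6-3 policy
        // A failed second flush implies fewer than numDataBlks live DNs,
        // so the first flush's data can no longer be guaranteed either.
        System.out.println(flushedDataReadable(5, numDataBlks)); // false
        System.out.println(flushedDataReadable(6, numDataBlks)); // true
    }
}
```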

I attached a rough WIP patch to demonstrate adding {{overwrite}} and 
{{blockGroupLength}}, and to show where the {{overwrite}} file operation would 
be implemented, although I am still working on how to implement {{overwrite}} 
itself.

> Erasure coding: support hflush and hsync
> ----------------------------------------
>
>                 Key: HDFS-7661
>                 URL: https://issues.apache.org/jira/browse/HDFS-7661
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: GAO Rui
>         Attachments: EC-file-flush-and-sync-steps-plan-2015-12-01.png, 
> HDFS-7661-unitTest-wip-trunk.patch, HDFS-7661-wip.01.patch, 
> HDFS-EC-file-flush-sync-design-version1.1.pdf, 
> HDFS-EC-file-flush-sync-design-version2.0.pdf
>
>
> We also need to support hflush/hsync and visible length. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)