[ https://issues.apache.org/jira/browse/HDFS-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
GAO Rui updated HDFS-7661:
--------------------------
    Attachment: HDFS-7661-unitTest-wip-trunk.patch

Hi [~szetszwo], I have written a unit test to illustrate the goal of hflush() for an Erasure Coding file. Could you review it and see if it makes sense to you when you get some time? I have noted some of my concerns in the unit test's code comments. Currently, I think EC file hflush() is similar to replicated file hflush(), except that we need to flush all the data in {{FSOutputSummer#buf}} into the {{dataQueue}} of each of the nine (in RS-6-3) {{StripedDataStreamer}}s instead of a single {{DataStreamer}}, and then wait for the acks. On the other hand, if the user/client writes into the output stream again, we just overwrite the last block group to the new length. Does this make sense?

> Support read when a EC file is being written
> --------------------------------------------
>
>                 Key: HDFS-7661
>                 URL: https://issues.apache.org/jira/browse/HDFS-7661
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: GAO Rui
>         Attachments: HDFS-7661-unitTest-wip-trunk.patch
>
>
> We also need to support hflush/hsync and visible length.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
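The flush-to-all-streamers idea described in the comment above can be sketched in a minimal, self-contained way. Note this is NOT the actual HDFS code: {{Streamer}} and {{EcFlushSketch}} below are hypothetical stand-ins for {{StripedDataStreamer}} and the striped output stream, the buffer stands in for {{FSOutputSummer#buf}}, and for simplicity the same packet is enqueued to every streamer (real striped writes split data into cells and compute parity).

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical stand-in for a StripedDataStreamer: it holds a dataQueue
// of packets and counts an "ack" for each packet it drains.
class Streamer {
    final Queue<byte[]> dataQueue = new ArrayDeque<>();
    int acked = 0;

    void enqueue(byte[] packet) {
        dataQueue.add(packet);
    }

    // Drain the queue, simulating one datanode ack per packet.
    void waitForAck() {
        while (!dataQueue.isEmpty()) {
            dataQueue.poll();
            acked++;
        }
    }
}

// Sketch of the hflush() idea from the comment: push the buffered bytes
// to all nine (RS-6-3: 6 data + 3 parity) streamers, then wait for every ack.
public class EcFlushSketch {
    static final int NUM_STREAMERS = 9;
    final List<Streamer> streamers = new ArrayList<>();
    byte[] buf = new byte[0]; // stand-in for FSOutputSummer#buf

    EcFlushSketch() {
        for (int i = 0; i < NUM_STREAMERS; i++) {
            streamers.add(new Streamer());
        }
    }

    void write(byte[] data) {
        buf = data; // buffer the user's bytes until flush
    }

    void hflush() {
        // Step 1: flush the buffered data into every streamer's dataQueue.
        for (Streamer s : streamers) {
            s.enqueue(buf.clone());
        }
        buf = new byte[0];
        // Step 2: wait for the ack from all streamers before returning.
        for (Streamer s : streamers) {
            s.waitForAck();
        }
    }

    public static void main(String[] args) {
        EcFlushSketch out = new EcFlushSketch();
        out.write(new byte[]{1, 2, 3});
        out.hflush();
        int totalAcked = 0;
        for (Streamer s : out.streamers) {
            totalAcked += s.acked;
        }
        System.out.println(totalAcked); // 9: one acked packet per streamer
    }
}
```

The point of the sketch is only the control flow: hflush() returns after every one of the nine queues has been drained and acknowledged, which is the condition under which a concurrent reader could trust the flushed length.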