[ https://issues.apache.org/jira/browse/HDFS-895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12931245#action_12931245 ]

Todd Lipcon commented on HDFS-895:
----------------------------------

Ah, I see what you're saying... I think that would work in theory, but given 
that we've had a lot of production testing of the patch as-is, I'm a little 
nervous about making that change at this point and losing some of that confidence.

> Allow hflush/sync to occur in parallel with new writes to the file
> ------------------------------------------------------------------
>
>                 Key: HDFS-895
>                 URL: https://issues.apache.org/jira/browse/HDFS-895
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs client
>    Affects Versions: 0.22.0
>            Reporter: dhruba borthakur
>            Assignee: Todd Lipcon
>             Fix For: 0.22.0
>
>         Attachments: 895-delta-for-review.txt, hdfs-895-0.20-append.txt, 
> hdfs-895-20.txt, hdfs-895-review.txt, hdfs-895-trunk.txt, hdfs-895.txt, 
> hdfs-895.txt, hdfs-895.txt
>
>
> In the current trunk, the HDFS client methods writeChunk() and hflush/sync 
> are synchronized. This means that if a hflush/sync is in progress, an 
> application cannot write data to the HDFS client buffer. This reduces the 
> write throughput of the transaction log in HBase. 
> The hflush/sync should allow new writes to happen to the HDFS client even 
> when a hflush/sync is in progress. It can record the seqno of the message for 
> which it should receive the ack, indicate to the DataStreamer thread to start 
> flushing those messages, exit the synchronized section, and just wait for that 
> ack to arrive.
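
To illustrate the pattern described in the quoted issue, here is a minimal Java sketch. It is not the actual DFSOutputStream code; the names lastQueuedSeqno, ackedSeqno, waitForAckedSeqno, and packetAcked are illustrative assumptions. The key point is that hflush() only records the target seqno while holding the lock, and then waits for the ack with lock.wait(), which releases the monitor so writeChunk() calls can keep queueing data in the meantime.

    /** Illustrative sketch only, not the real HDFS client implementation. */
    class SketchOutputStream {
      private final Object lock = new Object();   // guards the client buffer
      private long lastQueuedSeqno = 0;           // seqno of the last packet queued
      private long ackedSeqno = 0;                // seqno of the last packet acked

      /** Writes stay short: just queue the chunk under the lock. */
      void writeChunk(byte[] chunk) {
        synchronized (lock) {
          // ... copy chunk into the current packet ...
          lastQueuedSeqno++;
          lock.notifyAll();                       // wake the streamer thread
        }
      }

      /** Flush records the target seqno under the lock, then waits for its ack. */
      void hflush() throws InterruptedException {
        long toWaitFor;
        synchronized (lock) {
          // ... enqueue the current partial packet for sending ...
          toWaitFor = lastQueuedSeqno;
        }
        // lock.wait() below releases the monitor, so new writeChunk() calls
        // can proceed while this thread waits for the ack.
        waitForAckedSeqno(toWaitFor);
      }

      private void waitForAckedSeqno(long seqno) throws InterruptedException {
        synchronized (lock) {
          while (ackedSeqno < seqno) {
            lock.wait();
          }
        }
      }

      /** Called by the response-processing thread when an ack arrives. */
      void packetAcked(long seqno) {
        synchronized (lock) {
          ackedSeqno = Math.max(ackedSeqno, seqno);
          lock.notifyAll();
        }
      }
    }
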

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
