[ 
https://issues.apache.org/jira/browse/HADOOP-17404?focusedWorklogId=518607&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518607
 ]

ASF GitHub Bot logged work on HADOOP-17404:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 01/Dec/20 19:13
            Start Date: 01/Dec/20 19:13
    Worklog Time Spent: 10m 
      Work Description: snvijaya opened a new pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509


   When the Hflush or Hsync APIs are called, a call is made to the store backend 
to commit the data that was appended.
   
   If the amount of data written by the Hadoop app is small, i.e. the data written:
   
   before any HFlush/HSync call is made, or
   between two HFlush/Hsync API calls
   is less than the write buffer size, two separate calls are made: one for append 
and another for flush.
   
   Apps that do such small writes eventually end up with roughly equal numbers 
of flush and append calls.
   
   
   This commit enables the flush to be piggybacked onto the append call for such 
short-write scenarios. The behavior is guarded by the config 
"fs.azure.write.enableappendwithflush", which is off by default because it 
needs a corresponding change in the backend to propagate.
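   Once the backend change is available, the optimization could be switched on 
via the new key. A minimal sketch, assuming the key is set in core-site.xml 
like other fs.azure.* options (the placement is an assumption; only the key 
name comes from this message):

```xml
<!-- Assumed core-site.xml placement; key name taken from this message. -->
<property>
  <name>fs.azure.write.enableappendwithflush</name>
  <value>true</value>
</property>
```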
   
   Tests have been added that assert the number of requests made, the request 
data sizes, the file sizes after append+flush, and the file contents for various 
combinations of append/flush/close, with and without the small-write optimization. 
   Existing tests in ITestAbfsNetworkStatistics asserting HTTP stats were 
rewritten for readability.
   
   (Test results published in end of PR conversation tab.)
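   The request-count effect described above can be sketched with a hypothetical 
model. This is NOT the actual ABFS client code; it is a minimal sketch of the 
accounting, assuming one backend request per append and one per flush, and a 
single combined request when the flush is piggybacked:

```java
// Hypothetical model of the request counts described above; a sketch of
// the accounting only, not the real ABFS output stream.
class FakeAbfsStream {
    private final boolean appendWithFlush; // models fs.azure.write.enableappendwithflush
    private final StringBuilder buffer = new StringBuilder();
    int backendCalls = 0;

    FakeAbfsStream(boolean appendWithFlush) {
        this.appendWithFlush = appendWithFlush;
    }

    // Small writes accumulate in the client-side write buffer.
    void write(String data) {
        buffer.append(data);
    }

    // hflush/hsync: push buffered bytes to the store and commit them.
    void hflush() {
        if (buffer.length() > 0) {
            if (appendWithFlush) {
                backendCalls += 1; // one append request that also carries the flush
            } else {
                backendCalls += 2; // separate append request, then flush request
            }
            buffer.setLength(0);
        } else {
            backendCalls += 1; // nothing buffered: a flush-only request
        }
    }
}

class PiggybackDemo {
    public static void main(String[] args) {
        FakeAbfsStream plain = new FakeAbfsStream(false);
        FakeAbfsStream piggyback = new FakeAbfsStream(true);
        for (int i = 0; i < 100; i++) { // 100 small writes, each followed by hflush
            plain.write("x");
            plain.hflush();
            piggyback.write("x");
            piggyback.hflush();
        }
        System.out.println(plain.backendCalls);     // 200: append + flush per write
        System.out.println(piggyback.backendCalls); // 100: one combined call per write
    }
}
```

   Under these assumptions, piggybacking halves the backend call count for a 
pure small-write-then-flush workload.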


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

            Worklog Id:     (was: 518607)
    Remaining Estimate: 0h
            Time Spent: 10m

> ABFS: Piggyback flush on Append calls for short writes
> ------------------------------------------------------
>
>                 Key: HADOOP-17404
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17404
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Sneha Vijayarajan
>            Assignee: Sneha Vijayarajan
>            Priority: Major
>             Fix For: 3.3.1
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> When the Hflush or Hsync APIs are called, a call is made to the store backend 
> to commit the data that was appended. 
> If the amount of data written by the Hadoop app is small, i.e. the data written:
>  * before any HFlush/HSync call is made, or
>  * between two HFlush/Hsync API calls
> is less than the write buffer size, two separate calls are made: one for append 
> and another for flush.
> Apps that do such small writes eventually end up with roughly equal numbers 
> of flush and append calls.
> This PR enables the flush to be piggybacked onto the append call for such 
> short-write scenarios.
>  
> NOTE: The change is guarded by a config and is disabled by default until the 
> relevant supporting changes are available on all store production clusters.
> New config added: fs.azure.write.enableappendwithflush



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
