[ 
https://issues.apache.org/jira/browse/HADOOP-18146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17657095#comment-17657095
 ] 

ASF GitHub Bot commented on HADOOP-18146:
-----------------------------------------

anmolanmol1234 commented on PR #4039:
URL: https://github.com/apache/hadoop/pull/4039#issuecomment-1378276653

   @steveloughran, I have addressed all the comments and added tests for the 
different failure scenarios as well.
   
   We need to update the bytes sent for both failed and successful cases. The 
current change will not swallow any exceptions.
   The handling for the various status codes with 100-continue enabled is as 
follows:
   Case 1: getOutputStream doesn't throw any exception, the response is 
processed, and it gives a status code of 200; no retry is needed and the 
request succeeds.
   Case 2: getOutputStream throws an exception, we return to the caller, and 
this.connection.getResponseCode() in processResponse gives a status code of 
404 (a user error); exponential retry is not needed, so we retry without 
100-continue enabled.
   Case 3: getOutputStream throws an exception, we return to the caller, and 
processResponse gives a status code of 503, which indicates throttling, so we 
back off accordingly with exponential retry. Since each append request waits 
for the 100-continue response, the stress on the server is reduced.
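The three cases above amount to a small retry-decision table. A minimal sketch of that decision logic, assuming illustrative names (ExpectContinueRetryPolicy, RetryAction, decideRetry are hypothetical, not the actual Hadoop classes):

```java
// Hypothetical sketch of the retry decision described in Cases 1-3.
// Inputs: whether getOutputStream threw, and the status code seen afterwards.
public class ExpectContinueRetryPolicy {

    enum RetryAction { SUCCEED, RETRY_WITHOUT_EXPECT, EXPONENTIAL_BACKOFF }

    static RetryAction decideRetry(boolean outputStreamFailed, int statusCode) {
        if (!outputStreamFailed && statusCode / 100 == 2) {
            // Case 1: 2xx response, request succeeded, no retry.
            return RetryAction.SUCCEED;
        }
        if (statusCode >= 400 && statusCode < 500) {
            // Case 2: user error such as 404 - no exponential backoff,
            // retry the request without the Expect header.
            return RetryAction.RETRY_WITHOUT_EXPECT;
        }
        // Case 3: e.g. 503 throttling - back off exponentially before retrying.
        return RetryAction.EXPONENTIAL_BACKOFF;
    }
}
```

Because the payload is only sent after the server's 100 response, the backoff in Case 3 happens before any body bytes hit the wire, which is what reduces load on a throttled server.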
   
   Requesting your review for the same, thanks.




> ABFS: Add changes for expect hundred continue header with append requests
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-18146
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18146
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.3.1
>            Reporter: Anmol Asrani
>            Assignee: Anmol Asrani
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
>  Heavy load from a Hadoop cluster leads to high resource utilization at FE 
> nodes. Investigations from the server side indicate payload buffering at 
> Http.Sys as the cause. Payloads of requests that eventually fail due to 
> throttling limits are also buffered, as the buffering is triggered before 
> the FE can start request processing.
> Approach: the client sends the append HTTP request with the Expect header, 
> but holds back on payload transmission until the server replies with HTTP 
> 100. We add this header for all append requests so as to reduce this 
> buffering.
> We made several workload runs with and without hundred continue enabled, 
> and the overall observations are that:
>  # The ratio of TCP SYN packet count with and without expect hundred continue 
> enabled is 0.32 : 3 on average.
>  # The ingress into the machine at the TCP level is almost 3 times lower with 
> hundred continue enabled, which implies a significant bandwidth saving.
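The approach quoted above can be illustrated with java.net.HttpURLConnection, which ABFS uses for its HTTP operations. This is only a minimal, hypothetical fragment (the class name and endpoint URL are assumptions, not the actual AbfsClient code):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ExpectContinueDemo {

    // Prepare an upload request that holds back its payload until the
    // server responds with "HTTP/1.1 100 Continue".
    public static HttpURLConnection prepareAppend(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setDoOutput(true);
        // With this header set, the JDK client sends the request headers first
        // and waits for the 100 response before streaming the body, so a
        // request that is going to be throttled (503) or rejected (4xx) never
        // buffers its payload on the server side.
        conn.setRequestProperty("Expect", "100-continue");
        return conn;
    }
}
```

No connection is opened until getOutputStream()/getResponseCode() is called; at that point a rejected request fails before any payload bytes are transmitted.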



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
