[ 
https://issues.apache.org/jira/browse/HADOOP-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14081779#comment-14081779
 ] 

Arpit Agarwal commented on HADOOP-10597:
----------------------------------------

Hi [~mingma], I just looked at the attached doc. If I understand correctly, the 
server tells the client which backoff policy to use. Since the server's 
suggestion is only advisory, the backoff policy could simply be a client-side 
configuration, unless the server has a way to penalize clients that fail to 
follow the suggestion.

Also, you have probably seen the RPC Congestion Control work under HADOOP-9460. 
Is there any overlap?

> Evaluate if we can have RPC client back off when server is under heavy load
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-10597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10597
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Ming Ma
>            Assignee: Ming Ma
>         Attachments: HADOOP-10597-2.patch, HADOOP-10597.patch, 
> RPCClientBackoffDesignAndEvaluation.pdf
>
>
> Currently, if an application hits the NN too hard, RPC requests remain in a 
> blocking state (assuming the OS doesn't run out of connections). 
> Alternatively, RPC or the NN could throw a well-defined exception back to the 
> client, based on certain policies, when it is under heavy load; the client 
> would understand such an exception and do exponential backoff, as another 
> implementation of RetryInvocationHandler.
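
For what it's worth, the client-side half of the proposal could look roughly 
like the sketch below: an exponential backoff schedule (with jitter, to avoid 
synchronized retry storms) that a RetryInvocationHandler-style policy might 
consult after catching the server's overload exception. The class and method 
names here are illustrative assumptions, not existing Hadoop APIs.

```java
import java.util.concurrent.ThreadLocalRandom;

/**
 * Hedged sketch of an exponential backoff schedule for an RPC client.
 * Hypothetical helper for illustration only; not part of Hadoop.
 */
public class ExponentialBackoff {
    private final long baseMillis; // delay before the first retry
    private final long maxMillis;  // upper bound on any single delay

    public ExponentialBackoff(long baseMillis, long maxMillis) {
        this.baseMillis = baseMillis;
        this.maxMillis = maxMillis;
    }

    /** Delay before retry number {@code attempt} (0-based): base * 2^attempt, capped. */
    public long delayMillis(int attempt) {
        // Cap the shift so the multiplication cannot overflow a long.
        long delay = baseMillis << Math.min(attempt, 30);
        return Math.min(delay, maxMillis);
    }

    /** Same delay, randomized into [d/2, d] so clients don't retry in lockstep. */
    public long delayWithJitter(int attempt) {
        long d = delayMillis(attempt);
        return d / 2 + ThreadLocalRandom.current().nextLong(d / 2 + 1);
    }
}
```

A retry loop would call {{delayWithJitter(attempt)}} each time it catches the 
(well-defined) overload exception, sleep for that long, and give up after a 
configured number of attempts.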



--
This message was sent by Atlassian JIRA
(v6.2#6252)
