[ https://issues.apache.org/jira/browse/HADOOP-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12588860#action_12588860 ]
Hairong Kuang commented on HADOOP-2910:
---------------------------------------
Wow, so many discussions on this jira while I was on vacation. :-) I am OK with
Raghu's proposal. I plan to use an exponential backoff algorithm when a
connect operation times out. The client will retry the connection for up to 10 minutes.
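For illustration only, here is a minimal sketch of such a retry loop; the class
and method names and the constants (1 second initial backoff, 1 minute cap,
20 second per-attempt connect timeout) are assumptions, not what the patch will
actually use.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

/**
 * Minimal sketch of a connect retry loop with exponential backoff and a
 * 10-minute overall retry budget. Names and constants are illustrative
 * assumptions only.
 */
public class BackoffConnect {

  private static final long MAX_RETRY_MILLIS = 10 * 60 * 1000;  // total retry budget
  private static final int CONNECT_TIMEOUT_MILLIS = 20 * 1000;  // per-attempt timeout

  public static Socket connectWithBackoff(InetSocketAddress addr)
      throws IOException, InterruptedException {
    long deadline = System.currentTimeMillis() + MAX_RETRY_MILLIS;
    long backoffMillis = 1000;  // start with a one-second pause

    while (true) {
      Socket socket = new Socket();
      try {
        socket.connect(addr, CONNECT_TIMEOUT_MILLIS);  // may time out or fail
        return socket;
      } catch (IOException e) {
        socket.close();
        if (System.currentTimeMillis() + backoffMillis > deadline) {
          throw new IOException("Failed to connect to " + addr
              + " within the retry budget", e);
        }
        Thread.sleep(backoffMillis);
        backoffMillis = Math.min(backoffMillis * 2, 60 * 1000);  // double, cap at 1 minute
      }
    }
  }
}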
> Throttle IPC Client/Server during bursts of requests or server slowdown
> -----------------------------------------------------------------------
>
> Key: HADOOP-2910
> URL: https://issues.apache.org/jira/browse/HADOOP-2910
> Project: Hadoop Core
> Issue Type: Improvement
> Components: ipc
> Affects Versions: 0.16.0
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Fix For: 0.18.0
>
> Attachments: callQueue.patch, callQueue1.patch, callQueue2.patch,
> callQueue3.patch, TestBacklog.java, TestBacklog.java,
> TestBacklogWithPool.java, throttleClient.patch
>
>
> I propose the following to avoid an IPC server being swarmed by too many
> requests and connections:
> 1. Limit the call queue length, or limit the amount of memory used by the
> call queue. This can be done by including the size of each request in its
> header and storing unmarshalled requests in the call queue.
> 2. If the call queue or its memory budget is full, stop reading requests from
> the sockets, so that requests back up in the server's socket buffers or on the
> client side and eventually throttle the client (a sketch of such a bounded
> queue follows this list).
> 3. Limit the total number of connections. Do not accept new connections if
> the connection limit is exceeded. (Note: this solution is unfair to new
> connections.)
> 4. If an out-of-memory exception is received, close the current connection.
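For illustration, the following is a minimal sketch of items 1 and 2 above: a
bounded call queue whose socket-reader thread blocks when the queue is full, so
that TCP backpressure eventually throttles the client. The class, field, and
constant names (BoundedCallQueue, Call, MAX_QUEUE_SIZE) are assumptions for
illustration, not the contents of the attached callQueue patches.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Sketch of a bounded call queue. When the queue is full, the reader thread
 * blocks on put() and stops draining the socket, so requests stay in the
 * server's socket buffers or on the client side.
 */
public class BoundedCallQueue {

  /** Placeholder for an unmarshalled RPC request. */
  public static class Call {
    final byte[] param;  // raw request bytes read off the wire
    Call(byte[] param) { this.param = param; }
  }

  private static final int MAX_QUEUE_SIZE = 100;  // assumed limit; could also be byte-based

  private final BlockingQueue<Call> callQueue =
      new LinkedBlockingQueue<Call>(MAX_QUEUE_SIZE);

  /** Called by the reader thread; blocks when the queue is full. */
  public void enqueue(Call call) throws InterruptedException {
    callQueue.put(call);  // blocking here stops further reads from the socket
  }

  /** Called by handler threads; blocks when the queue is empty. */
  public Call dequeue() throws InterruptedException {
    return callQueue.take();
  }
}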