[
https://issues.apache.org/jira/browse/HADOOP-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12580154#action_12580154
]
Raghu Angadi commented on HADOOP-2910:
--------------------------------------
Note that removing the connect timeout does not mean connect will wait as long as
the server is up. It just leaves the timeout at whatever TCP provides (around 5
minutes?). So if the server is very busy and does not accept (i.e. ignores SYNs),
the client will still fail (which I think is not that bad).
One thing I am not sure about is whether there is a possibility that connect()
will hang forever, which would be strange to the user; there is no equivalent of
the ping in HADOOP-2188.
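A minimal sketch of the distinction above, using plain java.net.Socket rather than the actual ipc.Client code: with an explicit timeout, connect() gives up after the given interval; with a timeout of 0, it blocks until the OS TCP stack stops retransmitting SYNs (typically a few minutes) rather than forever. The host name, port, and class name below are hypothetical.
{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectTimeoutSketch {
  public static void main(String[] args) throws IOException {
    // Hypothetical server address, for illustration only.
    InetSocketAddress server = new InetSocketAddress("namenode.example.com", 8020);

    // Explicit application-level timeout: connect() gives up after 20 seconds.
    Socket withTimeout = new Socket();
    try {
      withTimeout.connect(server, 20 * 1000);
    } catch (IOException e) {
      System.err.println("connect failed within the 20s application timeout: " + e);
    } finally {
      withTimeout.close();
    }

    // Timeout of 0: connect() blocks until the OS TCP stack gives up
    // retransmitting SYNs (usually a few minutes), then fails; it does not
    // keep waiting for as long as the server stays unreachable.
    Socket osDefault = new Socket();
    try {
      osDefault.connect(server, 0);
    } catch (IOException e) {
      System.err.println("connect failed with the OS-level TCP timeout: " + e);
    } finally {
      osDefault.close();
    }
  }
}
{code}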
> Throttle IPC Client/Server during bursts of requests or server slowdown
> -----------------------------------------------------------------------
>
> Key: HADOOP-2910
> URL: https://issues.apache.org/jira/browse/HADOOP-2910
> Project: Hadoop Core
> Issue Type: Improvement
> Components: ipc
> Affects Versions: 0.16.0
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Fix For: 0.17.0
>
> Attachments: callQueue.patch, callQueue1.patch, callQueue2.patch
>
>
> I propose the following to avoid an IPC server being swarmed by too many
> requests and connections
> 1. Limit the call queue length or limit the amount of memory used in the call
> queue. This can be done by including the size of a request in the header and
> storing unmarshaled requests in the call queue.
> 2. If the call queue or the queue buffer is full, stop reading requests from
> sockets. Requests then stay in the server's system buffer or at the client
> side, and thus eventually throttle the client.
> 3. Limit the total number of connections. Do not accept new connections if
> the connection limit is exceeded. (Note: this solution is unfair to new
> connections.)
> 4. If an out-of-memory exception is received, close the current connection.
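A minimal sketch of proposals 1 and 2 above (not the attached patch): a bounded call queue whose fixed capacity caps the number of queued requests; when it fills up, the reader blocks in put() and stops draining its socket, so backpressure reaches the client through the TCP buffers. The class and method names are hypothetical, and the capacity of 100 calls is just an illustrative limit.
{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedCallQueueSketch {
  /** Placeholder for one unmarshaled RPC request (hypothetical). */
  static class Call {
    final byte[] payload;
    Call(byte[] payload) { this.payload = payload; }
  }

  // Bounded queue: a capacity of 100 calls is an arbitrary illustrative limit;
  // a real server might bound by request count or by total bytes queued.
  private final BlockingQueue<Call> callQueue = new ArrayBlockingQueue<Call>(100);

  /** Called by a reader thread after it reads one request off a socket. */
  void enqueue(Call call) throws InterruptedException {
    // Blocks while the queue is full, so the reader stops reading further
    // requests until handlers catch up, throttling the client.
    callQueue.put(call);
  }

  /** Called by handler threads to pick up the next request to process. */
  Call dequeue() throws InterruptedException {
    return callQueue.take();
  }
}
{code}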