[ https://issues.apache.org/jira/browse/HADOOP-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12587772#action_12587772 ]
Raghu Angadi commented on HADOOP-2910:
--------------------------------------
Yes. Handling the fd limit on the server is necessary only to cope with some
out-of-control application creating many clients by mistake. Right now, such an
application will essentially bring down the NameNode. Of course, even when we
handle the limit, the NameNode will still be severely affected. In that sense it
is not so urgent.
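
As a rough sketch of the kind of connection cap that point 3 of the proposal
below describes (hypothetical code, not the actual org.apache.hadoop.ipc.Server;
ConnectionLimitingAcceptor, maxConnections, and the reader hand-off are
illustrative names):

import java.io.IOException;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.concurrent.atomic.AtomicInteger;

public class ConnectionLimitingAcceptor {
  private final int maxConnections;                  // assumed config knob
  private final AtomicInteger openConnections = new AtomicInteger();

  public ConnectionLimitingAcceptor(int maxConnections) {
    this.maxConnections = maxConnections;
  }

  /** Blocking accept loop: refuse new sockets once over the fd budget. */
  public void acceptLoop(ServerSocketChannel server) throws IOException {
    while (true) {
      SocketChannel channel = server.accept();       // channel in blocking mode
      if (openConnections.incrementAndGet() > maxConnections) {
        openConnections.decrementAndGet();
        channel.close();                             // refuse: over the limit
        continue;
      }
      handOffToReader(channel);
    }
  }

  /** Readers call this when a connection is torn down. */
  public void connectionClosed() {
    openConnections.decrementAndGet();
  }

  private void handOffToReader(SocketChannel channel) {
    // placeholder: hand the accepted channel to a reader/selector thread
  }
}

Closing the socket right away keeps the fd count bounded, at the cost of being
unfair to new clients, as the proposal itself notes.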
> Throttle IPC Client/Server during bursts of requests or server slowdown
> -----------------------------------------------------------------------
>
> Key: HADOOP-2910
> URL: https://issues.apache.org/jira/browse/HADOOP-2910
> Project: Hadoop Core
> Issue Type: Improvement
> Components: ipc
> Affects Versions: 0.16.0
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Fix For: 0.18.0
>
> Attachments: callQueue.patch, callQueue1.patch, callQueue2.patch,
> callQueue3.patch, TestBacklog.java, TestBacklog.java,
> TestBacklogWithPool.java, throttleClient.patch
>
>
> I propose the following to avoid an IPC server being swarmed by too many
> requests and connections:
> 1. Limit the call queue length, or limit the amount of memory used by the
> call queue. This can be done by including the size of each request in its
> header and storing unmarshaled requests in the call queue.
> 2. If the call queue is full or its memory budget is exhausted, stop reading
> requests from the sockets, so that requests stay in the server's system
> buffer or on the client side and thus eventually throttle the client (see
> the sketch after this list).
> 3. Limit the total number of connections. Do not accept new connections once
> the connection limit is exceeded. (Note: this solution is unfair to new
> connections.)
> 4. If an out-of-memory exception occurs, close the current connection.
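
A minimal sketch of how points 1 and 2 could fit together (hypothetical code,
not the attached callQueue patches; ThrottlingCallQueue, Call, and
maxQueueLength are illustrative names):

import java.nio.channels.SelectionKey;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ThrottlingCallQueue {
  /** One queued RPC call; its size would come from the request header. */
  static class Call {
    final byte[] request;                      // unmarshaled request bytes
    Call(byte[] request) { this.request = request; }
  }

  private final BlockingQueue<Call> callQueue;

  public ThrottlingCallQueue(int maxQueueLength) {
    callQueue = new ArrayBlockingQueue<>(maxQueueLength);
  }

  /**
   * Called by a reader thread after parsing one request from 'key'.
   * Returns false when the queue is full; reading from this connection
   * is then suspended until handlers drain the queue.
   */
  public boolean offer(SelectionKey key, Call call) {
    if (callQueue.offer(call)) {
      return true;
    }
    // Queue full: clear OP_READ so no more requests are read from this
    // socket. Pending data stays in the kernel buffer (or on the client),
    // which eventually throttles the client.
    key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
    return false;
  }

  /** Handler threads drain the queue; a real server would also have to
      re-register OP_READ on suspended keys once space frees up. */
  public Call take() throws InterruptedException {
    return callQueue.take();
  }
}

Suspending OP_READ rather than dropping requests pushes the backpressure into
TCP itself, so a fast client slows down without any extra protocol.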