[
https://issues.apache.org/jira/browse/HADOOP-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12572170#action_12572170
]
Doug Cutting commented on HADOOP-2870:
--------------------------------------
> allow two proxies to share a client when they talk to the same server and
> have the same socket factory
Yes, that sounds reasonable. The cache keys would be
<host,port,socketFactory>, right?
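A minimal sketch of what such a shared-client cache might look like, assuming a key of `<host, port, socketFactory>`; the class and method names here (`ClientCache`, `getClient`) are illustrative, not Hadoop's actual API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical cache keyed by <host, port, socketFactory>: two proxies
// presenting equal keys share a single client instance.
class ClientCache {
    static final class Key {
        final String host;
        final int port;
        final Object socketFactory; // stands in for javax.net.SocketFactory

        Key(String host, int port, Object socketFactory) {
            this.host = host;
            this.port = port;
            this.socketFactory = socketFactory;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Key)) return false;
            Key k = (Key) o;
            return port == k.port && host.equals(k.host)
                && socketFactory == k.socketFactory; // factories compared by identity
        }

        @Override
        public int hashCode() {
            return Objects.hash(host, port, System.identityHashCode(socketFactory));
        }
    }

    private final Map<Key, Object> clients = new HashMap<>();

    // Returns the cached client for this key, creating one on first use.
    synchronized Object getClient(String host, int port, Object factory) {
        return clients.computeIfAbsent(new Key(host, port, factory), k -> new Object());
    }
}
```

With identity comparison on the factory, proxies configured with the same factory instance share a client, while a different factory (or a different host/port) gets its own.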
Each IPC client currently has a connectionCuller thread, so creating a client
per host will roughly double the number of RPC-related threads (each also has a
thread per open connection, to read responses asynchronously). We could
perhaps make the culler static, rather than per-client, if this is thought to be
too costly.
Why does the culler have to be a separate thread anyway? Couldn't the
connection thread itself simply exit when reading a response times out and the
idle time has been exceeded?
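The suggestion above can be sketched as follows: the per-connection reader thread exits on its own once a read times out and the idle limit has been exceeded, so no separate culler thread is needed. This is a simplified model, not Hadoop's IPC code; a `BlockingQueue` stands in for a socket read with `SO_TIMEOUT` set, and all names are hypothetical:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// A reader thread that culls itself: when no response arrives for
// maxIdleMillis, the loop returns, which closes the connection.
class SelfClosingReader implements Runnable {
    final BlockingQueue<String> responses = new LinkedBlockingQueue<>();
    final long maxIdleMillis;
    volatile int received = 0;

    SelfClosingReader(long maxIdleMillis) {
        this.maxIdleMillis = maxIdleMillis;
    }

    @Override
    public void run() {
        long lastActivity = System.currentTimeMillis();
        while (true) {
            try {
                // Analogous to a blocking socket read with SO_TIMEOUT:
                // returns null when the wait times out.
                String resp = responses.poll(maxIdleMillis, TimeUnit.MILLISECONDS);
                if (resp != null) {
                    received++;
                    lastActivity = System.currentTimeMillis();
                } else if (System.currentTimeMillis() - lastActivity >= maxIdleMillis) {
                    return; // idle too long: exit, closing the connection with us
                }
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}
```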
> Datanode.shutdown() and Namenode.stop() should close all rpc connections
> ------------------------------------------------------------------------
>
> Key: HADOOP-2870
> URL: https://issues.apache.org/jira/browse/HADOOP-2870
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Fix For: 0.17.0
>
>
> Currently these two cleanup methods do not close all existing RPC connections.
> If a mini DFS cluster gets shut down and then restarted, as we do in
> TestFileCreation, RPCs in the second mini cluster reuse the unclosed connections
> opened in the first run, but there is no server running to serve the requests.
> So the client gets stuck waiting for a response forever if the client-side
> timeout gets removed as suggested by HADOOP-2811.
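The fix the issue asks for amounts to having the shutdown path walk the connection table and close every open connection, so a restarted cluster cannot pick up stale sockets. A minimal sketch under that assumption; the names (`ConnectionTable`, `closeAll`) are hypothetical, not Hadoop's actual classes:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;

// Hypothetical connection registry: shutdown() / stop() equivalents would
// call closeAll() so no connection outlives the server.
class ConnectionTable {
    interface Connection {
        void close();
        boolean isOpen();
    }

    private final Map<String, Connection> connections = new HashMap<>();

    synchronized void add(String key, Connection c) {
        connections.put(key, c);
    }

    // Close every tracked connection and forget it, so a later cluster
    // restart cannot reuse a socket whose server is gone.
    synchronized void closeAll() {
        for (Connection c : new ArrayList<>(connections.values())) {
            c.close();
        }
        connections.clear();
    }

    synchronized int size() {
        return connections.size();
    }
}
```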