[
https://issues.apache.org/jira/browse/HADOOP-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12572163#action_12572163
]
Hairong Kuang commented on HADOOP-2870:
---------------------------------------
I feel that it makes more sense to make the key the remote server
address. Then an IPC client caches all connections to a server.
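A minimal sketch of the idea, assuming a hypothetical ConnectionCache keyed by remote server address (the class and method names here are illustrative, not Hadoop's actual IPC Client internals): all RPCs to one server share a single cached connection, and a shutdown path can close every cached connection so a restarted mini cluster cannot reuse stale sockets.

```java
import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: cache one connection per remote server
// address, rather than per proxy instance.
public class ConnectionCache {
    // Placeholder standing in for a live IPC connection.
    static class CachedConnection {
        final InetSocketAddress server;
        boolean closed = false;
        CachedConnection(InetSocketAddress server) { this.server = server; }
        void close() { closed = true; }
    }

    private final Map<InetSocketAddress, CachedConnection> connections =
        new ConcurrentHashMap<>();

    // All callers talking to the same server address get the same connection.
    CachedConnection getConnection(InetSocketAddress server) {
        return connections.computeIfAbsent(server, CachedConnection::new);
    }

    // Cleanup path for Datanode.shutdown()/Namenode.stop(): close every
    // cached connection so nothing stale survives a cluster restart.
    void closeAll() {
        for (CachedConnection c : connections.values()) c.close();
        connections.clear();
    }

    public static void main(String[] args) {
        ConnectionCache cache = new ConnectionCache();
        CachedConnection c1 = cache.getConnection(new InetSocketAddress("localhost", 8020));
        CachedConnection c2 = cache.getConnection(new InetSocketAddress("localhost", 8020));
        System.out.println(c1 == c2);   // same server address -> same cached connection
        cache.closeAll();
        System.out.println(c1.closed);  // closed during shutdown
    }
}
```

Keying by address means closeAll() only has to walk one map to guarantee no connection from a previous cluster run outlives shutdown.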
> Datanode.shutdown() and Namenode.stop() should close all rpc connections
> ------------------------------------------------------------------------
>
> Key: HADOOP-2870
> URL: https://issues.apache.org/jira/browse/HADOOP-2870
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Fix For: 0.17.0
>
>
> Currently these two cleanup methods do not close all existing rpc connections.
> If a mini dfs cluster gets shut down and then restarted, as we do in
> TestFileCreation, RPCs in the second mini cluster reuse the unclosed connections
> opened in the first run, but there is no server running to serve the request.
> So the client gets stuck waiting for the response forever if the client-side
> timeout gets removed as suggested by HADOOP-2811.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.