[
https://issues.apache.org/jira/browse/HADOOP-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12571518#action_12571518
]
Hairong Kuang commented on HADOOP-2870:
---------------------------------------
I am looking at how to close the client-side connections. It looks like an IPC
client is an abstraction over a SocketFactory: different proxies share the same
IPC client as long as they use the same socket factory, even though they may
talk to different IPC servers. Is this true? What is the motivation for this
design?
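
For context, the sharing described above would look roughly like the cache
sketched below, keyed only by SocketFactory. This is a minimal, hypothetical
sketch of that idea, not the actual org.apache.hadoop.ipc code; the Client,
ClientCache, and getClient names are stand-ins.

    import java.util.HashMap;
    import java.util.Map;
    import javax.net.SocketFactory;

    // Hypothetical sketch of an IPC client shared across proxies. Connections
    // to individual servers live inside the client; the cache below keys only
    // on the SocketFactory, which is why proxies that talk to different
    // servers can still end up sharing one client.
    class Client {
        private final SocketFactory factory;
        private int refCount = 0;

        Client(SocketFactory factory) { this.factory = factory; }

        SocketFactory getSocketFactory() { return factory; }
        synchronized void incCount() { refCount++; }
        synchronized void decCount() { refCount--; }
        synchronized boolean isZeroReference() { return refCount == 0; }

        void stop() {
            // In a real client this would close every cached connection.
        }
    }

    class ClientCache {
        private final Map<SocketFactory, Client> clients =
            new HashMap<SocketFactory, Client>();

        // One Client per SocketFactory, regardless of which server the
        // calling proxy targets.
        synchronized Client getClient(SocketFactory factory) {
            Client client = clients.get(factory);
            if (client == null) {
                client = new Client(factory);
                clients.put(factory, client);
            }
            client.incCount();
            return client;
        }

        // Release a proxy's reference; stop the client once nobody uses it.
        synchronized void stopClient(Client client) {
            client.decCount();
            if (client.isZeroReference()) {
                clients.remove(client.getSocketFactory());
                client.stop();
            }
        }
    }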
> Datanode.shutdown() and Namenode.stop() should close all rpc connections
> ------------------------------------------------------------------------
>
> Key: HADOOP-2870
> URL: https://issues.apache.org/jira/browse/HADOOP-2870
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Fix For: 0.17.0
>
>
> Currently these two cleanup methods do not close all existing rpc connections.
> If a mini dfs cluster gets shut down and then restarted, as we do in
> TestFileCreation, RPCs in the second mini cluster reuse the unclosed connections
> opened in the first run, but there is no server running to serve the requests.
> So the client gets stuck waiting for a response forever if the client-side
> timeout gets removed as suggested by HADOOP-2811.
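
Building on the hypothetical ClientCache sketch above (so again an assumption
about the shape of a fix, not a confirmed patch), the cleanup the description
asks for would amount to the shutdown path releasing its last proxy reference
so the shared client closes its sockets instead of leaving them cached for the
next mini cluster run:

    // Hypothetical sketch only: what a shutdown/stop method would need to do
    // in terms of the ClientCache above.
    class ShutdownExample {
        private final ClientCache cache;
        private Client ipcClient;              // obtained via cache.getClient(...)

        ShutdownExample(ClientCache cache, Client ipcClient) {
            this.cache = cache;
            this.ipcClient = ipcClient;
        }

        void shutdown() {
            if (ipcClient != null) {
                cache.stopClient(ipcClient);   // drops refcount; closes connections at zero
                ipcClient = null;
            }
        }
    }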