[ https://issues.apache.org/jira/browse/HADOOP-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12571565#action_12571565 ]

Doug Cutting commented on HADOOP-2870:
--------------------------------------

> What's the motivation of the design?

Conservation of connections was the original motivation.  Connections are 
pooled in the IPC client, so sharing an IPC client permits multiple RPC proxy 
instances to share connections.  We could achieve this in other ways, for 
example, RPC could cache IPC clients by port too.  But, unless we cache proxy 
instances, we shouldn't create a new IPC client per proxy or we'd lose 
connection pooling.
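To make the pooling argument concrete, here is a minimal sketch (hypothetical class and method names, not Hadoop's actual IPC code) of a client that pools connections by remote address, so every RPC proxy built on the same client shares one connection per server; it also shows the kind of stop() the cleanup methods need:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not Hadoop's real org.apache.hadoop.ipc.Client:
// an IPC client that pools connections keyed by remote address, so
// multiple RPC proxies sharing this client reuse one connection per
// server instead of each opening its own.
public class PooledIpcClient {
    // Stand-in for a real socket connection.
    static class Connection {
        final String address;
        boolean open = true;
        Connection(String address) { this.address = address; }
        void close() { open = false; }
    }

    private final Map<String, Connection> pool = new HashMap<>();

    // Return the pooled connection for this address, creating it on first use.
    synchronized Connection getConnection(String address) {
        return pool.computeIfAbsent(address, Connection::new);
    }

    // The kind of cleanup Datanode.shutdown()/Namenode.stop() need:
    // close every pooled connection so a restarted cluster cannot
    // silently reuse stale ones.
    synchronized void stop() {
        pool.values().forEach(Connection::close);
        pool.clear();
    }

    synchronized int pooledConnections() { return pool.size(); }

    public static void main(String[] args) {
        PooledIpcClient client = new PooledIpcClient();
        // Two proxies to the same server share one pooled connection.
        Connection a = client.getConnection("namenode:8020");
        Connection b = client.getConnection("namenode:8020");
        System.out.println(a == b);                     // true
        System.out.println(client.pooledConnections()); // 1
        client.stop();
        System.out.println(client.pooledConnections()); // 0
    }
}
```

If instead each proxy constructed its own client, the pool map would never be shared and the connection savings would disappear, which is the trade-off described above.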

> Datanode.shutdown() and Namenode.stop() should close all rpc connections
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-2870
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2870
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.17.0
>
>
> Currently these two cleanup methods do not close all existing RPC connections. 
> If a mini DFS cluster is shut down and then restarted, as we do in 
> TestFileCreation, RPCs in the second mini cluster reuse the unclosed 
> connections opened in the first run, but there is no server running to serve 
> the requests. So the client gets stuck waiting for a response forever if the 
> client-side timeout is removed, as suggested by HADOOP-2811.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.