[ https://issues.apache.org/jira/browse/HADOOP-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12572290#action_12572290 ]

Hairong Kuang commented on HADOOP-2870:
---------------------------------------

> Why does the culler have to be a separate thread anyway? Couldn't the 
> connection thread itself simply exit when reading a response times out and 
> the idle time has been exceeded?
Yes, absolutely. I removed the culler thread in the patch submitted to 
HADOOP-2811.
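
For the archives, here is a rough sketch of what that looks like (class and 
field names below are illustrative, not the actual ipc.Client code in the 
patch): the connection's receiver thread periodically times out of its read, 
and if no calls are pending and the idle limit has passed, it retires itself 
and closes the socket, so no separate culler is needed.

    import java.io.IOException;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    // Hypothetical sketch: a connection receiver loop that retires itself
    // when a read times out and the connection has been idle too long.
    public class SelfCullingConnection implements Runnable {
        private final Socket socket;
        private final long maxIdleMillis;       // idle limit, e.g. from config
        private volatile long lastActivity;     // updated on each send/receive
        private volatile int pendingCalls = 0;  // calls awaiting a response,
                                                // maintained by the sender side

        public SelfCullingConnection(Socket socket, long maxIdleMillis)
                throws IOException {
            this.socket = socket;
            this.maxIdleMillis = maxIdleMillis;
            // Make reads time out periodically so the loop can check idleness.
            socket.setSoTimeout((int) maxIdleMillis);
            this.lastActivity = System.currentTimeMillis();
        }

        @Override
        public void run() {
            try {
                while (true) {
                    try {
                        int b = socket.getInputStream().read(); // wait for data
                        if (b == -1) break;          // server closed the stream
                        lastActivity = System.currentTimeMillis();
                        // ... dispatch the response to the waiting call ...
                    } catch (SocketTimeoutException e) {
                        // No data within the timeout: exit only if truly idle.
                        boolean idleTooLong =
                            System.currentTimeMillis() - lastActivity
                                >= maxIdleMillis;
                        if (pendingCalls == 0 && idleTooLong) {
                            break; // the thread itself retires the connection
                        }
                        // Otherwise keep waiting: a response is still expected.
                    }
                }
            } catch (IOException e) {
                // connection error: fall through and close
            } finally {
                try { socket.close(); } catch (IOException ignored) {}
                // ... remove this connection from the client's cache ...
            }
        }
    }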

> The cache keys would be <host,port,socketFactory>, right?
I am thinking of making the client cache key host:port, so a Client maps to a 
single server and maintains all connections to that server. If the key were 
<host:port, socketFactory>, then a Client would be more or less equivalent to a 
single connection. Does this make sense?
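
To illustrate the difference, a minimal sketch of the proposed keying (the 
ClientCache and Client names here are hypothetical, not the real Hadoop 
classes): with a plain host:port key, one cached Client owns every connection 
to a server, whereas folding the socket factory into the key would narrow each 
Client's scope accordingly.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch of a cache keyed by host:port, so one Client
    // instance multiplexes all connections to a given server.
    public class ClientCache {
        private final Map<String, Client> clients = new HashMap<>();

        public synchronized Client getClient(String host, int port) {
            String key = host + ":" + port;  // proposed cache key
            return clients.computeIfAbsent(key, k -> new Client(host, port));
        }

        static class Client {
            final String host;
            final int port;
            Client(String host, int port) { this.host = host; this.port = port; }
            // ... maintains all connections to this one server ...
        }
    }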

> Datanode.shutdown() and Namenode.stop() should close all rpc connections
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-2870
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2870
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.17.0
>
>
> Currently these two cleanup methods do not close all existing RPC connections. 
> If a mini DFS cluster is shut down and then restarted, as we do in 
> TestFileCreation, RPCs in the second mini cluster reuse the unclosed 
> connections opened in the first run, but there is no server running to serve 
> the requests. So the client gets stuck waiting for the response forever if 
> the client-side timeout is removed as suggested by HADOOP-2811.
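
For context, a minimal sketch of the intended cleanup (RpcClient and 
stopAllClients are illustrative names, not the actual Hadoop API): shutdown 
walks the connection cache and closes every open connection, so a restarted 
cluster cannot pick up sockets that point at a dead server.

    import java.io.IOException;
    import java.net.Socket;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: close every cached RPC connection on shutdown.
    public class RpcClient {
        private static final List<Socket> openConnections = new ArrayList<>();

        public static synchronized void register(Socket s) {
            openConnections.add(s);
        }

        // Called from Datanode.shutdown() / Namenode.stop() in this sketch.
        public static synchronized void stopAllClients() {
            for (Socket s : openConnections) {
                try {
                    s.close(); // unblocks any thread waiting on this socket
                } catch (IOException ignored) {
                }
            }
            openConnections.clear(); // the next run starts with a fresh cache
        }
    }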
