[ 
https://issues.apache.org/jira/browse/HDFS-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4911:
---------------------------------------

    Attachment: HDFS-4911.001.patch

* reduce DFS_CLIENT_SOCKET_CACHE_EXPIRY_MSEC_DEFAULT to 3 seconds

* bump up DFS_DATANODE_SOCKET_REUSE_KEEPALIVE_DEFAULT to 4 seconds

* PeerCache now checks whether an entry has expired when you try to get it 
out, and retries with another cached entry if so.

* TestDataTransferKeepalive: rename testKeepaliveTimeouts to 
testDatanodeRespectsKeepAliveTimeout and create 
testClientResponsesKeepAliveTimeout.  De-globalize dfsClient to allow us to use 
clients with different settings.
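
To illustrate the third bullet, here is a minimal sketch (not the actual 
PeerCache code; class and method names are hypothetical) of a cache whose 
get() discards expired entries and retries with the next one:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the "check for expired entries on get, retry if
// so" behavior described above; not the real o.a.h.hdfs.PeerCache.
class ExpiringPeerCache<T> {
    private static class Entry<T> {
        final T peer;
        final long insertTimeMs;
        Entry(T peer, long insertTimeMs) {
            this.peer = peer;
            this.insertTimeMs = insertTimeMs;
        }
    }

    private final Deque<Entry<T>> entries = new ArrayDeque<>();
    private final long expiryMs;   // e.g. the 3-second client expiry above

    ExpiringPeerCache(long expiryMs) { this.expiryMs = expiryMs; }

    synchronized void put(T peer, long nowMs) {
        entries.addLast(new Entry<>(peer, nowMs));
    }

    /** Returns a non-expired peer, dropping expired ones, or null. */
    synchronized T get(long nowMs) {
        while (!entries.isEmpty()) {
            Entry<T> e = entries.pollFirst();     // oldest first
            if (nowMs - e.insertTimeMs < expiryMs) {
                return e.peer;                    // still fresh: reuse it
            }
            // expired: discard and retry with the next entry
        }
        return null;                              // empty or all expired
    }
}
```

The point of checking at get() time is that an entry can expire while 
sitting in the cache; without the retry, the caller would be handed a 
socket the DataNode may already have closed.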

> Reduce PeerCache timeout to be commensurate with 
> dfs.datanode.socket.reuse.keepalive
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-4911
>                 URL: https://issues.apache.org/jira/browse/HDFS-4911
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HDFS-4911.001.patch
>
>
> The default timeout for the client's PeerCache is much longer than the 
> DataNode would possibly keep the socket open.  Specifically, 
> {{dfs.client.socketcache.expiryMsec}} defaults to 2 *
> 60 * 1000 (2 minutes), whereas {{dfs.datanode.socket.reuse.keepalive}}
> defaults to 1000 (1 second).  We should make these more similar to minimize 
> situations where the client tries to use sockets which have gone stale.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
