[ https://issues.apache.org/jira/browse/HDFS-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892997#comment-13892997 ]

Andrew Wang commented on HDFS-4911:
-----------------------------------

+1 pending jenkins. See also the discussion on hdfs-dev about dropping these 
timeouts:

http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201306.mbox/%3CCA%2BqbEUPxKA3W_pgY3D5K8vGYsgmPzgrKY4zuQLP4bNQ9RUO8iQ%40mail.gmail.com%3E

Hopefully this means less spam about closed sockets.

> Reduce PeerCache timeout to be commensurate with 
> dfs.datanode.socket.reuse.keepalive
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-4911
>                 URL: https://issues.apache.org/jira/browse/HDFS-4911
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HDFS-4911.001.patch
>
>
> The default timeout for the client's PeerCache is much longer than the 
> DataNode would possibly keep the socket open.  Specifically, 
> {{dfs.client.socketcache.expiryMsec}} defaults to 2 * 60 * 1000 ms (2 minutes), 
> whereas {{dfs.datanode.socket.reuse.keepalive}} defaults to 1000 ms (1 second).  
> We should make these more similar to minimize situations where the client 
> tries to use sockets which have gone stale.
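To make the mismatch concrete, here is a hypothetical hdfs-site.xml fragment that keeps the two settings commensurate. The values below are illustrative assumptions, not what the attached patch actually chooses; the point is only that the client-side cache expiry should not exceed the DataNode's keepalive, so the client never pulls a socket from the PeerCache that the DataNode has already closed.

```xml
<configuration>
  <!-- Client side: how long a cached socket may sit in the PeerCache
       before it is considered expired (assumed example value). -->
  <property>
    <name>dfs.client.socketcache.expiryMsec</name>
    <value>3000</value>
  </property>
  <!-- DataNode side: how long the DataNode keeps an idle socket open
       for reuse (assumed example value, >= the client expiry above). -->
  <property>
    <name>dfs.datanode.socket.reuse.keepalive</name>
    <value>4000</value>
  </property>
</configuration>
```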



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
