[
https://issues.apache.org/jira/browse/HDFS-9520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058753#comment-15058753
]
Colin Patrick McCabe commented on HDFS-9520:
--------------------------------------------
Right. {{dfs.client.socketcache.capacity}} is the capacity of the socket
cache, not the number of distinct datanodes it holds.
The right size for {{dfs.client.socketcache.capacity}} depends on a few things.
The more HDFS input streams you have open at once, the more sockets you will
use at once. The more datanodes you have in your cluster, the larger you may
want the cache to be, so that you get a better hit rate.
We could certainly raise the default value for this. It should probably be at
least 32 or 64.
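As an illustration, raising the value on the client side would look like the
following hdfs-site.xml fragment. The value 64 is just the upper end of the
suggestion above, not a committed new default:

```xml
<!-- hdfs-site.xml (client side): enlarge the socket cache so that a
     cluster with multiple cached peers per datanode evicts less often.
     64 is the upper bound suggested above, not a committed default. -->
<property>
  <name>dfs.client.socketcache.capacity</name>
  <value>64</value>
</property>
```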
> PeerCache evicts too frequently causing connection re-establishments
> --------------------------------------------------------------------
>
> Key: HDFS-9520
> URL: https://issues.apache.org/jira/browse/HDFS-9520
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Rajesh Balamohan
> Attachments: HDFS-9520.png
>
>
> Env: 20 node setup
> dfs.client.socketcache.capacity = 16
> Issue:
> ======
> Monitoring PeerCache showed it evicting lots of connections during close. Set
> "dfs.client.socketcache.capacity=20" and tested again; evictions still
> happened. A profiler screenshot is attached to the JIRA.
> Workaround:
> ===========
> Temp fix was to set "dfs.client.socketcache.capacity=1000" to prevent
> eviction.
> Adding more debug logs revealed that multimap.size() was 40 instead of 20.
> LinkedListMultimap#size() returns the total number of values rather than the
> number of distinct keys, causing lots of evictions.
> {code}
> if (capacity == multimap.size()) {
>   evictOldest();
> }
> {code}
> Should this be {{capacity == multimap.keySet().size()}}, or is it expected
> that "dfs.client.socketcache.capacity" be set to a very high value?
> \cc [~gopalv], [~sseth]
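The counting mismatch described above can be reproduced with a minimal
stdlib-only multimap sketch. The names here are illustrative, not the actual
PeerCache code; the real cache uses Guava's LinkedListMultimap, whose size()
likewise counts (key, value) entries rather than distinct keys:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical demo class: a map from datanode id to its cached sockets,
// mimicking a multimap. "Total values" vs "distinct keys" is the point.
public class MultimapSizeDemo {
    static Map<String, List<Integer>> multimap = new LinkedHashMap<>();

    static void put(String key, Integer value) {
        multimap.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
    }

    // Total number of values across all keys -- this is what
    // LinkedListMultimap.size() counts, and what the eviction check sees.
    static int totalValues() {
        return multimap.values().stream().mapToInt(List::size).sum();
    }

    public static void main(String[] args) {
        // 20 datanodes, two cached sockets each (e.g. one domain peer and
        // one remote peer per node).
        for (int dn = 0; dn < 20; dn++) {
            put("datanode-" + dn, dn * 2);
            put("datanode-" + dn, dn * 2 + 1);
        }
        System.out.println(totalValues());            // 40
        System.out.println(multimap.keySet().size()); // 20
    }
}
```

With capacity set to 20, the check `capacity == multimap.size()` fires at 40
entries' worth of sockets long before 20 distinct datanodes are cached, which
matches the observed evictions on the 20-node setup.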
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)