[ https://issues.apache.org/jira/browse/HDFS-5364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13803606#comment-13803606 ]
Brandon Li commented on HDFS-5364:
----------------------------------

The first patch used a guava cache, which turned out to be a bad choice because we can't control the eviction policy. Ideally we should evict the idlest stream; however, our notion of idleness means "no write access", while guava cache's idleness means "no cache read/write access". Also, we can't use the entry auto-expiration feature provided by guava cache to replace the StreamMonitor thread. This is because guava cache uses lazy eviction: when an entry expires, guava cache evicts it (and then invokes the removal listener to do cleanup) only when the data structure is touched again. Uploaded a new patch which uses a simple map to implement the cache.

> Add OpenFileCtx cache
> ---------------------
>
>                 Key: HDFS-5364
>                 URL: https://issues.apache.org/jira/browse/HDFS-5364
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: nfs
>            Reporter: Brandon Li
>            Assignee: Brandon Li
>         Attachments: HDFS-5364.001.patch, HDFS-5364.002.patch
>
>
> The NFS gateway can run out of memory when the stream timeout is set to a
> relatively long period (e.g., >1 minute) and a user uploads thousands of files
> in parallel. For each stream, DFSClient creates a DataStreamer thread, so the
> gateway will eventually run out of memory by creating too many threads.
> The NFS gateway should have an OpenFileCtx cache to limit the total number of opened files.

--
This message was sent by Atlassian JIRA
(v6.1#6144)
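The "simple map" approach from the comment above could look roughly like the following. This is a hypothetical sketch, not the actual HDFS-5364 patch: the class and member names (OpenFileCtxCacheSketch, lastWriteTimeMillis, onWrite, cleanup) are illustrative assumptions. It shows the two properties the comment asks for that guava cache cannot give: eviction keyed on *write* idleness, and eager cleanup at eviction time rather than guava's lazy, touch-triggered eviction.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OpenFileCtxCacheSketch {

    /** Minimal stand-in for the real OpenFileCtx (names are illustrative). */
    static class OpenFileCtx {
        volatile long lastWriteTimeMillis;

        OpenFileCtx(long now) {
            this.lastWriteTimeMillis = now;
        }

        /** Called on each NFS write; only writes count as "activity". */
        void onWrite(long now) {
            lastWriteTimeMillis = now;
        }

        /** Close the output stream, stop the DataStreamer thread, etc. */
        void cleanup() {
        }
    }

    private final int maxEntries;
    private final Map<Long, OpenFileCtx> map = new ConcurrentHashMap<>();

    OpenFileCtxCacheSketch(int maxEntries) {
        this.maxEntries = maxEntries;
    }

    /** Insert a new stream; if the cache is full, evict the idlest one first. */
    synchronized boolean put(long fileHandle, OpenFileCtx ctx) {
        if (map.size() >= maxEntries) {
            Long idlestKey = null;
            long oldestWrite = Long.MAX_VALUE;
            // Scan for the entry with the oldest write access, i.e. the
            // idlest stream by the comment's definition of idleness.
            for (Map.Entry<Long, OpenFileCtx> e : map.entrySet()) {
                if (e.getValue().lastWriteTimeMillis < oldestWrite) {
                    oldestWrite = e.getValue().lastWriteTimeMillis;
                    idlestKey = e.getKey();
                }
            }
            if (idlestKey == null) {
                return false; // nothing evictable; caller must retry later
            }
            // Eager cleanup at eviction time, unlike guava's lazy eviction.
            map.remove(idlestKey).cleanup();
        }
        map.put(fileHandle, ctx);
        return true;
    }

    int size() {
        return map.size();
    }
}
```

A background monitor thread (like the existing StreamMonitor) would still periodically scan the map and clean up entries whose last write is older than the stream timeout; the bound on map size is what keeps the DataStreamer thread count, and hence memory, under control.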