[
https://issues.apache.org/jira/browse/HDFS-6919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152218#comment-14152218
]
Colin Patrick McCabe commented on HDFS-6919:
--------------------------------------------
I like the idea of using the cache pool concept to control how much space we
should use for HDFS-6581. One reason is that cache pools can be modified at
runtime through {{hdfs cacheadmin}}, whereas changing configuration keys
normally requires a restart. Another reason is that, since HDFS-4949 is already
managed via cache pools, it makes more sense to use them for HDFS-6581 as well.
I still have to think about the best way to do this. One obvious approach is to
create a cache pool named "lazyPersist" and have its limit determine how much
space we use for HDFS-6581.
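For illustration, a minimal sketch of what that could look like through the
existing cache pool API; the pool name "lazyPersist" and the byte limits here
are placeholders, and the CLI equivalent would be {{hdfs cacheadmin -addPool}}
and {{hdfs cacheadmin -modifyPool ... -limit}}:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.CachePoolInfo;

public class LazyPersistPoolSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

    // Create the hypothetical "lazyPersist" pool with a 1 GB byte limit.
    dfs.addCachePool(
        new CachePoolInfo("lazyPersist").setLimit(1024L * 1024 * 1024));

    // Later, raise the limit to 2 GB at runtime; no restart needed,
    // unlike a change to a configuration key.
    dfs.modifyCachePool(
        new CachePoolInfo("lazyPersist").setLimit(2048L * 1024 * 1024));
  }
}
{code}
Because the limit is stored with the pool on the NameNode rather than in a
configuration file, the change takes effect without restarting anything.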
> Enforce a single limit for RAM disk usage and replicas cached via locking
> -------------------------------------------------------------------------
>
> Key: HDFS-6919
> URL: https://issues.apache.org/jira/browse/HDFS-6919
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Arpit Agarwal
> Assignee: Colin Patrick McCabe
> Priority: Blocker
>
> The DataNode can have a single limit for memory usage that applies both to
> replicas cached via centralized cache management (CCM) and to replicas on
> RAM disk.
> See comments
> [1|https://issues.apache.org/jira/browse/HDFS-6581?focusedCommentId=14106025&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14106025],
> [2|https://issues.apache.org/jira/browse/HDFS-6581?focusedCommentId=14106245&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14106245] and
> [3|https://issues.apache.org/jira/browse/HDFS-6581?focusedCommentId=14106575&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14106575]
> for discussion.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)