[
https://issues.apache.org/jira/browse/HDFS-6919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158406#comment-14158406
]
Chris Nauroth commented on HDFS-6919:
-------------------------------------
bq. The simplest way to solve this, and something that I think might work
pretty well in practice, is to have the write cache shrink by the size of the
read cache on the local DataNode.
+1 for the proposal. In fact, I'd even go so far as to say that new incoming
HDFS-4949 caching requests may trigger eviction of HDFS-6581 writes from RAM.
Conceptually, I think this fits perfectly with the overall analogy that
HDFS-4949 is similar to {{mlock}} (an explicit demand to keep something in RAM)
and HDFS-6581 is similar to a virtual memory write (a best effort is made to
keep the write in RAM, but it still might incur "paging").
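To make the proposed policy concrete, here is a minimal sketch (hypothetical, not actual DataNode code; the class and method names are invented for illustration) of a single shared memory budget where HDFS-4949 cache reservations behave like {{mlock}} and HDFS-6581 lazy-persist writes are best-effort and evictable:

```java
// Hypothetical sketch: one DataNode-wide memory limit shared by
// HDFS-4949 cached replicas and HDFS-6581 RAM disk writes.
public class SharedMemoryBudget {
    private final long limit;      // single memory limit, in bytes
    private long cachedBytes;      // HDFS-4949: mlock-like, never evicted
    private long lazyPersistBytes; // HDFS-6581: best-effort, evictable

    public SharedMemoryBudget(long limit) {
        this.limit = limit;
    }

    // Caching request: may "page out" lazy-persist writes to make room,
    // mirroring the idea that caching requests evict HDFS-6581 writes.
    public synchronized boolean reserveForCache(long bytes) {
        if (cachedBytes + bytes > limit) {
            return false; // over limit even after evicting all writes
        }
        long shortfall = cachedBytes + lazyPersistBytes + bytes - limit;
        if (shortfall > 0) {
            lazyPersistBytes -= shortfall; // evict writes to disk
        }
        cachedBytes += bytes;
        return true;
    }

    // Lazy-persist write: best effort; fails rather than evicting cache.
    public synchronized boolean reserveForLazyPersist(long bytes) {
        if (cachedBytes + lazyPersistBytes + bytes > limit) {
            return false;
        }
        lazyPersistBytes += bytes;
        return true;
    }

    public synchronized long getCachedBytes() { return cachedBytes; }
    public synchronized long getLazyPersistBytes() { return lazyPersistBytes; }
}
```

With a 100-byte limit, 80 bytes of lazy-persist writes, and an incoming 50-byte cache request, 30 bytes of writes get evicted and the cache reservation succeeds; a subsequent write that would exceed the limit is simply refused.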
> Enforce a single limit for RAM disk usage and replicas cached via locking
> -------------------------------------------------------------------------
>
> Key: HDFS-6919
> URL: https://issues.apache.org/jira/browse/HDFS-6919
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Arpit Agarwal
> Assignee: Colin Patrick McCabe
> Priority: Blocker
>
> The DataNode can enforce a single memory usage limit that applies to both
> replicas cached via centralized cache management (CCM) and replicas on RAM disk.
> See comments
> [1|https://issues.apache.org/jira/browse/HDFS-6581?focusedCommentId=14106025&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14106025],
> [2|https://issues.apache.org/jira/browse/HDFS-6581?focusedCommentId=14106245&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14106245] and
> [3|https://issues.apache.org/jira/browse/HDFS-6581?focusedCommentId=14106575&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14106575]
> for discussion.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)