[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14542776#comment-14542776 ]

Xiaoyu Yao commented on HDFS-8157:
----------------------------------

Thanks Arpit for working on this. Patch v2 looks good to me. 

{code}
-      if (v.isTransientStorage()) {
-        cacheManager.releaseRoundDown(replicaInfo.getOriginalBytesReserved() - replicaInfo.getNumBytes());
+      if (v.isTransientStorage()) {
+        releaseLockedMemory(replicaInfo.getOriginalBytesReserved() - replicaInfo.getNumBytes(), false);
         ramDiskReplicaTracker.addReplica(bpid, replicaInfo.getBlockId(), v, replicaInfo.getNumBytes());
         datanode.getMetrics().addRamDiskBytesWrite(replicaInfo.getNumBytes());
       }
{code}
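
As a side note for reviewers, here is a minimal, self-contained sketch of the rounding semantics involved in the lines above. This is not the actual FsDatasetCache code; the page size, field names, and round-up-on-reserve behavior are assumptions for illustration only:

{code}
// Sketch only: NOT the actual FsDatasetCache code. The page size, field
// names, and round-up-on-reserve behavior are assumptions.
import java.util.concurrent.atomic.AtomicLong;

public class LockedMemorySketch {
  private static final long OS_PAGE_SIZE = 4096;   // assumed page size
  private final AtomicLong usedBytes = new AtomicLong(0);

  /** Reserve count bytes of locked memory, rounded up to a whole page. */
  public long reserve(long count) {
    long rounded = ((count + OS_PAGE_SIZE - 1) / OS_PAGE_SIZE) * OS_PAGE_SIZE;
    return usedBytes.addAndGet(rounded);
  }

  /** Release count bytes, rounded down to a whole page. */
  public long releaseRoundDown(long count) {
    long rounded = (count / OS_PAGE_SIZE) * OS_PAGE_SIZE;
    return usedBytes.addAndGet(-rounded);
  }

  public static void main(String[] args) {
    LockedMemorySketch cache = new LockedMemorySketch();
    cache.reserve(10_000);            // rounds up to 12288
    cache.releaseRoundDown(10_000);   // rounds down to 8192
    // 4096 bytes stay reserved until the replica is finalized or evicted.
    System.out.println("still reserved: " + cache.usedBytes.get());
  }
}
{code}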

One question: this patch coordinates the maximum locked-memory usage of the 
HDFS cache, but it does not prevent the same replica block from being cached 
by both the CCM (mmap) and the Write Cache (RAM disk). For example, based on 
the block id, we may not want to mmap the same block that has just been 
written to the RAM disk.
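
To make the concern concrete, below is a rough standalone sketch of the kind of block-id guard I have in mind. The class and method names here are hypothetical, not the real datanode API:

{code}
// Hypothetical sketch of the suggested guard; class and method names are
// placeholders, not the real datanode API.
import java.util.HashSet;
import java.util.Set;

public class DoubleCacheGuardSketch {
  // Block ids currently on RAM disk; in the datanode this role would be
  // played by the RamDiskReplicaTracker.
  private final Set<Long> ramDiskBlocks = new HashSet<>();

  public synchronized void onRamDiskWrite(long blockId) {
    ramDiskBlocks.add(blockId);
  }

  public synchronized void onLazyPersistDone(long blockId) {
    ramDiskBlocks.remove(blockId);
  }

  /** Returns true only when mmap caching would not double-cache the block. */
  public synchronized boolean shouldMmapCache(long blockId) {
    // Skip CCM (mmap) caching while the same replica sits on RAM disk, so
    // one block does not consume locked memory twice.
    return !ramDiskBlocks.contains(blockId);
  }

  public static void main(String[] args) {
    DoubleCacheGuardSketch guard = new DoubleCacheGuardSketch();
    guard.onRamDiskWrite(1001L);
    System.out.println(guard.shouldMmapCache(1001L)); // false: on RAM disk
    guard.onLazyPersistDone(1001L);
    System.out.println(guard.shouldMmapCache(1001L)); // true: safe to mmap
  }
}
{code}

The real check would presumably live in the FsDatasetCache caching path and consult the RamDiskReplicaTracker under the dataset lock.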



> Writes to RAM DISK reserve locked memory for block files
> --------------------------------------------------------
>
>                 Key: HDFS-8157
>                 URL: https://issues.apache.org/jira/browse/HDFS-8157
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>         Attachments: HDFS-8157.01.patch, HDFS-8157.02.patch
>
>
> Per discussion on HDFS-6919, the first step is that writes to RAM disk will 
> reserve locked memory via the FsDatasetCache.
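
As a companion to the quoted description, here is a toy sketch of that reservation lifecycle: reserve locked memory for the whole block when a RAM disk write starts, then release the unused tail when the replica is finalized. The names and sizes below are illustrative assumptions, not the actual FsDatasetImpl API:

{code}
// Toy sketch of the reservation lifecycle; names and sizes are illustrative
// assumptions, not the actual FsDatasetImpl API.
public class RamDiskReservationSketch {
  private long lockedBytes = 0;                    // stand-in for cache usage
  private static final long BLOCK_SIZE = 128L * 1024 * 1024;

  /** On write start: pessimistically reserve a full block of locked memory. */
  public long startWrite() {
    lockedBytes += BLOCK_SIZE;
    return BLOCK_SIZE;                    // becomes originalBytesReserved
  }

  /** On finalize: give back the bytes the replica never used. */
  public void finalizeReplica(long originalBytesReserved, long numBytes) {
    lockedBytes -= (originalBytesReserved - numBytes);
  }

  public static void main(String[] args) {
    RamDiskReservationSketch ds = new RamDiskReservationSketch();
    long reserved = ds.startWrite();
    ds.finalizeReplica(reserved, 64L * 1024 * 1024); // block ended half full
    System.out.println("locked bytes now: " + ds.lockedBytes);
  }
}
{code}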



