[
https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14543200#comment-14543200
]
Xiaoyu Yao commented on HDFS-8157:
----------------------------------
Thanks [~arpitagarwal] for pointing that out. +1 once the patch is updated to fix the issue below in FsDatasetImpl.java and to address the Jenkins issues.
{code}
-    if (v.isTransientStorage()) {
-      cacheManager.releaseRoundDown(replicaInfo.getOriginalBytesReserved() -
-          replicaInfo.getNumBytes());
+    if (v.isTransientStorage()) {
+      releaseLockedMemory(replicaInfo.getOriginalBytesReserved() -
+          replicaInfo.getNumBytes(), false);
       ramDiskReplicaTracker.addReplica(bpid, replicaInfo.getBlockId(), v,
           replicaInfo.getNumBytes());
       datanode.getMetrics().addRamDiskBytesWrite(replicaInfo.getNumBytes());
     }
{code}
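For readers following along, here is a minimal, self-contained sketch (not the actual HDFS-8157 patch) of how a releaseLockedMemory(count, roundup) helper could wrap the cache manager call that the diff removes. The stub class and the meaning of the boolean flag are assumptions made only for illustration, based on the snippet above.
{code}
// Hypothetical sketch of a releaseLockedMemory helper; names and the
// semantics of the boolean flag are assumptions, not the real patch.
public class ReleaseLockedMemorySketch {

  /** Tiny stand-in for the relevant part of FsDatasetCache (assumption). */
  static class CacheManagerStub {
    private long reservedBytes = 8192;

    /** Release up to count bytes, rounding down as the real API would. */
    long releaseRoundDown(long count) {
      long released = Math.min(count, reservedBytes);
      reservedBytes -= released;
      return released;
    }

    /** Release exactly the requested number of bytes. */
    long release(long count) {
      long released = Math.min(count, reservedBytes);
      reservedBytes -= released;
      return released;
    }
  }

  private final CacheManagerStub cacheManager = new CacheManagerStub();

  /**
   * Hypothetical helper: releases previously reserved locked memory,
   * either exactly (roundup == true) or rounded down (roundup == false),
   * mirroring the releaseLockedMemory(count, false) call in the diff.
   */
  void releaseLockedMemory(long count, boolean roundup) {
    if (roundup) {
      cacheManager.release(count);
    } else {
      cacheManager.releaseRoundDown(count);
    }
  }

  public static void main(String[] args) {
    ReleaseLockedMemorySketch sketch = new ReleaseLockedMemorySketch();
    // Mirrors the diff: release the unused tail of an over-reservation.
    long originalBytesReserved = 4096;
    long numBytes = 1000;
    sketch.releaseLockedMemory(originalBytesReserved - numBytes, false);
  }
}
{code}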
> Writes to RAM DISK reserve locked memory for block files
> --------------------------------------------------------
>
> Key: HDFS-8157
> URL: https://issues.apache.org/jira/browse/HDFS-8157
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Attachments: HDFS-8157.01.patch, HDFS-8157.02.patch
>
>
> Per discussion on HDFS-6919, the first step is that writes to RAM disk will
> reserve locked memory via the FsDatasetCache.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)