[ https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260434#comment-15260434 ]
Arpit Agarwal commented on HDFS-9902:
-------------------------------------

Hi [~brahmareddy], thank you for reporting this. The fix lgtm.

The unit test can be done more simply without MiniDFSCluster. Just instantiate {{FsVolumeImpl}} objects with different storage types and check the value of {{#reserved}}. Also, could you please update the documentation of {{dfs.datanode.du.reserved}}?

> dfs.datanode.du.reserved should be difference between StorageType DISK and
> RAM_DISK
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-9902
>                 URL: https://issues.apache.org/jira/browse/HDFS-9902
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 2.7.2
>            Reporter: Pan Yuxuan
>            Assignee: Brahma Reddy Battula
>         Attachments: HDFS-9902-02.patch, HDFS-9902.patch
>
>
> Hadoop now supports different storage types (DISK, SSD, ARCHIVE and
> RAM_DISK), but they all share one configuration, dfs.datanode.du.reserved.
> A DISK volume may be several TB while a RAM_DISK volume may be only several
> tens of GB.
> The problem is that when I configure DISK and RAM_DISK (tmpfs) on the same
> DN and set dfs.datanode.du.reserved to 10GB, this wastes a lot of the
> RAM_DISK capacity.
> Since the usage of RAM_DISK can be 100%, I don't want the
> dfs.datanode.du.reserved configured for DISK to impact the usage of tmpfs.
> So can we make a new configuration for RAM_DISK, or just skip this
> configuration for RAM_DISK?
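The direction discussed above (a storage-type-specific reserved value that falls back to the shared {{dfs.datanode.du.reserved}}) can be sketched as follows. This is an illustrative stand-in, not the attached patch: a plain {{Map}} substitutes for Hadoop's {{Configuration}}, and the suffixed key pattern ({{dfs.datanode.du.reserved.<storage-type>}}) and method name are assumptions for the sake of the example.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-storage-type reserved-space resolution for a
// DataNode volume. A Map stands in for Hadoop's Configuration; the suffixed
// key pattern is an assumption, not the actual FsVolumeImpl code.
public class ReservedSpaceSketch {
  static final String BASE_KEY = "dfs.datanode.du.reserved";

  // Resolve reserved bytes for a volume: prefer a storage-type-specific
  // key, then fall back to the global key, then to 0.
  static long getReserved(Map<String, Long> conf, String storageType) {
    String typedKey = BASE_KEY + "." + storageType.toLowerCase();
    if (conf.containsKey(typedKey)) {
      return conf.get(typedKey);
    }
    return conf.getOrDefault(BASE_KEY, 0L);
  }

  public static void main(String[] args) {
    Map<String, Long> conf = new HashMap<>();
    conf.put(BASE_KEY, 10L * 1024 * 1024 * 1024); // 10 GB shared default
    conf.put(BASE_KEY + ".ram_disk", 0L);         // no reservation on tmpfs

    System.out.println(getReserved(conf, "DISK"));     // prints 10737418240
    System.out.println(getReserved(conf, "RAM_DISK")); // prints 0
  }
}
```

With this shape, the unit test suggested in the comment needs no MiniDFSCluster: construct one volume per storage type and assert on the resolved reserved value directly.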