[
https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Arpit Agarwal updated HDFS-9902:
--------------------------------
Summary: Support different values of dfs.datanode.du.reserved per storage
type (was: dfs.datanode.du.reserved should be different between StorageType
DISK and RAM_DISK)
> Support different values of dfs.datanode.du.reserved per storage type
> ---------------------------------------------------------------------
>
> Key: HDFS-9902
> URL: https://issues.apache.org/jira/browse/HDFS-9902
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 2.7.2
> Reporter: Pan Yuxuan
> Assignee: Brahma Reddy Battula
> Attachments: HDFS-9902-02.patch, HDFS-9902.patch
>
>
> Hadoop now supports different storage types (DISK, SSD, ARCHIVE and
> RAM_DISK), but they all share one configuration, dfs.datanode.du.reserved.
> A DISK volume may be several TB, while a RAM_DISK volume may be only a few
> tens of GB.
> The problem is that when I configure DISK and RAM_DISK (tmpfs) on the same
> DataNode and set dfs.datanode.du.reserved to 10GB, a large fraction of the
> RAM_DISK capacity is wasted.
> Since RAM_DISK usage can safely reach 100%, I don't want the
> dfs.datanode.du.reserved value configured for DISK to limit the usage of
> tmpfs.
> So can we add a separate configuration for RAM_DISK, or just skip this
> configuration for RAM_DISK?
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)