[
https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269902#comment-15269902
]
Hudson commented on HDFS-9902:
------------------------------
FAILURE: Integrated in Hadoop-trunk-Commit #9709 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/9709/])
HDFS-9902. Support different values of dfs.datanode.du.reserved per (arp: rev
6d77d6eab7790ed7ae2cad5b327ba5d1deb485db)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
*
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
*
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
> Support different values of dfs.datanode.du.reserved per storage type
> ---------------------------------------------------------------------
>
> Key: HDFS-9902
> URL: https://issues.apache.org/jira/browse/HDFS-9902
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 2.7.2
> Reporter: Pan Yuxuan
> Assignee: Brahma Reddy Battula
> Attachments: HDFS-9902-02.patch, HDFS-9902-03.patch,
> HDFS-9902-04.patch, HDFS-9902-05.patch, HDFS-9902.patch
>
>
> Hadoop now supports different storage types (DISK, SSD, ARCHIVE, and
> RAM_DISK), but they all share one configuration, dfs.datanode.du.reserved.
> A DISK volume may be several TB, while a RAM_DISK volume may be only a few
> tens of GB.
> The problem is that when I configure DISK and RAM_DISK (tmpfs) on the same
> DN and set dfs.datanode.du.reserved to 10GB, a lot of the RAM_DISK
> capacity is wasted.
> Since RAM_DISK usage can safely reach 100%, I don't want the
> dfs.datanode.du.reserved value configured for DISK to limit the usage of
> tmpfs.
> Can we add a separate configuration for RAM_DISK, or simply skip this
> configuration for RAM_DISK?
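
As a rough illustration of what the committed change enables, a per-storage-type
override in hdfs-site.xml might look like the sketch below. The
dfs.datanode.du.reserved.ram_disk key name is assumed here to follow the
dfs.datanode.du.reserved.<storage-type> pattern documented in the updated
hdfs-default.xml; the 10 GB value is taken from the scenario described in the
issue, not a recommended default.

```xml
<!-- Sketch: reserve 10 GB on ordinary volumes, but nothing on tmpfs
     (RAM_DISK) volumes so they can be used to 100%. Key name for the
     RAM_DISK override is assumed from the storage-type suffix pattern. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value> <!-- 10 GB, applies when no type-specific override matches -->
</property>
<property>
  <name>dfs.datanode.du.reserved.ram_disk</name>
  <value>0</value> <!-- RAM_DISK volumes: no reservation -->
</property>
```

With this layout, DISK volumes keep the 10 GB headroom while tmpfs volumes
are no longer penalized by a reservation sized for multi-TB disks.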
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]