[
https://issues.apache.org/jira/browse/HDFS-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181770#comment-14181770
]
Colin Patrick McCabe commented on HDFS-6988:
--------------------------------------------
Hi [~xyao], it seems like this patch changes
{{dfs.datanode.ram.disk.low.watermark.percent}} so that 100% = 1.0. I think
people expect a "percent" to be between 0 and 100. While it's sort of elegant
to use 0.0-1.0, this violates people's expectations of what a percentage is.
It's also incompatible because existing configurations (e.g.
{{dfs.datanode.ram.disk.low.watermark.percent = 20}}) will suddenly stop
working.
I think a "fraction" is expected to be between 0 and 1, but a "percentage" is
expected to be between 0 and 100. Since the config key says "percent" it
should be the latter, I think.
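To make the point concrete, here is a minimal sketch of keeping the user-facing "percent" semantics (0-100) while converting to a fraction internally. The helper name {{getPercentAsFraction}} is hypothetical, not from the patch:

```java
// Sketch: accept a percentage (0-100) from configuration and convert it
// to a fraction (0.0-1.0) internally, so an existing setting like "20"
// keeps meaning 20%. getPercentAsFraction is a hypothetical helper.
public class WatermarkPercentSketch {
  static final String KEY = "dfs.datanode.ram.disk.low.watermark.percent";

  // Reject values outside 0-100, then scale down to a fraction.
  static float getPercentAsFraction(float percentValue) {
    if (percentValue < 0.0f || percentValue > 100.0f) {
      throw new IllegalArgumentException(
          KEY + " must be between 0 and 100, got " + percentValue);
    }
    return percentValue / 100.0f;
  }

  public static void main(String[] args) {
    // An existing configuration of 20 still means 20% of the RAM disk.
    System.out.println(getPercentAsFraction(20.0f));
  }
}
```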
{code}
public static final String DFS_DATANODE_RAM_DISK_LOW_WATERMARK_BYTES =
"dfs.datanode.ram.disk.low.watermark.replicas";
{code}
Should this be changed to {{"dfs.datanode.ram.disk.low.watermark.bytes"}}? I
don't think this can be a compatible change even if we kept the old name, since
previous config values like "5" would get mapped to 5 bytes, which doesn't seem
reasonable. So I think we should just change the name.
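For clarity, a sketch of the suggested rename, with the constant name and key string agreeing on "bytes":

```java
// Sketch of the suggested rename: constant name and key string both say
// "bytes", so the configured value is unambiguously a byte count.
public class LowWatermarkBytesSketch {
  public static final String DFS_DATANODE_RAM_DISK_LOW_WATERMARK_BYTES =
      "dfs.datanode.ram.disk.low.watermark.bytes";

  public static void main(String[] args) {
    System.out.println(DFS_DATANODE_RAM_DISK_LOW_WATERMARK_BYTES);
  }
}
```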
> Add configurable limit for percentage-based eviction threshold
> --------------------------------------------------------------
>
> Key: HDFS-6988
> URL: https://issues.apache.org/jira/browse/HDFS-6988
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode
> Affects Versions: 2.6.0
> Reporter: Arpit Agarwal
> Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-6988.01.patch, HDFS-6988.02.patch,
> HDFS-6988.03.patch
>
>
> Per feedback from [~cmccabe] on HDFS-6930, we can make the eviction
> thresholds configurable. The hard-coded thresholds may not be appropriate for
> very large RAM disks.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)