[
https://issues.apache.org/jira/browse/HDFS-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145202#comment-14145202
]
Arpit Agarwal commented on HDFS-6988:
-------------------------------------
Thanks for taking a look at the patch. The thresholds are integers because they
are replica counts, to be multiplied by the default block length at runtime.
A single default simply won't work across the range of RAM disk sizes; it would
force every administrator to configure one more setting. This way we get
reasonable default behavior for most drive sizes, from a few GB up to 100 GB.
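A minimal sketch of the arithmetic described above, assuming a hypothetical 128 MB default block length; the function and constant names here are illustrative, not the actual HDFS identifiers:

```python
# Hypothetical sketch: a replica-count threshold is converted to bytes
# at runtime by multiplying by the default block length.
DEFAULT_BLOCK_SIZE = 128 * 1024 * 1024  # assumed default block length, in bytes

def eviction_threshold_bytes(replica_count_threshold: int,
                             block_size: int = DEFAULT_BLOCK_SIZE) -> int:
    """Convert an integer replica-count threshold into a byte threshold."""
    return replica_count_threshold * block_size

def should_evict(free_bytes: int, replica_count_threshold: int) -> bool:
    # Evict replicas from the RAM disk once free space drops below
    # the computed byte threshold.
    return free_bytes < eviction_threshold_bytes(replica_count_threshold)
```

Because the configured value scales with the block length rather than being an absolute byte count, the same integer default behaves sensibly on both small and large RAM disks.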
> Add configurable limit for percentage-based eviction threshold
> --------------------------------------------------------------
>
> Key: HDFS-6988
> URL: https://issues.apache.org/jira/browse/HDFS-6988
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode
> Affects Versions: HDFS-6581
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Fix For: HDFS-6581
>
> Attachments: HDFS-6988.01.patch, HDFS-6988.02.patch
>
>
> Per feedback from [~cmccabe] on HDFS-6930, we can make the eviction
> thresholds configurable. The hard-coded thresholds may not be appropriate for
> very large RAM disks.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)