[
https://issues.apache.org/jira/browse/HDFS-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765174#comment-15765174
]
Andrew Wang commented on HDFS-11257:
------------------------------------
If the goal is to keep some free space around for performance, you can specify
reserved space at the FS level:
http://unix.stackexchange.com/questions/7950/reserved-space-for-root-on-a-filesystem-why
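As a rough illustration (the data directory path below is just a placeholder), that FS-level reservation is already visible from Java: on most Linux filesystems {{File#getUsableSpace}} excludes the root-reserved blocks that {{File#getFreeSpace}} still counts, so the difference approximates the FS-level reservation:
{code:java}
import java.io.File;

public class FsReservedCheck {
  public static void main(String[] args) {
    // Placeholder DataNode data directory; pass a real path as the first argument.
    File dataDir = new File(args.length > 0 ? args[0] : "/data/1/dfs/dn");

    long total  = dataDir.getTotalSpace();   // size of the partition
    long free   = dataDir.getFreeSpace();    // unallocated bytes, including root-reserved blocks
    long usable = dataDir.getUsableSpace();  // bytes a non-root process can actually write

    System.out.printf("total=%dGB free=%dGB usable=%dGB fs-reserved~=%dGB%n",
        gb(total), gb(free), gb(usable), gb(free - usable));
  }

  private static long gb(long bytes) {
    return bytes / (1024L * 1024 * 1024);
  }
}
{code}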
The point of {{du.reserved}} is to soft-partition the disks, leaving room for
MR shuffle data. It seems odd for HDFS to move data just to keep 100GB free,
since that hurts HDFS performance and the root cause is some other app that's
filling up the disk.
This probably doesn't come up much since most HDFS clusters run with some
headroom, but IMO this config should really behave like a {{df.reserved}}
(similar to what Linux does) rather than a {{du.reserved}}.
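To make the distinction concrete, a minimal sketch of the two semantics (simplified accounting, not the actual DataNode code; all names here are made up for illustration):
{code:java}
/** Simplified, illustrative accounting; not the actual DataNode code. */
public class ReservedSemantics {

  /**
   * du.reserved-style: the reservation is carved out of the capacity HDFS may
   * use, so the remaining value can go negative once other applications eat
   * into the disk.
   */
  static long remainingDuStyle(long diskSize, long reserved, long dfsUsed, long nonDfsUsed) {
    long configuredCapacity = diskSize - reserved;
    return configuredCapacity - dfsUsed - nonDfsUsed;
  }

  /**
   * df.reserved-style (the behavior suggested above): always keep `reserved`
   * bytes free on the device, no matter which application is filling it.
   */
  static long remainingDfStyle(long diskFree, long reserved) {
    return diskFree - reserved;
  }
}
{code}
With df-style accounting, HDFS simply stops accepting blocks once the device's free space drops to the reservation, regardless of which application consumed it.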
> Evacuate DN when the remaining is negative
> ------------------------------------------
>
> Key: HDFS-11257
> URL: https://issues.apache.org/jira/browse/HDFS-11257
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 2.7.3
> Reporter: Inigo Goiri
>
> Datanodes have a maximum amount of disk they can use. This is set using
> {{dfs.datanode.du.reserved}}. For example, if we have a 1TB disk and we set
> the reserved space to 100GB, the DN can only use ~900GB. However, if we fill
> the DN and other processes (e.g., logs or co-located services) later start
> using disk space, the remaining space goes negative and the used storage
> exceeds 100%.
> The Rebalancer or decommissioning would cover this situation. However, both
> approaches require administrator intervention, even though the situation
> violates the configured settings. Note that decommissioning would be too
> extreme, as it would evacuate all the data.
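For a quick back-of-the-envelope with the numbers from the quoted description (the non-DFS usage figure is invented for illustration, and the accounting is simplified):
{code:java}
public class NegativeRemainingExample {
  public static void main(String[] args) {
    long diskSize   = 1000; // GB: 1TB disk
    long reserved   =  100; // GB: dfs.datanode.du.reserved
    long dfsUsed    =  880; // GB: DN filled close to its ~900GB cap
    long nonDfsUsed =   70; // GB: logs / co-located services that grew later (invented figure)

    long configuredCapacity = diskSize - reserved;              // 900 GB
    long remaining = configuredCapacity - dfsUsed - nonDfsUsed; // -50 GB
    double usedPct = 100.0 * (configuredCapacity - remaining) / configuredCapacity; // ~105.6%

    System.out.printf("remaining=%dGB used=%.1f%%%n", remaining, usedPct);
  }
}
{code}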