[
https://issues.apache.org/jira/browse/HDFS-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15755365#comment-15755365
]
Inigo Goiri commented on HDFS-11257:
------------------------------------
The proposal would be for the {{BlockManager}} to check for this situation and
leverage the code in {{blockHasEnoughRacks()}} to mark blocks as needing
replicas on other nodes. Once those extra replicas are in place, the block
placement policy would mark the replicas on machines with
{{getRemaining() < 0}} for deletion.
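As a rough illustration (a minimal standalone Java sketch, not actual Hadoop
code: {{DatanodeStorage}} and {{findOverusedStorages()}} below are simplified
stand-ins for the real {{DatanodeStorageInfo}} and {{BlockManager}}
structures), the negative-remaining check could look like this:

{code:java}
import java.util.ArrayList;
import java.util.List;

public class OverusedStorageCheck {

  /** Simplified stand-in for a DN storage report. */
  static class DatanodeStorage {
    final String datanodeId;
    final long capacity;   // bytes usable by HDFS (disk size minus reserved)
    final long dfsUsed;    // bytes currently used by HDFS blocks
    final long nonDfsUsed; // bytes taken by other processes (logs, etc.)

    DatanodeStorage(String id, long capacity, long dfsUsed, long nonDfsUsed) {
      this.datanodeId = id;
      this.capacity = capacity;
      this.dfsUsed = dfsUsed;
      this.nonDfsUsed = nonDfsUsed;
    }

    /** Mirrors the semantics of getRemaining(): goes negative when non-DFS
     *  usage eats into the space HDFS thought it had. */
    long getRemaining() {
      return capacity - dfsUsed - nonDfsUsed;
    }
  }

  /**
   * Proposed check: storages whose remaining space is negative get their
   * replicas queued for re-replication elsewhere; once new replicas exist,
   * the local copies can be scheduled for deletion.
   */
  static List<DatanodeStorage> findOverusedStorages(List<DatanodeStorage> storages) {
    List<DatanodeStorage> overused = new ArrayList<>();
    for (DatanodeStorage s : storages) {
      if (s.getRemaining() < 0) {
        overused.add(s);
      }
    }
    return overused;
  }

  public static void main(String[] args) {
    // 1TB disk with 100GB reserved => ~900GB usable; dn1 is filled to 880GB
    // and a co-located service then wrote 50GB of non-DFS data.
    long gb = 1024L * 1024 * 1024;
    List<DatanodeStorage> storages = List.of(
        new DatanodeStorage("dn1", 900 * gb, 880 * gb, 50 * gb),
        new DatanodeStorage("dn2", 900 * gb, 500 * gb, 10 * gb));

    for (DatanodeStorage s : findOverusedStorages(storages)) {
      System.out.println(s.datanodeId + " remaining=" + s.getRemaining()
          + " -> queue its replicas for re-replication");
    }
  }
}
{code}

Running the sketch flags dn1, whose co-located service pushed non-DFS usage
past the reserved headroom, while dn2 is left alone.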
> Evacuate DN when the remaining is negative
> ------------------------------------------
>
> Key: HDFS-11257
> URL: https://issues.apache.org/jira/browse/HDFS-11257
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 2.7.3
> Reporter: Inigo Goiri
>
> Datanodes have a maximum amount of disk space they can use. This is set
> using {{dfs.datanode.du.reserved}}. For example, if we have a 1TB disk and
> set the reserved space to 100GB, the DN can only use ~900GB. However, if we
> fill the DN and other processes (e.g., logs or co-located services) later
> start using the disk, the remaining space goes negative and the used
> storage exceeds 100% (a sample configuration is shown after the quoted
> description).
> The Rebalancer or decommissioning would cover this situation. However, both
> approaches require administrator intervention, even though this is a
> situation that violates the configured settings. Note that decommissioning
> would also be too extreme, as it would evacuate all of the node's data.
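For reference, {{dfs.datanode.du.reserved}} is set per volume, in bytes, in
hdfs-site.xml; a minimal snippet reserving the 100GB from the example above
would be:

{code:xml}
<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- 100 GB, in bytes, reserved for non-DFS use on each volume -->
  <value>107374182400</value>
</property>
{code}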