[ https://issues.apache.org/jira/browse/HDFS-8193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14535827#comment-14535827 ]

Suresh Srinivas commented on HDFS-8193:
---------------------------------------

bq. Empirically, most customer clusters do not run even close to near disk capacity
You would be surprised by what you find in the field. I have seen many customers 
running at 90% plus utilization, scrambling to find and delete unnecessary files 
to free up space.

bq. The configured delay window should also be enforced under the constraint of available space (e.g., don't delay deletion when available disk space < 10%)
The problem with this approach is that the intended protection mechanism either 
works or does not work depending on available space and whatever other factors we 
may add in the future. That means the data may not be there precisely when this 
feature is needed. The approaches you describe, where delayed-deletion replicas 
are overwritten, will run into the same set of issues.

A user would need more consistent behavior than that.
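
To make the concern concrete, here is a minimal sketch of the space-conditioned 
policy being discussed. All names here ({{DelayedDeletionPolicy}}, 
{{delayWindowMs}}, {{minFreeSpaceRatio}}, {{mayDeleteNow}}) are hypothetical and 
do not correspond to existing HDFS classes or configuration keys; it is only 
meant to show why the guarantee becomes inconsistent.

{code:java}
/**
 * Hypothetical sketch of a space-conditioned delay policy.
 * Names are illustrative only and are not existing HDFS code.
 */
public class DelayedDeletionPolicy {
  private final long delayWindowMs;        // how long to hold replicas before deleting
  private final double minFreeSpaceRatio;  // e.g. 0.10: below this, the delay is skipped

  public DelayedDeletionPolicy(long delayWindowMs, double minFreeSpaceRatio) {
    this.delayWindowMs = delayWindowMs;
    this.minFreeSpaceRatio = minFreeSpaceRatio;
  }

  /**
   * Decide whether a replica scheduled for deletion at scheduledTimeMs may be
   * physically removed at nowMs. Note the inconsistency: when free space is
   * low, the delay window is ignored entirely, which is exactly when an
   * accidental mass deletion hurts the most.
   */
  public boolean mayDeleteNow(long scheduledTimeMs, long nowMs,
                              long freeBytes, long capacityBytes) {
    double freeRatio = (double) freeBytes / capacityBytes;
    if (freeRatio < minFreeSpaceRatio) {
      return true;  // protection silently disabled under space pressure
    }
    return nowMs - scheduledTimeMs >= delayWindowMs;
  }
}
{code}

For example, with {{minFreeSpaceRatio = 0.10}}, a node running at 95% 
utilization would delete immediately, so the clusters most likely to be hurt by 
an accidental mass deletion are the ones that get no protection.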

Also, how does one restore the delayed blocks, or expedite their deletion to 
free up storage?
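
As a sketch of the two operations that question implies, something like the 
following would be needed. Again, every name here ({{DelayedDeletionQueue}}, 
{{restore}}, {{expedite}}) is hypothetical and is not an existing HDFS API.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical holding area for replicas pending deletion.
 * Illustrative only; nothing here corresponds to existing HDFS code.
 */
public class DelayedDeletionQueue {
  /** Block id -> wall-clock time (ms) at which physical deletion is allowed. */
  private final Map<Long, Long> pending = new ConcurrentHashMap<>();

  /** Schedule a replica for deletion after the configured delay window. */
  public void schedule(long blockId, long nowMs, long delayWindowMs) {
    pending.put(blockId, nowMs + delayWindowMs);
  }

  /** "Restore": an administrator cancels the pending deletion entirely. */
  public boolean restore(long blockId) {
    return pending.remove(blockId) != null;
  }

  /** "Expedite": make the replica deletable immediately to free up storage. */
  public boolean expedite(long blockId) {
    return pending.replace(blockId, 0L) != null;
  }

  /** True if the replica's delay window has elapsed (or was expedited). */
  public boolean mayDeleteNow(long blockId, long nowMs) {
    Long deadline = pending.get(blockId);
    return deadline != null && nowMs >= deadline;
  }
}
{code}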


> Add the ability to delay replica deletion for a period of time
> --------------------------------------------------------------
>
>                 Key: HDFS-8193
>                 URL: https://issues.apache.org/jira/browse/HDFS-8193
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: namenode
>    Affects Versions: 2.7.0
>            Reporter: Aaron T. Myers
>            Assignee: Zhe Zhang
>
> When doing maintenance on an HDFS cluster, users may be concerned about the 
> possibility of administrative mistakes or software bugs deleting replicas of 
> blocks that cannot easily be restored. It would be handy if HDFS could be 
> made to optionally not delete any replicas for a configurable period of time.


