[
https://issues.apache.org/jira/browse/HADOOP-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14726693#comment-14726693
]
Weiwei Yang commented on HADOOP-12371:
--------------------------------------
[~xyao] thanks. However, what if I simply want to wipe out the trash? On my
testing cluster, which has relatively small storage, I am running jobs that
generate data and delete it over and over again; the trash easily blows up the
storage and my HDFS becomes unavailable. Does it make sense to provide a force
option to clean up the trash? e.g.
hadoop fs -expunge -f
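Until such a force flag exists, a common manual workaround is to delete the trash directory directly with -skipTrash (a sketch only: the -f flag above is a proposal rather than an existing option, and the path below assumes the default per-user trash location; these commands require a running HDFS cluster):

```shell
# Empty the current user's trash immediately, bypassing the trash
# mechanism so the files are deleted rather than re-trashed.
# Assumes the default trash location under the user's HDFS home directory.
hadoop fs -rm -r -skipTrash /user/$USER/.Trash
```

Unlike -expunge, this does not wait for the checkpoint interval, so it frees space right away, but it removes all trashed files unconditionally.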
> hadoop fs -expunge is not able to remove trash
> ----------------------------------------------
>
> Key: HADOOP-12371
> URL: https://issues.apache.org/jira/browse/HADOOP-12371
> Project: Hadoop Common
> Issue Type: Bug
> Components: trash
> Affects Versions: 2.7.1
> Reporter: Weiwei Yang
> Labels: namenode, trash
>
> After HADOOP-8689, hadoop fs -expunge no longer seems to work. It returns the
> following message in the shell output: The configured checkpoint interval is 0
> minutes. Using an interval of 360 minutes that is used for deletion instead.
> Never removing the trash files.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)