[ https://issues.apache.org/jira/browse/HADOOP-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14718131#comment-14718131 ]
Xiaoyu Yao commented on HADOOP-12358:
-------------------------------------
Thanks [~aw] for the feedback.
bq. doing this will break large jobs that remove their working directories
prior to execution.
The feature is disabled by default, so only admins who explicitly enable it
need to weigh that trade-off. Also, the feature is exposed only via FSShell,
so MR jobs that use the delete API will not be affected.
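As an illustrative sketch of how an admin might enable the check: the property name, default, and the `-safely` shell option below follow the attached patches and could still change before commit.

```xml
<!-- Hypothetical core-site.xml snippet; names follow the HADOOP-12358
     patches and are not final. -->
<property>
  <name>hadoop.shell.safely.delete.limit.num.files</name>
  <value>100</value>
  <description>Used by the -safely option of the FSShell -rm command to
    avoid accidental deletion of large directories. When set, -rm asks for
    confirmation if the number of files to be deleted exceeds this limit.
  </description>
</property>
```

With this in place, something like `hadoop fs -rm -r -safely /large/dir` would count the files first and prompt for confirmation before deleting; invocations without the option, and programmatic deletes through the FileSystem API, would behave exactly as before.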
bq. I can't think of a single other file system that has this limitation across
the multitude of operating systems I've worked on. It will definitely surprise
users. Given that there are plenty of other ways to protect against users
making a mistake (snapshots, trash) and the countless ways to work around it
even when it is turned on, the risk/reward isn't really there.
For the two reasons above, the risk is minimal. The feature is useful because
it will reduce the number of cases where admins inadvertently delete large
amounts of data. It doesn't prevent every mistake a user can make, but it will
prevent some of them, and that alone is worth the reward.
> FSShell should prompt before deleting directories bigger than a configured
> size
> -------------------------------------------------------------------------------
>
> Key: HADOOP-12358
> URL: https://issues.apache.org/jira/browse/HADOOP-12358
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Attachments: HADOOP-12358.00.patch, HADOOP-12358.01.patch,
> HADOOP-12358.02.patch, HADOOP-12358.03.patch
>
>
> We have seen many cases of customers inadvertently deleting data with
> -skipTrash. FSShell should prompt the user if the size of the data or the
> number of files being deleted exceeds a configured threshold, even when
> -skipTrash is used.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)