[ https://issues.apache.org/jira/browse/HADOOP-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14717862#comment-14717862 ]

Allen Wittenauer commented on HADOOP-12358:
-------------------------------------------

Actually, now that I think about it, doing this will break large jobs that 
remove their working directories prior to execution.

At this point, I'm leaning towards just a flat-out -1.  I can't think of a 
single other file system that has this limitation across the multitude of 
operating systems I've worked on.  It will definitely surprise users.  Given 
that there are plenty of other ways to protect against users making a mistake 
(snapshots, trash) and the countless ways to work around this check even when 
it is turned on, the risk/reward isn't really there. 

> FSShell should prompt before deleting directories bigger than a configured 
> size
> -------------------------------------------------------------------------------
>
>                 Key: HADOOP-12358
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12358
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>            Reporter: Xiaoyu Yao
>            Assignee: Xiaoyu Yao
>         Attachments: HADOOP-12358.00.patch, HADOOP-12358.01.patch, 
> HADOOP-12358.02.patch, HADOOP-12358.03.patch
>
>
> We have seen many cases of customers deleting data inadvertently with 
> -skipTrash. The FSShell should prompt the user if the size of the data or 
> the number of files being deleted is bigger than a threshold, even when 
> -skipTrash is used.
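For reference, the guard the issue proposes could be sketched roughly as below. This is a hypothetical illustration, not the attached patch: it uses plain java.nio to stand in for HDFS's ContentSummary, and the threshold names and values are made up for the example.

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

// Hypothetical sketch of the proposed rm guard: before deleting, sum the
// size and file count under the target and require confirmation when either
// exceeds a configured threshold. Names here are illustrative, not Hadoop's.
public class DeleteGuard {
    static final long SIZE_THRESHOLD_BYTES = 100L * 1024 * 1024; // assumed 100 MB
    static final long FILE_COUNT_THRESHOLD = 1000;               // assumed

    /** Returns {totalBytes, fileCount} for everything under root. */
    static long[] summarize(Path root) throws IOException {
        final long[] totals = new long[2];
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                totals[0] += attrs.size();
                totals[1]++;
                return FileVisitResult.CONTINUE;
            }
        });
        return totals;
    }

    /** True when the delete should pause for interactive confirmation. */
    static boolean needsConfirmation(long bytes, long files) {
        return bytes > SIZE_THRESHOLD_BYTES || files > FILE_COUNT_THRESHOLD;
    }

    public static void main(String[] args) throws IOException {
        Path target = Paths.get(args[0]);
        long[] t = summarize(target);
        if (needsConfirmation(t[0], t[1])) {
            System.out.printf("Delete %d files (%d bytes) under %s? (y/N) ",
                    t[1], t[0], target);
            // read stdin here and abort unless the user answers 'y'
        }
        // otherwise proceed with the delete
    }
}
```

As the comment above notes, a check like this is easy to bypass (delete subtrees one at a time, or script the confirmation away), which is part of the risk/reward argument against it.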



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
