[ https://issues.apache.org/jira/browse/HADOOP-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14717403#comment-14717403 ]

Xiaoyu Yao commented on HADOOP-12358:
-------------------------------------

bq. Finally, to tie it back to your comment, right now there is no OOM (or 
partial delete) since the client just calls the single RPC and does not need to 
enumerate the directory. With this patch, it would. This would be a regression 
where a client with a small heap now cannot delete a large directory.

[~andrew.wang], this is not the case for HDFS. The default 
FileSystem#getContentSummary does the recursion on the client side, but the 
HDFS implementation in DistributedFileSystem#getContentSummary does not. It is 
a single RPC, like DistributedFileSystem#delete, with the recursion done on 
the NN side.

> FSShell should prompt before deleting directories bigger than a configured 
> size
> -------------------------------------------------------------------------------
>
>                 Key: HADOOP-12358
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12358
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>            Reporter: Xiaoyu Yao
>            Assignee: Xiaoyu Yao
>         Attachments: HADOOP-12358.00.patch, HADOOP-12358.01.patch, 
> HADOOP-12358.02.patch, HADOOP-12358.03.patch
>
>
> We have seen many cases of customers deleting data inadvertently with 
> -skipTrash. The FSShell should prompt the user if the size of the data or 
> the number of files being deleted exceeds a configured threshold, even when 
> -skipTrash is being used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)