[ https://issues.apache.org/jira/browse/HDFS-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12771084#action_12771084 ]

gary murry commented on HDFS-740:
---------------------------------

Steps to Reproduce:
1) Make sure fs.trash.interval in core-site.xml is set to a positive number
2) Copy a large file to your HDFS
3) Set a low space quota on the directory
4) Do an rm of the large file
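The steps above can be sketched with 0.20-era CLI commands; the paths, file names, and quota size below are illustrative, and this assumes a running cluster with trash enabled:

```shell
# Repro sketch (hypothetical paths/sizes). Prerequisite: fs.trash.interval > 0 in core-site.xml.

# 2) Copy a large file into HDFS
hadoop fs -put big.dat /user/alice/big.dat

# 3) Set a space quota smaller than the file, so the copy into
#    /user/alice/.Trash cannot be created
hadoop dfsadmin -setSpaceQuota 1m /user/alice

# 4) Remove the file; the move to trash fails, yet the file is deleted
hadoop fs -rm /user/alice/big.dat
```

Because trash lives under the user's home directory (/user/alice/.Trash here), the same quota that the file already counts against also blocks the trash copy, which is how the delete can proceed without a recoverable copy.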

Result: 
A message comes up saying that the directory could not be created in the
trash, but that the file was deleted anyway.

Expected:
If the file fails to move to trash, then it should not be deleted.

> rm and rmr can accidentally delete user's data
> ----------------------------------------------
>
>                 Key: HDFS-740
>                 URL: https://issues.apache.org/jira/browse/HDFS-740
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.20.1
>            Reporter: gary murry
>
> With trash turned on, if a user is over his quota and does a rm (or rmr), the 
> file is deleted without a copy being placed in the trash.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
