[ https://issues.apache.org/jira/browse/HDFS-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969501#action_12969501 ]
Alan Gates commented on HDFS-1495:
----------------------------------
This seems wrong to me. The fact that rm is implemented as a move underneath
is not important to the user; the user expects certain semantics from rm.
HDFS has claimed that it follows POSIX semantics, which, as far as I can tell,
make no allowance for whether the data is actually removed or moved to a trash
directory. Further, the fact that rm requires different permissions depending
on whether a trash directory is in use is a broken and confusing semantic.
> HDFS does not properly check permissions of files in a directory when doing rmr
> -------------------------------------------------------------------------------
>
> Key: HDFS-1495
> URL: https://issues.apache.org/jira/browse/HDFS-1495
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 0.20.2
> Reporter: Alan Gates
>
> In POSIX file semantics, the ability to remove a file is determined
> by whether the user has write permissions on the directory containing the
> file. However, to delete recursively (rm -r) the user must have write
> permissions in all directories being removed. Thus if you have a directory
> structure like /a/b/c and a user has write permissions on a but not on b,
> then he is not allowed to do 'rm -r b'. This is because he does not have
> permissions to remove c, so the rm of b fails, even though he has permission
> to remove b.
> However, 'hadoop fs -rmr b' removes both b and c in this case. It should
> instead fail and return an error message saying the user does not have
> permission to remove c. 'hadoop fs -rmr c' correctly fails.