[
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946049#comment-15946049
]
David Tucker edited comment on HDFS-11557 at 3/28/17 10:02 PM:
---------------------------------------------------------------
Indeed, I am able to reproduce this with an internal Snakebite-like client and
with the regular client:
{code:none}
>>> import pydoofus
>>> super = pydoofus.namenode.v9.Client('namenode', 8020,
...                                     auth={'effective_user': 'hdfs'})
>>> client = pydoofus.namenode.v9.Client('namenode', 8020,
...                                      auth={'effective_user': 'nobody'})
>>> super.mkdirs('/test', 0777)
True
>>> client.mkdirs('/test/empty', 0222)
True
>>> client.get_listing('/test/empty')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 666, in get_listing
    self.invoke('getListing', request, response)
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 490, in invoke
    blob = self.channel.receive()
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 310, in receive
    raise exceptions.create_exception(err_type, err_msg, call_id, err_code)
pydoofus.exceptions.AccessControlException: ERROR_APPLICATION: Permission denied: user=nobody, access=READ_EXECUTE, inode="/test/empty":nobody:supergroup:d-w--w--w-
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1686)
    at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:76)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4486)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:999)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:634)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>>> client.delete('/test/empty', can_recurse=True)
True
{code}
And with the regular client:
{code:none}
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 777 /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /
Found 9 items
drwxrwxrwx   - yarn   hadoop          0 2017-03-27 10:20 /app-logs
drwxr-xr-x   - hdfs   hdfs            0 2017-03-27 10:20 /apps
drwxr-xr-x   - yarn   hadoop          0 2017-03-27 10:20 /ats
drwxr-xr-x   - hdfs   hdfs            0 2017-03-27 10:20 /hdp
drwxr-xr-x   - mapred hdfs            0 2017-03-27 10:20 /mapred
drwxrwxrwx   - mapred hadoop          0 2017-03-27 10:20 /mr-history
drwxrwxrwx   - hdfs   hdfs            0 2017-03-28 14:55 /test
drwxrwxrwx   - hdfs   hdfs            0 2017-03-28 09:21 /tmp
drwxr-xr-x   - hdfs   hdfs            0 2017-03-28 09:21 /user
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 222 /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test/empty
ls: Permission denied: user=ambari-qa, access=READ_EXECUTE, inode="/test/empty":ambari-qa:hdfs:d-w--w--w-
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -rm -r /test/empty
17/03/28 14:57:45 INFO fs.TrashPolicyDefault: Moved: 'hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/test/empty' to trash at: hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/user/ambari-qa/.Trash/Current/test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test
[ambari-qa@rketcherside-hdponlynext-1 ~]$
{code}
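For completeness, the same behavior should also be reachable through the stock Java API. What follows is a rough, untested sketch rather than a verified repro: it assumes an unsecured cluster reachable via the default {{Configuration}}, that the login user is the HDFS superuser, and the class name {{Hdfs11557Repro}} is made up for illustration.
{code:java}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical driver class; not part of any Hadoop artifact.
public class Hdfs11557Repro {
  public static void main(String[] args) throws Exception {
    final Configuration conf = new Configuration();
    final Path parent = new Path("/test");
    final Path dir = new Path("/test/empty");

    // As the login (super) user: stage a world-writable parent.
    FileSystem superFs = FileSystem.get(conf);
    superFs.mkdirs(parent);
    superFs.setPermission(parent, new FsPermission((short) 0777));

    // As an unprivileged user: create a write-only directory under it.
    UserGroupInformation nobody = UserGroupInformation.createRemoteUser("nobody");
    nobody.doAs((PrivilegedExceptionAction<Void>) () -> {
      FileSystem fs = FileSystem.get(conf);
      fs.mkdirs(dir);
      fs.setPermission(dir, new FsPermission((short) 0222)); // d-w--w--w-

      try {
        fs.listStatus(dir); // expected to throw: listing needs READ_EXECUTE
      } catch (AccessControlException expected) {
        System.out.println("listing denied: " + expected.getMessage());
      }

      // The bug: the recursive delete succeeds anyway.
      System.out.println("recursive delete: " + fs.delete(dir, true));
      return null;
    });
  }
}
{code}
As in the pydoofus transcript above, one superuser connection stages a world-writable /test, and a second connection as {{nobody}} creates the write-only directory, fails to list it, and then deletes it recursively anyway.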
> Empty directories may be recursively deleted without being listable
> -------------------------------------------------------------------
>
> Key: HDFS-11557
> URL: https://issues.apache.org/jira/browse/HDFS-11557
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 2.7.3
> Reporter: David Tucker
> Assignee: Chen Liang
>
> To reproduce, create a directory without read and/or execute permissions
> (e.g. 0666, 0333, or 0222), then call delete on it with can_recurse=True.
> Note that the delete succeeds even though the client is unable to check the
> directory for emptiness and therefore cannot otherwise know that all of its
> children are deletable.
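For concreteness, the mode arithmetic behind the denial can be double-checked with Hadoop's public permission types. A minimal sketch ({{ModeCheck}} is a made-up name):
{code:java}
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class ModeCheck {
  public static void main(String[] args) {
    // 0222 renders as d-w--w--w-: write-only for owner, group, and other.
    FsPermission mode = new FsPermission((short) 0222);

    // Listing a directory requires READ_EXECUTE; WRITE alone does not imply
    // it, which is why getListing is denied in the traces above.
    System.out.println(mode.getUserAction());                                // WRITE
    System.out.println(mode.getUserAction().implies(FsAction.READ_EXECUTE)); // false
  }
}
{code}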