[ https://issues.apache.org/jira/browse/HDFS-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332209#comment-16332209 ]
Hajime Osako commented on HDFS-13005:
-------------------------------------

Somehow this can't be reproduced with newer HDP, like below:

{code}
[root@sandbox-hdp ~]# hadoop version
Hadoop 2.7.3.2.6.3.0-235
Subversion g...@github.com:hortonworks/hadoop.git -r 45bfd33bba8acadfa0e6024c80981c023b28d454
Compiled by jenkins on 2017-10-30T02:35Z
Compiled with protoc 2.5.0
From source with checksum cd1a4a466ef450f547c279989f3aa3
This command was run using /usr/hdp/2.6.3.0-235/hadoop/hadoop-common-2.7.3.2.6.3.0-235.jar
{code}

> HttpFs checks subdirectories ACL status when LISTSTATUS is used
> ---------------------------------------------------------------
>
>                 Key: HDFS-13005
>                 URL: https://issues.apache.org/jira/browse/HDFS-13005
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: httpfs
>    Affects Versions: 2.7.3
>            Reporter: Hajime Osako
>            Priority: Minor
>
> The HttpFs LISTSTATUS call fails if a subdirectory uses ACLs, because in
> org.apache.hadoop.fs.http.server.FSOperations.StatusPairs#StatusPairs it
> gets the list of child objects and checks each child's ACL status one by
> one, rather than checking only the target directory's ACL.
> Would like to know if this is intentional.
> {code}
> /*
>  * For each FileStatus, attempt to acquire an AclStatus. If the
>  * getAclStatus throws an exception, we assume that ACLs are turned
>  * off entirely and abandon the attempt.
>  */
> boolean useAcls = true; // Assume ACLs work until proven otherwise
> ...
> {code}
> Reproduce steps:
> {code}
> # NOTE: The test user "admin" has full access to /acltest
> [root@sandbox ~]# hdfs dfs -ls -R /acltest
> drwxrwx---+  - hdfs test   0 2018-01-09 08:44 /acltest/subdir
> -rwxrwx---   1 hdfs test 647 2018-01-09 08:44 /acltest/subdir/derby.log
> drwxr-xr-x   - hdfs test   0 2018-01-09 09:15 /acltest/subdir2
> [root@sandbox ~]# hdfs dfs -getfacl /acltest/subdir
> # file: /acltest/subdir
> # owner: hdfs
> # group: test
> user::rwx
> user:hdfs:rw-
> group::r-x
> mask::rwx
> other::---
>
> # WebHDFS works
> [root@sandbox ~]# sudo -u admin curl --negotiate -u : "http://`hostname -f`:50070/webhdfs/v1/acltest?op=LISTSTATUS"
> {"FileStatuses":{"FileStatus":[
> {"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":1,"fileId":79057,"group":"test","length":0,"modificationTime":1515487493078,"owner":"hdfs","pathSuffix":"subdir","permission":"770","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
> {"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":79059,"group":"test","length":0,"modificationTime":1515489337849,"owner":"hdfs","pathSuffix":"subdir2","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
> ]}}
>
> # But not via HttpFs
> [root@sandbox ~]# sudo -u admin curl --negotiate -u : "http://`hostname -f`:14000/webhdfs/v1/acltest?op=LISTSTATUS"
> {"RemoteException":{"message":"Permission denied: user=admin, access=EXECUTE, inode=\"\/acltest\/subdir\":hdfs:test:drwxrwx---","exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException"}}
>
> # HDFS audit log
> [root@sandbox ~]# tail /var/log/hadoop/hdfs/hdfs-audit.log | grep -w admin
> 2018-01-09 23:09:24,362 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:KERBEROS) ip=/172.18.0.2 cmd=listStatus src=/acltest dst=null perm=null proto=webhdfs
> 2018-01-09 23:09:31,937 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via httpfs/sandbox.hortonworks....@example.com (auth:KERBEROS) ip=/172.18.0.2 cmd=listStatus src=/acltest dst=null perm=null proto=rpc
> 2018-01-09 23:09:31,978 INFO FSNamesystem.audit: allowed=false ugi=admin (auth:PROXY) via httpfs/sandbox.hortonworks....@example.com (auth:KERBEROS) ip=/172.18.0.2 cmd=getAclStatus src=/acltest/subdir dst=null perm=null proto=rpc
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
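The failure mode the issue describes can be sketched outside Hadoop. This is a simplified illustration, not the actual FSOperations.StatusPairs code: the `getAclBit` helper, the restricted-path set, and the local `AccessControlException` class are invented stand-ins for `getAclStatus` and the HDFS permission check.

{code}
import java.util.Arrays;
import java.util.List;
import java.util.Set;

public class StatusPairsSketch {

    // Stand-in for org.apache.hadoop.security.AccessControlException.
    static class AccessControlException extends RuntimeException {
        AccessControlException(String msg) { super(msg); }
    }

    // Hypothetical per-child ACL probe: throws for a path the caller may not
    // traverse, mimicking getAclStatus() on an ACL-restricted subdirectory.
    static boolean getAclBit(Set<String> restricted, String path) {
        if (restricted.contains(path)) {
            throw new AccessControlException(
                "Permission denied: access=EXECUTE, inode=" + path);
        }
        return false;
    }

    // Pattern reported in HDFS-13005: the listing probes every child's ACL
    // status, so one restricted child fails the whole LISTSTATUS response.
    static List<String> listStatusViaHttpFs(Set<String> restricted,
                                            List<String> children) {
        for (String child : children) {
            getAclBit(restricted, child);   // may throw -> whole listing fails
        }
        return children;
    }

    // What plain listStatus needs: only read access on the parent directory,
    // which is why the same call succeeds through WebHDFS.
    static List<String> listStatusViaWebHdfs(List<String> children) {
        return children;
    }

    public static void main(String[] args) {
        List<String> children = Arrays.asList("/acltest/subdir", "/acltest/subdir2");
        Set<String> restricted = Set.of("/acltest/subdir");

        System.out.println(listStatusViaWebHdfs(children));  // succeeds
        try {
            listStatusViaHttpFs(restricted, children);       // fails on subdir
        } catch (AccessControlException e) {
            System.out.println("RemoteException: " + e.getMessage());
        }
    }
}
{code}

The sketch matches the audit log above: listStatus on /acltest is allowed for both paths, but the extra per-child getAclStatus on /acltest/subdir is the call that is denied when going through HttpFs.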