[
https://issues.apache.org/jira/browse/HDFS-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17040380#comment-17040380
]
Hudson commented on HDFS-15165:
-------------------------------
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17968 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/17968/])
HDFS-15165. In Du missed calling getAttributesProvider. Contributed by
(inigoiri: rev ec7507162c7e23c0cd251e09b6be0030a500f1ca)
* (edit)
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* (edit)
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
> In Du missed calling getAttributesProvider
> ------------------------------------------
>
> Key: HDFS-15165
> URL: https://issues.apache.org/jira/browse/HDFS-15165
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Bharat Viswanadham
> Assignee: Bharat Viswanadham
> Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-15165.00.patch, HDFS-15165.01.patch,
> example-test.patch
>
>
> HDFS-12130 changed the behavior of the du command.
> It merged both the permission check and the computation into a single step.
> With this change, where INodeAttributes are required, the code just uses
> inode.getAttributes(). But when an attribute provider class is configured, we
> should call the configured attribute provider object to get the INodeAttributes
> and use the returned INodeAttributes during checkPermission.
> So, after HDFS-12130, the code looks as below.
>
> {code:java}
> byte[][] localComponents = {inode.getLocalNameBytes()};
> INodeAttributes[] iNodeAttr = {inode.getSnapshotINode(snapshotId)};
> enforcer.checkPermission(
>     fsOwner, supergroup, callerUgi,
>     iNodeAttr, // single inode attr in the array
>     new INode[]{inode}, // single inode in the array
>     localComponents, snapshotId,
>     null, -1, // this will skip checkTraverse() because
>               // not checking ancestor here
>     false, null, null,
>     access, // the target access to be checked against the inode
>     null, // passing null sub access avoids checking children
>     false);
> {code}
>
> If we observe the 2nd line, it is missing the check: if an attribute provider
> class is configured, that provider should be used to get the INodeAttributes.
> Because of this, when an HDFS path is managed by Sentry and the
> INodeAttributeProvider class is configured with SentryINodeAttributeProvider,
> the code does not get the SentryINodeAttributeProvider object and does not use
> its AclFeature if any ACLs are set. This causes an AccessControlException when
> the du command is run against an HDFS path managed by Sentry.
>
> {code:java}
> [root@gg-620-1 ~]# hdfs dfs -du /dev/edl/sc/consumer/lpfg/str/edf/abc/
> du: Permission denied: user=systest, access=READ_EXECUTE,
> inode="/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging":impala:hive:drwxrwx--x{code}
--
This message was sent by Atlassian Jira
(v8.3.4#803005)