[
https://issues.apache.org/jira/browse/HDFS-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17039643#comment-17039643
]
Bharat Viswanadham commented on HDFS-15165:
-------------------------------------------
Thank you, [~sodonnell], for the review.
{quote}I believe you need to pass the iNodeAttr into the AccessControlException
as the first parameter rather than inode at the bottom of the method.
Otherwise, if there is an access exception, the log message will contain the
HDFS permissions for the inode, rather than the provider permissions, which
would be confusing eg:
{quote}
Yes, I agree. Done.
I used the test case you provided, thanks for the test.
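The fix boils down to consulting the configured attribute provider instead of reading the inode's attributes directly. A minimal, self-contained sketch of that fallback pattern (all class and method names here are illustrative stand-ins, not the real HDFS APIs):

```java
// Illustrative sketch only: stand-in types, not the actual HDFS classes.
// Shows the pattern the fix follows: when an attribute provider is
// configured, attributes must come from the provider, not from the inode.
public class ProviderFallbackSketch {
    // Stand-in for org.apache.hadoop.hdfs.server.namenode.INodeAttributes
    interface Attributes { String owner(); }

    // Stand-in for an INodeAttributeProvider
    // (e.g. SentryINodeAttributeProvider)
    interface AttributeProvider {
        Attributes getAttributes(byte[][] pathComponents, Attributes inodeAttrs);
    }

    // The buggy code used the inode's own attributes unconditionally;
    // the fix consults the configured provider first.
    static Attributes resolve(AttributeProvider provider,
                              byte[][] pathComponents,
                              Attributes inodeAttrs) {
        if (provider != null) {
            return provider.getAttributes(pathComponents, inodeAttrs);
        }
        return inodeAttrs; // no provider configured: fall back to inode attrs
    }

    public static void main(String[] args) {
        Attributes hdfsAttrs = () -> "impala";
        AttributeProvider sentry = (path, attrs) -> () -> "sentry-managed";

        // With a provider configured, its attributes win.
        System.out.println(resolve(sentry, new byte[0][], hdfsAttrs).owner());
        // Without one, the inode's own attributes are used.
        System.out.println(resolve(null, new byte[0][], hdfsAttrs).owner());
    }
}
```

With a provider configured this prints "sentry-managed"; without one it prints "impala", which mirrors why the du path saw HDFS permissions instead of Sentry-provided ones.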
> In Du missed calling getAttributesProvider
> ------------------------------------------
>
> Key: HDFS-15165
> URL: https://issues.apache.org/jira/browse/HDFS-15165
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Bharat Viswanadham
> Assignee: Bharat Viswanadham
> Priority: Major
> Attachments: HDFS-15165.00.patch, HDFS-15165.01.patch,
> example-test.patch
>
>
> HDFS-12130 changed the behavior of the du command:
> it merged the permission check and the computation into a single step.
> During this change, wherever INodeAttributes are required, the code simply
> uses inode.getAttributes(). But when an attribute provider class is
> configured, we should call the configured attribute provider object to get
> the INodeAttributes and use the returned INodeAttributes during
> checkPermission.
> So, after HDFS-12130, the code is as below:
>
> {code:java}
> byte[][] localComponents = {inode.getLocalNameBytes()};
> INodeAttributes[] iNodeAttr = {inode.getSnapshotINode(snapshotId)};
> enforcer.checkPermission(
> fsOwner, supergroup, callerUgi,
> iNodeAttr, // single inode attr in the array
> new INode[]{inode}, // single inode in the array
> localComponents, snapshotId,
> null, -1, // this will skip checkTraverse() because
> // not checking ancestor here
> false, null, null,
> access, // the target access to be checked against the inode
> null, // passing null sub access avoids checking children
> false);
> {code}
>
> If we look at the second line, it is missing the check: when an attribute
> provider class is configured, that provider should be used to get the
> INodeAttributes. Because of this, when an HDFS path is managed by Sentry and
> the attribute provider class is configured with SentryINodeAttributeProvider,
> the SentryINodeAttributeProvider object is never consulted, so any AclFeature
> it supplies (if ACLs are set) is not used. This causes an
> AccessControlException when the du command is run against an HDFS path
> managed by Sentry.
>
> {code:java}
> [root@gg-620-1 ~]# hdfs dfs -du /dev/edl/sc/consumer/lpfg/str/edf/abc/
> du: Permission denied: user=systest, access=READ_EXECUTE,
> inode="/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging":impala:hive:drwxrwx--x{code}
--
This message was sent by Atlassian Jira
(v8.3.4#803005)