[
https://issues.apache.org/jira/browse/HDFS-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Bharat Viswanadham updated HDFS-15165:
--------------------------------------
Description:
HDFS-12130 changed the behavior of the DU command:
it merged the permission check and the computation into a single step.
During this change, where INodeAttributes are needed, the code simply uses
inode.getAttributes(). But when an attribute provider class is configured, we
should call the configured attribute provider object to get the INodeAttributes
and use those returned INodeAttributes during checkPermission.
After HDFS-12130, the code looks like this:
{code:java}
byte[][] localComponents = {inode.getLocalNameBytes()};
INodeAttributes[] iNodeAttr = {inode.getSnapshotINode(snapshotId)};
enforcer.checkPermission(
    fsOwner, supergroup, callerUgi,
    iNodeAttr, // single inode attr in the array
    new INode[]{inode}, // single inode in the array
    localComponents, snapshotId,
    null, -1, // this will skip checkTraverse() because we are
              // not checking the ancestor here
    false, null, null,
    access, // the target access to be checked against the inode
    null, // passing null sub access avoids checking children
    false);
{code}
If we look at the 2nd line, it is missing the check for a configured attribute
provider class; when one is configured, it should be used to obtain the
INodeAttributes. Because of this, when an HDFS path is managed by Sentry and the
attribute provider class is configured as SentryINodeAttributesProvider, the
code never obtains the SentryINodeAttributesProvider object and therefore does
not use its AclFeature even when ACLs are set. This causes an
AccessControlException when the du command is run against an HDFS path managed
by Sentry.
{code:java}
[root@gg-620-1 ~]# hdfs dfs -du /dev/edl/sc/consumer/lpfg/str/edf/abc/
du: Permission denied: user=systest, access=READ_EXECUTE,
inode="/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging":impala:hive:drwxrwx--x{code}
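The provider-aware fallback described above can be sketched with simplified stand-ins. Attrs, Provider, and resolve below are illustrative names, not the real HDFS INodeAttributeProvider API; the point is only the lookup order: ask the configured provider first, fall back to the inode's own attributes.

{code:java}
import java.util.Optional;

public class AttributeLookupSketch {
  /** Stand-in for INodeAttributes (owner and permission/ACL string only). */
  record Attrs(String owner, String acl) {}

  /** Stand-in for a configured attribute provider, e.g. Sentry's. */
  interface Provider {
    Attrs getAttributes(String path, Attrs defaults);
  }

  /**
   * If a provider is configured, ask it for the attributes (it may attach
   * its own AclFeature); otherwise fall back to the inode's own attributes.
   */
  static Attrs resolve(Optional<Provider> provider, String path, Attrs inodeAttrs) {
    return provider.map(p -> p.getAttributes(path, inodeAttrs))
                   .orElse(inodeAttrs);
  }

  public static void main(String[] args) {
    Attrs raw = new Attrs("impala", "drwxrwx--x");
    // No provider configured: inode attributes are used as-is.
    System.out.println(resolve(Optional.empty(), "/dev/edl", raw));
    // Provider configured: its (possibly ACL-augmented) attributes win.
    Provider sentryLike = (p, d) -> new Attrs(d.owner(), d.acl() + "+acl");
    System.out.println(resolve(Optional.of(sentryLike), "/dev/edl", raw));
  }
}
{code}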
was:
HDFS-12130 changed the behavior of DU.
During that change, the code missed calling
getAttributesProvider().getAttributes() when a provider is configured.
Because of this, when Sentry is configured for an HDFS path and the attribute
provider class is set, the AclFeature from Sentry is missing, and running the
DU command on a Sentry-managed HDFS path results in an AccessControlException.
This Jira is to fix this issue.
{code:xml}
<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value>org.apache.sentry.hdfs.SentryINodeAttributesProvider</value>
</property>{code}
> In Du missed calling getAttributesProvider
> ------------------------------------------
>
> Key: HDFS-15165
> URL: https://issues.apache.org/jira/browse/HDFS-15165
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Bharat Viswanadham
> Assignee: Bharat Viswanadham
> Priority: Major
>
> HDFS-12130 changed the behavior of the DU command:
> it merged the permission check and the computation into a single step.
> During this change, where INodeAttributes are needed, the code simply uses
> inode.getAttributes(). But when an attribute provider class is configured, we
> should call the configured attribute provider object to get the
> INodeAttributes and use those returned INodeAttributes during checkPermission.
> After HDFS-12130, the code looks like this:
>
> {code:java}
> byte[][] localComponents = {inode.getLocalNameBytes()};
> INodeAttributes[] iNodeAttr = {inode.getSnapshotINode(snapshotId)};
> enforcer.checkPermission(
>     fsOwner, supergroup, callerUgi,
>     iNodeAttr, // single inode attr in the array
>     new INode[]{inode}, // single inode in the array
>     localComponents, snapshotId,
>     null, -1, // this will skip checkTraverse() because we are
>               // not checking the ancestor here
>     false, null, null,
>     access, // the target access to be checked against the inode
>     null, // passing null sub access avoids checking children
>     false);
> {code}
>
> If we look at the 2nd line, it is missing the check for a configured
> attribute provider class; when one is configured, it should be used to obtain
> the INodeAttributes. Because of this, when an HDFS path is managed by Sentry
> and the attribute provider class is configured as
> SentryINodeAttributesProvider, the code never obtains the
> SentryINodeAttributesProvider object and therefore does not use its
> AclFeature even when ACLs are set. This causes an AccessControlException when
> the du command is run against an HDFS path managed by Sentry.
>
> {code:java}
> [root@gg-620-1 ~]# hdfs dfs -du /dev/edl/sc/consumer/lpfg/str/edf/abc/
> du: Permission denied: user=systest, access=READ_EXECUTE,
> inode="/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging":impala:hive:drwxrwx--x{code}
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]