[jira] [Comment Edited] (HDFS-15165) In Du missed calling getAttributesProvider

2020-02-14 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17037016#comment-17037016
 ] 

Stephen O'Donnell edited comment on HDFS-15165 at 2/14/20 2:47 PM:
---

[~bharat] This is a good find. It is similar to a problem I encountered some time 
back, HDFS-14389. The change makes sense to me.

Looking at your modified method:

{code}
  void checkPermission(INode inode, int snapshotId, FsAction access)
      throws AccessControlException {
    try {
      byte[][] localComponents = inode.getPathComponents();
      INodeAttributes[] iNodeAttr = {getINodeAttrs(localComponents,
          localComponents.length - 1, inode, snapshotId)};

      AccessControlEnforcer enforcer = getAccessControlEnforcer();
      enforcer.checkPermission(
          fsOwner, supergroup, callerUgi,
          iNodeAttr, // single inode attr in the array
          new INode[]{inode}, // single inode in the array
          localComponents, snapshotId,
          null, -1, // this will skip checkTraverse() because
          // not checking ancestor here
          false, null, null,
          access, // the target access to be checked against the inode
          null, // passing null sub access avoids checking children
          false);
    } catch (AccessControlException ace) {
      throw new AccessControlException(
          toAccessControlString(inode, inode.getFullPathName(), access));
    }
  }
{code}

I believe you need to pass iNodeAttr[0], rather than inode, as the first parameter 
of toAccessControlString() at the bottom of the method. Otherwise, if there is an 
access exception, the log message will contain the HDFS permissions for the inode 
rather than the provider permissions, which would be confusing, e.g.:

{code}
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=other, access=READ_EXECUTE, 
inode="/user/authz":sodonnell:supergroup:drwx--
{code}

From that log message, which came from my unit test, the owner and group should be 
foo:bar (the provider permissions) rather than sodonnell:supergroup, which are the 
HDFS permissions.
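
Concretely, something like the following (only a sketch; it assumes 
toAccessControlString() has, or is given, an overload accepting INodeAttributes, 
and that iNodeAttr is declared before the try block so it is in scope in the 
catch):

{code:java}
} catch (AccessControlException ace) {
  // Build the message from the provider-resolved attributes (iNodeAttr[0])
  // rather than the raw inode, so the reported owner/group/mode match what
  // was actually enforced.
  throw new AccessControlException(
      toAccessControlString(iNodeAttr[0], inode.getFullPathName(), access));
}
{code}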

It would be good to add a test for this in TestInodeAttributeProvider. While 
reviewing the change I wrote a test that reproduces the issue, so I will share it 
here as a patch file and you can add it to your patch if you like.
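
For reference, the rough shape of such a test (an outline only, not the attached 
patch; it assumes the miniDFS cluster field and the attributes-provider wiring 
already present in TestInodeAttributeProvider):

{code:java}
// Outline only: exercise du's underlying call (getContentSummary) as a
// non-superuser against a path whose permissions come from the provider.
UserGroupInformation ugi = UserGroupInformation
    .createUserForTesting("other", new String[]{"othergroup"});
ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
  FileSystem fs = FileSystem.get(miniDFS.getConfiguration(0));
  // With the fix, the provider-supplied attributes (e.g. foo:bar) are the
  // ones checked and reported, not the raw HDFS inode permissions.
  fs.getContentSummary(new Path("/user/authz"));
  return null;
});
{code}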



[jira] [Comment Edited] (HDFS-15165) In Du missed calling getAttributesProvider

2020-02-13 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17036441#comment-17036441
 ] 

Bharat Viswanadham edited comment on HDFS-15165 at 2/13/20 7:12 PM:


I tried the suggested fix on the cluster.

The du command now works against an HDFS path managed by Sentry:
{code:java}
[root@gg-620-1 ~]# hdfs dfs -du /dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging
805306368  2147483648  /dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging/9f4f1d1b671d0714_1fbeb6b7
[root@gg-620-1 ~]# hdfs dfs -du -s /dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging
1073741824  2684354560  /dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging
[root@gg-620-1 ~]#
{code}



> In Du missed calling getAttributesProvider
> --
>
> Key: HDFS-15165
> URL: https://issues.apache.org/jira/browse/HDFS-15165
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-15165.00.patch
>
>
> HDFS-12130 changed the behavior of the du command.
> It merged the permission check and the size computation into a single step.
> As part of that change, where INodeAttributes are required, it simply used 
> inode.getAttributes(). But when an attribute provider class is configured, we 
> should call the configured provider object to get the INodeAttributes and use 
> the returned INodeAttributes during checkPermission.
> So, after HDFS-12130, the code looks as below.
>  
> {code:java}
> byte[][] localComponents = {inode.getLocalNameBytes()}; 
> INodeAttributes[] iNodeAttr = {inode.getSnapshotINode(snapshotId)};
> enforcer.checkPermission(
>  fsOwner, supergroup, callerUgi,
>  iNodeAttr, // single inode attr in the array
>  new INode[]{inode}, // single inode in the array
>  localComponents, snapshotId,
>  null, -1, // this will skip checkTraverse() because
>  // not checking ancestor here
>  false, null, null,
>  access, // the target access to be checked against the inode
>  null, // passing null sub access avoids checking children
>  false);
> {code}
>  
> If we look at the 2nd line, it is missing the check for whether an attribute 
> provider class is configured; if it is, that provider should be used to get the 
> INodeAttributes. Because of this, when an HDFS path is managed by Sentry and the 
> attribute provider class is configured as SentryINodeAttributeProvider, the 
> SentryINodeAttributeProvider object is not consulted and any AclFeature it 
> supplies is ignored when ACLs are set. This causes an AccessControlException 
> when the du command is run against an HDFS path managed by Sentry.
>  
> {code:java}
> [root@gg-620-1 ~]# hdfs dfs -du /dev/edl/sc/consumer/lpfg/str/edf/abc/
> du: Permission denied: user=systest, access=READ_EXECUTE, 
> inode="/dev/edl/sc/consumer/lpfg/str/lpfg_wrk/PRISMA_TO_ICERTIS_OUTBOUND_RM_MASTER/_impala_insert_staging":impala:hive:drwxrwx--x{code}
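
(For context: wiring in an attributes provider such as the Sentry one described 
above is purely a configuration change. The following is a minimal sketch, 
assuming the standard dfs.namenode.inode.attributes.provider.class property; the 
fully qualified Sentry class name is an assumption based on the description and 
is not verified.)

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;

// Sketch only: register an INodeAttributeProvider with the NameNode.
// The Sentry class name below is taken from the text above (package assumed).
Configuration conf = new HdfsConfiguration();
conf.set(DFSConfigKeys.DFS_NAMENODE_INODE_ATTRIBUTES_PROVIDER_KEY,
    "org.apache.sentry.hdfs.SentryINodeAttributeProvider");
{code}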



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org