[ 
https://issues.apache.org/jira/browse/HDFS-16259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425383#comment-17425383
 ] 

Stephen O'Donnell commented on HDFS-16259:
------------------------------------------

I think it can be argued both ways. Ideally, HDFS would have made 
AccessControlException final so it was clear what Ranger should do, but we 
cannot do that now, as it would break Ranger and any other plugins that use 
this interface.

The HDFS client currently unwraps specific exceptions, so the change you 
suggested above would need to be made in quite a few places, and it could 
also change what the client returns in some circumstances.

To me it seems safer to ensure that the access plugins' internal exceptions 
never reach the client, by catching them at the namenode.

There is already some code that does this in FSPermissionChecker:

{code}
  void checkPermission(INode inode, int snapshotId, FsAction access)
      throws AccessControlException {
    byte[][] pathComponents = inode.getPathComponents();
    INodeAttributes nodeAttributes = getINodeAttrs(pathComponents,
        pathComponents.length - 1, inode, snapshotId);
    try {
      INodeAttributes[] iNodeAttr = {nodeAttributes};
      AccessControlEnforcer enforcer = getAccessControlEnforcer();
      String opType = operationType.get();
      if (this.authorizeWithContext && opType != null) {
        INodeAttributeProvider.AuthorizationContext.Builder builder =
            new INodeAttributeProvider.AuthorizationContext.Builder();
        builder.fsOwner(fsOwner)
            .supergroup(supergroup)
            .callerUgi(callerUgi)
            .inodeAttrs(iNodeAttr) // single inode attr in the array
            .inodes(new INode[] { inode }) // single inode attr in the array
            .pathByNameArr(pathComponents)
            .snapshotId(snapshotId)
            .path(null)
            .ancestorIndex(-1)     // this will skip checkTraverse()
                                   // because not checking ancestor here
            .doCheckOwner(false)
            .ancestorAccess(null)
            .parentAccess(null)
            .access(access)        // the target access to be checked against
                                   // the inode
            .subAccess(null)       // passing null sub access avoids checking
                                   // children
            .ignoreEmptyDir(false)
            .operationName(opType)
            .callerContext(CallerContext.getCurrent());

        enforcer.checkPermissionWithContext(builder.build());
      } else {
        enforcer.checkPermission(
            fsOwner, supergroup, callerUgi,
            iNodeAttr, // single inode attr in the array
            new INode[]{inode}, // single inode in the array
            pathComponents, snapshotId,
            null, -1, // this will skip checkTraverse() because
            // not checking ancestor here
            false, null, null,
            access, // the target access to be checked against the inode
            null, // passing null sub access avoids checking children
            false);
      }
    } catch (AccessControlException ace) {
      throw new AccessControlException(
          toAccessControlString(nodeAttributes, inode.getFullPathName(),
              access));
    }
  }
{code}

The enforcer is also called from the method:

{code}
  void checkPermission(INodesInPath inodesInPath, boolean doCheckOwner,
      FsAction ancestorAccess, FsAction parentAccess, FsAction access,
      FsAction subAccess, boolean ignoreEmptyDir)
      throws AccessControlException {
{code}

This method does not catch it, so right now the behaviour is inconsistent 
across calls to the enforcer.
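The fix for the second method would be the same try/catch as the first. A minimal self-contained sketch of the catch-and-rethrow idea (class names here are stand-ins for the Hadoop/Ranger ones, and the plugin call is simulated): catching any AccessControlException subclass and re-throwing the base class guarantees only the base type ever crosses the RPC boundary.

{code:java}
// Stand-in for org.apache.hadoop.security.AccessControlException.
class AccessControlException extends Exception {
    AccessControlException(String msg) { super(msg); }
}

// Stand-in for a plugin-specific subclass, e.g. RangerAccessControlException.
class PluginAccessControlException extends AccessControlException {
    PluginAccessControlException(String msg) { super(msg); }
}

public class EnforcerWrapper {
    // Simulates an enforcer plugin that denies access with its own subclass.
    static void pluginCheck() throws AccessControlException {
        throw new PluginAccessControlException("denied by plugin policy");
    }

    // Catch any subclass and re-throw the base class, preserving the
    // message, so the subclass type never leaves the namenode.
    static void checkPermission() throws AccessControlException {
        try {
            pluginCheck();
        } catch (AccessControlException ace) {
            throw new AccessControlException(ace.getMessage());
        }
    }

    public static void main(String[] args) {
        try {
            checkPermission();
        } catch (AccessControlException ace) {
            System.out.println(ace.getClass().getSimpleName()
                + ": " + ace.getMessage());
        }
    }
}
{code}

Applying the same pattern around the enforcer call in the second checkPermission() would make the two call sites consistent.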

> Catch and re-throw sub-classes of AccessControlException thrown by any 
> permission provider plugins (eg Ranger)
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-16259
>                 URL: https://issues.apache.org/jira/browse/HDFS-16259
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Stephen O'Donnell
>            Assignee: Stephen O'Donnell
>            Priority: Major
>
> When a permission provider plugin is enabled (eg Ranger) there are some 
> scenarios where it can throw a sub-class of AccessControlException (eg 
> RangerAccessControlException). If this exception is allowed to propagate up 
> the stack, it can cause problems in the HDFS client when it unwraps the 
> remote exception containing the AccessControlException sub-class.
> Ideally, we should make AccessControlException final so it cannot be 
> sub-classed, but that would be a breaking change at this point. Therefore I 
> believe the safest thing to do, is to catch any AccessControlException that 
> comes out of the permission enforcer plugin, and re-throw an 
> AccessControlException instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
