[ https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151047#comment-16151047 ]

Yongjun Zhang commented on HDFS-12357:
--------------------------------------

Thanks [~chris.douglas] and [~manojg].

Sorry for the lengthy reply here:

{quote}
Would a filter implementation wrapping the configured, external attribute 
provider suffice?
{quote}
The current patch implements this logic inline (like an inlined version of a 
wrapper class in the C++ world). If we move this logic into a wrapper class 
instead, I can see some issues:

1. The wrapper needs to create two provider objects, one being the default 
(HDFS) provider and the other the external provider, and switch between the 
two (see the wrapper sketch after 2.b below). However, in the existing code, 
I don't see that a default provider object is always created. See 2.a below.

2. Currently there are two places that decide whether to consult the external 
attribute provider:
2.a.
{code}
  INodeAttributes getAttributes(INodesInPath iip)
      throws FileNotFoundException {
    INode node = FSDirectory.resolveLastINode(iip);
    int snapshot = iip.getPathSnapshotId();
    INodeAttributes nodeAttrs = node.getSnapshotINode(snapshot);
    if (attributeProvider != null) {
      // permission checking sends the full components array including the
      // first empty component for the root.  however file status
      // related calls are expected to strip out the root component according
      // to TestINodeAttributeProvider.
      byte[][] components = iip.getPathComponents();
      components = Arrays.copyOfRange(components, 1, components.length);
      nodeAttrs = attributeProvider.getAttributes(components, nodeAttrs);
    }
    return nodeAttrs;
  }
{code}
Here we have already got the attributes from HDFS, and then decide whether to 
overwrite them with the provider's data. The easiest approach is to check 
whether the user is a special user; if so, we don't ask for the provider's 
data at all. If we did this in a wrapper class, we would always have to fetch 
some attributes, which may or may not come from HDFS. That is not a clean 
implementation and may incur runtime cost.
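
For illustration, a minimal sketch of the inlined check (the isSpecialUser() 
helper is hypothetical; the actual patch may check the user differently):
{code}
  INodeAttributes getAttributes(INodesInPath iip)
      throws FileNotFoundException {
    INode node = FSDirectory.resolveLastINode(iip);
    int snapshot = iip.getPathSnapshotId();
    INodeAttributes nodeAttrs = node.getSnapshotINode(snapshot);
    // Bypass the external provider entirely for a special user and keep
    // the attributes already read from HDFS.
    if (attributeProvider != null && !isSpecialUser()) {
      byte[][] components = iip.getPathComponents();
      components = Arrays.copyOfRange(components, 1, components.length);
      nodeAttrs = attributeProvider.getAttributes(components, nodeAttrs);
    }
    return nodeAttrs;
  }
{code}
This way the provider call is skipped before any external lookup happens, 
which is the runtime saving mentioned above.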

2.b.
{code}
  @VisibleForTesting
  FSPermissionChecker getPermissionChecker(String fsOwner, String superGroup,
      UserGroupInformation ugi) throws AccessControlException {
    return new FSPermissionChecker(
        fsOwner, superGroup, ugi, attributeProvider);
  }
{code}
Here we need to pass either null or the configured external attributeProvider 
to the permission checker. If we move this logic into a wrapper class, we need 
an API on the wrapper that returns the real external provider or null, and 
pass its result to the "attributeProvider" parameter in the above code, like:
{code}
    return new FSPermissionChecker(
        fsOwner, superGroup, ugi, attributeProvider.getRealAttributeProvider());
{code}
We would need to add this getRealAttributeProvider() API to the base provider 
class, which is a bit weird because the API is only meaningful in the wrapper 
layer.
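
To make the trade-off concrete, here is a rough sketch of what such a wrapper 
might look like (the class name, constructor, and isSpecialUser() helper are 
all hypothetical, not part of the attached patches):
{code}
// Hypothetical wrapper; names are illustrative only (imports elided,
// as in the snippets above).
class BypassingAttributeProvider extends INodeAttributeProvider {
  private final INodeAttributeProvider external;
  private final Set<String> specialUsers; // from the proposed new config

  BypassingAttributeProvider(INodeAttributeProvider external,
      Set<String> specialUsers) {
    this.external = external;
    this.specialUsers = specialUsers;
  }

  @Override
  public void start() { external.start(); }

  @Override
  public void stop() { external.stop(); }

  @Override
  public INodeAttributes getAttributes(String[] pathElements,
      INodeAttributes inode) {
    // For a special user, return the HDFS attributes untouched;
    // otherwise delegate to the real external provider.
    return isSpecialUser() ? inode
        : external.getAttributes(pathElements, inode);
  }

  // The awkward extra API discussed above: exposes the wrapped provider
  // (or null) so it can be handed to FSPermissionChecker.
  INodeAttributeProvider getRealAttributeProvider() {
    return isSpecialUser() ? null : external;
  }

  private boolean isSpecialUser() {
    try {
      return specialUsers.contains(
          UserGroupInformation.getCurrentUser().getShortUserName());
    } catch (IOException e) {
      return false;
    }
  }
}
{code}
Even with this sketch, the caller in 2.b still has to know about 
getRealAttributeProvider(), which is exactly the layering problem above.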

Thoughts?

Thanks.


> Let NameNode to bypass external attribute provider for special user
> -------------------------------------------------------------------
>
>                 Key: HDFS-12357
>                 URL: https://issues.apache.org/jira/browse/HDFS-12357
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>         Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If an external attribute provider is enabled, the metadata 
> may be read from the provider, so provider data read from the source may be 
> saved to the target HDFS. 
> We want to avoid saving metadata from the external provider to HDFS, so we 
> want to bypass the external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is to introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is one of the special users.
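> For illustration, such a config entry might look like the following (the key 
> name here is hypothetical):
> {code}
> <property>
>   <name>dfs.namenode.inode.attributes.provider.bypass.users</name>
>   <value>hdfs</value>
> </property>
> {code}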
> If we run applications as a special user that need data from the external 
> attribute provider, they won't work. So the constraint on this approach is 
> that the special users should not run applications that need data from the 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.


