[
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14097384#comment-14097384
]
Selvamohan Neethiraj commented on HDFS-6826:
--------------------------------------------
It just makes sense to add an external authorization interface to the NN and also
provide a default implementation that validates access based on the current NN
access control implementation.
This would give external modules the flexibility to provide ABAC- or RBAC-based
authorization for HDFS resources.
However, the current proposal seems to expose some of the INode-related
structures in the API call.
So, I would like to propose an alternate method in the external HDFS authorizer
with the following method signature:
public void checkPermission(
    String requestedPath, FsAction requestedAction,
    String pathToTest, FsAction actionToTest,
    boolean isFinalPathOnRecursiveCheck,
    FsPermission permOnThePath, List<AclEntry> aclsOnThePath,
    String ownerUserName, String owningGroupName,
    boolean isDirectory) throws AccessControlException;
Where
  requestedPath               - the path the user is trying to access
                                (e.g. /apps/data/finance/sample.txt)
  requestedAction             - the action to be performed by the user (e.g. READ)
  pathToTest                  - the path for which access is being validated by the
                                recursive check (e.g. /apps)
  actionToTest                - the action to be checked on pathToTest
                                (e.g. READ_EXECUTE)
  isFinalPathOnRecursiveCheck - true if this is the final path requested by the
                                end-user (i.e. the final check by the recursive checker)
  permOnThePath               - RWX permission of pathToTest
  aclsOnThePath               - ACLs available on pathToTest
  ownerUserName               - owner user name of pathToTest
  owningGroupName             - owning group of pathToTest
  isDirectory                 - true if pathToTest is a directory
This would allow the native HDFS authorizer implementation to have all the
information needed to make a decision based on these parameters, and it also
gives an external authorizer the ability to make decisions based on other attributes.
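To make the proposal concrete, below is a minimal sketch of how such an authorizer could look. The interface name ExternalHdfsAuthorizer, the DefaultHdfsAuthorizer class, and the use of UserGroupInformation.getCurrentUser() to identify the caller are all assumptions for illustration only, not part of HDFS or of this proposal; the default body simply mirrors plain owner/group/other checks and ignores ACLs for brevity.

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical plugin interface carrying the proposed method (name is an assumption).
public interface ExternalHdfsAuthorizer {
  void checkPermission(
      String requestedPath, FsAction requestedAction,
      String pathToTest, FsAction actionToTest,
      boolean isFinalPathOnRecursiveCheck,
      FsPermission permOnThePath, List<AclEntry> aclsOnThePath,
      String ownerUserName, String owningGroupName,
      boolean isDirectory) throws AccessControlException;
}

// Sketch of a default implementation that mirrors plain owner/group/other checks.
// A real implementation would also evaluate aclsOnThePath, or consult an external
// ABAC/RBAC policy store instead of the on-path permission bits.
class DefaultHdfsAuthorizer implements ExternalHdfsAuthorizer {
  @Override
  public void checkPermission(
      String requestedPath, FsAction requestedAction,
      String pathToTest, FsAction actionToTest,
      boolean isFinalPathOnRecursiveCheck,
      FsPermission permOnThePath, List<AclEntry> aclsOnThePath,
      String ownerUserName, String owningGroupName,
      boolean isDirectory) throws AccessControlException {
    // How the caller's identity is obtained is an assumption here; shown via
    // UserGroupInformation.getCurrentUser() purely for illustration.
    UserGroupInformation ugi;
    try {
      ugi = UserGroupInformation.getCurrentUser();
    } catch (IOException e) {
      throw new AccessControlException("Unable to resolve current user: " + e.getMessage());
    }

    // Pick the permission bits that apply to this caller on pathToTest.
    FsAction granted;
    if (ugi.getShortUserName().equals(ownerUserName)) {
      granted = permOnThePath.getUserAction();
    } else if (containsGroup(ugi.getGroupNames(), owningGroupName)) {
      granted = permOnThePath.getGroupAction();
    } else {
      granted = permOnThePath.getOtherAction();
    }

    // Deny if the granted bits do not cover the action being tested on this path.
    if (!granted.implies(actionToTest)) {
      throw new AccessControlException("Permission denied: action=" + actionToTest
          + " path=" + pathToTest + " (requested " + requestedAction
          + " on " + requestedPath + ")");
    }
  }

  private static boolean containsGroup(String[] groups, String group) {
    for (String g : groups) {
      if (g.equals(group)) {
        return true;
      }
    }
    return false;
  }
}

An external ABAC/RBAC plugin would replace the body above with a lookup against its own policy store, keyed on requestedPath/pathToTest and the caller's attributes, while the NameNode keeps calling the same method once per path component during its recursive check.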
> Plugin interface to enable delegation of HDFS authorization assertions
> ----------------------------------------------------------------------
>
> Key: HDFS-6826
> URL: https://issues.apache.org/jira/browse/HDFS-6826
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: security
> Affects Versions: 2.4.1
> Reporter: Alejandro Abdelnur
> Assignee: Alejandro Abdelnur
> Attachments: HDFS-6826-idea.patch,
> HDFSPluggableAuthorizationProposal.pdf
>
>
> When HBase data, HiveMetaStore data or Search data is accessed via services
> (HBase region servers, HiveServer2, Impala, Solr) the services can enforce
> permissions on corresponding entities (databases, tables, views, columns,
> search collections, documents). It is desirable, when the data is accessed
> directly by users accessing the underlying data files (i.e. from a MapReduce
> job), that the permission of the data files map to the permissions of the
> corresponding data entity (i.e. table, column family or search collection).
> To enable this we need to have the necessary hooks in place in the NameNode
> to delegate authorization to an external system that can map HDFS
> files/directories to data entities and resolve their permissions based on the
> data entities permissions.
> I’ll be posting a design proposal in the next few days.
--
This message was sent by Atlassian JIRA
(v6.2#6252)