[ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14097666#comment-14097666
 ] 

Alejandro Abdelnur commented on HDFS-6826:
------------------------------------------

[~sneethiraj], 

The current proposal (and POC code) only externalizes the source of truth for 
authorization information (user/group/permission/ACLs); it does not allow 
changing the behavior of checking permissions. IMO, doing this is safer than 
allowing a plugin to externalize the authorization assertion logic (which is 
not simple) and being exposed to unexpected behavior. In other words, with the 
current approach the plugin only allows changing the data used to assert 
authorization, not how the authorization is asserted.
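
To make that distinction concrete, here is a rough sketch (all names below are 
placeholders of mine, not the POC API) of how the unchanged owner/group/other 
check would simply consume plugin-provided data:

{code}
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch only: the assertion algorithm stays the standard HDFS
// owner/group/other check (simplified here to a single caller group);
// only the source of the data changes.
public class PermissionCheckSketch {

  /** Placeholder for whatever the plugin exposes: data, not decisions. */
  public interface AuthzDataSource {
    String getUser(String path);
    String getGroup(String path);
    long getPermission(String path);   // POSIX bits, e.g. 0750
  }

  public static boolean check(AuthzDataSource src, String path,
      String callerUser, String callerGroup, FsAction access) {
    FsPermission perm = new FsPermission((short) src.getPermission(path));
    if (callerUser.equals(src.getUser(path))) {
      return perm.getUserAction().implies(access);
    } else if (callerGroup.equals(src.getGroup(path))) {
      return perm.getGroupAction().implies(access);
    }
    return perm.getOtherAction().implies(access);
  }
}
{code}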

Regarding exposing the INode, good point, we should create an interface with 
the methods the plugin should see (INode would implement this new interface). 
Something like:

{code}
  public interface INodeAuthorizationInfo {
    public String getFullPath();
    public void setUser(String user);
    public String getUser(int snapshot);
    public void setGroup(String group);
    public String getGroup(int snapshot);
    public void setPermission(long permission);
    public long getPermission(int snapshot);
    public void setAcls(List<AclEntry> acls);
    public List<AclEntry> getAcls(int snapshot);
  }
{code}
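
For example, a plugin receiving these objects could serve ownership info from 
an external system for the paths it manages and fall back to the stored values 
elsewhere. A hypothetical sketch (the plugin entry point and the external 
lookup below are mine, not part of the proposal):

{code}
// Hypothetical plugin-side sketch, not part of the interface above.
public class WarehouseAuthzPlugin {
  private static final String MANAGED_PREFIX = "/user/hive/warehouse/";

  /** Group to report for this inode: the external source of truth for
   *  managed paths, the stored HDFS value for everything else. */
  public String getGroup(INodeAuthorizationInfo inode, int snapshot) {
    String path = inode.getFullPath();
    if (path.startsWith(MANAGED_PREFIX)) {
      return lookupGroupFromExternalSystem(path);
    }
    return inode.getGroup(snapshot);
  }

  private String lookupGroupFromExternalSystem(String path) {
    // Placeholder for a real lookup (e.g. against a table's grants).
    return "hive";
  }
}
{code}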

Also, keep in mind that the plugin is not only used for authorization 
assertions; it also has to produce the right authorization/ownership info back 
to the user via methods like getFileStatus() and getAcls(). Finally, a plugin 
could choose to support changing authz info (user/group/permissions/ACLs) via 
the HDFS FS API (something that is possible with the attached POC).
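
As a rough illustration of that read path, whatever the plugin-backed inode 
returns is what a listing or ACL query would show (again, the helper names 
here are illustrative, not from the POC):

{code}
import java.util.List;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.FsPermission;

// Illustrative only: the same plugin-backed data feeds what
// getFileStatus()/getAcls() report back to the user.
public class AuthzViewSketch {

  /** Minimal holder for the user-visible authorization fields. */
  public static class AuthzView {
    public final String owner;
    public final String group;
    public final FsPermission permission;
    public final List<AclEntry> acls;

    public AuthzView(String owner, String group,
        FsPermission permission, List<AclEntry> acls) {
      this.owner = owner;
      this.group = group;
      this.permission = permission;
      this.acls = acls;
    }
  }

  public static AuthzView viewOf(INodeAuthorizationInfo inode, int snapshot) {
    return new AuthzView(
        inode.getUser(snapshot),
        inode.getGroup(snapshot),
        new FsPermission((short) inode.getPermission(snapshot)),
        inode.getAcls(snapshot));
  }
}
{code}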



> Plugin interface to enable delegation of HDFS authorization assertions
> ----------------------------------------------------------------------
>
>                 Key: HDFS-6826
>                 URL: https://issues.apache.org/jira/browse/HDFS-6826
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: security
>    Affects Versions: 2.4.1
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>         Attachments: HDFS-6826-idea.patch, 
> HDFSPluggableAuthorizationProposal.pdf
>
>
> When Hbase data, HiveMetaStore data or Search data is accessed via services 
> (Hbase region servers, HiveServer2, Impala, Solr) the services can enforce 
> permissions on corresponding entities (databases, tables, views, columns, 
> search collections, documents). It is desirable, when the data is accessed 
> directly by users accessing the underlying data files (i.e. from a MapReduce 
> job), that the permission of the data files map to the permissions of the 
> corresponding data entity (i.e. table, column family or search collection).
> To enable this we need to have the necessary hooks in place in the NameNode 
> to delegate authorization to an external system that can map HDFS 
> files/directories to data entities and resolve their permissions based on the 
> data entities permissions.
> I’ll be posting a design proposal in the next few days.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
