[ 
https://issues.apache.org/jira/browse/HDFS-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908389#comment-16908389
 ] 

Eric Yang commented on HDFS-14375:
----------------------------------

[~Jihyun.Cho] The first log line indicates that the IPC server authenticated 
dn/testhost1....@test1.com to access the DataNode running as dn/testhost1....@test2.com.  

The problem is the second log line, from ServiceAuthorizationManager.  It looks 
like an incorrect optimization made a long time ago [on this 
line|https://github.com/apache/hadoop/blame/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java#L120].
  The original code compared the [short 
username|https://github.com/apache/hadoop/commit/c3fdd289cf26fa3bb9c0d2d9f906eba769ddd789#diff-90193e5349be2122d5ed915ba38c957dL123].

The original code ensured that dn/testhost1....@test1.com and 
dn/testhost2....@test2.com could both map to the same user through the 
auth_to_local rules.  The current implementation compares the raw principals, 
which skips the auth_to_local mapping and fails authorization incorrectly.  
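A minimal, self-contained sketch of the difference (hypothetical host names, and a simplified stand-in for the auth_to_local mapping rather than Hadoop's actual KerberosName class):

```java
// Sketch: why short-name comparison authorizes cross-realm principals
// while raw-principal comparison rejects them.
public class ShortNameCheck {

    // Hypothetical stand-in for an auth_to_local rule that maps
    // "primary/instance@REALM" down to just "primary".
    static String toShortName(String principal) {
        return principal.split("[/@]")[0];
    }

    public static void main(String[] args) {
        String expected = "dn/testhost1.test.com@TEST1.COM"; // what the service expects
        String actual   = "dn/testhost1.test.com@TEST2.COM"; // what the client presented

        // Raw-principal comparison (current behavior): realms differ, so this fails.
        System.out.println(expected.equals(actual));

        // Short-name comparison (original behavior): both map to "dn", so this succeeds.
        System.out.println(toShortName(expected).equals(toShortName(actual)));
    }
}
```

Both principals reduce to the short name "dn", which is what the pre-optimization code compared; the raw strings differ only in the realm suffix.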

> DataNode cannot serve BlockPool to multiple NameNodes in the different realm
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-14375
>                 URL: https://issues.apache.org/jira/browse/HDFS-14375
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 3.1.1
>            Reporter: Jihyun Cho
>            Assignee: Jihyun Cho
>            Priority: Major
>         Attachments: authorize.patch
>
>
> Let me describe the environment first.
> {noformat}
> KDC(TEST1.COM) <-- Cross-realm trust -->  KDC(TEST2.COM)
>    |                                         |
> NameNode1                                 NameNode2
>    |                                         |
>    ---------- DataNodes (federated) ----------
> {noformat}
> We configured the secure clusters and federated them.
> * Principal
> ** NameNode1 : nn/_h...@test1.com 
> ** NameNode2 : nn/_h...@test2.com 
> ** DataNodes : dn/_h...@test2.com 
> But the DataNodes could not connect to NameNode1, failing with the error below.
> {noformat}
> WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for dn/hadoop-datanode.test....@test2.com 
> (auth:KERBEROS) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/hadoop-datanode.test....@test1.com
> {noformat}
> We have avoided the error with the attached patch.
> The patch compares only the {{username}} and {{hostname}}, ignoring the {{realm}}.
> I think there is no problem with this. If the realms are different and no 
> cross-realm trust is configured, the nodes cannot communicate with each other 
> anyway. If you are worried about this, please let me know.
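> A rough pseudocode sketch of that comparison (hypothetical names, not the exact code in the attached patch):
> {noformat}
> // Compare two principals on primary and instance only, ignoring realm,
> // so "dn/host@TEST1.COM" matches "dn/host@TEST2.COM":
> matchesIgnoringRealm(a, b):
>     return a.split("@")[0] == b.split("@")[0]
> {noformat}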
> In the long run, it would be better if I could configure multiple trusted 
> realms for authorization, like this:
> {noformat}
> <property>
>   <name>dfs.namenode.kerberos.trust-realms</name>
>   <value>TEST1.COM,TEST2.COM</value>
> </property>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
