[
https://issues.apache.org/jira/browse/HADOOP-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13013859#comment-13013859
]
Suresh Srinivas commented on HADOOP-7215:
-----------------------------------------
Here is a way to do that at the RPC client:
# Get part2 from a principal name of the format <part1>/<part2>@realm and ensure part2 is a host name.
# Verify that the address corresponding to this host name belongs to one of the local network interfaces on that host.
If the above two conditions are satisfied, bind the socket to that address before making RPC calls. The fallback handling should address the following cases:
# Principal name is not of the format <part1>/<part2>@realm
# part2 is not a valid host name
# part2 is a valid host name, but not that of the client host
Does this sound reasonable?
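The checks and fallback cases above could be sketched roughly as follows. This is a minimal illustration, not the attached patch; the class and method names are made up for this sketch:
```java
import java.net.InetAddress;
import java.net.NetworkInterface;

public class PrincipalBindAddress {
  /**
   * Derives the local address to bind the RPC client socket to from a
   * Kerberos principal of the form <part1>/<part2>@realm. Returns null
   * for each of the fallback cases: the principal is not of that format,
   * part2 is not a valid host name, or part2 does not resolve to an
   * address on a local network interface.
   */
  public static InetAddress bindAddressFor(String principal) {
    int slash = principal.indexOf('/');
    int at = principal.indexOf('@');
    if (slash < 0 || at <= slash + 1) {
      return null; // case 1: not of the format <part1>/<part2>@realm
    }
    String host = principal.substring(slash + 1, at);
    try {
      InetAddress addr = InetAddress.getByName(host);
      // case 3: the address must belong to a local network interface
      if (NetworkInterface.getByInetAddress(addr) == null) {
        return null;
      }
      return addr;
    } catch (Exception e) {
      return null; // case 2: part2 is not a valid host name
    }
  }
}
```
The caller would then do something like `socket.bind(new InetSocketAddress(addr, 0))` before connecting when a non-null address is returned, and fall back to the default (ephemeral) local binding otherwise.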
> RPC clients must connect over a network interface corresponding to the host
> name in the client's kerberos principal key
> -----------------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-7215
> URL: https://issues.apache.org/jira/browse/HADOOP-7215
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Reporter: Suresh Srinivas
> Assignee: Suresh Srinivas
> Fix For: 0.20.203.0, 0.23.0
>
> Attachments: HADOOP-7215.trunk.patch
>
>
> HDFS-7104 introduced a change where the RPC server matches the client's
> hostname against the hostname specified in the client's Kerberos principal
> name. The RPC client binds the socket to a random local address, which
> might not match the hostname specified in the principal name. This results
> in authorization failure of the client at the server.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira