[
https://issues.apache.org/jira/browse/HDFS-4448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Aaron T. Myers updated HDFS-4448:
---------------------------------
Status: Open (was: Patch Available)
That's a great point, Daryn, and I agree with your analysis. Even though this
patch will allow the NNs to start and function properly when bound to the
wildcard address, clients (or DNs) will not in fact be able to connect via any
interface whose hostname does not appear in the Kerberos principal used by the
NN's RPC server.
A proper fix for this is thus somewhat more involved than I had originally
anticipated.
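For illustration (not part of the original comment), the mismatch can be seen with
Hadoop's SecurityUtil.getServerPrincipal(), which substitutes the hostname being
contacted into the _HOST placeholder of the configured principal. A minimal sketch,
assuming an illustrative principal pattern of nn/_HOST@EXAMPLE.COM and made-up
hostnames:
{code}
// Sketch only: the principal pattern and hostnames are illustrative, not from the patch.
import java.io.IOException;

import org.apache.hadoop.security.SecurityUtil;

public class PrincipalMismatchSketch {
  public static void main(String[] args) throws IOException {
    String principalConf = "nn/_HOST@EXAMPLE.COM";

    // The NN logs in with _HOST resolved to one concrete hostname.
    String nnPrincipal =
        SecurityUtil.getServerPrincipal(principalConf, "nn1.example.com");

    // A client that reaches the NN through a different interface derives a
    // different server principal, so SASL/Kerberos negotiation fails.
    String clientExpectation =
        SecurityUtil.getServerPrincipal(principalConf, "nn1-private.example.com");

    System.out.println(nnPrincipal);        // nn/nn1.example.com@EXAMPLE.COM
    System.out.println(clientExpectation);  // nn/nn1-private.example.com@EXAMPLE.COM
  }
}
{code}
Any interface whose hostname does not produce the same principal as the NN's login
principal would therefore be rejected, regardless of whether the RPC server itself
is bound to the wildcard.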
> HA NN does not start with wildcard address configured for other NN when
> security is enabled
> -------------------------------------------------------------------------------------------
>
> Key: HDFS-4448
> URL: https://issues.apache.org/jira/browse/HDFS-4448
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: ha, namenode, security
> Affects Versions: 2.0.3-alpha
> Reporter: Aaron T. Myers
> Assignee: Aaron T. Myers
> Attachments: HDFS-4448.patch, HDFS-4448.patch
>
>
> Currently, if one tries to configure HA NNs to use the wildcard HTTP address when
> security is enabled, the NN will fail to start with an error like the
> following:
> {code}
> java.lang.IllegalArgumentException: java.io.IOException: Cannot use a
> wildcard address with security. Must explicitly set bind address for Kerberos
> {code}
> This is the case even if one configures an actual address for the other NN's
> HTTP address. There's no good reason for this, since we now check whether the
> local address is set to 0.0.0.0 and, in that case, determine the canonical
> hostname for Kerberos purposes using
> {{InetAddress.getLocalHost().getCanonicalHostName()}}, so we should remove
> the restriction (a rough sketch of this check follows below).
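The check described in the quoted description can be sketched roughly as follows.
This is illustrative only; the helper name kerberosHostFor is hypothetical and not
taken from the NN code. The idea: if the configured address is the wildcard, fall
back to the local host's canonical name for Kerberos purposes rather than rejecting
the configuration.
{code}
// Illustrative sketch of the wildcard handling described above; not the actual NN code.
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

public class WildcardHostnameSketch {
  // Hypothetical helper: pick the hostname to use for Kerberos principal substitution.
  static String kerberosHostFor(InetSocketAddress addr) throws UnknownHostException {
    // 0.0.0.0 (the "any" address) carries no usable hostname for Kerberos,
    // so substitute the local host's canonical name instead of failing.
    if (addr.getAddress() != null && addr.getAddress().isAnyLocalAddress()) {
      return InetAddress.getLocalHost().getCanonicalHostName();
    }
    return addr.getHostName();
  }

  public static void main(String[] args) throws UnknownHostException {
    System.out.println(kerberosHostFor(new InetSocketAddress("0.0.0.0", 50070)));
    System.out.println(kerberosHostFor(new InetSocketAddress("nn1.example.com", 50070)));
  }
}
{code}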
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira