[ https://issues.apache.org/jira/browse/HDFS-4448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508715#comment-14508715 ]
Hudson commented on HDFS-4448:
------------------------------

FAILURE: Integrated in Hadoop-trunk-Commit #7645 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7645/])
HDFS-4448. Allow HA NN to start in secure mode with wildcard address configured (atm via asuresh) (Arun Suresh: rev baf8bc6c488de170d2caf76d9fa4c99faaa8f1a6)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

> Allow HA NN to start in secure mode with wildcard address configured
> --------------------------------------------------------------------
>
>                 Key: HDFS-4448
>                 URL: https://issues.apache.org/jira/browse/HDFS-4448
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ha, namenode, security
>    Affects Versions: 2.0.3-alpha
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>             Fix For: 2.8.0
>
>         Attachments: HDFS-4448.2.patch, HDFS-4448.patch, HDFS-4448.patch
>
>
> Currently, if one tries to configure the HA NNs to use the wildcard HTTP address when security is enabled, the NN will fail to start with an error like the following:
> {code}
> java.lang.IllegalArgumentException: java.io.IOException: Cannot use a wildcard address with security. Must explicitly set bind address for Kerberos
> {code}
> This is the case even if one configures an actual address for the other NN's HTTP address. There's no good reason for this, since we now check for the local address being set to 0.0.0.0 and determine the canonical hostname for Kerberos purposes using {{InetAddress.getLocalHost().getCanonicalHostName()}}, so we should remove the restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
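The fallback described in the issue can be sketched in isolation. This is a hypothetical helper, not the actual DFSUtil code: when the configured bind host is the wildcard (0.0.0.0), it substitutes the local machine's canonical hostname so a Kerberos principal such as {{nn/_HOST@REALM}} can still be materialized; any explicitly configured host is passed through unchanged. The class and method names here are illustrative only.

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class WildcardHostResolver {

    // Hypothetical sketch of the fallback described above: if the
    // configured bind host is the wildcard address, resolve the local
    // machine's canonical hostname for Kerberos principal purposes;
    // otherwise keep the host exactly as configured.
    public static String resolveKerberosHost(String configuredHost)
            throws UnknownHostException {
        if ("0.0.0.0".equals(configuredHost)) {
            return InetAddress.getLocalHost().getCanonicalHostName();
        }
        return configuredHost;
    }

    public static void main(String[] args) throws UnknownHostException {
        // An explicit host is returned unchanged.
        System.out.println(resolveKerberosHost("nn1.example.com"));
        // The wildcard is replaced by this machine's canonical hostname.
        System.out.println(resolveKerberosHost("0.0.0.0"));
    }
}
{code}

With this behavior, a wildcard HTTP bind address on the local NN no longer has to abort startup, which is what allows the restriction quoted above to be removed.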