[
https://issues.apache.org/jira/browse/HDFS-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14303259#comment-14303259
]
Kai Zheng commented on HDFS-7731:
---------------------------------
Looks like the error occurred while the NN was trying to read edit logs from the
JournalNodes but was rejected due to an authentication issue. Your configuration
looks correct. Would you check the logs on the JournalNodes and see if there are
any clues as to why they rejected the NN?
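
For what it's worth, the "Server not found in Kerberos database (UNKNOWN_SERVER)"
part usually means the KDC does not recognize the service principal the NN asks
for when it fetches the getJournal URL over SPNEGO, i.e.
HTTP/<journalnode-host>@REALM. A minimal way to check that, where the keytab path,
admin access and realm below are placeholders (not values taken from this issue):

# On the JournalNode host (e.g. bgdt04.dev.hrb), confirm its keytab contains an
# HTTP principal for that exact hostname (keytab path is a placeholder):
klist -kt /etc/security/keytabs/spnego.service.keytab

# Ask the KDC whether it knows that principal at all (YOUR.REALM is a placeholder):
kadmin -q "getprinc HTTP/bgdt04.dev.hrb@YOUR.REALM"

# Check DNS for the JN host, since the NN derives HTTP/<canonical-hostname>
# from the URL's hostname when it builds the SPNEGO request:
host bgdt04.dev.hrb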
> Can not start HA namenode with security enabled
> -----------------------------------------------
>
> Key: HDFS-7731
> URL: https://issues.apache.org/jira/browse/HDFS-7731
> Project: Hadoop HDFS
> Issue Type: Task
> Components: ha, journal-node, namenode, security
> Affects Versions: 2.5.2
> Environment: Redhat6.2 Hadoop2.5.2
> Reporter: donhoff_h
> Labels: hadoop, security
>
> I am converting a secure non-HA cluster into a secure HA cluster. After
> updating the configuration and starting all the JournalNodes, I executed the
> following commands on the original NameNode:
> 1. hdfs namenode -initializeSharedEdits   # this step succeeded
> 2. hadoop-daemon.sh start namenode        # this step failed
> So the NameNode cannot be started. I verified that my principals are correct,
> and if I switch back to secure non-HA mode, the NameNode starts fine.
> The NameNode log only reports the following errors, and I could not determine
> the cause from them:
> 2015-02-03 17:42:06,020 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3, http://bgdt01.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3
> 2015-02-03 17:42:06,024 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3, http://bgdt01.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3' to transaction ID 68994
> 2015-02-03 17:42:06,024 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3' to transaction ID 68994
> 2015-02-03 17:42:06,154 ERROR org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: caught exception initializing http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3
> java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)
>         at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:464)
>         at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:456)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>         at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
>         at org.apache.hadoop.security.SecurityUtil.doAsCurrentUser(SecurityUtil.java:438)
>         at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog.getInputStream(EditLogFileInputStream.java:455)
>         at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:141)
>         at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:192)
>         at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:250)
>         at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
>         at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
>         at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
>         at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
>         at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
>         at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
>         at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
>         at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:184)
>         at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:137)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:816)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:676)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:279)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:955)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
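
For reference, a conversion like the one described above also needs the
JournalNode-side Kerberos settings in hdfs-site.xml. The sketch below only
illustrates the property names the NN's getJournal fetch depends on; the keytab
path and realm are placeholders, not values taken from this issue:

<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/etc/security/keytabs/jn.service.keytab</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>jn/_HOST@YOUR.REALM</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@YOUR.REALM</value>
</property>

If the SPNEGO principal above does not exist in the KDC for every JournalNode
host, the NN's edit-log fetch will typically fail with the UNKNOWN_SERVER error
quoted above.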
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)