Has anyone seen this issue?

> On Nov 3, 2020, at 2:28 PM, ZongtianHou <zongtian...@icloud.com.INVALID> 
> wrote:
> 
> Hi, everyone,
> I am setting up a secure cluster in auto-HA mode. I got the following error 
> when starting the NameNode; it seems the SSL connection to the JournalNodes 
> is not configured correctly. I generated the keystores with keytool and set 
> the truststore and keystore paths and passwords in ssl-server.xml and 
> ssl-client.xml on each host. I am not familiar with SSL setup and wonder 
> what I got wrong. Thanks very much.
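> 
> For reference, this is roughly what I did on each host; the hostnames, 
> paths, and passwords below are placeholders rather than my real values:
> 
>     # self-signed key pair for this host's HTTPS endpoint
>     keytool -genkeypair -alias "$(hostname -f)" -keyalg RSA -keysize 2048 \
>       -dname "CN=$(hostname -f), OU=test, O=test, C=cn" \
>       -keystore /opt/hadoop/ssl/keystore.jks \
>       -storepass changeit -keypass changeit
> 
>     <!-- ssl-server.xml; ssl-client.xml mirrors this with ssl.client.* keys -->
>     <property>
>       <name>ssl.server.keystore.location</name>
>       <value>/opt/hadoop/ssl/keystore.jks</value>
>     </property>
>     <property>
>       <name>ssl.server.keystore.password</name>
>       <value>changeit</value>
>     </property>
>     <property>
>       <name>ssl.server.truststore.location</name>
>       <value>/opt/hadoop/ssl/truststore.jks</value>
>     </property>
>     <property>
>       <name>ssl.server.truststore.password</name>
>       <value>changeit</value>
>     </property>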
> 
> 2020-11-03 11:33:45,999 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Start loading edits file 
> https://exciting-huor-test1-3node-dev-2:8481/getJournal?jid=oushu1&segmentTxId=275&storageInfo=-63%3A1032620164%3A0%3Ass, 
> https://exciting-huor-test1-3node-dev-3:8481/getJournal?jid=oushu1&segmentTxId=275&storageInfo=-63%3A1032620164%3A0%3Ass, 
> https://exciting-huor-test1-3node-dev-1:8481/getJournal?jid=oushu1&segmentTxId=275&storageInfo=-63%3A1032620164%3A0%3Ass
> 2020-11-03 11:33:46,001 INFO 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream: 
> Fast-forwarding stream 
> 'https://exciting-huor-test1-3node-dev-2:8481/getJournal?jid=oushu1&segmentTxId=275&storageInfo=-63%3A1032620164%3A0%3Ass, 
> https://exciting-huor-test1-3node-dev-3:8481/getJournal?jid=oushu1&segmentTxId=275&storageInfo=-63%3A1032620164%3A0%3Ass, 
> https://exciting-huor-test1-3node-dev-1:8481/getJournal?jid=oushu1&segmentTxId=275&storageInfo=-63%3A1032620164%3A0%3Ass' 
> to transaction ID 275
> 2020-11-03 11:33:46,002 INFO 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream: 
> Fast-forwarding stream 
> 'https://exciting-huor-test1-3node-dev-2:8481/getJournal?jid=oushu1&segmentTxId=275&storageInfo=-63%3A1032620164%3A0%3Ass' 
> to transaction ID 275
> 2020-11-03 11:33:46,164 ERROR 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: caught exception 
> initializing 
> https://exciting-huor-test1-3node-dev-2:8481/getJournal?jid=oushu1&segmentTxId=275&storageInfo=-63%3A1032620164%3A0%3Ass
> javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:198)
>   at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1967)
>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:331)
>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:325)
>   at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1688)
>   at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:226)
>   at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1082)
>   at sun.security.ssl.Handshaker.process_record(Handshaker.java:1010)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1079)
>   at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1388)
>   at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1416)
>   at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1400)
>   at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:167)
>   at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:188)
>   at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at org.apache.hadoop.hdfs.web.URLConnectionFactory.openConnection(URLConnectionFactory.java:190)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:471)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:465)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:509)
>   at org.apache.hadoop.security.SecurityUtil.doAsCurrentUser(SecurityUtil.java:503)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog.getInputStream(EditLogFileInputStream.java:464)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:141)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:192)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:250)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
>   at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:179)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
>   at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:179)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
>   at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:190)
>   at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143)
>   at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:843)
>   at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:698)
>   at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1016)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:690)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1686)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1754)
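> 
> My understanding is that "PKIX path building failed" means the NameNode's 
> truststore has no certificate that validates the chain the JournalNode 
> presents on port 8481. Since each host generated its own self-signed key, 
> I assume each truststore must contain every other host's certificate (or 
> a common CA). For example (truststore path, alias, and file names are 
> placeholders):
> 
>     # list what the client truststore actually trusts
>     keytool -list -v -keystore /opt/hadoop/ssl/truststore.jks -storepass changeit
> 
>     # show the certificate chain the JournalNode actually serves
>     openssl s_client -connect exciting-huor-test1-3node-dev-2:8481 -showcerts </dev/null
> 
>     # import a missing JournalNode/CA certificate into the truststore
>     keytool -importcert -alias jn-dev-2 -file jn-dev-2.crt \
>       -keystore /opt/hadoop/ssl/truststore.jks -storepass changeit -noprompt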
