SGITLOGIN opened a new issue, #6777:
URL: https://github.com/apache/kyuubi/issues/6777

   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before asking
   
   - [X] I have searched in the 
[issues](https://github.com/apache/kyuubi/issues?q=is%3Aissue) and found no 
similar issues.
   
   
   ### Describe the bug
   
   ## Problem
   The Kyuubi server is configured with both Kerberos and LDAP authentication, and HDFS NameNode high availability is enabled. When I use DBeaver to connect to Kyuubi and query Hive, requests always go to nn1, the NameNode that was originally active. While nn1 is active, DBeaver works fine; when nn1 is in standby, the error below is reported. How should this problem be handled?
   
   ## NameNode high availability configuration
   <img width="1188" alt="image" src="https://github.com/user-attachments/assets/806bcaa1-1433-46c1-8623-ab1f2a58862e">
   
   ## When nn1 is in standby, the following error is reported
   <img width="1037" alt="image" src="https://github.com/user-attachments/assets/3e590c03-921a-4538-b2a5-3398a7ff4ce8">
   
   ## When nn1 is active, access is normal
   <img width="486" alt="image" src="https://github.com/user-attachments/assets/a77ca078-4688-457c-9c73-9975d62b4681">
   
   
   
   ### Affects Version(s)
   
   1.9.2
   
   ### Kyuubi Server Log Output
   
   ```log
   2024-10-23 16:18:06.021 ERROR KyuubiTBinaryFrontendHandler-Pool: Thread-72 org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Error getting tables:
   org.apache.kyuubi.KyuubiSQLException: Error operating GetTables: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
           at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:108)
           at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2107)
           at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1585)
           at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3374)
           at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1216)
           at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:1044)
           at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
           at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1094)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1017)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
           at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3048)
   )
           at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:110)
           at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:223)
           at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.databaseExists(ExternalCatalogWithListener.scala:69)
           at org.apache.spark.sql.catalyst.catalog.SessionCatalog.databaseExists(SessionCatalog.scala:319)
           at org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.listNamespaces(V2SessionCatalog.scala:278)
           at org.apache.kyuubi.engine.spark.util.SparkCatalogUtils$.$anonfun$listAllNamespaces$1(SparkCatalogUtils.scala:113)
           at org.apache.kyuubi.engine.spark.util.SparkCatalogUtils$.$anonfun$listAllNamespaces$1$adapted(SparkCatalogUtils.scala:112)
           at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:293)
           at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
           at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
           at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
           at scala.collection.TraversableLike.flatMap(TraversableLike.scala:293)
           at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:290)
           at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:198)
           at org.apache.kyuubi.engine.spark.util.SparkCatalogUtils$.listAllNamespaces(SparkCatalogUtils.scala:112)
           at org.apache.kyuubi.engine.spark.util.SparkCatalogUtils$.listAllNamespaces(SparkCatalogUtils.scala:129)
           at org.apache.kyuubi.engine.spark.util.SparkCatalogUtils$.listNamespacesWithPattern(SparkCatalogUtils.scala:137)
           at org.apache.kyuubi.engine.spark.util.SparkCatalogUtils$.getCatalogTablesOrViews(SparkCatalogUtils.scala:163)
           at org.apache.kyuubi.engine.spark.operation.GetTables.runInternal(GetTables.scala:81)
           at org.apache.kyuubi.operation.AbstractOperation.run(AbstractOperation.scala:173)
           at org.apache.kyuubi.session.AbstractSession.runOperation(AbstractSession.scala:101)
           at org.apache.kyuubi.engine.spark.session.SparkSessionImpl.runOperation(SparkSessionImpl.scala:101)
           at org.apache.kyuubi.session.AbstractSession.getTables(AbstractSession.scala:162)
           at org.apache.kyuubi.service.AbstractBackendService.getTables(AbstractBackendService.scala:94)
           at org.apache.kyuubi.service.TFrontendService.GetTables(TFrontendService.scala:329)
           at org.apache.kyuubi.shaded.hive.service.rpc.thrift.TCLIService$Processor$GetTables.getResult(TCLIService.java:1770)
           at org.apache.kyuubi.shaded.hive.service.rpc.thrift.TCLIService$Processor$GetTables.getResult(TCLIService.java:1750)
           at org.apache.kyuubi.shaded.thrift.ProcessFunction.process(ProcessFunction.java:38)
           at org.apache.kyuubi.shaded.thrift.TBaseProcessor.process(TBaseProcessor.java:38)
           at org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:35)
           at org.apache.kyuubi.shaded.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:250)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:750)
   Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
           at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:108)
           at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2107)
           at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1585)
           at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3374)
           at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1216)
           at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:1044)
           at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
           at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1094)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1017)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
           at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3048)
   )
           at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1666)
           at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1651)
           at org.apache.spark.sql.hive.client.Shim_v0_12.databaseExists(HiveShim.scala:609)
           at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$databaseExists$1(HiveClientImpl.scala:406)
           at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
           at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:303)
           at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:234)
           at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:233)
           at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:283)
           at org.apache.spark.sql.hive.client.HiveClientImpl.databaseExists(HiveClientImpl.scala:406)
           at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$databaseExists$1(HiveExternalCatalog.scala:223)
           at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
           at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:101)
           ... 33 more
   Caused by: MetaException(message:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
           at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:108)
           at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2107)
           at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1585)
           at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3374)
           at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1216)
           at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:1044)
           at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
           at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1094)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1017)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
           at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3048)
   )
           at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_database_result$get_database_resultStandardScheme.read(ThriftHiveMetastore.java:40276)
           at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_database_result$get_database_resultStandardScheme.read(ThriftHiveMetastore.java:40244)
           at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_database_result.read(ThriftHiveMetastore.java:40175)
           at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
           at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_database(ThriftHiveMetastore.java:1135)
           at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_database(ThriftHiveMetastore.java:1122)
           at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1511)
           at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1506)
           at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
           at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
           at com.sun.proxy.$Proxy70.getDatabase(Unknown Source)
           at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
           at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2773)
           at com.sun.proxy.$Proxy70.getDatabase(Unknown Source)
           at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1662)
           ... 45 more
   ```
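   Reading the trace bottom-up, the StandbyException surfaces through `HiveMetaStoreClient.getDatabase`, i.e. it is the Hive Metastore side of the call chain that ends up talking to nn1 directly rather than failing over through the HA nameservice. As a first check, these standard Hadoop CLI commands (run on the affected host, against your actual nameservice) should confirm the NameNode states and what the client configuration resolves to:

   ```bash
   # Which NameNode is active right now? (-getAllServiceState needs a
   # recent Hadoop; otherwise use -getServiceState nn1 / nn2)
   hdfs haadmin -getAllServiceState

   # Does the client config point at the HA nameservice, or at a single host?
   hdfs getconf -confKey fs.defaultFS
   hdfs getconf -confKey dfs.nameservices
   ```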
   
   
   ### Kyuubi Engine Log Output
   
   _No response_
   
   ### Kyuubi Server Configurations
   
   ```yaml
   ## kyuubi-env.sh  
   export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402.b06-1.el7_9.x86_64
   export SPARK_HOME=/usr/odp/current/spark3-client/
   export SPARK_CONF_DIR=/etc/spark3/conf/
   export HADOOP_CONF_DIR=/etc/hadoop/conf/
   export YARN_CONF_DIR=/etc/hadoop/conf/
   
   
   
   ## kyuubi-defaults.conf
   kyuubi.authentication                    KERBEROS,LDAP
   kyuubi.kinit.principal hive/_h...@huan.tv
   kyuubi.kinit.keytab /etc/security/keytabs/hive.service.keytab
   kyuubi.authentication.ldap.baseDN=dc=hadoop,dc=com
   kyuubi.authentication.ldap.binddn=cn=Manager,dc=hadoop,dc=com
   kyuubi.authentication.ldap.bindpw=UKWCVRfeAe72hTgr
   kyuubi.authentication.ldap.url=ldap://open-ldap-test:389/
   kyuubi.authentication.ldap.groupClassKey groupOfNames
   kyuubi.authentication.ldap.groupDNPattern CN=%s,OU=Group,DC=hadoop,DC=com
   kyuubi.authentication.ldap.groupMembershipKey memberUid
   kyuubi.authentication.ldap.userDNPattern UID=%s,OU=People,DC=hadoop,DC=com
   kyuubi.frontend.bind.host                ali-odp-test-01.huan.tv
   kyuubi.frontend.protocols                THRIFT_BINARY,REST
   kyuubi.frontend.thrift.binary.bind.port  10009
   kyuubi.frontend.rest.bind.port           10099
   kyuubi.engine.type                       SPARK_SQL
   kyuubi.engine.share.level                USER
   kyuubi.engine.doAs.enabled true
   kyuubi.metadata.store.jdbc.database.schema.init true
   kyuubi.metadata.store.jdbc.database.type MYSQL
   kyuubi.metadata.store.jdbc.driver com.mysql.jdbc.Driver
   kyuubi.metadata.store.jdbc.url jdbc:mysql://rm-uf63s1w0quw2ayvn7.mysql.rds.aliyuncs.com:3306/kyuubi
   kyuubi.metadata.store.jdbc.user kyuubi
   kyuubi.metadata.store.jdbc.password Fza4zDXgbGE
   kyuubi.session.engine.initialize.timeout PT30M
   kyuubi.session.check.interval PT1M
   kyuubi.operation.idle.timeout PT1M
   kyuubi.session.idle.timeout PT10M
   kyuubi.session.engine.idle.timeout PT5M
   kyuubi.ha.client.class org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient
   kyuubi.ha.addresses                      ali-odp-test-01:2181,ali-odp-test-02:2181,ali-odp-test-03:2181
   kyuubi.ha.namespace                      kyuubi
   kyuubi.ha.zookeeper.auth.type KERBEROS
   kyuubi.ha.zookeeper.auth.principal zookee...@huan.tv
   kyuubi.ha.zookeeper.auth.keytab /root/zookeeper.keytab
   spark.master yarn
   spark.yarn.queue default
   spark.executor.cores 1
   spark.driver.memory 3g
   spark.executor.memory 3g
   spark.dynamicAllocation.enabled true
   spark.dynamicAllocation.shuffleTracking.enabled true
   spark.dynamicAllocation.minExecutors 1
   spark.dynamicAllocation.maxExecutors 10
   spark.dynamicAllocation.initialExecutors 2
   spark.cleaner.periodicGC.interval 5min
   ```
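   One common cause of exactly this symptom is that the Hive Metastore still stores absolute `hdfs://<nn1-host>:8020/...` URIs (the filesystem root and database/table LOCATIONs) recorded before HA was enabled, so metadata operations fail whenever nn1 drops to standby. If that applies here, the Hive metatool can inspect and rewrite those URIs; the nameservice and hostname below are placeholders for this cluster's real values:

   ```bash
   # List the filesystem root(s) currently recorded in the metastore
   hive --service metatool -listFSRoot

   # Dry-run a rewrite from the old single-NameNode URI to the HA nameservice
   hive --service metatool -updateLocation hdfs://mycluster hdfs://nn1-host:8020 -dryRun

   # Apply the rewrite once the dry-run output looks correct
   hive --service metatool -updateLocation hdfs://mycluster hdfs://nn1-host:8020
   ```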
   
   
   ### Kyuubi Engine Configurations
   
   _No response_
   
   ### Additional context
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [X] Yes. I would be willing to submit a PR with guidance from the Kyuubi 
community to fix.
   - [ ] No. I cannot submit a PR at this time.

