zhaorongsheng opened a new issue, #57987:
URL: https://github.com/apache/doris/issues/57987

   ### Search before asking
   
   - [x] I had searched in the [issues](https://github.com/apache/doris/issues?q=is%3Aissue) and found no similar issues.
   
   
   ### Version
   
   3.1.2
   
   ### What's Wrong?
   
   ```
   RuntimeLogger 2025-11-10 02:41:11,280 WARN (mysql-nio-pool-2|436) [Client$Connection$1.run():733] Exception encountered while connecting to the server xxx org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
           at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:179) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:392) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:561) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:347) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:783) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:779) ~[hadoop-common-3.3.6.jar:?]
           at java.security.AccessController.doPrivileged(AccessController.java:712) ~[?:?]
           at javax.security.auth.Subject.doAs(Subject.java:439) ~[?:?]
           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:779) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:347) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.ipc.Client.getConnection(Client.java:1632) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.ipc.Client.call(Client.java:1457) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.ipc.Client.call(Client.java:1410) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139) ~[hadoop-common-3.3.6.jar:?]
           at jdk.proxy2.$Proxy128.getBlockLocations(Unknown Source) ~[?:?]
           at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:334) ~[hadoop-hdfs-client-3.3.6.jar:?]
           at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
           at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[?:?]
           at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
           at java.lang.reflect.Method.invoke(Method.java:568) ~[?:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362) ~[hadoop-common-3.3.6.jar:?]
           at jdk.proxy2.$Proxy129.getBlockLocations(Unknown Source) ~[?:?]
           at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:900) ~[hadoop-hdfs-client-3.3.6.jar:?]
           at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:889) ~[hadoop-hdfs-client-3.3.6.jar:?]
           at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:878) ~[hadoop-hdfs-client-3.3.6.jar:?]
           at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1046) ~[hadoop-hdfs-client-3.3.6.jar:?]
           at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:343) ~[hadoop-hdfs-client-3.3.6.jar:?]
           at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:339) ~[hadoop-hdfs-client-3.3.6.jar:?]
           at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:356) ~[hadoop-hdfs-client-3.3.6.jar:?]
           at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:997) ~[hadoop-common-3.3.6.jar:?]
           at org.apache.iceberg.hadoop.HadoopInputFile.newStream(HadoopInputFile.java:183) ~[iceberg-core-1.9.1.jar:?]
           at org.apache.iceberg.avro.AvroIterable.newFileReader(AvroIterable.java:102) ~[iceberg-core-1.9.1.jar:?]
           at org.apache.iceberg.avro.AvroIterable.iterator(AvroIterable.java:77) ~[iceberg-core-1.9.1.jar:?]
           at org.apache.iceberg.avro.AvroIterable.iterator(AvroIterable.java:37) ~[iceberg-core-1.9.1.jar:?]
           at org.apache.iceberg.relocated.com.google.common.collect.Iterables.addAll(Iterables.java:332) ~[iceberg-bundled-guava-1.9.1.jar:?]
           at org.apache.iceberg.relocated.com.google.common.collect.Lists.newLinkedList(Lists.java:261) ~[iceberg-bundled-guava-1.9.1.jar:?]
           at org.apache.iceberg.ManifestLists.read(ManifestLists.java:42) ~[iceberg-core-1.9.1.jar:?]
           at org.apache.iceberg.BaseSnapshot.cacheManifests(BaseSnapshot.java:176) ~[iceberg-core-1.9.1.jar:?]
           at org.apache.iceberg.BaseSnapshot.allManifests(BaseSnapshot.java:194) ~[iceberg-core-1.9.1.jar:?]
           at org.apache.iceberg.PartitionsTable.planEntries(PartitionsTable.java:190) ~[iceberg-core-1.9.1.jar:?]
           at org.apache.iceberg.PartitionsTable.partitions(PartitionsTable.java:169) ~[iceberg-core-1.9.1.jar:?]
           at org.apache.iceberg.PartitionsTable.task(PartitionsTable.java:122) ~[iceberg-core-1.9.1.jar:?]
           at org.apache.iceberg.PartitionsTable$PartitionsScan.lambda$new$0(PartitionsTable.java:237) ~[iceberg-core-1.9.1.jar:?]
           at org.apache.iceberg.StaticTableScan.doPlanFiles(StaticTableScan.java:53) ~[iceberg-core-1.9.1.jar:?]
           at org.apache.iceberg.SnapshotScan.planFiles(SnapshotScan.java:139) ~[iceberg-core-1.9.1.jar:?]
           at org.apache.doris.datasource.iceberg.IcebergUtils.loadIcebergPartition(IcebergUtils.java:1128) ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.datasource.iceberg.IcebergUtils.loadPartitionInfo(IcebergUtils.java:1106) ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.datasource.iceberg.IcebergMetadataCache.loadSnapshot(IcebergMetadataCache.java:154) ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.datasource.iceberg.IcebergMetadataCache.lambda$new$2(IcebergMetadataCache.java:83) ~[doris-fe.jar:1.2-SNAPSHOT]
           at com.github.benmanes.caffeine.cache.LocalLoadingCache.lambda$newMappingFunction$2(LocalLoadingCache.java:145) ~[hive-catalog-shade-3.0.1.jar:3.0.1]
           at com.github.benmanes.caffeine.cache.LocalCache.lambda$statsAware$0(LocalCache.java:139) ~[hive-catalog-shade-3.0.1.jar:3.0.1]
           at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406) ~[hive-catalog-shade-3.0.1.jar:3.0.1]
           at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1916) ~[?:?]
           at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404) ~[hive-catalog-shade-3.0.1.jar:3.0.1]
           at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387) ~[hive-catalog-shade-3.0.1.jar:3.0.1]
           at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108) ~[hive-catalog-shade-3.0.1.jar:3.0.1]
           at com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:56) ~[hive-catalog-shade-3.0.1.jar:3.0.1]
           at org.apache.doris.datasource.iceberg.IcebergMetadataCache.getSnapshotCache(IcebergMetadataCache.java:98) ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.datasource.iceberg.IcebergUtils.getIcebergSnapshotCacheValue(IcebergUtils.java:1345) ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.datasource.iceberg.IcebergUtils.getOrFetchSnapshotCacheValue(IcebergUtils.java:1363) ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.datasource.iceberg.IcebergUtils.getIcebergSchema(IcebergUtils.java:1352) ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.datasource.hive.HMSExternalTable.getFullSchema(HMSExternalTable.java:323) ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.datasource.ExternalTable.getBaseSchema(ExternalTable.java:182) ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.qe.ConnectProcessor.handleFieldList(ConnectProcessor.java:540) ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.qe.MysqlConnectProcessor.handleFieldList(MysqlConnectProcessor.java:299) ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.qe.MysqlConnectProcessor.dispatch(MysqlConnectProcessor.java:263) ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.qe.MysqlConnectProcessor.processOnce(MysqlConnectProcessor.java:437) ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.mysql.ReadListener.lambda$handleEvent$0(ReadListener.java:52) ~[doris-fe.jar:1.2-SNAPSHOT]
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
           at java.lang.Thread.run(Thread.java:833) ~[?:?]
   ```
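
   For context: the `Client cannot authenticate via:[TOKEN, KERBEROS]` error is raised when the FE thread opening the Iceberg manifest list on HDFS holds no Kerberos credentials (no TGT and no delegation token). Here the failure happens on a `mysql-nio-pool` thread while `IcebergMetadataCache` lazily loads partition metadata, which suggests the cache-loading path is not running under the catalog's logged-in `UserGroupInformation`. As a reference point, the catalog in question was presumably created with Kerberos properties along these lines (a sketch only; the principal, keytab path, and metastore URI below are placeholders, and the property names follow the Doris HMS catalog documentation):

   ```sql
   -- Hypothetical Kerberos-enabled HMS catalog; all values are placeholders.
   CREATE CATALOG hive_krb PROPERTIES (
       'type' = 'hms',
       'hive.metastore.uris' = 'thrift://metastore-host:9083',
       'hadoop.security.authentication' = 'kerberos',
       'hadoop.kerberos.principal' = 'doris/[email protected]',
       'hadoop.kerberos.keytab' = '/etc/doris/doris.keytab'
   );
   ```

   With such a catalog, metadata loads triggered by `COM_FIELD_LIST` (the `handleFieldList` frame above) would be expected to authenticate with the configured keytab rather than fail.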
   
   ### What You Expected?
   
   Kerberos authentication should succeed when the FE loads Iceberg partition metadata for the table, instead of failing with `Client cannot authenticate via:[TOKEN, KERBEROS]`.
   
   ### How to Reproduce?
   
   _No response_
   
   ### Anything Else?
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [x] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

