wonderzerg opened a new issue, #32828:
URL: https://github.com/apache/doris/issues/32828

   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/doris/issues?q=is%3Aissue) and found no 
similar issues.
   
   
   ### Version
   
   2.0.5-rc02 
   
   ### What's Wrong?
   
   With an HMS catalog where hadoop.username is set to 'ocdp', running "select * from 
xxx;" fails with a "root Permission denied" error.
   
   
![image_115](https://github.com/apache/doris/assets/5991518/8726b8ff-76bc-4fdd-aeee-cc9a6db68f1d)
   
   
   
   ### What You Expected?
   
   The select statement should return its result instead of failing with a permission error.
   
   ### How to Reproduce?
   
   -- "ocdp" is granted to the only owner of this catalog.
   switch hive;
   -- use one of databases;
   use hivetest;
   select * from hive_test_table;
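   
   For context, a minimal sketch of the catalog definition assumed here (the hadoop.username 
value and the metastore URI are taken from this report and the fe.log below; other 
properties and names are illustrative and may differ from the actual setup):
   
   CREATE CATALOG hive PROPERTIES (
       'type' = 'hms',
       'hive.metastore.uris' = 'thrift://phy4.asiainfo.com:9083',
       'hadoop.username' = 'ocdp'
   );
   
   With this configuration the expectation is that metastore/HDFS access runs as 'ocdp', 
whereas the AccessControlException below shows the FE acting as user=root against the 
ocdp-owned warehouse directory.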
   
   
   ### Anything Else?
   
   fe.log:
   2024-03-26 16:20:10,943 INFO (Thread-56|116) 
[ReportHandler.taskReport():551] finished to handle task report from backend 
10139, diff task num: 0. cost: 0 ms
   2024-03-26 16:20:10,943 INFO (thrift-server-pool-28|502) 
[ReportHandler.handleReport():198] receive report from be 10139. type: TASK, 
current queue size: 1
   2024-03-26 16:20:10,975 WARN (Routine load task scheduler|56) 
[KafkaUtil.getLatestOffsets():204] failed to get latest offsets.
   org.apache.doris.common.UserException: errCode = 2, detailMessage = failed 
to get latest offsets: [(192.168.12.38)[INTERNAL_ERROR]failed to get latest 
offset for partition: 0, err: Local: Unknown partition]
           at 
org.apache.doris.common.util.KafkaUtil.getLatestOffsets(KafkaUtil.java:192) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.load.routineload.KafkaRoutineLoadJob.hasMoreDataToConsume(KafkaRoutineLoadJob.java:732)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.load.routineload.KafkaTaskInfo.hasMoreDataToConsume(KafkaTaskInfo.java:123)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.load.routineload.RoutineLoadTaskScheduler.scheduleOneTask(RoutineLoadTaskScheduler.java:133)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.load.routineload.RoutineLoadTaskScheduler.process(RoutineLoadTaskScheduler.java:111)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.load.routineload.RoutineLoadTaskScheduler.runAfterCatalogReady(RoutineLoadTaskScheduler.java:84)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.common.util.MasterDaemon.runOneCycle(MasterDaemon.java:58) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.common.util.Daemon.run(Daemon.java:116) 
~[doris-fe.jar:1.2-SNAPSHOT]
   2024-03-26 16:20:10,975 WARN (Routine load task scheduler|56) 
[KafkaRoutineLoadJob.hasMoreDataToConsume():738] failed to get latest partition 
offset. errCode = 2, detailMessage = Failed to get latest offsets of kafka 
topic: test-routine-load-csv. error: errCode = 2, detailMessage = failed to get 
latest offsets: [(192.168.12.38)[INTERNAL_ERROR]failed to get latest offset for 
partition: 0, err: Local: Unknown partition]
   org.apache.doris.common.LoadException: errCode = 2, detailMessage = Failed 
to get latest offsets of kafka topic: test-routine-load-csv. error: errCode = 
2, detailMessage = failed to get latest offsets: 
[(192.168.12.38)[INTERNAL_ERROR]failed to get latest offset for partition: 0, 
err: Local: Unknown partition]
           at 
org.apache.doris.common.util.KafkaUtil.getLatestOffsets(KafkaUtil.java:206) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.load.routineload.KafkaRoutineLoadJob.hasMoreDataToConsume(KafkaRoutineLoadJob.java:732)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.load.routineload.KafkaTaskInfo.hasMoreDataToConsume(KafkaTaskInfo.java:123)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.load.routineload.RoutineLoadTaskScheduler.scheduleOneTask(RoutineLoadTaskScheduler.java:133)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.load.routineload.RoutineLoadTaskScheduler.process(RoutineLoadTaskScheduler.java:111)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.load.routineload.RoutineLoadTaskScheduler.runAfterCatalogReady(RoutineLoadTaskScheduler.java:84)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.common.util.MasterDaemon.runOneCycle(MasterDaemon.java:58) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.common.util.Daemon.run(Daemon.java:116) 
~[doris-fe.jar:1.2-SNAPSHOT]
   2024-03-26 16:20:11,224 INFO (tablet scheduler|48) 
[BeLoadRebalancer.selectAlternativeTabletsForCluster():118] get number of low 
load paths: 2, with medium: HDD
   2024-03-26 16:20:11,224 INFO (tablet scheduler|48) 
[BeLoadRebalancer.selectAlternativeTabletsForCluster():220] select alternative 
tablets, medium: HDD, num: 2, detail: [428586, 415522]
   2024-03-26 16:20:11,224 INFO (tablet scheduler|48) 
[TabletScheduler.addTablet():272] Add tablet to pending queue, tablet id: 
415522, state: PENDING, type: BALANCE, balance: BE_BALANCE, priority: LOW, 
tablet size: 0, visible version: -1, committed version: -1
   2024-03-26 16:20:11,225 INFO (tablet scheduler|48) 
[TabletScheduler.addTablet():272] Add tablet to pending queue, tablet id: 
428586, state: PENDING, type: BALANCE, balance: BE_BALANCE, priority: LOW, 
tablet size: 0, visible version: -1, committed version: -1
   2024-03-26 16:20:11,225 INFO (tablet scheduler|48) 
[TabletScheduler.removeTabletCtx():1589] remove the tablet tablet id: 415522, 
status: HEALTHY, state: PENDING, type: BALANCE, balance: BE_BALANCE, priority: 
LOW, tablet size: 5190778, from backend: 10141, src path hash: 
4207446603508968166, visible version: 2, committed version: 2. err: unable to 
find low backend. because: unable to find low backend
   2024-03-26 16:20:11,225 INFO (tablet scheduler|48) 
[TabletScheduler.removeTabletCtx():1589] remove the tablet tablet id: 428586, 
status: HEALTHY, state: PENDING, type: BALANCE, balance: BE_BALANCE, priority: 
LOW, tablet size: 0, from backend: 10057, src path hash: -513894792540911307, 
visible version: 1, committed version: 1. err: unable to find low backend. 
because: unable to find low backend
   2024-03-26 16:20:11,762 INFO (mysql-nio-pool-5|1179) 
[HiveMetaStoreClient.open():639] Trying to connect to metastore with URI 
thrift://phy4.asiainfo.com:9083
   2024-03-26 16:20:11,763 INFO (mysql-nio-pool-5|1179) 
[HiveMetaStoreClient.open():715] Opened a connection to metastore, current 
connections: 1
   2024-03-26 16:20:11,845 INFO (mysql-nio-pool-5|1179) 
[HiveMetaStoreClient.open():779] Connected to metastore.
   2024-03-26 16:20:12,226 INFO (tablet scheduler|48) 
[BeLoadRebalancer.selectAlternativeTabletsForCluster():118] get number of low 
load paths: 2, with medium: HDD
   2024-03-26 16:20:12,226 INFO (tablet scheduler|48) 
[BeLoadRebalancer.selectAlternativeTabletsForCluster():220] select alternative 
tablets, medium: HDD, num: 2, detail: [428576, 459151]
   2024-03-26 16:20:12,226 INFO (tablet scheduler|48) 
[TabletScheduler.addTablet():272] Add tablet to pending queue, tablet id: 
459151, state: PENDING, type: BALANCE, balance: BE_BALANCE, priority: LOW, 
tablet size: 0, visible version: -1, committed version: -1
   2024-03-26 16:20:12,226 INFO (tablet scheduler|48) 
[TabletScheduler.addTablet():272] Add tablet to pending queue, tablet id: 
428576, state: PENDING, type: BALANCE, balance: BE_BALANCE, priority: LOW, 
tablet size: 0, visible version: -1, committed version: -1
   2024-03-26 16:20:12,226 INFO (tablet scheduler|48) 
[TabletScheduler.removeTabletCtx():1589] remove the tablet tablet id: 459151, 
status: HEALTHY, state: PENDING, type: BALANCE, balance: BE_BALANCE, priority: 
LOW, tablet size: 0, from backend: 10173, src path hash: 2700166947358005402, 
visible version: 1, committed version: 1. err: unable to find low backend. 
because: unable to find low backend
   2024-03-26 16:20:12,227 INFO (tablet scheduler|48) 
[TabletScheduler.removeTabletCtx():1589] remove the tablet tablet id: 428576, 
status: HEALTHY, state: PENDING, type: BALANCE, balance: BE_BALANCE, priority: 
LOW, tablet size: 0, from backend: 10141, src path hash: 4207446603508968166, 
visible version: 1, committed version: 1. err: unable to find low backend. 
because: unable to find low backend
   2024-03-26 16:20:12,373 INFO (mysql-nio-pool-5|1179) 
[HiveMetaStoreClient.close():809] Closed a connection to metastore, current 
connections: 0
   2024-03-26 16:20:12,376 INFO (mysql-nio-pool-5|1179) 
[HiveMetaStoreClient.open():639] Trying to connect to metastore with URI 
thrift://phy4.asiainfo.com:9083
   2024-03-26 16:20:12,377 INFO (mysql-nio-pool-5|1179) 
[HiveMetaStoreClient.open():715] Opened a connection to metastore, current 
connections: 1
   2024-03-26 16:20:12,377 INFO (mysql-nio-pool-5|1179) 
[HiveMetaStoreClient.open():779] Connected to metastore.
   2024-03-26 16:20:12,877 INFO (mysql-nio-pool-5|1179) 
[HiveMetaStoreClient.close():809] Closed a connection to metastore, current 
connections: 0
   2024-03-26 16:20:12,877 WARN (mysql-nio-pool-5|1179) 
[StmtExecutor.analyze():1029] Analyze failed. stmt[46, 
97cb294a062a4c43-8317d2cc2c6634c4]
   org.apache.doris.datasource.HMSClientException: failed to get table 
hive_test in db hivetest from hms client. reason: 
org.apache.hadoop.hive.metastore.api.MetaException: 
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=root, access=EXECUTE, 
inode="/warehouse/tablespace/managed/hive":ocdp:ocdp:drwx------
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:496)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:412)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:323)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:360)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:239)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:703)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1858)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1876)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:718)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:112)
           at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3368)
           at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1210)
           at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:1041)
           at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:592)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:560)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:544)
           at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1077)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1020)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:948)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
           at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2952)
   
           at 
org.apache.doris.datasource.hive.PooledHiveMetaStoreClient.getTable(PooledHiveMetaStoreClient.java:200)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.catalog.external.HMSExternalTable.makeSureInitialized(HMSExternalTable.java:158)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.catalog.external.HMSExternalTable.isSupportedHmsTable(HMSExternalTable.java:147)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.analysis.Analyzer.resolveTableRef(Analyzer.java:835) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.analysis.FromClause.analyze(FromClause.java:132) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.analysis.SelectStmt.analyze(SelectStmt.java:505) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.qe.StmtExecutor.analyzeAndGenerateQueryPlan(StmtExecutor.java:1103)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.qe.StmtExecutor.analyze(StmtExecutor.java:1012) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.qe.StmtExecutor.executeByLegacy(StmtExecutor.java:703) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:475) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:443) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:435) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.qe.ConnectProcessor.dispatch(ConnectProcessor.java:584) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.qe.ConnectProcessor.processOnce(ConnectProcessor.java:841) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.mysql.ReadListener.lambda$handleEvent$0(ReadListener.java:52) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[?:1.8.0_271]
           at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[?:1.8.0_271]
           at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_271]
   Caused by: java.lang.RuntimeException
           at 
org.apache.doris.catalog.HiveMetaStoreClientHelper.ugiDoAs(HiveMetaStoreClientHelper.java:945)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.datasource.hive.PooledHiveMetaStoreClient.ugiDoAs(PooledHiveMetaStoreClient.java:475)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.datasource.hive.PooledHiveMetaStoreClient.getTable(PooledHiveMetaStoreClient.java:194)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           ... 17 more
   Caused by: org.apache.hadoop.hive.metastore.api.MetaException: 
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=root, access=EXECUTE, 
inode="/warehouse/tablespace/managed/hive":ocdp:ocdp:drwx------
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:496)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:412)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:323)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:360)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:239)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:703)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1858)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1876)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:718)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:112)
           at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3368)
           at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1210)
           at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:1041)
           at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:592)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:560)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:544)
           at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1077)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1020)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:948)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
           at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2952)
   
           at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result$get_table_req_resultStandardScheme.read(ThriftHiveMetastore.java)
 ~[hive-catalog-shade-1.0.3.jar:1.0.3]
           at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result$get_table_req_resultStandardScheme.read(ThriftHiveMetastore.java)
 ~[hive-catalog-shade-1.0.3.jar:1.0.3]
           at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_table_req_result.read(ThriftHiveMetastore.java)
 ~[hive-catalog-shade-1.0.3.jar:1.0.3]
           at 
shade.doris.hive.org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
 ~[hive-catalog-shade-1.0.3.jar:1.0.3]
           at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table_req(ThriftHiveMetastore.java:2079)
 ~[hive-catalog-shade-1.0.3.jar:1.0.3]
           at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table_req(ThriftHiveMetastore.java:2066)
 ~[hive-catalog-shade-1.0.3.jar:1.0.3]
           at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1799)
 ~[doris-fe.jar:1.0.3]
           at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1788)
 ~[doris-fe.jar:1.0.3]
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[?:1.8.0_271]
           at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[?:1.8.0_271]
           at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_271]
           at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_271]
           at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:208)
 ~[hive-catalog-shade-1.0.3.jar:1.0.3]
           at com.sun.proxy.$Proxy116.getTable(Unknown Source) ~[?:?]
           at 
org.apache.doris.datasource.hive.PooledHiveMetaStoreClient.lambda$getTable$7(PooledHiveMetaStoreClient.java:194)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at java.security.AccessController.doPrivileged(Native Method) 
~[?:1.8.0_271]
           at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_271]
           at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
 ~[hadoop-common-3.3.6.jar:?]
           at 
org.apache.doris.catalog.HiveMetaStoreClientHelper.ugiDoAs(HiveMetaStoreClientHelper.java:940)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.datasource.hive.PooledHiveMetaStoreClient.ugiDoAs(PooledHiveMetaStoreClient.java:475)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.datasource.hive.PooledHiveMetaStoreClient.getTable(PooledHiveMetaStoreClient.java:194)
 ~[doris-fe.jar:1.2-SNAPSHOT]
           ... 17 more
   2024-03-26 16:20:12,879 WARN (mysql-nio-pool-5|1179) 
[StmtExecutor.executeByLegacy():810] execute Exception. stmt[46, 
97cb294a062a4c43-8317d2cc2c6634c4]
   org.apache.doris.common.AnalysisException: errCode = 2, detailMessage = 
Unexpected exception: failed to get table hive_test in db hivetest from hms 
client. reason: org.apache.hadoop.hive.metastore.api.MetaException: 
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=root, access=EXECUTE, 
inode="/warehouse/tablespace/managed/hive":ocdp:ocdp:drwx------
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:496)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:412)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:323)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:360)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:239)
           at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:703)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1858)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1876)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:718)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:112)
           at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3368)
           at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1210)
           at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:1041)
           at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:592)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:560)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:544)
           at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1077)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1020)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:948)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
           at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2952)
   
           at org.apache.doris.qe.StmtExecutor.analyze(StmtExecutor.java:1030) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.qe.StmtExecutor.executeByLegacy(StmtExecutor.java:703) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:475) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:443) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:435) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.qe.ConnectProcessor.dispatch(ConnectProcessor.java:584) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.qe.ConnectProcessor.processOnce(ConnectProcessor.java:841) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
org.apache.doris.mysql.ReadListener.lambda$handleEvent$0(ReadListener.java:52) 
~[doris-fe.jar:1.2-SNAPSHOT]
           at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[?:1.8.0_271]
           at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[?:1.8.0_271]
           at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_271]
   2024-03-26 16:20:13,227 INFO (tablet scheduler|48) 
[BeLoadRebalancer.selectAlternativeTabletsForCluster():118] get number of low 
load paths: 2, with medium: HDD
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

