CrazyBeeline opened a new issue, #6549:
URL: https://github.com/apache/kyuubi/issues/6549

   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before asking
   
   - [X] I have searched in the [issues](https://github.com/apache/kyuubi/issues?q=is%3Aissue) and found no similar issues.
   
   
   ### Describe the bug
   
   Connecting through beeline with `kyuubi.engine.hive.deploy.mode=local` fails to launch the Hive SQL engine:

   ./beeline -u "jdbc:hive2://hadoop01:2181,hadoop02:2181,hadoop03:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi" -n root1 --hiveconf kyuubi.engine.type=HIVE_SQL --hiveconf kyuubi.engine.hive.deploy.mode=local
   
   
![image](https://github.com/user-attachments/assets/56e61c5f-36cf-4435-8815-cefffc37a827)
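
   When this happens the client only sees the error in the screenshot above; the useful detail is on the server host. Two places to look (the engine log path is taken from the error further down in this report; the server log location under the install dir is an assumption for this deployment):

   ```bash
   # Show the command the server generated, and the tail of the per-user engine log
   grep -A 5 'Launching engine:' /usr/lib/kyuubi/logs/kyuubi-*.out | head -n 20
   tail -n 50 /var/lib/kyuubi/root1/kyuubi-hive-sql-engine.log.2
   ```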
    
   
   
   
   
   ### Launching the engine manually (shell): it works well
   
   /usr/java/jdk1.8.0_351-amd64/bin/java \
        -Xmx1g  \
        -cp /usr/lib/kyuubi/externals/engines/hive/kyuubi-hive-sql-engine_2.12-1.9.1.jar:/usr/lib/hive/conf:/etc/hadoop/conf:/usr/lib/hadoop/etc/hadoop:/usr/lib/hive/lib/*:/usr/lib/kyuubi/jars/commons-collections-3.2.2.jar:/usr/lib/kyuubi/jars/hadoop-client-runtime-3.3.6.jar:/usr/lib/kyuubi/jars/hadoop-client-api-3.3.6.jar: org.apache.kyuubi.engine.hive.HiveSQLEngine \
        --conf kyuubi.session.user=root1 \
        --conf kyuubi.engine.id=b1313aae-c609-4698-bc70-6581168961f8 \
        --conf hive.engine.name=kyuubi_USER_HIVE_SQL_root1_default_b1313aae-c609-4698-bc70-6581168961f8 \
        --conf hive.server2.thrift.resultset.default.fetch.size=1000 \
        --conf kyuubi.backend.engine.exec.pool.keepalive.time=PT1M \
        --conf kyuubi.backend.engine.exec.pool.shutdown.timeout=PT20S \
        --conf kyuubi.backend.engine.exec.pool.size=100 \
        --conf kyuubi.backend.engine.exec.pool.wait.queue.size=100 \
        --conf kyuubi.backend.server.exec.pool.keepalive.time=PT1M \
        --conf kyuubi.backend.server.exec.pool.shutdown.timeout=PT20S \
        --conf kyuubi.backend.server.exec.pool.size=100 \
        --conf kyuubi.backend.server.exec.pool.wait.queue.size=100 \
        --conf kyuubi.batch.application.check.interval=PT10S \
        --conf kyuubi.batch.application.starvation.timeout=PT3M \
        --conf kyuubi.batch.session.idle.timeout=PT6H \
        --conf kyuubi.client.ipAddress=192.168.1.110 \
        --conf kyuubi.client.version=1.9.1 \
        --conf kyuubi.engine.event.json.log.path=/var/lib/kyuubi/engine/event \
        --conf kyuubi.engine.flink.application.jars= \
        --conf kyuubi.engine.flink.extra.classpath= \
        --conf kyuubi.engine.flink.java.options= \
        --conf kyuubi.engine.flink.memory=1g \
        --conf kyuubi.engine.hive.deploy.mode=local \
        --conf kyuubi.engine.hive.event.loggers=JSON \
        --conf kyuubi.engine.hive.extra.classpath= \
        --conf kyuubi.engine.hive.java.options= \
        --conf kyuubi.engine.hive.memory=1g \
        --conf kyuubi.engine.pool.name=kyuubi-engine-pool \
        --conf kyuubi.engine.pool.selectPolicy=RANDOM \
        --conf kyuubi.engine.pool.size=-1 \
        --conf kyuubi.engine.session.initialize.sql= \
        --conf kyuubi.engine.share.level=USER \
        --conf kyuubi.engine.spark.event.loggers=SPARK \
        --conf kyuubi.engine.submit.time=1721366841647 \
        --conf kyuubi.engine.submit.timeout=PT30S \
        --conf kyuubi.engine.type=HIVE_SQL \
        --conf kyuubi.engine.ui.retainedSessions=200 \
        --conf kyuubi.engine.ui.retainedStatements=200 \
        --conf kyuubi.engine.ui.stop.enabled=true \
        --conf kyuubi.engine.yarn.cores=1 \
        --conf kyuubi.engine.yarn.java.options= \
        --conf kyuubi.engine.yarn.memory=1024 \
        --conf kyuubi.engine.yarn.queue=default \
        --conf kyuubi.engine.yarn.submit.timeout=PT1M \
        --conf kyuubi.event.async.pool.keepalive.time=PT1M \
        --conf kyuubi.event.async.pool.size=8 \
        --conf kyuubi.event.async.pool.wait.queue.size=100 \
        --conf kyuubi.frontend.connection.url.use.hostname=true \
        --conf kyuubi.frontend.max.message.size=104857600 \
        --conf kyuubi.frontend.max.worker.threads=999 \
        --conf kyuubi.frontend.min.worker.threads=9 \
        --conf kyuubi.frontend.protocols=THRIFT_BINARY,REST \
        --conf kyuubi.frontend.proxy.http.client.ip.header=X-Real-IP \
        --conf kyuubi.frontend.rest.jetty.stopTimeout=PT10S \
        --conf kyuubi.frontend.rest.max.worker.threads=999 \
        --conf kyuubi.frontend.thrift.binary.ssl.disallowed.protocols=SSLv2,SSLv3 \
        --conf kyuubi.frontend.thrift.binary.ssl.enabled=false \
        --conf kyuubi.frontend.thrift.max.message.size=104857600 \
        --conf kyuubi.frontend.thrift.max.worker.threads=999 \
        --conf kyuubi.frontend.thrift.min.worker.threads=9 \
        --conf kyuubi.frontend.thrift.worker.keepalive.time=PT1M \
        --conf kyuubi.ha.addresses=hadoop01:2181,hadoop02:2181,hadoop03:2181 \
        --conf kyuubi.ha.client.class=org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient \
        --conf kyuubi.ha.engine.ref.id=b1313aae-c609-4698-bc70-6581168961f8 \
        --conf kyuubi.ha.namespace=/kyuubi_1.9.1_USER_HIVE_SQL/root1/default \
        --conf kyuubi.ha.zookeeper.acl.enabled=false \
        --conf kyuubi.ha.zookeeper.auth.type=NONE \
        --conf kyuubi.ha.zookeeper.connection.base.retry.wait=1000 \
        --conf kyuubi.ha.zookeeper.connection.max.retries=3 \
        --conf kyuubi.ha.zookeeper.connection.max.retry.wait=30000 \
        --conf kyuubi.ha.zookeeper.connection.retry.policy=EXPONENTIAL_BACKOFF \
        --conf kyuubi.ha.zookeeper.connection.timeout=15000 \
        --conf kyuubi.ha.zookeeper.engine.auth.type=NONE \
        --conf kyuubi.ha.zookeeper.node.creation.timeout=PT2M \
        --conf kyuubi.ha.zookeeper.session.timeout=60000 \
        --conf kyuubi.metadata.cleaner.enabled=true \
        --conf kyuubi.metadata.cleaner.interval=PT30M \
        --conf kyuubi.metadata.max.age=PT128H \
        --conf kyuubi.metadata.recovery.threads=10 \
        --conf kyuubi.metadata.request.async.retry.enabled=true \
        --conf kyuubi.metadata.request.async.retry.queue.size=65536 \
        --conf kyuubi.metadata.request.async.retry.threads=10 \
        --conf kyuubi.metadata.request.retry.interval=PT5S \
        --conf kyuubi.metadata.store.class=org.apache.kyuubi.server.metadata.jdbc.JDBCMetadataStore \
        --conf kyuubi.metrics.console.interval=PT20S \
        --conf kyuubi.metrics.enabled=false \
        --conf kyuubi.metrics.reporters= \
        --conf kyuubi.operation.query.timeout=3600000 \
        --conf kyuubi.operation.scheduler.pool=fair \
        --conf kyuubi.server.info.provider=ENGINE \
        --conf kyuubi.server.ipAddress=192.168.1.110 \
        --conf kyuubi.session.check.interval=PT5M \
        --conf kyuubi.session.close.on.disconnect=true \
        --conf kyuubi.session.connection.url=hadoop01:10009 \
        --conf kyuubi.session.engine.alive.timeout=PT2M \
        --conf kyuubi.session.engine.check.interval=PT1M \
        --conf kyuubi.session.engine.idle.timeout=PT30M \
        --conf kyuubi.session.engine.initialize.timeout=PT5M \
        --conf kyuubi.session.engine.launch.async=true \
        --conf kyuubi.session.engine.log.timeout=PT24H \
        --conf kyuubi.session.idle.timeout=PT6H \
        --conf kyuubi.session.real.user=root1 \
        --conf spark.cleaner.periodicGC.interval=5min \
        --conf spark.driver.cores=1 \
        --conf spark.driver.maxResultSize=1g \
        --conf spark.dynamicAllocation.cachedExecutorIdleTimeout=30min \
        --conf spark.dynamicAllocation.enabled=true \
        --conf spark.dynamicAllocation.executorAllocationRatio=0.5 \
        --conf spark.dynamicAllocation.executorIdleTimeout=60s \
        --conf spark.dynamicAllocation.initialExecutors=2 \
        --conf spark.dynamicAllocation.maxExecutors=25 \
        --conf spark.dynamicAllocation.minExecutors=2 \
        --conf spark.dynamicAllocation.schedulerBacklogTimeout=1s \
        --conf spark.dynamicAllocation.shuffleTracking.enabled=false \
        --conf spark.dynamicAllocation.shuffleTracking.timeout=30min \
        --conf spark.dynamicAllocation.sustainedSchedulerBacklogTimeout=1s \
        --conf spark.hadoop.cacheConf=false \
        --conf spark.io.compression.lz4.blockSize=128kb \
        --conf spark.master=yarn \
        --conf spark.scheduler.allocation.file=hdfs:///user/spark/conf/kyuubi-fairscheduler.xml \
        --conf spark.scheduler.mode=FAIR \
        --conf spark.shuffle.file.buffer=1m \
        --conf spark.shuffle.io.backLog=8192 \
        --conf spark.shuffle.push.enabled=true \
        --conf spark.shuffle.service.enabled=true \
        --conf spark.shuffle.service.index.cache.size=100m \
        --conf spark.shuffle.service.port=17337 \
        --conf spark.shuffle.service.removeShuffle=false \
        --conf spark.sql.adaptive.advisoryPartitionSizeInBytes=128M \
        --conf spark.sql.adaptive.autoBroadcastJoinThreshold=10MB \
        --conf spark.sql.adaptive.coalescePartitions.enabled=true \
        --conf spark.sql.adaptive.coalescePartitions.initialPartitionNum=8192 \
        --conf spark.sql.adaptive.coalescePartitions.minPartitionSize=1MB \
        --conf spark.sql.adaptive.coalescePartitions.parallelismFirst=true \
        --conf spark.sql.adaptive.enabled=true \
        --conf spark.sql.adaptive.forceOptimizeSkewedJoin=false \
        --conf spark.sql.adaptive.localShuffleReader.enabled=true \
        --conf spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled=true \
        --conf spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor=0.2 \
        --conf spark.sql.adaptive.skewJoin.enabled=true \
        --conf spark.sql.adaptive.skewJoin.skewedPartitionFactor=5 \
        --conf spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes=256MB \
        --conf spark.sql.autoBroadcastJoinThreshold=10MB \
        --conf spark.sql.hive.convertMetastoreOrc=true \
        --conf spark.sql.hive.metastore.jars=/usr/lib/hive/lib/* \
        --conf spark.sql.hive.metastore.version=3.1.3 \
        --conf spark.sql.orc.filterPushdown=true \
        --conf spark.sql.statistics.fallBackToHdfs=true \
        --conf spark.submit.deployMode=client
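
   This is the same command the server logs under "Launching engine:" below, replayed by hand. One extra check worth trying (a sketch only; the service user name `kyuubi` and the script path are assumptions for this deployment) is replaying it under the same user the Kyuubi server runs as, to rule out environment differences:

   ```bash
   # Sketch: put the server-generated command above verbatim into a script and run it
   # as the Kyuubi service user (user name and script path are assumptions)
   sudo -u kyuubi bash /tmp/launch-hive-engine.sh
   ```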
   
   ### kyuubi.engine.hive.deploy.mode=yarn: it works well
   
   ./beeline -u "jdbc:hive2://hadoop01:2181,hadoop02:2181,hadoop03:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi" -n root1 --hiveconf kyuubi.engine.type=HIVE_SQL --hiveconf kyuubi.engine.hive.deploy.mode=yarn
   
   ### Affects Version(s)
   
   1.9.1
   
   ### Kyuubi Server Log Output
   
   ```logtalk
   2024-07-19 13:27:21.651 INFO KyuubiSessionManager-exec-pool: Thread-630 
org.apache.kyuubi.engine.EngineRef: Launching engine:
   /usr/java/jdk1.8.0_351-amd64/bin/java \
        -Xmx1g  \
        -cp 
/usr/lib/kyuubi/externals/engines/hive/kyuubi-hive-sql-engine_2.12-1.9.1.jar:/usr/lib/hive/conf:/etc/hadoop/conf:/usr/lib/hadoop/etc/hadoop:/usr/lib/hive/lib/*:/usr/lib/kyuubi/jars/commons-collections-3.2.2.jar:/usr/lib/kyuubi/jars/hadoop-client-runtime-3.3.6.jar:/usr/lib/kyuubi/jars/hadoop-client-api-3.3.6.jar:
 org.apache.kyuubi.engine.hive.HiveSQLEngine \
        --conf kyuubi.session.user=root1 \
        --conf kyuubi.engine.id=b1313aae-c609-4698-bc70-6581168961f8 \
        --conf 
hive.engine.name=kyuubi_USER_HIVE_SQL_root1_default_b1313aae-c609-4698-bc70-6581168961f8
 \
        --conf hive.server2.thrift.resultset.default.fetch.size=1000 \
        --conf kyuubi.backend.engine.exec.pool.keepalive.time=PT1M \
        --conf kyuubi.backend.engine.exec.pool.shutdown.timeout=PT20S \
        --conf kyuubi.backend.engine.exec.pool.size=100 \
        --conf kyuubi.backend.engine.exec.pool.wait.queue.size=100 \
        --conf kyuubi.backend.server.exec.pool.keepalive.time=PT1M \
        --conf kyuubi.backend.server.exec.pool.shutdown.timeout=PT20S \
        --conf kyuubi.backend.server.exec.pool.size=100 \
        --conf kyuubi.backend.server.exec.pool.wait.queue.size=100 \
        --conf kyuubi.batch.application.check.interval=PT10S \
        --conf kyuubi.batch.application.starvation.timeout=PT3M \
        --conf kyuubi.batch.session.idle.timeout=PT6H \
        --conf kyuubi.client.ipAddress=192.168.1.110 \
        --conf kyuubi.client.version=1.9.1 \
        --conf kyuubi.engine.event.json.log.path=/var/lib/kyuubi/engine/event \
        --conf kyuubi.engine.flink.application.jars= \
        --conf kyuubi.engine.flink.extra.classpath= \
        --conf kyuubi.engine.flink.java.options= \
        --conf kyuubi.engine.flink.memory=1g \
        --conf kyuubi.engine.hive.deploy.mode=local \
        --conf kyuubi.engine.hive.event.loggers=JSON \
        --conf kyuubi.engine.hive.extra.classpath= \
        --conf kyuubi.engine.hive.java.options= \
        --conf kyuubi.engine.hive.memory=1g \
        --conf kyuubi.engine.pool.name=kyuubi-engine-pool \
        --conf kyuubi.engine.pool.selectPolicy=RANDOM \
        --conf kyuubi.engine.pool.size=-1 \
        --conf kyuubi.engine.session.initialize.sql= \
        --conf kyuubi.engine.share.level=USER \
        --conf kyuubi.engine.spark.event.loggers=SPARK \
        --conf kyuubi.engine.submit.time=1721366841647 \
        --conf kyuubi.engine.submit.timeout=PT30S \
        --conf kyuubi.engine.type=HIVE_SQL \
        --conf kyuubi.engine.ui.retainedSessions=200 \
        --conf kyuubi.engine.ui.retainedStatements=200 \
        --conf kyuubi.engine.ui.stop.enabled=true \
        --conf kyuubi.engine.yarn.cores=1 \
        --conf kyuubi.engine.yarn.java.options= \
        --conf kyuubi.engine.yarn.memory=1024 \
        --conf kyuubi.engine.yarn.queue=default \
        --conf kyuubi.engine.yarn.submit.timeout=PT1M \
        --conf kyuubi.event.async.pool.keepalive.time=PT1M \
        --conf kyuubi.event.async.pool.size=8 \
        --conf kyuubi.event.async.pool.wait.queue.size=100 \
        --conf kyuubi.frontend.connection.url.use.hostname=true \
        --conf kyuubi.frontend.max.message.size=104857600 \
        --conf kyuubi.frontend.max.worker.threads=999 \
        --conf kyuubi.frontend.min.worker.threads=9 \
        --conf kyuubi.frontend.protocols=THRIFT_BINARY,REST \
        --conf kyuubi.frontend.proxy.http.client.ip.header=X-Real-IP \
        --conf kyuubi.frontend.rest.jetty.stopTimeout=PT10S \
        --conf kyuubi.frontend.rest.max.worker.threads=999 \
        --conf 
kyuubi.frontend.thrift.binary.ssl.disallowed.protocols=SSLv2,SSLv3 \
        --conf kyuubi.frontend.thrift.binary.ssl.enabled=false \
        --conf kyuubi.frontend.thrift.max.message.size=104857600 \
        --conf kyuubi.frontend.thrift.max.worker.threads=999 \
        --conf kyuubi.frontend.thrift.min.worker.threads=9 \
        --conf kyuubi.frontend.thrift.worker.keepalive.time=PT1M \
        --conf kyuubi.ha.addresses=hadoop01:2181,hadoop02:2181,hadoop03:2181 \
        --conf 
kyuubi.ha.client.class=org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient
 \
        --conf kyuubi.ha.engine.ref.id=b1313aae-c609-4698-bc70-6581168961f8 \
        --conf kyuubi.ha.namespace=/kyuubi_1.9.1_USER_HIVE_SQL/root1/default \
        --conf kyuubi.ha.zookeeper.acl.enabled=false \
        --conf kyuubi.ha.zookeeper.auth.type=NONE \
        --conf kyuubi.ha.zookeeper.connection.base.retry.wait=1000 \
        --conf kyuubi.ha.zookeeper.connection.max.retries=3 \
        --conf kyuubi.ha.zookeeper.connection.max.retry.wait=30000 \
        --conf kyuubi.ha.zookeeper.connection.retry.policy=EXPONENTIAL_BACKOFF \
        --conf kyuubi.ha.zookeeper.connection.timeout=15000 \
        --conf kyuubi.ha.zookeeper.engine.auth.type=NONE \
        --conf kyuubi.ha.zookeeper.node.creation.timeout=PT2M \
        --conf kyuubi.ha.zookeeper.session.timeout=60000 \
        --conf kyuubi.metadata.cleaner.enabled=true \
        --conf kyuubi.metadata.cleaner.interval=PT30M \
        --conf kyuubi.metadata.max.age=PT128H \
        --conf kyuubi.metadata.recovery.threads=10 \
        --conf kyuubi.metadata.request.async.retry.enabled=true \
        --conf kyuubi.metadata.request.async.retry.queue.size=65536 \
        --conf kyuubi.metadata.request.async.retry.threads=10 \
        --conf kyuubi.metadata.request.retry.interval=PT5S \
        --conf 
kyuubi.metadata.store.class=org.apache.kyuubi.server.metadata.jdbc.JDBCMetadataStore
 \
        --conf kyuubi.metrics.console.interval=PT20S \
        --conf kyuubi.metrics.enabled=false \
        --conf kyuubi.metrics.reporters= \
        --conf kyuubi.operation.query.timeout=3600000 \
        --conf kyuubi.operation.scheduler.pool=fair \
        --conf kyuubi.server.info.provider=ENGINE \
        --conf kyuubi.server.ipAddress=192.168.1.110 \
        --conf kyuubi.session.check.interval=PT5M \
        --conf kyuubi.session.close.on.disconnect=true \
        --conf kyuubi.session.connection.url=hadoop01:10009 \
        --conf kyuubi.session.engine.alive.timeout=PT2M \
        --conf kyuubi.session.engine.check.interval=PT1M \
        --conf kyuubi.session.engine.idle.timeout=PT30M \
        --conf kyuubi.session.engine.initialize.timeout=PT5M \
        --conf kyuubi.session.engine.launch.async=true \
        --conf kyuubi.session.engine.log.timeout=PT24H \
        --conf kyuubi.session.idle.timeout=PT6H \
        --conf kyuubi.session.real.user=root1 \
        --conf spark.cleaner.periodicGC.interval=5min \
        --conf spark.driver.cores=1 \
        --conf spark.driver.maxResultSize=1g \
        --conf spark.dynamicAllocation.cachedExecutorIdleTimeout=30min \
        --conf spark.dynamicAllocation.enabled=true \
        --conf spark.dynamicAllocation.executorAllocationRatio=0.5 \
        --conf spark.dynamicAllocation.executorIdleTimeout=60s \
        --conf spark.dynamicAllocation.initialExecutors=2 \
        --conf spark.dynamicAllocation.maxExecutors=25 \
        --conf spark.dynamicAllocation.minExecutors=2 \
        --conf spark.dynamicAllocation.schedulerBacklogTimeout=1s \
        --conf spark.dynamicAllocation.shuffleTracking.enabled=false \
        --conf spark.dynamicAllocation.shuffleTracking.timeout=30min \
        --conf spark.dynamicAllocation.sustainedSchedulerBacklogTimeout=1s \
        --conf spark.hadoop.cacheConf=false \
        --conf spark.io.compression.lz4.blockSize=128kb \
        --conf spark.master=yarn \
        --conf 
spark.scheduler.allocation.file=hdfs:///user/spark/conf/kyuubi-fairscheduler.xml
 \
        --conf spark.scheduler.mode=FAIR \
        --conf spark.shuffle.file.buffer=1m \
        --conf spark.shuffle.io.backLog=8192 \
        --conf spark.shuffle.push.enabled=true \
        --conf spark.shuffle.service.enabled=true \
        --conf spark.shuffle.service.index.cache.size=100m \
        --conf spark.shuffle.service.port=17337 \
        --conf spark.shuffle.service.removeShuffle=false \
        --conf spark.sql.adaptive.advisoryPartitionSizeInBytes=128M \
        --conf spark.sql.adaptive.autoBroadcastJoinThreshold=10MB \
        --conf spark.sql.adaptive.coalescePartitions.enabled=true \
        --conf spark.sql.adaptive.coalescePartitions.initialPartitionNum=8192 \
        --conf spark.sql.adaptive.coalescePartitions.minPartitionSize=1MB \
        --conf spark.sql.adaptive.coalescePartitions.parallelismFirst=true \
        --conf spark.sql.adaptive.enabled=true \
        --conf spark.sql.adaptive.forceOptimizeSkewedJoin=false \
        --conf spark.sql.adaptive.localShuffleReader.enabled=true \
        --conf 
spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled=true \
        --conf spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor=0.2 \
        --conf spark.sql.adaptive.skewJoin.enabled=true \
        --conf spark.sql.adaptive.skewJoin.skewedPartitionFactor=5 \
        --conf 
spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes=256MB \
        --conf spark.sql.autoBroadcastJoinThreshold=10MB \
        --conf spark.sql.hive.convertMetastoreOrc=true \
        --conf spark.sql.hive.metastore.jars=/usr/lib/hive/lib/* \
        --conf spark.sql.hive.metastore.version=3.1.3 \
        --conf spark.sql.orc.filterPushdown=true \
        --conf spark.sql.statistics.fallBackToHdfs=true \
        --conf spark.submit.deployMode=client
   2024-07-19 13:27:21.652 INFO KyuubiSessionManager-exec-pool: Thread-630 
org.apache.kyuubi.engine.ProcBuilder: Logging to 
/var/lib/kyuubi/root1/kyuubi-hive-sql-engine.log.2
   2024-07-19 13:27:22.763 INFO Curator-Framework-0 
org.apache.kyuubi.shaded.curator.framework.imps.CuratorFrameworkImpl: 
backgroundOperationsLoop exiting
   2024-07-19 13:27:22.872 INFO KyuubiSessionManager-exec-pool: 
Thread-630-EventThread org.apache.kyuubi.shaded.zookeeper.ClientCnxn: 
EventThread shut down for session: 0x200000139453541
   2024-07-19 13:27:22.872 INFO KyuubiSessionManager-exec-pool: Thread-630 
org.apache.kyuubi.shaded.zookeeper.ZooKeeper: Session: 0x200000139453541 closed
   2024-07-19 13:27:22.876 INFO KyuubiSessionManager-exec-pool: Thread-630 
org.apache.kyuubi.operation.LaunchEngine: Processing root1's 
query[d093f68e-7596-4c81-a0e0-281b1d9c3c71]: RUNNING_STATE -> ERROR_STATE, time 
taken: 1.344 seconds
   2024-07-19 13:27:23.008 INFO KyuubiTBinaryFrontendHandler-Pool: Thread-571 
org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Received request of 
closing SessionHandle [b1313aae-c609-4698-bc70-6581168961f8]
   2024-07-19 13:27:23.009 INFO KyuubiTBinaryFrontendHandler-Pool: Thread-571 
org.apache.kyuubi.session.KyuubiSessionManager: root1's KyuubiSessionImpl with 
SessionHandle [b1313aae-c609-4698-bc70-6581168961f8] is closed, current opening 
sessions 2
   2024-07-19 13:27:23.009 INFO KyuubiTBinaryFrontendHandler-Pool: Thread-571 
org.apache.kyuubi.operation.LaunchEngine: Processing root1's 
query[d093f68e-7596-4c81-a0e0-281b1d9c3c71]: ERROR_STATE -> CLOSED_STATE, time 
taken: 1.477 seconds
   2024-07-19 13:27:23.013 INFO KyuubiTBinaryFrontendHandler-Pool: Thread-571 
org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Finished closing 
SessionHandle [b1313aae-c609-4698-bc70-6581168961f8]
   ```
   
   
   ### Kyuubi Engine Log Output
   
   ```logtalk
   2024-07-19 13:27:21.651 INFO KyuubiSessionManager-exec-pool: Thread-630 
org.apache.kyuubi.engine.EngineRef: Launching engine:
   /usr/java/jdk1.8.0_351-amd64/bin/java \
        -Xmx1g  \
        -cp 
/usr/lib/kyuubi/externals/engines/hive/kyuubi-hive-sql-engine_2.12-1.9.1.jar:/usr/lib/hive/conf:/etc/hadoop/conf:/usr/lib/hadoop/etc/hadoop:/usr/lib/hive/lib/*:/usr/lib/kyuubi/jars/commons-collections-3.2.2.jar:/usr/lib/kyuubi/jars/hadoop-client-runtime-3.3.6.jar:/usr/lib/kyuubi/jars/hadoop-client-api-3.3.6.jar:
 org.apache.kyuubi.engine.hive.HiveSQLEngine \
        --conf kyuubi.session.user=root1 \
        --conf kyuubi.engine.id=b1313aae-c609-4698-bc70-6581168961f8 \
        --conf 
hive.engine.name=kyuubi_USER_HIVE_SQL_root1_default_b1313aae-c609-4698-bc70-6581168961f8
 \
        --conf hive.server2.thrift.resultset.default.fetch.size=1000 \
        --conf kyuubi.backend.engine.exec.pool.keepalive.time=PT1M \
        --conf kyuubi.backend.engine.exec.pool.shutdown.timeout=PT20S \
        --conf kyuubi.backend.engine.exec.pool.size=100 \
        --conf kyuubi.backend.engine.exec.pool.wait.queue.size=100 \
        --conf kyuubi.backend.server.exec.pool.keepalive.time=PT1M \
        --conf kyuubi.backend.server.exec.pool.shutdown.timeout=PT20S \
        --conf kyuubi.backend.server.exec.pool.size=100 \
        --conf kyuubi.backend.server.exec.pool.wait.queue.size=100 \
        --conf kyuubi.batch.application.check.interval=PT10S \
        --conf kyuubi.batch.application.starvation.timeout=PT3M \
        --conf kyuubi.batch.session.idle.timeout=PT6H \
        --conf kyuubi.client.ipAddress=192.168.1.110 \
        --conf kyuubi.client.version=1.9.1 \
        --conf kyuubi.engine.event.json.log.path=/var/lib/kyuubi/engine/event \
        --conf kyuubi.engine.flink.application.jars= \
        --conf kyuubi.engine.flink.extra.classpath= \
        --conf kyuubi.engine.flink.java.options= \
        --conf kyuubi.engine.flink.memory=1g \
        --conf kyuubi.engine.hive.deploy.mode=local \
        --conf kyuubi.engine.hive.event.loggers=JSON \
        --conf kyuubi.engine.hive.extra.classpath= \
        --conf kyuubi.engine.hive.java.options= \
        --conf kyuubi.engine.hive.memory=1g \
        --conf kyuubi.engine.pool.name=kyuubi-engine-pool \
        --conf kyuubi.engine.pool.selectPolicy=RANDOM \
        --conf kyuubi.engine.pool.size=-1 \
        --conf kyuubi.engine.session.initialize.sql= \
        --conf kyuubi.engine.share.level=USER \
        --conf kyuubi.engine.spark.event.loggers=SPARK \
        --conf kyuubi.engine.submit.time=1721366841647 \
        --conf kyuubi.engine.submit.timeout=PT30S \
        --conf kyuubi.engine.type=HIVE_SQL \
        --conf kyuubi.engine.ui.retainedSessions=200 \
        --conf kyuubi.engine.ui.retainedStatements=200 \
        --conf kyuubi.engine.ui.stop.enabled=true \
        --conf kyuubi.engine.yarn.cores=1 \
        --conf kyuubi.engine.yarn.java.options= \
        --conf kyuubi.engine.yarn.memory=1024 \
        --conf kyuubi.engine.yarn.queue=default \
        --conf kyuubi.engine.yarn.submit.timeout=PT1M \
        --conf kyuubi.event.async.pool.keepalive.time=PT1M \
        --conf kyuubi.event.async.pool.size=8 \
        --conf kyuubi.event.async.pool.wait.queue.size=100 \
        --conf kyuubi.frontend.connection.url.use.hostname=true \
        --conf kyuubi.frontend.max.message.size=104857600 \
        --conf kyuubi.frontend.max.worker.threads=999 \
        --conf kyuubi.frontend.min.worker.threads=9 \
        --conf kyuubi.frontend.protocols=THRIFT_BINARY,REST \
        --conf kyuubi.frontend.proxy.http.client.ip.header=X-Real-IP \
        --conf kyuubi.frontend.rest.jetty.stopTimeout=PT10S \
        --conf kyuubi.frontend.rest.max.worker.threads=999 \
        --conf 
kyuubi.frontend.thrift.binary.ssl.disallowed.protocols=SSLv2,SSLv3 \
        --conf kyuubi.frontend.thrift.binary.ssl.enabled=false \
        --conf kyuubi.frontend.thrift.max.message.size=104857600 \
        --conf kyuubi.frontend.thrift.max.worker.threads=999 \
        --conf kyuubi.frontend.thrift.min.worker.threads=9 \
        --conf kyuubi.frontend.thrift.worker.keepalive.time=PT1M \
        --conf kyuubi.ha.addresses=hadoop01:2181,hadoop02:2181,hadoop03:2181 \
        --conf 
kyuubi.ha.client.class=org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient
 \
        --conf kyuubi.ha.engine.ref.id=b1313aae-c609-4698-bc70-6581168961f8 \
        --conf kyuubi.ha.namespace=/kyuubi_1.9.1_USER_HIVE_SQL/root1/default \
        --conf kyuubi.ha.zookeeper.acl.enabled=false \
        --conf kyuubi.ha.zookeeper.auth.type=NONE \
        --conf kyuubi.ha.zookeeper.connection.base.retry.wait=1000 \
        --conf kyuubi.ha.zookeeper.connection.max.retries=3 \
        --conf kyuubi.ha.zookeeper.connection.max.retry.wait=30000 \
        --conf kyuubi.ha.zookeeper.connection.retry.policy=EXPONENTIAL_BACKOFF \
        --conf kyuubi.ha.zookeeper.connection.timeout=15000 \
        --conf kyuubi.ha.zookeeper.engine.auth.type=NONE \
        --conf kyuubi.ha.zookeeper.node.creation.timeout=PT2M \
        --conf kyuubi.ha.zookeeper.session.timeout=60000 \
        --conf kyuubi.metadata.cleaner.enabled=true \
        --conf kyuubi.metadata.cleaner.interval=PT30M \
        --conf kyuubi.metadata.max.age=PT128H \
        --conf kyuubi.metadata.recovery.threads=10 \
        --conf kyuubi.metadata.request.async.retry.enabled=true \
        --conf kyuubi.metadata.request.async.retry.queue.size=65536 \
        --conf kyuubi.metadata.request.async.retry.threads=10 \
        --conf kyuubi.metadata.request.retry.interval=PT5S \
        --conf 
kyuubi.metadata.store.class=org.apache.kyuubi.server.metadata.jdbc.JDBCMetadataStore
 \
        --conf kyuubi.metrics.console.interval=PT20S \
        --conf kyuubi.metrics.enabled=false \
        --conf kyuubi.metrics.reporters= \
        --conf kyuubi.operation.query.timeout=3600000 \
        --conf kyuubi.operation.scheduler.pool=fair \
        --conf kyuubi.server.info.provider=ENGINE \
        --conf kyuubi.server.ipAddress=192.168.1.110 \
        --conf kyuubi.session.check.interval=PT5M \
        --conf kyuubi.session.close.on.disconnect=true \
        --conf kyuubi.session.connection.url=hadoop01:10009 \
        --conf kyuubi.session.engine.alive.timeout=PT2M \
        --conf kyuubi.session.engine.check.interval=PT1M \
        --conf kyuubi.session.engine.idle.timeout=PT30M \
        --conf kyuubi.session.engine.initialize.timeout=PT5M \
        --conf kyuubi.session.engine.launch.async=true \
        --conf kyuubi.session.engine.log.timeout=PT24H \
        --conf kyuubi.session.idle.timeout=PT6H \
        --conf kyuubi.session.real.user=root1 \
        --conf spark.cleaner.periodicGC.interval=5min \
        --conf spark.driver.cores=1 \
        --conf spark.driver.maxResultSize=1g \
        --conf spark.dynamicAllocation.cachedExecutorIdleTimeout=30min \
        --conf spark.dynamicAllocation.enabled=true \
        --conf spark.dynamicAllocation.executorAllocationRatio=0.5 \
        --conf spark.dynamicAllocation.executorIdleTimeout=60s \
        --conf spark.dynamicAllocation.initialExecutors=2 \
        --conf spark.dynamicAllocation.maxExecutors=25 \
        --conf spark.dynamicAllocation.minExecutors=2 \
        --conf spark.dynamicAllocation.schedulerBacklogTimeout=1s \
        --conf spark.dynamicAllocation.shuffleTracking.enabled=false \
        --conf spark.dynamicAllocation.shuffleTracking.timeout=30min \
        --conf spark.dynamicAllocation.sustainedSchedulerBacklogTimeout=1s \
        --conf spark.hadoop.cacheConf=false \
        --conf spark.io.compression.lz4.blockSize=128kb \
        --conf spark.master=yarn \
        --conf 
spark.scheduler.allocation.file=hdfs:///user/spark/conf/kyuubi-fairscheduler.xml
 \
        --conf spark.scheduler.mode=FAIR \
        --conf spark.shuffle.file.buffer=1m \
        --conf spark.shuffle.io.backLog=8192 \
        --conf spark.shuffle.push.enabled=true \
        --conf spark.shuffle.service.enabled=true \
        --conf spark.shuffle.service.index.cache.size=100m \
        --conf spark.shuffle.service.port=17337 \
        --conf spark.shuffle.service.removeShuffle=false \
        --conf spark.sql.adaptive.advisoryPartitionSizeInBytes=128M \
        --conf spark.sql.adaptive.autoBroadcastJoinThreshold=10MB \
        --conf spark.sql.adaptive.coalescePartitions.enabled=true \
        --conf spark.sql.adaptive.coalescePartitions.initialPartitionNum=8192 \
        --conf spark.sql.adaptive.coalescePartitions.minPartitionSize=1MB \
        --conf spark.sql.adaptive.coalescePartitions.parallelismFirst=true \
        --conf spark.sql.adaptive.enabled=true \
        --conf spark.sql.adaptive.forceOptimizeSkewedJoin=false \
        --conf spark.sql.adaptive.localShuffleReader.enabled=true \
        --conf 
spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled=true \
        --conf spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor=0.2 \
        --conf spark.sql.adaptive.skewJoin.enabled=true \
        --conf spark.sql.adaptive.skewJoin.skewedPartitionFactor=5 \
        --conf 
spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes=256MB \
        --conf spark.sql.autoBroadcastJoinThreshold=10MB \
        --conf spark.sql.hive.convertMetastoreOrc=true \
        --conf spark.sql.hive.metastore.jars=/usr/lib/hive/lib/* \
        --conf spark.sql.hive.metastore.version=3.1.3 \
        --conf spark.sql.orc.filterPushdown=true \
        --conf spark.sql.statistics.fallBackToHdfs=true \
        --conf spark.submit.deployMode=client
   2024-07-19 13:27:21.652 INFO KyuubiSessionManager-exec-pool: Thread-630 
org.apache.kyuubi.engine.ProcBuilder: Logging to 
/var/lib/kyuubi/root1/kyuubi-hive-sql-engine.log.2
   2024-07-19 13:27:22.763 INFO Curator-Framework-0 
org.apache.kyuubi.shaded.curator.framework.imps.CuratorFrameworkImpl: 
backgroundOperationsLoop exiting
   2024-07-19 13:27:22.872 INFO KyuubiSessionManager-exec-pool: 
Thread-630-EventThread org.apache.kyuubi.shaded.zookeeper.ClientCnxn: 
EventThread shut down for session: 0x200000139453541
   2024-07-19 13:27:22.872 INFO KyuubiSessionManager-exec-pool: Thread-630 
org.apache.kyuubi.shaded.zookeeper.ZooKeeper: Session: 0x200000139453541 closed
   2024-07-19 13:27:22.876 INFO KyuubiSessionManager-exec-pool: Thread-630 
org.apache.kyuubi.operation.LaunchEngine: Processing root1's 
query[d093f68e-7596-4c81-a0e0-281b1d9c3c71]: RUNNING_STATE -> ERROR_STATE, time 
taken: 1.344 seconds
   Error: Could not find or load main class 
   Error: org.apache.kyuubi.KyuubiSQLException: Failed to detect the root cause, please check /var/lib/kyuubi/root1/kyuubi-hive-sql-engine.log.2 at server side if necessary. The last 10 line(s) of log are:
   Error: Could not find or load main class 
        at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
        at org.apache.kyuubi.engine.ProcBuilder.getError(ProcBuilder.scala:277)
        at org.apache.kyuubi.engine.ProcBuilder.getError$(ProcBuilder.scala:270)
        at org.apache.kyuubi.engine.hive.HiveProcessBuilder.getError(HiveProcessBuilder.scala:38)
        at org.apache.kyuubi.engine.EngineRef.$anonfun$create$1(EngineRef.scala:236)
        at org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient.tryWithLock(ZookeeperDiscoveryClient.scala:166)
        at org.apache.kyuubi.engine.EngineRef.tryWithLock(EngineRef.scala:178)
        at org.apache.kyuubi.engine.EngineRef.create(EngineRef.scala:183)
        at org.apache.kyuubi.engine.EngineRef.$anonfun$getOrCreate$1(EngineRef.scala:317)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.kyuubi.engine.EngineRef.getOrCreate(EngineRef.scala:317)
        at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$2(KyuubiSessionImpl.scala:159)
        at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$2$adapted(KyuubiSessionImpl.scala:133)
        at org.apache.kyuubi.ha.client.DiscoveryClientProvider$.withDiscoveryClient(DiscoveryClientProvider.scala:36)
        at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$1(KyuubiSessionImpl.scala:133)
        at org.apache.kyuubi.session.KyuubiSession.handleSessionException(KyuubiSession.scala:49)
        at org.apache.kyuubi.session.KyuubiSessionImpl.openEngineSession(KyuubiSessionImpl.scala:133)
        at org.apache.kyuubi.operation.LaunchEngine.$anonfun$runInternal$1(LaunchEngine.scala:60)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750) (state=,code=0)
   ```
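
   A note on the error itself: `Error: Could not find or load main class` with nothing after it is exactly what the JVM prints when the main-class argument it receives is empty. A minimal illustration only (reusing the engine jar path from the command above; not claiming this is what the server actually executes):

   ```bash
   # Prints the same bare error as in the engine log: the main-class argument is empty
   java -cp /usr/lib/kyuubi/externals/engines/hive/kyuubi-hive-sql-engine_2.12-1.9.1.jar ""
   ```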
   
   
   ### Kyuubi Server Configurations
   
   ```yaml
   ## Authentication
   kyuubi.authentication NONE
   ## Backend
   kyuubi.backend.engine.exec.pool.keepalive.time PT1M
   kyuubi.backend.engine.exec.pool.shutdown.timeout PT20S
   kyuubi.backend.engine.exec.pool.size 100
   kyuubi.backend.engine.exec.pool.wait.queue.size 100
   kyuubi.backend.server.exec.pool.keepalive.time PT1M
   kyuubi.backend.server.exec.pool.shutdown.timeout PT20S
   kyuubi.backend.server.exec.pool.size 100
   kyuubi.backend.server.exec.pool.wait.queue.size 100
   kyuubi.backend.server.event.json.log.path /var/lib/kyuubi/server/event
   kyuubi.backend.server.event.loggers JSON
   ## Batch
   kyuubi.batch.application.check.interval PT10S
   kyuubi.batch.application.starvation.timeout PT3M
   kyuubi.batch.session.idle.timeout PT6H
   ## Engine
   kyuubi.engine.event.json.log.path /var/lib/kyuubi/engine/event
   # yarn application mode
   kyuubi.engine.flink.application.jars
   # yarn session mode
   kyuubi.engine.flink.extra.classpath
   kyuubi.engine.flink.java.options
   kyuubi.engine.flink.memory 1g
   # local: like hive-cli; yarn: Application Master
   kyuubi.engine.hive.deploy.mode LOCAL
   kyuubi.engine.hive.event.loggers JSON
   kyuubi.engine.hive.extra.classpath
   kyuubi.engine.hive.java.options
   kyuubi.engine.hive.memory 1g
   kyuubi.engine.pool.name kyuubi-engine-pool
   kyuubi.engine.pool.selectPolicy RANDOM
   kyuubi.engine.pool.size -1
   kyuubi.engine.pool.size.threshold 9
   kyuubi.engine.session.initialize.sql
   kyuubi.engine.share.level USER
   kyuubi.engine.spark.event.loggers SPARK
   kyuubi.engine.submit.timeout PT30S
   kyuubi.engine.type SPARK_SQL
   kyuubi.engine.ui.retainedSessions 200
   kyuubi.engine.ui.retainedStatements 200
   kyuubi.engine.ui.stop.enabled true
   kyuubi.engine.yarn.cores 1
   kyuubi.engine.yarn.java.options
   kyuubi.engine.yarn.memory 1024
   kyuubi.engine.yarn.queue default
   kyuubi.engine.yarn.submit.timeout PT1M
   ## Event
   kyuubi.event.async.pool.keepalive.time PT1M
   kyuubi.event.async.pool.size 8
   kyuubi.event.async.pool.wait.queue.size 100
   ## Frontend
   kyuubi.frontend.advertised.host hadoop01
   kyuubi.frontend.connection.url.use.hostname true
   kyuubi.frontend.max.message.size 104857600
   kyuubi.frontend.max.worker.threads 999
   kyuubi.frontend.min.worker.threads 9
   kyuubi.frontend.protocols THRIFT_BINARY,REST
   kyuubi.frontend.proxy.http.client.ip.header X-Real-IP
   kyuubi.frontend.rest.bind.port 10099
   kyuubi.frontend.rest.jetty.stopTimeout PT10S
   kyuubi.frontend.rest.max.worker.threads 999
   kyuubi.frontend.thrift.binary.bind.port 10009
   kyuubi.frontend.thrift.binary.ssl.disallowed.protocols SSLv2,SSLv3
   kyuubi.frontend.thrift.binary.ssl.enabled false
   kyuubi.frontend.thrift.max.message.size 104857600
   kyuubi.frontend.thrift.max.worker.threads 999
   kyuubi.frontend.thrift.min.worker.threads 9
   kyuubi.frontend.thrift.worker.keepalive.time PT1M
   ## HA
   kyuubi.ha.addresses hadoop01:2181,hadoop02:2181,hadoop03:2181
   kyuubi.ha.client.class org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient
   kyuubi.ha.namespace kyuubi
   kyuubi.ha.zookeeper.acl.enabled false
   kyuubi.ha.zookeeper.auth.type NONE
   kyuubi.ha.zookeeper.connection.base.retry.wait 1000
   kyuubi.ha.zookeeper.connection.max.retries 3
   kyuubi.ha.zookeeper.connection.max.retry.wait 30000
   kyuubi.ha.zookeeper.connection.retry.policy EXPONENTIAL_BACKOFF
   kyuubi.ha.zookeeper.connection.timeout 15000
   kyuubi.ha.zookeeper.engine.auth.type NONE
   kyuubi.ha.zookeeper.node.creation.timeout PT2M
   kyuubi.ha.zookeeper.session.timeout 60000
   ## Metadata
   kyuubi.metadata.cleaner.enabled true
   kyuubi.metadata.cleaner.interval PT30M
   kyuubi.metadata.max.age PT128H
   kyuubi.metadata.recovery.threads 10
   kyuubi.metadata.request.async.retry.enabled true
   kyuubi.metadata.request.async.retry.queue.size 65536
   kyuubi.metadata.request.async.retry.threads 10
   kyuubi.metadata.request.retry.interval PT5S
   kyuubi.metadata.store.class org.apache.kyuubi.server.metadata.jdbc.JDBCMetadataStore
   kyuubi.metadata.store.jdbc.database.schema.init true
   kyuubi.metadata.store.jdbc.database.type MYSQL
   kyuubi.metadata.store.jdbc.driver com.mysql.cj.jdbc.Driver
   kyuubi.metadata.store.jdbc.password qishu1@3
   kyuubi.metadata.store.jdbc.user root
   kyuubi.metadata.store.jdbc.url jdbc:mysql://192.168.1.110:13306/kyuubi_metadata
   ## Metrics
   kyuubi.metrics.console.interval PT20S
   kyuubi.metrics.enabled false
   kyuubi.metrics.reporters
   ## Operation
   kyuubi.operation.query.timeout 3600000
   kyuubi.operation.scheduler.pool fair
   ## Server
   kyuubi.server.administrators kyuubi
   kyuubi.server.info.provider ENGINE
   ## Session
   kyuubi.session.check.interval PT5M
   kyuubi.session.close.on.disconnect true
   kyuubi.session.engine.alive.timeout PT2M
   kyuubi.session.engine.check.interval PT1M
   kyuubi.session.engine.idle.timeout PT30M
   kyuubi.session.engine.initialize.timeout PT5M
   kyuubi.session.engine.launch.async true
   kyuubi.session.engine.log.timeout PT24H
   kyuubi.session.idle.timeout PT6H
   ## Spark
   spark.cleaner.periodicGC.interval 5min
   spark.driver.cores 1
   spark.driver.maxResultSize 1g
   spark.dynamicAllocation.cachedExecutorIdleTimeout 30min
   spark.dynamicAllocation.enabled true
   spark.dynamicAllocation.executorAllocationRatio 0.5
   spark.dynamicAllocation.executorIdleTimeout 60s
   spark.dynamicAllocation.initialExecutors 2
   spark.dynamicAllocation.maxExecutors 25
   spark.dynamicAllocation.minExecutors 2
   spark.dynamicAllocation.schedulerBacklogTimeout 1s
   spark.dynamicAllocation.shuffleTracking.enabled false
   spark.dynamicAllocation.shuffleTracking.timeout 30min
   spark.dynamicAllocation.sustainedSchedulerBacklogTimeout 1s
   spark.hadoop.cacheConf false
   spark.io.compression.lz4.blockSize 128kb
   spark.master yarn
   spark.scheduler.allocation.file hdfs:///user/spark/conf/kyuubi-fairscheduler.xml
   spark.scheduler.mode FAIR
   spark.shuffle.file.buffer 1m
   spark.shuffle.io.backLog 8192
   spark.shuffle.push.enabled true
   spark.shuffle.service.enabled true
   spark.shuffle.service.index.cache.size 100m
   spark.shuffle.service.port 17337
   spark.shuffle.service.removeShuffle false
   spark.sql.adaptive.advisoryPartitionSizeInBytes 128M
   spark.sql.adaptive.autoBroadcastJoinThreshold 10MB
   spark.sql.adaptive.coalescePartitions.enabled true
   spark.sql.adaptive.coalescePartitions.initialPartitionNum 8192
   spark.sql.adaptive.coalescePartitions.minPartitionSize 1MB
   spark.sql.adaptive.coalescePartitions.parallelismFirst true
   spark.sql.adaptive.enabled true
   spark.sql.adaptive.forceOptimizeSkewedJoin false
   spark.sql.adaptive.localShuffleReader.enabled true
   spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled true
   spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor 0.2
   spark.sql.adaptive.skewJoin.enabled true
   spark.sql.adaptive.skewJoin.skewedPartitionFactor 5
   spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes 256MB
   spark.sql.autoBroadcastJoinThreshold 10MB
   spark.sql.hive.convertMetastoreOrc true
   spark.sql.hive.metastore.jars /usr/lib/hive/lib/*
   spark.sql.hive.metastore.version 3.1.3
   spark.sql.orc.filterPushdown true
   spark.sql.statistics.fallBackToHdfs true
   spark.submit.deployMode client
   ```
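
   An unverified observation, not a confirmed cause: `kyuubi.engine.hive.extra.classpath` is blank here, and the generated `-cp` value above ends with a trailing `:`. A possible experiment is to give it an explicit value and retry local mode (the conf dir path below is an assumption for this deployment):

   ```bash
   # Experiment only: make the Hive engine extra classpath non-empty, then reconnect
   echo 'kyuubi.engine.hive.extra.classpath /usr/lib/hive/lib/*:/etc/hadoop/conf' \
     >> /usr/lib/kyuubi/conf/kyuubi-defaults.conf
   ```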
   
   
   ### Kyuubi Engine Configurations
   
   ```yaml
   same as the Kyuubi Server Configurations above
   ```
   
   
   ### Additional context
   
   ./beeline -u "jdbc:hive2://hadoop01:2181,hadoop02:2181,hadoop03:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi" -n root --hiveconf kyuubi.engine.type=FLINK_SQL
   
   
   The Flink engine also has the same problem.
   
   ### Are you willing to submit PR?
   
   - [ ] Yes. I would be willing to submit a PR with guidance from the Kyuubi 
community to fix.
   - [X] No. I cannot submit a PR at this time.

