ethan7811 opened a new issue #1362:
URL: https://github.com/apache/incubator-kyuubi/issues/1362


   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before asking
   
   - [X] I have searched in the [issues](https://github.com/apache/incubator-kyuubi/issues?q=is%3Aissue) and found no similar issues.
   
   
   ### Describe the bug
   
   We use Kyuubi as the Hive backend for Redash and connect to it with PyHive to fetch results. Simple SQL statements work fine, but some complex SQL statements always fail to return a result, even though the Kyuubi log indicates the SQL job has finished. While such a query is being computed, the engine log always shows the exception below (see the PyHive sketch at the end of this section):
   ```
   21/11/11 18:05:29 ERROR scheduler.AsyncEventQueue: Listener SparkSQLEngineListener threw an exception
   java.lang.NullPointerException
           at org.apache.kyuubi.engine.spark.monitor.KyuubiStatementMonitor$.$anonfun$insertJobEndTimeAndResult$2(KyuubiStatementMonitor.scala:133)
           at org.apache.kyuubi.Logging.warn(Logging.scala:60)
           at org.apache.kyuubi.Logging.warn$(Logging.scala:58)
           at org.apache.kyuubi.engine.spark.monitor.KyuubiStatementMonitor$.warn(KyuubiStatementMonitor.scala:28)
           at org.apache.kyuubi.engine.spark.monitor.KyuubiStatementMonitor$.insertJobEndTimeAndResult(KyuubiStatementMonitor.scala:133)
           at org.apache.spark.kyuubi.SparkSQLEngineListener.onJobEnd(SparkSQLEngineListener.scala:79)
           at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:39)
           at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28)
           at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
           at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
           at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:117)
           at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:101)
           at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:105)
           at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:105)
           at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
           at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
           at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:100)
           at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.$anonfun$run$1(AsyncEventQueue.scala:96)
           at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1381)
           at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.run(AsyncEventQueue.scala:96)
   ```
   
   The exception above does not interrupt job execution, and we eventually get log output like:
   ```
   21/11/11 18:05:52 INFO operation.ExecuteStatement: Processing xxx's query[f512caa2-0d60-48d1-91f8-f4b3a06c5ee6]: RUNNING_STATE -> FINISHED_STATE, statement --
   xxxxxx, time taken: 44.383 seconds
   21/11/11 18:06:08 INFO service.ThriftFrontendService: Received request of closing SessionHandle [2ad135fe-a481-45e7-a626-76f0fbbea931]
   21/11/11 18:06:08 INFO session.SparkSQLSessionManager: SessionHandle [2ad135fe-a481-45e7-a626-76f0fbbea931] is closed, current opening sessions 0
   21/11/11 18:06:08 INFO service.ThriftFrontendService: Finished closing SessionHandle [2ad135fe-a481-45e7-a626-76f0fbbea931]
   21/11/11 18:06:08 ERROR server.TThreadPoolServer: Thrift error occurred during processing of message.
   org.apache.thrift.transport.TTransportException
           at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
           at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
           at org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:374)
           at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:451)
           at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:433)
           at org.apache.thrift.transport.TSaslServerTransport.read(TSaslServerTransport.java:43)
           at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
           at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:425)
           at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:321)
           at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:225)
           at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
           at org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:36)
           at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:310)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   ```
   Meanwhile, Redash shows: "Error running query: failed communicating with server. Please check your Internet connection and try again."
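   
   For reference, a minimal sketch of how we query Kyuubi from PyHive is below. The host, username, and statement are placeholders, and the Kerberos parameters are an assumption based on the engine configuration listed further down:
   
   ```python
   # Minimal reproduction sketch (placeholders, not our exact code).
   # Assumes PyHive with SASL/Kerberos support installed; port and auth
   # mirror kyuubi.frontend.bind.port and kyuubi.authentication below.
   from pyhive import hive
   
   conn = hive.connect(
       host="kyuubi-server.example.com",  # placeholder for the Kyuubi frontend host
       port=10003,                        # kyuubi.frontend.bind.port
       username="xxx",                    # placeholder user
       auth="KERBEROS",
       kerberos_service_name="hive",      # assumption: matches the principal in kyuubi.kinit.principal
   )
   
   cursor = conn.cursor()
   # Simple statements return fine; for some complex statements the engine
   # log shows FINISHED_STATE but the fetch below never returns to the client.
   cursor.execute("SELECT ...")           # placeholder for the failing complex SQL
   rows = cursor.fetchall()
   print(len(rows))
   cursor.close()
   conn.close()
   ```
   
   The failure surfaces on the client as a lost connection while fetching, which appears to match the TTransportException logged by the Thrift server above.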
   
   ### Affects Version(s)
   
   1.3.0
   
   ### Kyuubi Server Log Output
   
   _No response_
   
   ### Kyuubi Engine Log Output
   
   _No response_
   
   ### Kyuubi Server Configurations
   
   _No response_
   
   ### Kyuubi Engine Configurations
   
   ```yaml
   kyuubi.authentication=KERBEROS
   kyuubi.frontend.bind.host=xxxx
   kyuubi.frontend.bind.port=10003
   kyuubi.ha.enabled=true
   kyuubi.ha.zookeeper.acl.enabled=false
   kyuubi.ha.zookeeper.client.port=2181
   kyuubi.ha.zookeeper.namespace=kyuubi-ha
   kyuubi.ha.zookeeper.quorum=xxxx
   kyuubi.kinit.keytab=/etc/keytabs/hive.keytab
   kyuubi.kinit.principal=xxxxx
   kyuubi.session.engine.login.timeout=PT30M
   kyuubi.session.idle.timeout=PT30M
   kyuubi.operation.idle.timeout=PT1H
   
   ## Spark
   spark.driver.maxResultSize=1g
   spark.driver.memory=2g
   spark.dynamicAllocation.maxExecutors=10
   spark.executor.cores=3
   spark.executor.memory=12G
   spark.submit.deployMode=client
   ```
   
   
   ### Additional context
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!

