caoes opened a new issue, #13602:
URL: https://github.com/apache/dolphinscheduler/issues/13602

   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and 
found no similar issues.
   
   
   ### What happened
   
   I have a question: is this caused by Hive or by DolphinScheduler?
   The following is the error message:
   [INFO] 2022-09-08 04:25:44.024 org.apache.hive.jdbc.Utils:[318] - Supplied 
authorities: 100.16100.201:10001
   [INFO] 2022-09-08 04:25:44.024 org.apache.hive.jdbc.Utils:[437] - Resolved 
authority: 100.16100.201:10001
   [WARN] 2022-09-08 04:25:45.688 com.zaxxer.hikari.pool.PoolBase:[184] - 
HikariPool-2 - Failed to validate connection 
org.apache.hive.jdbc.HiveConnection@46f96733 
(org.apache.thrift.transport.TTransportException: 
java.net.SocketTimeoutException: Read timed out). Possibly consider using a 
shorter maxLifetime value.
   [ERROR] 2022-09-08 04:25:45.690 
org.apache.dolphinscheduler.plugin.datasource.hive.HiveDataSourceClient:[152] - 
get oneSessionDataSource Connection fail SQLException: HikariPool-2 - 
Connection is not available, request timed out after 30002ms.
   java.sql.SQLTransientConnectionException: HikariPool-2 - Connection is not 
available, request timed out after 30002ms.
   at 
com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:696)
   at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:197)
   at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:162)
   at 
com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:128)
   at 
org.apache.dolphinscheduler.plugin.datasource.hive.HiveDataSourceClient.getConnection(HiveDataSourceClient.java:150)
   at 
org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider.getConnection(DataSourceClientProvider.java:66)
   at 
org.apache.dolphinscheduler.plugin.task.sql.SqlTask.executeFuncAndSql(SqlTask.java:183)
   at 
org.apache.dolphinscheduler.plugin.task.sql.SqlTask.handle(SqlTask.java:154)
   at 
org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:191)
   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   at 
com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
   at 
com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
   at 
com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   at java.lang.Thread.run(Thread.java:748)
   Caused by: java.sql.SQLException: 
org.apache.thrift.transport.TTransportException: 
java.net.SocketTimeoutException: Read timed out
   at 
org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:317)
   at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:250)
   at com.zaxxer.hikari.pool.PoolBase.isConnectionAlive(PoolBase.java:169)
   at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:186)
   ... 14 common frames omitted
   Caused by: org.apache.thrift.transport.TTransportException: 
java.net.SocketTimeoutException: Read timed out
   at 
org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
   at 
org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:376)
   at 
org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:453)
   at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:435)
   at 
org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
   
   ### What you expected to happen
   
   When the workflow runs on a periodic schedule, this problem occurs intermittently. My guess is that the HQL is complex and the Hive cluster is under frequent resource pressure from other jobs, so submitting the job times out. I tried increasing the timeout by adding the Spring datasource parameter `spring.datasource.connection.timeout=600000`, read in code as
   `dataSource.setConnectionTimeout(PropertyUtils.getLong(Constants.SPRING_DATASOURCE_CONNECTION_TIMEOUT));`
   After this change the original problem no longer occurs, but new, intermittent problems appear (listed below).
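   For reference, a minimal sketch of the datasource properties involved. Only the first key is confirmed by this issue; the commented-out key is an assumption, illustrating the "shorter maxLifetime" hint from the HikariCP warning above (the actual property name may differ by DolphinScheduler version):

   ```properties
   # Confirmed in this issue: how long to wait for a pooled connection before
   # SQLTransientConnectionException (HikariCP default is 30000 ms, matching
   # the "timed out after 30002ms" error above). Raised here to 10 minutes.
   spring.datasource.connection.timeout=600000

   # Assumed key name: HikariCP's maxLifetime controls how long a connection
   # may live before the pool retires it; the warning above suggests lowering it.
   # spring.datasource.max.lifetime=1800000
   ```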
   PS: the Hive connection resource leak has been resolved, and the SqlTask now supports batch SQL execution; the final cleanup was changed from `close(rs, ptmt, conn)` to `close(conn)`.
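   The resource-handling change in the PS above can be sketched with a hypothetical helper (`closeQuietly` and `ResourceClose` are illustrative names, not DolphinScheduler code): per-statement resources (`ResultSet`, `PreparedStatement`) are closed as each statement in the batch finishes, and only the shared `Connection` is closed at the end, innermost resources first.

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Hypothetical sketch of the cleanup pattern described above: close each
   // resource in the order given, and never let one failed close() prevent
   // closing the rest (which is how connections leak).
   public class ResourceClose {
       /** Close resources in order, swallowing close() failures. */
       public static void closeQuietly(AutoCloseable... resources) {
           for (AutoCloseable r : resources) {
               if (r == null) continue;
               try {
                   r.close();
               } catch (Exception ignored) {
                   // A failure here must not stop the remaining closes.
               }
           }
       }

       public static void main(String[] args) {
           List<String> order = new ArrayList<>();
           AutoCloseable rs = () -> order.add("rs");
           AutoCloseable stmt = () -> order.add("stmt");
           AutoCloseable conn = () -> order.add("conn");
           // Innermost resources first, the connection last.
           closeQuietly(rs, stmt, conn);
           System.out.println(order); // [rs, stmt, conn]
       }
   }
   ```

   In the batch case, `closeQuietly(rs, stmt)` would run once per statement inside the loop, and `closeQuietly(conn)` once after the loop.
   
   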
   
   java.sql.SQLException: org.apache.thrift.transport.TTransportException: SASL 
authentication not complete
   java.sql.SQLException: org.apache.thrift.transport.TTransportException: 
Cannot read from null inputStream  
   java.sql.SQLException: org.apache.thrift.transport.TTransportException: 
java.net.SocketTimeoutException: Read timed out
   java.sql.SQLException: org.apache.thrift.transport.TTransportException: 
Cannot write to null outputStream 
   java.sql.SQLException: org.apache.thrift.transport.TTransportException: 
java.net.SocketException: Socket closed
   
   ### How to reproduce
   
   no
   
   ### Anything else
   
   no
   
   ### Version
   
   2.0.x
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

