Aidon-lyd opened a new issue, #13720: URL: https://github.com/apache/dolphinscheduler/issues/13720
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.

### What happened

```
[LOG-PATH]: /opt/install/dolphinscheduler-3.1.1/worker-server/logs/20230310/8831344750656_2-132-250.log, [HOST]: Host{address='172.24.86.98:1234', ip='172.24.86.98', port=1234}
[INFO] 2023-03-10 05:16:34.016 +0000 - Begin to pulling task
[INFO] 2023-03-10 05:16:34.017 +0000 - Begin to initialize task
[INFO] 2023-03-10 05:16:34.017 +0000 - Set task startTime: Fri Mar 10 05:16:34 UTC 2023
[INFO] 2023-03-10 05:16:34.018 +0000 - Set task envFile: /opt/install/dolphinscheduler-3.1.1/worker-server/conf/dolphinscheduler_env.sh
[INFO] 2023-03-10 05:16:34.018 +0000 - Set task appId: 132_250
[INFO] 2023-03-10 05:16:34.018 +0000 - End initialize task
[INFO] 2023-03-10 05:16:34.018 +0000 - Set task status to TaskExecutionStatus{code=1, desc='running'}
[INFO] 2023-03-10 05:16:34.018 +0000 - TenantCode:hdfs check success
[INFO] 2023-03-10 05:16:34.019 +0000 - ProcessExecDir:/opt/install/dolphinscheduler-3.1.1/data/exec/process/hdfs/7781792440512/8831344750656_2/132/250 check success
[INFO] 2023-03-10 05:16:34.019 +0000 - Resources:{} check success
[INFO] 2023-03-10 05:16:34.019 +0000 - Task plugin: SQL create success
[INFO] 2023-03-10 05:16:34.019 +0000 - Success initialized task plugin instance success
[INFO] 2023-03-10 05:16:34.019 +0000 - Success set taskVarPool: null
[INFO] 2023-03-10 05:16:34.019 +0000 - Full sql parameters: SqlParameters{type='HIVE', datasource=2, sql='show tables;', sqlType=0, sendEmail=null, displayRows=10, limit=0, segmentSeparator=, udfs='5', showType='null', connParams='null', groupId='0', title='null', preStatements=[], postStatements=[]}
[INFO] 2023-03-10 05:16:34.019 +0000 - sql type : HIVE, datasource : 2, sql : show tables; , localParams : [],udfs : 5,showType : null,connParams : null,varPool : [] ,query max result limit 0
[ERROR] 2023-03-10 05:16:34.020 +0000 - sql task error
java.lang.NullPointerException: null
	at org.apache.dolphinscheduler.plugin.task.sql.SqlTask.lambda$buildJarSql$1(SqlTask.java:500)
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
	at org.apache.dolphinscheduler.plugin.task.sql.SqlTask.buildJarSql(SqlTask.java:505)
	at org.apache.dolphinscheduler.plugin.task.sql.SqlTask.createFuncs(SqlTask.java:474)
	at org.apache.dolphinscheduler.plugin.task.sql.SqlTask.handle(SqlTask.java:158)
	at org.apache.dolphinscheduler.server.worker.runner.DefaultWorkerDelayTaskExecuteRunnable.executeTask(DefaultWorkerDelayTaskExecuteRunnable.java:49)
	at org.apache.dolphinscheduler.server.worker.runner.WorkerTaskExecuteRunnable.run(WorkerTaskExecuteRunnable.java:174)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)
	at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:74)
	at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[ERROR] 2023-03-10 05:16:34.020 +0000 - Task execute failed, due to meet an exception
org.apache.dolphinscheduler.plugin.task.api.TaskException: Execute sql task failed
	at org.apache.dolphinscheduler.plugin.task.sql.SqlTask.handle(SqlTask.java:168)
	at org.apache.dolphinscheduler.server.worker.runner.DefaultWorkerDelayTaskExecuteRunnable.executeTask(DefaultWorkerDelayTaskExecuteRunnable.java:49)
	at org.apache.dolphinscheduler.server.worker.runner.WorkerTaskExecuteRunnable.run(WorkerTaskExecuteRunnable.java:174)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)
	at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:74)
	at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException: null
	at org.apache.dolphinscheduler.plugin.task.sql.SqlTask.lambda$buildJarSql$1(SqlTask.java:500)
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
	at org.apache.dolphinscheduler.plugin.task.sql.SqlTask.buildJarSql(SqlTask.java:505)
	at org.apache.dolphinscheduler.plugin.task.sql.SqlTask.createFuncs(SqlTask.java:474)
	at org.apache.dolphinscheduler.plugin.task.sql.SqlTask.handle(SqlTask.java:158)
	... 9 common frames omitted
[INFO] 2023-03-10 05:16:34.020 +0000 - Get a exception when execute the task, will send the task execute result to master, the current task execute result is TaskExecutionStatus{code=6, desc='failure'}
```

### What you expected to happen

The Hive SQL task that uses the UDF should execute successfully.

### How to reproduce

1. Upload the UDF jar to the resource center.
2. Create a Hive UDF in UDF management.
3. Reference the UDF from step 2 in a SQL task.
4. The task always fails with the NullPointerException shown above.

### Anything else

_No response_

### Version

3.1.x

### Are you willing to submit PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]
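The trace points at a lambda inside `SqlTask.buildJarSql` (SqlTask.java:500), i.e. an NPE raised while streaming over the task's UDF resource metadata, which suggests one of the looked-up resource entries (or one of its fields) came back null. Below is a minimal sketch of that failure pattern and a defensive variant; the `UdfResource` class, its `resourceName` field, and both method names are hypothetical illustrations, not DolphinScheduler's actual code.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical stand-in for the UDF resource metadata the SQL task streams over.
class UdfResource {
    final String resourceName; // may be null if the resource lookup failed
    UdfResource(String resourceName) { this.resourceName = resourceName; }
}

public class BuildJarSqlSketch {

    // Pattern resembling the failing lambda: dereferencing a null field inside
    // map() makes collect() propagate a NullPointerException, matching the
    // lambda$buildJarSql$1 -> ReferencePipeline -> collect frames in the log.
    static String buildJarSqlUnsafe(List<UdfResource> udfs) {
        return udfs.stream()
                .map(u -> "add jar " + u.resourceName.trim()) // NPE when resourceName is null
                .collect(Collectors.joining(";"));
    }

    // Defensive variant: skip entries whose resource name is missing.
    static String buildJarSqlSafe(List<UdfResource> udfs) {
        return udfs.stream()
                .filter(u -> u != null && u.resourceName != null)
                .map(u -> "add jar " + u.resourceName.trim())
                .collect(Collectors.joining(";"));
    }

    public static void main(String[] args) {
        List<UdfResource> udfs = Arrays.asList(
                new UdfResource("/udf/my_udf.jar"),
                new UdfResource(null)); // e.g. resource metadata not found

        try {
            buildJarSqlUnsafe(udfs);
        } catch (NullPointerException e) {
            System.out.println("NPE, as in the worker log");
        }
        System.out.println(buildJarSqlSafe(udfs)); // prints: add jar /udf/my_udf.jar
    }
}
```

A real fix would need to establish *why* the resource metadata is null for this UDF (e.g. whether the jar lookup in the resource center fails), rather than only filtering the nulls out.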
