sozo4463 opened a new issue, #4336: URL: https://github.com/apache/incubator-seatunnel/issues/4336
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/incubator-seatunnel/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.

### What happened

When using the JDBC source to read from MySQL and write the data to a Kudu table, a NullPointerException is thrown during the write whenever the MySQL data contains a null value.

### SeaTunnel Version

2.3.0

### SeaTunnel Config

```conf
env {
  spark.app.name = "SeaTunnel"
  spark.executor.instances = 2
  spark.executor.cores = 1
  spark.executor.memory = "1g"
  job.mode = "BATCH"
}

source {
  Jdbc {
    driver = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://cdh01:3306/test"
    query = "select * from test"
    result_table_name = "t1"
    user = "root"
    password = "123456"
  }
}

transform {
  sql {
    sql = "select *, from_unixtime(unix_timestamp(),'yyyy-MM-dd HH:mm:ss') as loadtime from t1"
    result_table_name = "res"
  }
}

sink {
  kudu {
    kudu_master = "cdh03:7051"
    kudu_table = "impala::test.test"
    save_mode = "overwrite"
    source_table_name = "res"
  }
}
```

### Running Command

```shell
./bin/start-seatunnel-spark-connector-v2.sh --master yarn --deploy-mode cluster --config ./config/test.conf
```

### Error Exception

```log
java.lang.NullPointerException
	at org.apache.seatunnel.connectors.seatunnel.kudu.kuduclient.KuduOutputFormat.transform(KuduOutputFormat.java:102)
	at org.apache.seatunnel.connectors.seatunnel.kudu.kuduclient.KuduOutputFormat.upsert(KuduOutputFormat.java:130)
	at org.apache.seatunnel.connectors.seatunnel.kudu.kuduclient.KuduOutputFormat.write(KuduOutputFormat.java:156)
	at org.apache.seatunnel.connectors.seatunnel.kudu.sink.KuduSinkWriter.write(KuduSinkWriter.java:53)
	at org.apache.seatunnel.connectors.seatunnel.kudu.sink.KuduSinkWriter.write(KuduSinkWriter.java:33)
	at org.apache.seatunnel.translation.spark.sink.SparkDataWriter.write(SparkDataWriter.java:58)
	at org.apache.seatunnel.translation.spark.sink.SparkDataWriter.write(SparkDataWriter.java:37)
	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:118)
	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$$anonfun$run$3.apply(WriteToDataSourceV2Exec.scala:116)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1442)
	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:146)
	at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:67)
	at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec$$anonfun$doExecute$2.apply(WriteToDataSourceV2Exec.scala:66)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:121)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$11.apply(Executor.scala:407)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1408)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:413)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
```

### Flink or Spark Version

spark-2.4.0-cdh6.3.2

### Java or Scala Version

jdk1.8.0_181

### Screenshots

_No response_

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]
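The trace ends inside `KuduOutputFormat.transform` (KuduOutputFormat.java:102). One plausible cause, purely an assumption since the connector source is not quoted in this report, is that a nullable column value is auto-unboxed (or typed-cast and dereferenced) before being handed to the Kudu row, and unboxing a null wrapper throws exactly this kind of NullPointerException. A minimal self-contained sketch of that failure mode and a null-safe guard (class and method names here are illustrative, not taken from the SeaTunnel code base):

```java
// Demonstrates the suspected failure mode: auto-unboxing a null wrapper
// throws NullPointerException. A null-safe variant checks first so the
// caller can route nulls elsewhere (e.g. a "set null" call on the row).
public class NullUnboxDemo {

    // Mimics a transform step that assumes the column value is non-null:
    // casting to Long and unboxing to long throws NPE when the value is null.
    static long unsafeToLong(Object columnValue) {
        return (Long) columnValue; // NPE here if columnValue == null
    }

    // Null-safe variant: return null (boxed) and let the caller decide
    // how to write a null column instead of blindly unboxing.
    static Long safeToLong(Object columnValue) {
        return columnValue == null ? null : (Long) columnValue;
    }

    public static void main(String[] args) {
        boolean threw = false;
        try {
            unsafeToLong(null);
        } catch (NullPointerException e) {
            threw = true;
        }
        System.out.println("unsafe threw NPE: " + threw);
        System.out.println("safe result: " + safeToLong(null));
    }
}
```

A connector-side fix along these lines would test each value for null before the typed add call and write an explicit SQL NULL to the Kudu row in that case; the details of how the Kudu row API is invoked in `transform` would need to be confirmed against the actual connector source.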
