GangLiCN opened a new issue, #6514:
URL: https://github.com/apache/seatunnel/issues/6514

   ### Search before asking
   
   - [X] I had searched in the [issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.
   
   
   ### What happened
   
   The Spark job failed with a `java.lang.IncompatibleClassChangeError`.
   
   ### SeaTunnel Version
   
   2.3.4
   
   ### SeaTunnel Config
   
   ```conf
   env {
     # You can set SeaTunnel environment configuration here
     parallelism = 2
     job.mode = "BATCH"
     checkpoint.interval = 10000
   }
   
   source {
     # This is a example source plugin **only for test and demonstrate the feature source plugin**
     FakeSource {
       parallelism = 2
       result_table_name = "fake"
       row.num = 16
       schema = {
         fields {
           name = "string"
           age = "int"
         }
       }
     }
   
     # If you would like to get more information about how to configure SeaTunnel and see full list of source plugins,
     # please go to https://seatunnel.apache.org/docs/category/source-v2
   }
   
   sink {
     Console {
     }
   
     # If you would like to get more information about how to configure SeaTunnel and see full list of sink plugins,
     # please go to https://seatunnel.apache.org/docs/category/sink-v2
   }
   ```
   
   
   ### Running Command
   
   ```shell
    start-seatunnel-spark-3-connector-v2.cmd --config v2.batch.config.template -m local -e client
   ```
   
   
   ### Error Exception
   
   ```log
    Exception in thread "main" java.lang.IncompatibleClassChangeError: Conflicting default methods: org/apache/spark/sql/connector/write/BatchWrite.useCommitCoordinator org/apache/spark/sql/connector/write/streaming/StreamingWrite.useCommitCoordinator
            at org.apache.seatunnel.translation.spark.sink.SeaTunnelBatchWrite.useCommitCoordinator(SeaTunnelBatchWrite.java)
            at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:374)
            at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2$(WriteToDataSourceV2Exec.scala:359)
            at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.writeWithV2(WriteToDataSourceV2Exec.scala:225)
            at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run(WriteToDataSourceV2Exec.scala:337)
            at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run$(WriteToDataSourceV2Exec.scala:336)
            at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.run(WriteToDataSourceV2Exec.scala:225)
            at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
            at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
            at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
            at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:107)
            at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:125)
            at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201)
            at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108)
            at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
            at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66)
            at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:107)
            at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
            at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:461)
            at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76)
            at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:461)
            at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
            at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
            at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
            at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
            at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
            at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:437)
            at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:98)
            at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:85)
            at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:83)
            at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:142)
            at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:859)
            at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:312)
            at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:248)
            at org.apache.seatunnel.core.starter.spark.execution.SinkExecuteProcessor.execute(SinkExecuteProcessor.java:155)
            at org.apache.seatunnel.core.starter.spark.execution.SparkExecution.execute(SparkExecution.java:71)
            at org.apache.seatunnel.core.starter.spark.command.SparkTaskExecuteCommand.execute(SparkTaskExecuteCommand.java:60)
            at org.apache.seatunnel.core.starter.SeaTunnel.run(SeaTunnel.java:40)
            at org.apache.seatunnel.core.starter.spark.SeaTunnelSpark.main(SeaTunnelSpark.java:35)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
            at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1029)
            at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:194)
            at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:217)
            at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91)
            at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1120)
            at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1129)
            at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   ```
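   
   For context, my own reading of the error (not a confirmed root cause): "Conflicting default methods" is what the JVM raises when a class inherits the same `default` method from two interfaces without overriding it, which typically surfaces at runtime when the class was compiled against interface versions that did not yet conflict. Judging from the stack trace, `SeaTunnelBatchWrite` implements both `BatchWrite` and `StreamingWrite`, and in the Spark version on my classpath both interfaces appear to declare a default `useCommitCoordinator()`. The sketch below uses simplified stand-in interfaces (illustrative only, not the real Spark classes) to show the conflict and the usual resolution of overriding the method explicitly:
   
   ```java
   // Simplified stand-ins for the two Spark write interfaces; names and bodies
   // are illustrative only, not the actual Spark or SeaTunnel code.
   interface BatchWrite {
       default boolean useCommitCoordinator() { return true; }
   }
   
   interface StreamingWrite {
       default boolean useCommitCoordinator() { return true; }
   }
   
   // A class implementing both interfaces must resolve the ambiguity itself.
   // If it was compiled against versions where only one interface declared the
   // default, the JVM throws IncompatibleClassChangeError as soon as
   // useCommitCoordinator() is invoked at runtime.
   class CombinedWrite implements BatchWrite, StreamingWrite {
       @Override
       public boolean useCommitCoordinator() {
           // Explicitly delegate to one parent's default to remove the conflict.
           return BatchWrite.super.useCommitCoordinator();
       }
   }
   ```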
   
   
   ### Zeta or Flink or Spark Version
   
   _No response_
   
   ### Java or Scala Version
   
   java version "1.8.0_391"
   
   ### Screenshots
   
   
![image](https://github.com/apache/seatunnel/assets/9818642/90402ec7-678c-4378-aba0-5ff576e76b4a)
   
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   

