LiJie20190102 opened a new issue, #9204:
URL: https://github.com/apache/seatunnel/issues/9204

   ### Search before asking
   
   - [x] I had searched in the 
[issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22)
 and found no similar issues.
   
   
   ### What happened
   
   A JDBC (MySQL-to-MySQL) batch job submitted to Spark on YARN fails while initializing the sink with `java.lang.AbstractMethodError: ... JdbcSink.getUserConfigSaveMode() ... is abstract` (full log below). Note that the install path in the log is `seatunnel-2.3.3`, while the loaded jar is `connector-jdbc-2.3.2.jar`.
   
![Image](https://github.com/user-attachments/assets/5a1d7d66-3d98-48c3-a48c-3664fb1a39bb)
   
   ### SeaTunnel Version
   
   2.3.2
   
   
   ### SeaTunnel Config
   
   The config was attached as a screenshot (a plain-text copy is in the Running Command section below):
   
![Image](https://github.com/user-attachments/assets/aebb3200-ff2d-42db-a1d6-5f1f6d195463)
   
   ### Running Command
   
   Job config (`./config/v2.batch.config.template`):
   
   ```conf
   env {
     execution.parallelism = 2
     job.mode = "BATCH"
   }
   
   source {
       Jdbc {
           url = "jdbc:mysql://localhost/test_0419?useUnicode=true&characterEncoding=UTF-8"
           driver = "com.mysql.cj.jdbc.Driver"
           connection_check_timeout_sec = 100
           user = "root"
           password = "111111"
           query = "select * from test_0419.aa limit 16"
       }
   }
   
   sink {
       jdbc {
           url = "jdbc:mysql://localhost/test_0420?useUnicode=true&characterEncoding=UTF-8"
           driver = "com.mysql.cj.jdbc.Driver"
           user = "root"
           password = "111111"
           # Automatically generate SQL statements based on database and table names
           generate_sink_sql = true
           database = test_0420
           table = bb
       }
   }
   ```
   
   Launch command:
   
   ```shell
   bin/start-seatunnel-spark-3-connector-v2.sh --master yarn --deploy-mode client --config ./config/v2.batch.config.template
   ```
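   Unrelated to the crash, but visible in the log: `The partition_column parameter is not configured, and the source parallelism is set to 1`, so `execution.parallelism = 2` has no effect on the JDBC source reads. A sketch of a source block that enables parallel reads, assuming `test_0419.aa` has a numeric column named `id` (the column name here is hypothetical):
   
   ```conf
   source {
       Jdbc {
           url = "jdbc:mysql://localhost/test_0419?useUnicode=true&characterEncoding=UTF-8"
           driver = "com.mysql.cj.jdbc.Driver"
           user = "root"
           password = "111111"
           query = "select * from test_0419.aa"
           # Hypothetical: a numeric column used to split the query into partitions
           partition_column = "id"
           partition_num = 2
       }
   }
   ```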
   
   ### Error Exception
   
   ```log
   25/04/20 21:43:07 INFO YarnClientSchedulerBackend: SchedulerBackend is ready 
for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
   25/04/20 21:43:07 INFO BlockManagerMasterEndpoint: Registering block manager 
wj-test-003:39475 with 366.3 MiB RAM, BlockManagerId(2, wj-test-003, 39475, 
None)
   25/04/20 21:43:07 INFO AbstractPluginDiscovery: Load SeaTunnelSource Plugin 
from /opt/tmp/seatunnel-2.3.3/connectors/seatunnel
   25/04/20 21:43:07 INFO AbstractPluginDiscovery: Discovery plugin jar: Jdbc 
at: file:/opt/tmp/seatunnel-2.3.3/connectors/seatunnel/connector-jdbc-2.3.2.jar
   25/04/20 21:43:07 INFO AbstractPluginDiscovery: Load plugin: 
PluginIdentifier{engineType='seatunnel', pluginType='source', 
pluginName='Jdbc'} from classpath
   25/04/20 21:43:08 INFO JdbcSource: The partition_column parameter is not 
configured, and the source parallelism is set to 1
   25/04/20 21:43:08 INFO SparkRuntimeEnvironment: register plugins 
:[file:/opt/tmp/seatunnel-2.3.3/connectors/seatunnel/connector-jdbc-2.3.2.jar]
   25/04/20 21:43:08 INFO AbstractPluginDiscovery: Load SeaTunnelTransform 
Plugin from /opt/tmp/seatunnel-2.3.3/lib
   25/04/20 21:43:08 INFO SparkRuntimeEnvironment: register plugins :[]
   25/04/20 21:43:08 INFO AbstractPluginDiscovery: Load SeaTunnelSink Plugin 
from /opt/tmp/seatunnel-2.3.3/connectors/seatunnel
   25/04/20 21:43:08 INFO AbstractPluginDiscovery: Discovery plugin jar: jdbc 
at: file:/opt/tmp/seatunnel-2.3.3/connectors/seatunnel/connector-jdbc-2.3.2.jar
   25/04/20 21:43:08 INFO AbstractPluginDiscovery: Load plugin: 
PluginIdentifier{engineType='seatunnel', pluginType='sink', pluginName='jdbc'} 
from classpath
   25/04/20 21:43:08 INFO SparkRuntimeEnvironment: register plugins 
:[file:/opt/tmp/seatunnel-2.3.3/connectors/seatunnel/connector-jdbc-2.3.2.jar]
   25/04/20 21:43:08 INFO SharedState: Setting hive.metastore.warehouse.dir 
('null') to the value of spark.sql.warehouse.dir.
   25/04/20 21:43:08 INFO SharedState: Warehouse path is 
'file:/opt/tmp/seatunnel-2.3.3/spark-warehouse'.
   25/04/20 21:43:08 INFO ServerInfo: Adding filter to /SQL: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
   25/04/20 21:43:08 INFO ServerInfo: Adding filter to /SQL/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
   25/04/20 21:43:08 INFO ServerInfo: Adding filter to /SQL/execution: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
   25/04/20 21:43:08 INFO ServerInfo: Adding filter to /SQL/execution/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
   25/04/20 21:43:08 INFO ServerInfo: Adding filter to /static/sql: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
   Exception in thread "main" java.lang.AbstractMethodError: Method 
org/apache/seatunnel/connectors/seatunnel/jdbc/sink/JdbcSink.getUserConfigSaveMode()Lorg/apache/seatunnel/api/sink/DataSaveMode;
 is abstract
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.sink.JdbcSink.getUserConfigSaveMode(JdbcSink.java)
           at 
org.apache.seatunnel.core.starter.spark.execution.SinkExecuteProcessor.execute(SinkExecuteProcessor.java:113)
           at 
org.apache.seatunnel.core.starter.spark.execution.SparkExecution.execute(SparkExecution.java:74)
           at 
org.apache.seatunnel.core.starter.spark.command.SparkTaskExecuteCommand.execute(SparkTaskExecuteCommand.java:60)
           at org.apache.seatunnel.core.starter.SeaTunnel.run(SeaTunnel.java:40)
           at 
org.apache.seatunnel.core.starter.spark.SeaTunnelSpark.main(SeaTunnelSpark.java:35)
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
           at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at 
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
           at 
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
           at 
org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
           at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
           at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
           at 
org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
           at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
           at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   25/04/20 21:43:10 INFO SparkContext: Invoking stop() from shutdown hook
   25/04/20 21:43:10 INFO SparkUI: Stopped Spark web UI at 
http://wj-test-001:4040
   25/04/20 21:43:10 INFO YarnClientSchedulerBackend: Interrupting monitor 
thread
   25/04/20 21:43:10 INFO YarnClientSchedulerBackend: Shutting down all 
executors
   25/04/20 21:43:10 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each 
executor to shut down
   25/04/20 21:43:10 INFO YarnClientSchedulerBackend: YARN client scheduler 
backend Stopped
   25/04/20 21:43:10 INFO MapOutputTrackerMasterEndpoint: 
MapOutputTrackerMasterEndpoint stopped!
   25/04/20 21:43:10 INFO MemoryStore: MemoryStore cleared
   25/04/20 21:43:10 INFO BlockManager: BlockManager stopped
   25/04/20 21:43:10 INFO BlockManagerMaster: BlockManagerMaster stopped
   25/04/20 21:43:10 INFO 
OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: 
OutputCommitCoordinator stopped!
   25/04/20 21:43:10 INFO SparkContext: Successfully stopped SparkContext
   25/04/20 21:43:10 INFO ShutdownHookManager: Shutdown hook called
   25/04/20 21:43:10 INFO ShutdownHookManager: Deleting directory 
/tmp/spark-1cd76b5f-5a95-45f2-92b2-a182cef8fe2e
   25/04/20 21:43:10 INFO ShutdownHookManager: Deleting directory 
/tmp/spark-2112e9aa-ef78-426a-9426-8e40584fc1fb
   ```
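   An `AbstractMethodError` at a connector entry point usually means a connector jar compiled against one core API version was loaded by a different core version. The log above shows both `seatunnel-2.3.3` install paths and `connector-jdbc-2.3.2.jar`. A minimal sketch of a version-consistency check; the `/tmp/st-demo` layout below is a throwaway stand-in for the real install, not the actual paths:
   
   ```shell
   # Recreate the jar layout seen in the log, in a throwaway directory
   mkdir -p /tmp/st-demo/connectors/seatunnel
   touch /tmp/st-demo/connectors/seatunnel/connector-jdbc-2.3.2.jar
   
   DIST_VERSION=2.3.3   # core/starter version, taken from the install path
   
   # Flag every connector jar whose trailing version differs from the core
   for jar in /tmp/st-demo/connectors/seatunnel/*.jar; do
     name=$(basename "$jar" .jar)   # e.g. connector-jdbc-2.3.2
     ver=${name##*-}                # e.g. 2.3.2
     if [ "$ver" != "$DIST_VERSION" ]; then
       echo "MISMATCH: $name (connector $ver vs core $DIST_VERSION)"
     fi
   done
   # prints: MISMATCH: connector-jdbc-2.3.2 (connector 2.3.2 vs core 2.3.3)
   ```
   
   If a mismatch shows up on the real install, replacing the connector jar with the one from the matching release should resolve the error.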
   
   ### Zeta or Flink or Spark Version
   
   spark 3.3.1
   
   ### Java or Scala Version
   
   java8
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [x] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

