isMrH opened a new issue, #2727: URL: https://github.com/apache/incubator-seatunnel/issues/2727
### Search before asking

- [X] I had searched in the [feature](https://github.com/apache/incubator-seatunnel/issues?q=is%3Aissue+label%3A%22Feature%22) and found no similar feature requirement.

### Description

Adding configuration items prefixed with `jdbc.` to the `Sink-jdbc` plugin does not take effect. No matter how I configure it, the batch size stays at 1000, and the number of connections writing to TiDB is also uncontrollable. The sink documentation does not describe the `jdbc.`-prefixed options, so I do not know whether my configuration is wrong or the options are simply not applied.

```
env {
  spark.app.name = "st_export_test"
  spark.executor.instances = 20
  spark.executor.cores = 4
  spark.executor.memory = "8g"
  spark.sql.catalogImplementation = "hive"
}

source {
  hive {
    pre_sql = """ select * from table """
    result_table_name = "table"
  }
}

transform {
}

sink {
  jdbc {
    driver = "com.mysql.cj.jdbc.Driver"
    saveMode = "update"
    url = "jdbc:mysql://10.32.xx.xx:4000/test?&useConfigs=maxPerformance&useServerPrepStmts=true&prepStmtCacheSqlLimit=2048&prepStmtCacheSize=256&rewriteBatchedStatements=true&allowMultiQueries=true"
    user = "tidb"
    password = "tidb"
    dbTable = "table"
    isolationLevel = "NONE"
    jdbc.partitionColumn = "id"
    jdbc.lowerBound = 1
    jdbc.upperBound = 1000
    jdbc.numPartitions = 10
    jdbc.batchsize = 100
    source_table_name = "table"
  }
}
```

### Usage Scenario

I want to control the number of partitions the JDBC sink uses when writing to TiDB, as well as the batch size.

### Related issues

_No response_

### Are you willing to submit a PR?

- [X] Yes, I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
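For context, Spark's own DataFrameWriter JDBC options use the unprefixed names `batchsize`, `numPartitions`, `partitionColumn`, etc. My expectation is that the sink strips the `jdbc.` prefix from user-supplied keys and forwards the remainder to Spark; a minimal sketch of that pass-through, assuming such behavior (`extract_jdbc_options` is a hypothetical helper for illustration, not SeaTunnel code):

```python
def extract_jdbc_options(config: dict) -> dict:
    """Collect keys carrying the 'jdbc.' prefix and strip the prefix,
    producing the option names Spark's JDBC writer actually expects."""
    prefix = "jdbc."
    return {k[len(prefix):]: v for k, v in config.items() if k.startswith(prefix)}

# Keys taken from the sink block above; non-prefixed keys are left out.
sink_config = {
    "driver": "com.mysql.cj.jdbc.Driver",
    "jdbc.numPartitions": 10,
    "jdbc.batchsize": 100,
}

print(extract_jdbc_options(sink_config))
# {'numPartitions': 10, 'batchsize': 100}
```

If the sink does not perform this kind of pass-through, that would explain why the prefixed options have no effect.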
