hackereye opened a new issue, #10574:
URL: https://github.com/apache/seatunnel/issues/10574

   ### Search before asking
   
   - [x] I had searched in the 
[issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22)
 and found no similar issues.
   
   
   ### What happened
   
   # [Bug] [connector-jdbc] Data loss when sink fails and job restores in 
exactly-once mode
   
   ## Summary
   
   When the JDBC sink fails during a checkpoint (e.g., due to a schema 
mismatch) and the job automatically restores after the target table is fixed, 
**hundreds of records are lost**. The Kafka source offset advances past the 
uncommitted data, so those records are never re-consumed or written to the 
database.
   
   **Context**: Job has `exactly_once: true`, but the target database 
(StarRocks) does not support XA transactions, so the JDBC sink runs with 
`is_exactly_once: false`. This may affect source-sink checkpoint coordination 
and contribute to the data loss. The sink failure was **intentionally 
simulated** (manual schema change) to verify data consistency on recovery.
   
   ## Environment
   
   - **SeaTunnel version**: 2.3.12
   - **Deploy mode**: local
   - **Job mode**: STREAMING
   - **env.exactly_once**: true (job-level setting)
   - **sink.is_exactly_once**: false (StarRocks does not support XA/2PC, JDBC 
connector falls back)
   - **Target database**: StarRocks (MySQL protocol, port 19030)
   - **Connectors**: Kafka (source) → Sql (transform) → Jdbc (sink)
   - **Checkpoint interval**: 60000 ms
   
   ## Steps to Reproduce
   
   > **Note**: The sink failure was **intentionally simulated** to test data 
consistency during failure and recovery. The database schema was manually 
altered to trigger an INSERT error; this is not a real production incident.
   
   1. Run a Kafka → JDBC streaming job with `exactly_once: true`
   2. Wait until ~20,000 records are written successfully (checkpoints 1–5 
complete)
   3. **Simulate sink failure**: Manually drop or rename a column in the target 
table (e.g., `accout_id`) so that INSERT fails
   4. Trigger checkpoint 6 – the sink fails with `Unknown column 'accout_id' in 
'student_hzf'`
   5. **Fix the schema**: Restore the column in the target table
   6. Let the job auto-restore (SeaTunnel restores the pipeline automatically)
   7. Compare the total record count in the database with the Kafka topic
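Step 7's comparison boils down to simple arithmetic; here is a minimal sketch with the counts from this test run hardcoded (in practice `kafka_produced` would come from the topic's end offsets and `db_count` from a `SELECT COUNT(*)` on the target table):

```python
# Compare produced vs written counts (step 7). Values are hardcoded from
# this test run; in practice they would be queried from Kafka and the DB.
kafka_produced = 25_000   # records produced to the test topic
db_count = 24_661         # row count in the target table

lost = kafka_produced - db_count
print(f"produced={kafka_produced} written={db_count} lost={lost}")
```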
   
   ## Data Verification (Actual Test Result)
   
   | Source | Count | Difference |
   |--------|-------|-------------|
   | Kafka produced | 25,000 | - |
   | Database actual | 24,661 | **339 records lost** |
   
   ## Expected Behavior
   
   - On restore, the job should re-read from the last **successfully 
committed** checkpoint (checkpoint 5)
   - Records between checkpoint 5 and the failure should be re-consumed from 
Kafka and written to the database
   - No data loss; final DB count should match Kafka consumption
   
   ## Actual Behavior
   
   - **339 records lost** (Kafka produced 25,000, database has only 24,661)
   - On restore, Kafka seeks to offset **20339** instead of ~20000 (checkpoint 
5)
   - Records at offsets 20000–20338 are skipped and never written to the 
database
   - The Read/Write metrics show a persistent gap of 1 after restore, but the 
actual DB is missing **339 records**
   
   ### Key Log Evidence
   
   **Before failure (11:11:53):**
   ```
   Read Count So Far:  20336
   Write Count So Far: 20336
   ```
   
   **Checkpoint 6 failure:**
   ```
   JDBC executeBatch error, retry times = 0
   java.sql.BatchUpdateException: Unknown column 'accout_id' in 'student_hzf'
   ```
   
   **On restore (11:11:57):**
   ```
   Seeking to offset 20339 for partition test_hzf_test7-0
   ```
   
   **After restore – persistent gap:**
   ```
   Read Count So Far:  20661    Write Count So Far: 20660   (gap: 1)
   Read Count So Far:  24999    Write Count So Far: 24998   (gap: 1)
   ```
   
   ## Root Cause Analysis
   
   1. **Sink buffer not flushed**: When `prepareCommit` fails, the buffered 
records are never written to the database.
   2. **No XA coordination**: StarRocks does not support XA transactions; the 
sink runs with `is_exactly_once: false`. Without 2-phase commit, the source and 
sink may checkpoint independently.
   3. **Source offset ahead of sink**: The checkpoint coordinator or source may 
have persisted a Kafka offset (20339) that is ahead of what the sink actually 
committed.
   4. **Incorrect restore point**: On restore, the job uses the advanced offset 
instead of the last successfully committed offset, so 339 records are skipped 
and never re-processed (matches the exact loss: 25,000 - 24,661 = 339).
   
   ## Log vs Data Cross-Verification
   
   | Stage | Log | Calculation | DB |
   |-------|-----|-------------|-----|
   | Checkpoint 5 completed | Read=20000, Write=20000 | 20,000 committed | 
20,000 |
   | Failure (checkpoint 6) | Read=20336, Write=20336 | Buffer: 336 records 
(20000→20336) | - |
   | Restore | Seek to offset **20339** | Skip offsets 20000–20338 = **339 
records** | - |
   | After restore | Read 20339→24999 | 24,999 - 20,339 + 1 = **4,661** new 
records | - |
   | **Total** | - | 20,000 + 4,661 = **24,661** | **24,661** ✓ |
   
   **Conclusion**: The 339 lost records = Kafka offsets 20000–20338, exactly 
the range skipped by `Seeking to offset 20339`. Log analysis and actual DB 
count are consistent.
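The table's arithmetic can be reproduced directly from the logged values (a plain recomputation, nothing SeaTunnel-specific):

```python
# Recompute the cross-verification table from the logged values.
committed_at_cp5 = 20_000    # rows committed by checkpoint 5
restore_offset = 20_339      # Kafka seek position on restore
last_offset_read = 24_999    # final Read Count (offsets are 0-based)

skipped = restore_offset - committed_at_cp5             # offsets 20000..20338
after_restore = last_offset_read - restore_offset + 1   # records re-read
total_in_db = committed_at_cp5 + after_restore

print(skipped, after_restore, total_in_db)   # 339 4661 24661
```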
   
   ## Impact
   
   - **Severity**: High – data loss on sink failure + restore, even with 
job-level `exactly_once: true`
   - **Affected scenarios**: Sinks that do not support XA (e.g., StarRocks, 
some MySQL setups); any sink failure during checkpoint (schema change, 
constraint violation, network error, etc.)
   
   ## Possible Fix
   
   - Ensure that when a checkpoint fails, the **source** does not advance its 
offset beyond what the **sink** has successfully committed
   - Or: on restore, the source should re-read from the offset corresponding to 
the last **fully completed** checkpoint (checkpoint 5), not from a partial 
checkpoint 6 state
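The second option amounts to a restore-point selection rule. A sketch under the assumption that each checkpoint records its source offset and whether the sink commit completed (illustrative pseudologic, not SeaTunnel's actual checkpoint model):

```python
# Sketch: on restore, take the source offset from the newest checkpoint whose
# sink commit fully succeeded, ignoring partially completed checkpoints.
# The dict layout is an assumption for illustration only.
checkpoints = [
    {"id": 5, "source_offset": 20_000, "sink_committed": True},
    {"id": 6, "source_offset": 20_339, "sink_committed": False},  # flush failed
]

def restore_offset(checkpoints):
    completed = [c for c in checkpoints if c["sink_committed"]]
    return max(completed, key=lambda c: c["id"])["source_offset"]

print(restore_offset(checkpoints))   # 20000, not 20339
```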
   
   ## Does StarRocks Not Supporting Exactly-Once Affect Data Loss?
   
   **Yes, it likely contributes.** When the sink does not support XA/2PC (as 
with StarRocks):
   
   - The source (Kafka) and sink (JDBC) cannot participate in a coordinated 
2-phase commit
   - The source may checkpoint its offset (20339) before the sink confirms its 
flush
   - When the sink's `prepareCommit` fails, the checkpoint fails, but the 
source's offset may already be persisted or used inconsistently
   - On restore, the engine uses the advanced offset (20339) instead of the 
last fully committed offset (20000), causing 339 records to be skipped
   
   **Question for maintainers**: Should the engine guarantee no data loss on 
sink failure even when the sink does not support exactly-once? Or is this 
behavior expected for at-least-once sinks?
   
   ## Additional Context
   
   - Config: `batch_size: 500`, `checkpoint.interval: 60000`
   - Target: StarRocks (MySQL protocol, port 19030)
   - The job continues running after restore; only the failed batch is lost
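For context on why the records sat in memory: only 336 records had accumulated after checkpoint 5, below `batch_size: 500`, so no intermediate flush fired and the batch waited for `prepareCommit` (arithmetic from the logged counts; the buffering behavior is inferred from the logs):

```python
# Records buffered between checkpoint 5 and the last count logged before the
# failure, compared with batch_size. Staying under batch_size means the
# writer never flushed mid-checkpoint; the batch waited for prepareCommit.
batch_size = 500
committed_at_cp5 = 20_000
read_before_failure = 20_336   # "Read Count So Far: 20336"

buffered = read_before_failure - committed_at_cp5
print(buffered, buffered < batch_size)   # 336 True
```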
   
   
   
[kafka-starrocks.log](https://github.com/user-attachments/files/25811839/kafka-starrocks.log)
   
   ### SeaTunnel Version
   
   2.3.12 (see Environment above for the full setup)
   
   ### SeaTunnel Config
   
   ```conf
   {
       "transform" : [
           {
               "plugin_input" : "st_data_source",
               "query" : "SELECT student_id, student_name, gender, birth_date, 
class_name, phone, remark, create_time, accout_id, CURRENT_TIMESTAMP() AS 
update_time FROM source",
               "plugin_output" : "st_processed_data",
               "plugin_name" : "Sql"
           }
       ],
       "source" : [
           {
               "topic" : "test_hzf_test7",
               "bootstrap.servers" : "192.168.*.141:9096",
               "consumer.group" : "test_hzf_starrocks_4",
               "start_mode" : "group_offsets",
               "schema" : {
                   "fields" : {
                       "student_id" : "string",
                       "student_name" : "string",
                       "gender" : "string",
                       "birth_date" : "string",
                       "class_name" : "string",
                       "phone" : "string",
                       "remark" : "string",
                       "create_time" : "timestamp",
                       "accout_id" : "string",
                       "update_time" : "timestamp"
                   }
               },
               "format" : "json",
               "url" : "kafka://192.168.*.141:9096",
               "parseRule" : "f48d959aadb2454389a19d9ff3eaefee",
               "parseRuleName" : "插入学生json",
               "authMethod" : "PLAINTEXT",
               "format_error_handle_way" : "skip",
               "kafka.config" : {
                   "enable.auto.commit" : "false",
                   "max.poll.records" : 100,
                   "auto.offset.reset" : "earliest"
               },
               "plugin_output" : "st_data_source",
               "plugin_name" : "kafka"
           }
       ],
       "env" : {
           "job.mode" : "STREAMING",
           "job.name" : "kafka-同步-starrocks-hzf",
           "parallelism" : 1,
           "checkpoint.interval" : 60000,
           "checkpoint.timeout" : 600000,
           "jcheckpoint.max-concurrent-checkpoints" : 1,
           "exactly_one_once" : true
       },
       "sink" : [
           {
               "max_retries" : 0,
               "url" : 
"jdbc:mysql://192.168.*.141:19030/test_tzp?useSSL=false&useUnicode=true&characterEncoding=utf8&serverTimezone=Asia/Shanghai&allowPublicKeyRetrieval=true&allowMultiQueries=true",
               "is_exactly_once" : false,
               "password" : "******",
               "database" : "test_tzp",
               "enable_upsert" : false,
               "plugin_input" : "st_processed_data",
               "driver" : "com.mysql.cj.jdbc.Driver",
               "data_save_mode" : "APPEND_DATA",
               "connection_check_timeout_sec" : 30,
               "generate_sink_sql" : true,
               "table" : "student_hzf",
               "username" : "******",
               "batch_size" : 500,
               "plugin_name" : "Jdbc"
           }
       ]
   }
   ```
   
   ### Running Command
   
   ```shell
   bin/seatunnel.sh --config job/test_kafka_2.conf --deploy-mode local
   ```
   
   ### Error Exception
   
   ```log
   java.sql.BatchUpdateException: Getting analyzing error. Detail message: 
Unknown column 'accout_id' in 'student_hzf'.
           at 
com.mysql.cj.jdbc.exceptions.SQLError.createBatchUpdateException(SQLError.java:223)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchWithMultiValuesClause(ClientPreparedStatement.java:716)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchInternal(ClientPreparedStatement.java:407)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.StatementImpl.executeBatch(StatementImpl.java:799) 
~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
org.apache.seatunnel.shade.com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:127)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.shade.com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.FieldNamedPreparedStatement.executeBatch(FieldNamedPreparedStatement.java:540)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.SimpleBatchStatementExecutor.executeBatch(SimpleBatchStatementExecutor.java:54)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.BufferedBatchStatementExecutor.executeBatch(BufferedBatchStatementExecutor.java:53)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.attemptFlush(JdbcOutputFormat.java:185)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.flush(JdbcOutputFormat.java:136)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.sink.JdbcSinkWriter.prepareCommit(JdbcSinkWriter.java:166)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.api.sink.SinkWriter.prepareCommit(SinkWriter.java:75) 
~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.lambda$prepareCommit$4(MultiTableSinkWriter.java:259)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.api.tracing.MDCRunnable.run(MDCRunnable.java:43) 
[seatunnel-starter.jar:2.3.12]
           at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
           at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
           at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) 
[?:?]
           at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) 
[?:?]
           at java.lang.Thread.run(Thread.java:840) [?:?]
   Caused by: java.sql.SQLSyntaxErrorException: Getting analyzing error. Detail 
message: Unknown column 'accout_id' in 'student_hzf'.
           at 
com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:121) 
~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:912)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1054)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1003)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeLargeUpdate(ClientPreparedStatement.java:1312)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchWithMultiValuesClause(ClientPreparedStatement.java:677)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           ... 18 more
   2026-03-07 11:11:53,655 WARN  [o.a.s.e.s.TaskExecutionService] 
[BlockingWorker-TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, 
taskGroupId=2}] - [localhost]:5801 [seatunnel-883797] [5.1] Exception in 
org.apache.seatunnel.engine.server.task.TransformSeaTunnelTask@4e9fed8b
   java.lang.RuntimeException: java.lang.RuntimeException: 
java.util.concurrent.ExecutionException: 
org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException:
 ErrorCode:[COMMON-10], ErrorDescription:[Flush data operation that in sink 
connector failed] - Getting analyzing error. Detail message: Unknown column 
'accout_id' in 'student_hzf'.. Fix the cause (e.g. schema/column mismatch, 
constraint violation) and retry the job from the last checkpoint to avoid data 
loss.
           at 
org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:303)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:70)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.SeaTunnelTransformCollector.collect(SeaTunnelTransformCollector.java:39)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.SeaTunnelTransformCollector.collect(SeaTunnelTransformCollector.java:27)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.group.queue.IntermediateBlockingQueue.handleRecord(IntermediateBlockingQueue.java:77)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.group.queue.IntermediateBlockingQueue.collect(IntermediateBlockingQueue.java:56)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.flow.IntermediateQueueFlowLifeCycle.collect(IntermediateQueueFlowLifeCycle.java:51)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.TransformSeaTunnelTask.collect(TransformSeaTunnelTask.java:72)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.SeaTunnelTask.stateProcess(SeaTunnelTask.java:165)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.TransformSeaTunnelTask.call(TransformSeaTunnelTask.java:77)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.TaskExecutionService$BlockingWorker.run(TaskExecutionService.java:679)
 [seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.TaskExecutionService$NamedTaskWrapper.run(TaskExecutionService.java:1008)
 [seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.api.tracing.MDCRunnable.run(MDCRunnable.java:43) 
[seatunnel-starter.jar:2.3.12]
           at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
           at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
           at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) 
[?:?]
           at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) 
[?:?]
           at java.lang.Thread.run(Thread.java:840) [?:?]
   Caused by: java.lang.RuntimeException: 
java.util.concurrent.ExecutionException: 
org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException:
 ErrorCode:[COMMON-10], ErrorDescription:[Flush data operation that in sink 
connector failed] - Getting analyzing error. Detail message: Unknown column 
'accout_id' in 'student_hzf'.. Fix the cause (e.g. schema/column mismatch, 
constraint violation) and retry the job from the last checkpoint to avoid data 
loss.
           at 
org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.prepareCommit(MultiTableSinkWriter.java:276)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:202)
 ~[seatunnel-starter.jar:2.3.12]
           ... 17 more
   Caused by: java.util.concurrent.ExecutionException: 
org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException:
 ErrorCode:[COMMON-10], ErrorDescription:[Flush data operation that in sink 
connector failed] - Getting analyzing error. Detail message: Unknown column 
'accout_id' in 'student_hzf'.. Fix the cause (e.g. schema/column mismatch, 
constraint violation) and retry the job from the last checkpoint to avoid data 
loss.
           at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
           at java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
           at 
org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.prepareCommit(MultiTableSinkWriter.java:274)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:202)
 ~[seatunnel-starter.jar:2.3.12]
           ... 17 more
   Caused by: 
org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException:
 ErrorCode:[COMMON-10], ErrorDescription:[Flush data operation that in sink 
connector failed] - Getting analyzing error. Detail message: Unknown column 
'accout_id' in 'student_hzf'.. Fix the cause (e.g. schema/column mismatch, 
constraint violation) and retry the job from the last checkpoint to avoid data 
loss.
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.flush(JdbcOutputFormat.java:147)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.sink.JdbcSinkWriter.prepareCommit(JdbcSinkWriter.java:166)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.api.sink.SinkWriter.prepareCommit(SinkWriter.java:75) 
~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.lambda$prepareCommit$4(MultiTableSinkWriter.java:259)
 ~[seatunnel-starter.jar:2.3.12]
           ... 6 more
   Caused by: java.sql.BatchUpdateException: Getting analyzing error. Detail 
message: Unknown column 'accout_id' in 'student_hzf'.
           at 
com.mysql.cj.jdbc.exceptions.SQLError.createBatchUpdateException(SQLError.java:223)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchWithMultiValuesClause(ClientPreparedStatement.java:716)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchInternal(ClientPreparedStatement.java:407)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.StatementImpl.executeBatch(StatementImpl.java:799) 
~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
org.apache.seatunnel.shade.com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:127)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.shade.com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.FieldNamedPreparedStatement.executeBatch(FieldNamedPreparedStatement.java:540)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.SimpleBatchStatementExecutor.executeBatch(SimpleBatchStatementExecutor.java:54)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.BufferedBatchStatementExecutor.executeBatch(BufferedBatchStatementExecutor.java:53)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.attemptFlush(JdbcOutputFormat.java:185)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.flush(JdbcOutputFormat.java:136)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.sink.JdbcSinkWriter.prepareCommit(JdbcSinkWriter.java:166)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.api.sink.SinkWriter.prepareCommit(SinkWriter.java:75) 
~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.lambda$prepareCommit$4(MultiTableSinkWriter.java:259)
 ~[seatunnel-starter.jar:2.3.12]
           ... 6 more
   Caused by: java.sql.SQLSyntaxErrorException: Getting analyzing error. Detail 
message: Unknown column 'accout_id' in 'student_hzf'.
           at 
com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:121) 
~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:912)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1054)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1003)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeLargeUpdate(ClientPreparedStatement.java:1312)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchWithMultiValuesClause(ClientPreparedStatement.java:677)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchInternal(ClientPreparedStatement.java:407)
 ~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
com.mysql.cj.jdbc.StatementImpl.executeBatch(StatementImpl.java:799) 
~[mysql-connector-j-8.2.0.jar:8.2.0]
           at 
org.apache.seatunnel.shade.com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:127)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.shade.com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.FieldNamedPreparedStatement.executeBatch(FieldNamedPreparedStatement.java:540)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.SimpleBatchStatementExecutor.executeBatch(SimpleBatchStatementExecutor.java:54)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.BufferedBatchStatementExecutor.executeBatch(BufferedBatchStatementExecutor.java:53)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.attemptFlush(JdbcOutputFormat.java:185)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.flush(JdbcOutputFormat.java:136)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.connectors.seatunnel.jdbc.sink.JdbcSinkWriter.prepareCommit(JdbcSinkWriter.java:166)
 ~[connector-jdbc-2.3.12.jar:2.3.13-SNAPSHOT]
           at 
org.apache.seatunnel.api.sink.SinkWriter.prepareCommit(SinkWriter.java:75) 
~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.lambda$prepareCommit$4(MultiTableSinkWriter.java:259)
 ~[seatunnel-starter.jar:2.3.12]
           ... 6 more
   2026-03-07 11:11:53,656 INFO  [o.a.s.e.s.TaskExecutionService] 
[BlockingWorker-TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, 
taskGroupId=2}] - [localhost]:5801 [seatunnel-883797] [5.1] taskDone, taskId = 
1000200010000, taskGroup = TaskGroupLocation{jobId=1082499329454768129, 
pipelineId=1, taskGroupId=2}
   2026-03-07 11:11:53,656 INFO  [o.a.s.e.s.TaskExecutionService] 
[BlockingWorker-TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, 
taskGroupId=2}] - [localhost]:5801 [seatunnel-883797] [5.1] task 1000200010000 
error with exception: [java.lang.RuntimeException: java.lang.RuntimeException: 
java.util.concurrent.ExecutionException: 
org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException:
 ErrorCode:[COMMON-10], ErrorDescription:[Flush data operation that in sink 
connector failed] - Getting analyzing error. Detail message: Unknown column 
'accout_id' in 'student_hzf'.. Fix the cause (e.g. schema/column mismatch, 
constraint violation) and retry the job from the last checkpoint to avoid data 
loss.], cancel other task in taskGroup 
TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2}.
   2026-03-07 11:11:53,657 WARN  [o.a.s.e.s.TaskExecutionService] 
[BlockingWorker-TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, 
taskGroupId=2}] - [localhost]:5801 [seatunnel-883797] [5.1] Exception in 
org.apache.seatunnel.engine.server.task.SourceSeaTunnelTask@69f0116f
   org.apache.seatunnel.common.utils.SeaTunnelException: 
java.lang.InterruptedException: sleep interrupted
           at 
org.apache.seatunnel.connectors.seatunnel.common.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:174)
 ~[seatunnel-transforms-v2.jar:2.3.12]
           at 
org.apache.seatunnel.connectors.seatunnel.common.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:93)
 ~[seatunnel-transforms-v2.jar:2.3.12]
           at 
org.apache.seatunnel.connectors.seatunnel.common.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:114)
 ~[seatunnel-transforms-v2.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.flow.SourceFlowLifeCycle.collect(SourceFlowLifeCycle.java:159)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.SourceSeaTunnelTask.collect(SourceSeaTunnelTask.java:127)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.SeaTunnelTask.stateProcess(SeaTunnelTask.java:165)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.task.SourceSeaTunnelTask.call(SourceSeaTunnelTask.java:132)
 ~[seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.TaskExecutionService$BlockingWorker.run(TaskExecutionService.java:679)
 [seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.engine.server.TaskExecutionService$NamedTaskWrapper.run(TaskExecutionService.java:1008)
 [seatunnel-starter.jar:2.3.12]
           at 
org.apache.seatunnel.api.tracing.MDCRunnable.run(MDCRunnable.java:43) 
[seatunnel-starter.jar:2.3.12]
           at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
           at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
           at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) 
[?:?]
           at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) 
[?:?]
           at java.lang.Thread.run(Thread.java:840) [?:?]
   Caused by: java.lang.InterruptedException: sleep interrupted
           at java.lang.Thread.sleep(Native Method) ~[?:?]
           at 
org.apache.seatunnel.connectors.seatunnel.common.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:172)
 ~[seatunnel-transforms-v2.jar:2.3.12]
           ... 14 more
   2026-03-07 11:11:53,658 INFO  [o.a.s.e.s.TaskExecutionService] [BlockingWorker-TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2}] - [localhost]:5801 [seatunnel-883797] [5.1] taskDone, taskId = 1000200000000, taskGroup = TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2}
   2026-03-07 11:11:53,660 INFO  [o.a.s.e.s.TaskExecutionService] [BlockingWorker-TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2}] - [localhost]:5801 [seatunnel-883797] [5.1] taskGroup TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2} complete with FAILED
   2026-03-07 11:11:53,661 INFO  [o.a.s.e.s.TaskExecutionService] [BlockingWorker-TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2}] - [localhost]:5801 [seatunnel-883797] [5.1] task 1000200000000 error with exception: [java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException: ErrorCode:[COMMON-10], ErrorDescription:[Flush data operation that in sink connector failed] - Getting analyzing error. Detail message: Unknown column 'accout_id' in 'student_hzf'.. Fix the cause (e.g. schema/column mismatch, constraint violation) and retry the job from the last checkpoint to avoid data loss.], cancel other task in taskGroup TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2}.
   2026-03-07 11:11:53,661 INFO  [o.a.s.e.s.TaskExecutionService] [hz.main.seaTunnel.task.thread-7] - [localhost]:5801 [seatunnel-883797] [5.1] Task TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2} complete with state FAILED
   2026-03-07 11:11:53,661 INFO  [o.a.s.e.s.CoordinatorService  ] [hz.main.seaTunnel.task.thread-7] - [localhost]:5801 [seatunnel-883797] [5.1] Received task end from execution TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2}, state FAILED
   2026-03-07 11:11:53,662 INFO  [a.s.c.s.c.s.r.SourceReaderBase] [BlockingWorker-TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2}] - Closing Source Reader 0.
   2026-03-07 11:11:53,662 INFO  [o.a.s.a.e.LoggingEventHandler ] [hz.main.generic-operation.thread-44] - log event: ReaderCloseEvent(createdTime=1772853113661, jobId=1082499329454768129, eventType=LIFECYCLE_READER_CLOSE)
   2026-03-07 11:11:53,662 INFO  [o.a.s.c.s.c.s.r.f.SplitFetcher] [BlockingWorker-TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2}] - Shutting down split fetcher 0
   2026-03-07 11:11:53,664 INFO  [o.a.s.e.s.d.p.PhysicalVertex  ] [hz.main.seaTunnel.task.thread-7] - Job (1082499329454768129), Pipeline: [(1/1)], task: [pipeline-1 [Source[0]-kafka]-SourceTask (1/1)], taskGroupLocation: [TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2}] turned from state RUNNING to FAILED.
   2026-03-07 11:11:53,664 INFO  [o.a.s.e.s.d.p.PhysicalVertex  ] [hz.main.seaTunnel.task.thread-7] - Job (1082499329454768129), Pipeline: [(1/1)], task: [pipeline-1 [Source[0]-kafka]-SourceTask (1/1)], taskGroupLocation: [TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2}] state process is stopped
   2026-03-07 11:11:53,664 ERROR [o.a.s.e.s.d.p.PhysicalVertex  ] [hz.main.seaTunnel.task.thread-7] - Job (1082499329454768129), Pipeline: [(1/1)], task: [pipeline-1 [Source[0]-kafka]-SourceTask (1/1)], taskGroupLocation: [TaskGroupLocation{jobId=1082499329454768129, pipelineId=1, taskGroupId=2}] end with state FAILED and Exception:
   java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException: ErrorCode:[COMMON-10], ErrorDescription:[Flush data operation that in sink connector failed] - Getting analyzing error. Detail message: Unknown column 'accout_id' in 'student_hzf'.. Fix the cause (e.g. schema/column mismatch, constraint violation) and retry the job from the last checkpoint to avoid data loss.
           at org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:303)
           at org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:70)
           at org.apache.seatunnel.engine.server.task.SeaTunnelTransformCollector.collect(SeaTunnelTransformCollector.java:39)
           at org.apache.seatunnel.engine.server.task.SeaTunnelTransformCollector.collect(SeaTunnelTransformCollector.java:27)
           at org.apache.seatunnel.engine.server.task.group.queue.IntermediateBlockingQueue.handleRecord(IntermediateBlockingQueue.java:77)
           at org.apache.seatunnel.engine.server.task.group.queue.IntermediateBlockingQueue.collect(IntermediateBlockingQueue.java:56)
           at org.apache.seatunnel.engine.server.task.flow.IntermediateQueueFlowLifeCycle.collect(IntermediateQueueFlowLifeCycle.java:51)
           at org.apache.seatunnel.engine.server.task.TransformSeaTunnelTask.collect(TransformSeaTunnelTask.java:72)
           at org.apache.seatunnel.engine.server.task.SeaTunnelTask.stateProcess(SeaTunnelTask.java:165)
           at org.apache.seatunnel.engine.server.task.TransformSeaTunnelTask.call(TransformSeaTunnelTask.java:77)
           at org.apache.seatunnel.engine.server.TaskExecutionService$BlockingWorker.run(TaskExecutionService.java:679)
           at org.apache.seatunnel.engine.server.TaskExecutionService$NamedTaskWrapper.run(TaskExecutionService.java:1008)
           at org.apache.seatunnel.api.tracing.MDCRunnable.run(MDCRunnable.java:43)
           at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
           at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
           at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
           at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
           at java.base/java.lang.Thread.run(Thread.java:840)
   Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException: ErrorCode:[COMMON-10], ErrorDescription:[Flush data operation that in sink connector failed] - Getting analyzing error. Detail message: Unknown column 'accout_id' in 'student_hzf'.. Fix the cause (e.g. schema/column mismatch, constraint violation) and retry the job from the last checkpoint to avoid data loss.
           at org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.prepareCommit(MultiTableSinkWriter.java:276)
           at org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:202)
           ... 17 more
   Caused by: java.util.concurrent.ExecutionException: org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException: ErrorCode:[COMMON-10], ErrorDescription:[Flush data operation that in sink connector failed] - Getting analyzing error. Detail message: Unknown column 'accout_id' in 'student_hzf'.. Fix the cause (e.g. schema/column mismatch, constraint violation) and retry the job from the last checkpoint to avoid data loss.
           at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
           at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
           at org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.prepareCommit(MultiTableSinkWriter.java:274)
           ... 18 more
   Caused by: org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException: ErrorCode:[COMMON-10], ErrorDescription:[Flush data operation that in sink connector failed] - Getting analyzing error. Detail message: Unknown column 'accout_id' in 'student_hzf'.. Fix the cause (e.g. schema/column mismatch, constraint violation) and retry the job from the last checkpoint to avoid data loss.
           at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.flush(JdbcOutputFormat.java:147)
           at org.apache.seatunnel.connectors.seatunnel.jdbc.sink.JdbcSinkWriter.prepareCommit(JdbcSinkWriter.java:166)
           at org.apache.seatunnel.api.sink.SinkWriter.prepareCommit(SinkWriter.java:75)
           at org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.lambda$prepareCommit$4(MultiTableSinkWriter.java:259)
           ... 6 more
   Caused by: java.sql.BatchUpdateException: Getting analyzing error. Detail message: Unknown column 'accout_id' in 'student_hzf'.
           at com.mysql.cj.jdbc.exceptions.SQLError.createBatchUpdateException(SQLError.java:223)
           at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchWithMultiValuesClause(ClientPreparedStatement.java:716)
           at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchInternal(ClientPreparedStatement.java:407)
           at com.mysql.cj.jdbc.StatementImpl.executeBatch(StatementImpl.java:799)
           at org.apache.seatunnel.shade.com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:127)
           at org.apache.seatunnel.shade.com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
           at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.FieldNamedPreparedStatement.executeBatch(FieldNamedPreparedStatement.java:540)
           at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.SimpleBatchStatementExecutor.executeBatch(SimpleBatchStatementExecutor.java:54)
           at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.BufferedBatchStatementExecutor.executeBatch(BufferedBatchStatementExecutor.java:53)
           at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.attemptFlush(JdbcOutputFormat.java:185)
           at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.flush(JdbcOutputFormat.java:136)
           ... 9 more
   Caused by: java.sql.SQLSyntaxErrorException: Getting analyzing error. Detail message: Unknown column 'accout_id' in 'student_hzf'.
           at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:121)
           at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
           at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:912)
           at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1054)
           at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1003)
           at com.mysql.cj.jdbc.ClientPreparedStatement.executeLargeUpdate(ClientPreparedStatement.java:1312)
           at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchWithMultiValuesClause(ClientPreparedStatement.java:677)
           ... 18 more
   ```
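   For reference, the failure mode visible in the trace (rows accepted by the sink writer, buffered by `BufferedBatchStatementExecutor`, and only hitting the database when `prepareCommit` calls `flush`, where the schema mismatch finally surfaces) can be sketched in a few lines. This uses SQLite and hypothetical table/column names purely as a stand-in for StarRocks; it is not SeaTunnel code:
   
   ```python
   import sqlite3
   
   conn = sqlite3.connect(":memory:")
   conn.execute("CREATE TABLE student_demo (account_id INTEGER, name TEXT)")
   
   buffer = []  # stand-in for the sink writer's in-memory batch buffer
   
   def write(row):
       # write() succeeds immediately -- no database round-trip yet,
       # so a schema problem cannot be detected here
       buffer.append(row)
   
   def flush():
       # the whole buffer is executed as one batch at checkpoint time;
       # this is where the "Unknown column" error finally surfaces
       conn.executemany(
           "INSERT INTO student_demo (account_id, name) VALUES (?, ?)", buffer)
       conn.commit()
       buffer.clear()
   
   write((1, "alice"))
   write((2, "bob"))
   
   # simulate the manual schema change done between write and checkpoint
   conn.execute("ALTER TABLE student_demo RENAME COLUMN account_id TO accout_id")
   
   flush_failed = False
   try:
       flush()
   except sqlite3.OperationalError as exc:
       flush_failed = True
       print("flush failed:", exc)
   
   rows_in_table = conn.execute(
       "SELECT COUNT(*) FROM student_demo").fetchone()[0]
   print("rows persisted:", rows_in_table)  # 0 -- nothing reached the table
   ```
   
   If the source offset is checkpointed independently of this failed flush, the two buffered rows are exactly the records that would be lost on restore, which matches the behavior described above.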
   
   ### Zeta or Flink or Spark Version
   
   Zeta 
   
   ### Java or Scala Version
   
   _No response_
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

