zexin gong created FLINK-39328:
----------------------------------

             Summary: [BUG] Executing DROP TABLE on the MySQL source fails the 
pipeline with "Table does not exist".
                 Key: FLINK-39328
                 URL: https://issues.apache.org/jira/browse/FLINK-39328
             Project: Flink
          Issue Type: Bug
          Components: Flink CDC
         Environment: flink: 1.20
                      flink-cdc: 3.5.0
                      flink-cdc-pipeline source: mysql
                      flink-cdc-pipeline sink: paimon
            Reporter: zexin gong


I'm building a synchronization pipeline from MySQL to Paimon.

After the snapshot phase completes and binlog synchronization begins, executing 
`DROP TABLE xxx` in MySQL deletes the corresponding Paimon table as expected, 
but the pipeline then fails with the following error:
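
For context, the pipeline is defined roughly as follows. This is a minimal sketch of a Flink CDC 3.x YAML job definition; the hostnames, credentials, warehouse path, and table pattern below are placeholders, not the exact values from my job:

```yaml
source:
  type: mysql
  hostname: mysql-host        # placeholder
  port: 3306
  username: cdc_user          # placeholder
  password: "****"
  tables: app_db.\.*          # placeholder table pattern

sink:
  type: paimon
  # Hive metastore catalog, matching the HiveCatalog frames in the stack trace
  catalog.properties.metastore: hive
  catalog.properties.uri: thrift://hms-host:9083    # placeholder
  catalog.properties.warehouse: s3://bucket/warehouse  # placeholder

pipeline:
  name: MySQL to Paimon Pipeline
  parallelism: 1
```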

```
2026-03-25 00:50:46,579 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: MySQL Source -> SchemaOperator -> PrePartition (1/1) (59575e479d082905d80ef5fb8252dcbb_cbc357ccb763df2852fee8c4fc7d55f2_0_1) switched from RUNNING to FAILED on container_1773738743605_0036_01_000002 @ ip-172-31-58-132.us-west-2.compute.internal (dataPort=41975).
java.lang.RuntimeException: org.apache.paimon.catalog.Catalog$TableNotExistException: Table xxx does not exist.
	at org.apache.flink.cdc.connectors.paimon.sink.PaimonHashFunction.<init>(PaimonHashFunction.java:63) ~[flink-cdc-pipeline-connector-paimon-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	at org.apache.flink.cdc.connectors.paimon.sink.PaimonHashFunctionProvider.getHashFunction(PaimonHashFunctionProvider.java:49) ~[flink-cdc-pipeline-connector-paimon-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	at org.apache.flink.cdc.runtime.partitioning.RegularPrePartitionOperator.recreateHashFunction(RegularPrePartitionOperator.java:140) ~[flink-cdc-dist-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	at org.apache.flink.cdc.runtime.partitioning.RegularPrePartitionOperator.processElement(RegularPrePartitionOperator.java:91) ~[flink-cdc-dist-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:75) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:50) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.cdc.runtime.operators.schema.regular.SchemaOperator.handleSchemaChangeEvent(SchemaOperator.java:190) ~[flink-cdc-dist-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	at org.apache.flink.cdc.runtime.operators.schema.regular.SchemaOperator.processElement(SchemaOperator.java:148) ~[flink-cdc-dist-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:75) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:50) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask$AsyncDataOutputToOutput.emitRecord(SourceOperatorStreamTask.java:310) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:110) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlRecordEmitter$OutputCollector.collect(MySqlRecordEmitter.java:150) ~[flink-cdc-pipeline-connector-mysql-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	at java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
	at org.apache.flink.cdc.debezium.event.DebeziumEventDeserializationSchema.deserialize(DebeziumEventDeserializationSchema.java:105) ~[flink-cdc-pipeline-connector-mysql-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlRecordEmitter.emitElement(MySqlRecordEmitter.java:121) ~[flink-cdc-pipeline-connector-mysql-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlRecordEmitter.processElement(MySqlRecordEmitter.java:97) ~[flink-cdc-pipeline-connector-mysql-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlPipelineRecordEmitter.processElement(MySqlPipelineRecordEmitter.java:171) ~[flink-cdc-pipeline-connector-mysql-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlRecordEmitter.emitRecord(MySqlRecordEmitter.java:74) ~[flink-cdc-pipeline-connector-mysql-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlRecordEmitter.emitRecord(MySqlRecordEmitter.java:47) ~[flink-cdc-pipeline-connector-mysql-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:203) ~[flink-connector-files-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:216) ~[flink-connector-files-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:422) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:638) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:973) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:917) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:970) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:949) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:763) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575) ~[flink-dist-1.20.0-amzn-6.jar:1.20.0-amzn-6]
	at java.lang.Thread.run(Thread.java:840) ~[?:?]
Caused by: org.apache.paimon.catalog.Catalog$TableNotExistException: Table xxx does not exist.
	at org.apache.paimon.hive.HiveCatalog.getHmsTable(HiveCatalog.java:1342) ~[paimon-flink-1.20-1.3.1.jar:1.3.1]
	at org.apache.paimon.hive.HiveCatalog.loadTableMetadata(HiveCatalog.java:688) ~[paimon-flink-1.20-1.3.1.jar:1.3.1]
	at org.apache.paimon.catalog.CatalogUtils.loadTable(CatalogUtils.java:216) ~[paimon-flink-1.20-1.3.1.jar:1.3.1]
	at org.apache.paimon.catalog.AbstractCatalog.getTable(AbstractCatalog.java:470) ~[paimon-flink-1.20-1.3.1.jar:1.3.1]
	at org.apache.flink.cdc.connectors.paimon.sink.PaimonHashFunction.<init>(PaimonHashFunction.java:61) ~[flink-cdc-pipeline-connector-paimon-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
	... 35 more
```



--
This message was sent by Atlassian Jira
(v8.20.10#820010)