juntaozhang opened a new issue, #7245:
URL: https://github.com/apache/paimon/issues/7245
### Search before asking

- [x] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.

### Paimon version

master

### Compute Engine

flink

### Minimal reproduce step

To reproduce this bug, put paimon-flink.jar at the top of the classpath; otherwise Flink uses the original `org.apache.flink.cdc.connectors.mysql.source.connection.JdbcConnectionPools` class instead and the error does not occur.

```
bin/flink run \
  -Dexecution.checkpointing.interval=10s \
  lib/paimon-flink-action-*.jar \
  mysql_sync_table \
  --warehouse s3a://warehouse/paimon \
  --database ods \
  --table orders \
  --primary_keys id \
  --mysql_conf hostname=mysql \
  --mysql_conf username=root \
  --mysql_conf port=3306 \
  --mysql_conf password=root123 \
  --mysql_conf database-name='test' \
  --mysql_conf table-name='orders' \
  --table_conf bucket=1 \
  --table_conf merge-engine=deduplicate \
  --table_conf changelog-producer=input
```

### What doesn't meet your expectations?

Add `com.zaxxer.hikari` shading to prevent the `AbstractMethodError` shown below (a relocation sketch is included at the end of this issue).

### Anything else?

```text
2026-02-09 10:11:15,260 INFO  org.apache.flink.runtime.jobmaster.JobMaster [] - Starting execution of job 'MySQL-Paimon Table Sync: ods.orders' (023a66d968b14df7c78b96a4a2fc51d6) under job master id 00000000000000000000000000000000.
2026-02-09 10:11:15,264 INFO  org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Starting split enumerator for source Source: MySQL Source.
2026-02-09 10:11:15,371 ERROR org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Failed to create Source Enumerator for source Source: MySQL Source
java.lang.AbstractMethodError: Receiver class org.apache.flink.cdc.connectors.mysql.source.connection.JdbcConnectionPools does not define or inherit an implementation of the resolved method 'abstract org.apache.flink.cdc.connectors.shaded.com.zaxxer.hikari.HikariDataSource getOrCreateConnectionPool(org.apache.flink.cdc.connectors.mysql.source.connection.ConnectionPoolId, org.apache.flink.cdc.connectors.mysql.source.config.MySqlSourceConfig)' of interface org.apache.flink.cdc.connectors.mysql.source.connection.ConnectionPools.
    at org.apache.flink.cdc.connectors.mysql.source.connection.JdbcConnectionFactory.connect(JdbcConnectionFactory.java:55) ~[flink-sql-connector-mysql-cdc-3.5.0.jar:3.5.0]
    at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:888) ~[flink-sql-connector-mysql-cdc-3.5.0.jar:3.5.0]
    at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:883) ~[flink-sql-connector-mysql-cdc-3.5.0.jar:3.5.0]
    at io.debezium.jdbc.JdbcConnection.connect(JdbcConnection.java:411) ~[flink-sql-connector-mysql-cdc-3.5.0.jar:3.5.0]
    at org.apache.flink.cdc.connectors.mysql.debezium.DebeziumUtils.openJdbcConnection(DebeziumUtils.java:74) ~[flink-sql-connector-mysql-cdc-3.5.0.jar:3.5.0]
    at org.apache.flink.cdc.connectors.mysql.MySqlValidator.createJdbcConnection(MySqlValidator.java:87) ~[flink-sql-connector-mysql-cdc-3.5.0.jar:3.5.0]
    at org.apache.flink.cdc.connectors.mysql.MySqlValidator.validate(MySqlValidator.java:71) ~[flink-sql-connector-mysql-cdc-3.5.0.jar:3.5.0]
    at org.apache.flink.cdc.connectors.mysql.source.MySqlSource.createEnumerator(MySqlSource.java:200) ~[flink-sql-connector-mysql-cdc-3.5.0.jar:3.5.0]
    at org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:229) ~[flink-dist-1.20.3.jar:1.20.3]
    at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$DeferrableCoordinator.applyCall(RecreateOnResetOperatorCoordinator.java:343) ~[flink-dist-1.20.3.jar:1.20.3]
    at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator.start(RecreateOnResetOperatorCoordinator.java:72) ~[flink-dist-1.20.3.jar:1.20.3]
    at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:204) ~[flink-dist-1.20.3.jar:1.20.3]
    at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:173) ~[flink-dist-1.20.3.jar:1.20.3]
    at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85) ~[flink-dist-1.20.3.jar:1.20.3]
    at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:635) ~[flink-dist-1.20.3.jar:1.20.3]
    at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:1235) ~[flink-dist-1.20.3.jar:1.20.3]
    at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:1152) ~[flink-dist-1.20.3.jar:1.20.3]
    at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:460) ~[flink-dist-1.20.3.jar:1.20.3]
    at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:214) ~[flink-dist-1.20.3.jar:1.20.3]
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor$StoppedState.lambda$start$0(PekkoRpcActor.java:627) ~[flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) ~[flink-dist-1.20.3.jar:1.20.3]
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor$StoppedState.start(PekkoRpcActor.java:626) ~[flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleControlMessage(PekkoRpcActor.java:197) ~[flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:127) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:272) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:233) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:245) [flink-rpc-akka92f4d17c-3fab-4d5a-ac84-c80f9ab34bcf.jar:1.20.3]
    at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source) [?:?]
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown Source) [?:?]
    at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source) [?:?]
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source) [?:?]
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source) [?:?]
```

### Are you willing to submit a PR?

- [x] I'm willing to submit a PR!
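For illustration only, here is a minimal sketch of what the requested `com.zaxxer.hikari` shading could look like as a maven-shade-plugin relocation in the Paimon Flink bundle that ships the CDC classes. The module to patch and the plugin layout are assumptions, not the actual Paimon `pom.xml`; the target prefix is taken from the shaded class name in the error above (`org.apache.flink.cdc.connectors.shaded.com.zaxxer.hikari.HikariDataSource`).

```xml
<!-- Sketch only (assumed module/plugin layout, not the actual Paimon build config):
     relocate HikariCP so that the bundled JdbcConnectionPools implementation returns
     the same shaded HikariDataSource type that the flink-cdc ConnectionPools
     interface expects. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.zaxxer.hikari</pattern>
            <shadedPattern>org.apache.flink.cdc.connectors.shaded.com.zaxxer.hikari</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With a relocation along these lines, the copy of `JdbcConnectionPools` bundled in paimon-flink and the `ConnectionPools` interface from flink-sql-connector-mysql-cdc should agree on the `HikariDataSource` return type, so the `AbstractMethodError` should no longer depend on classpath order.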
