Taikonaut_st created ZEPPELIN-5432:
--------------------------------------
Summary: Cannot start Flink SQL CDC when we deploy the notebook on Zeppelin
Key: ZEPPELIN-5432
URL: https://issues.apache.org/jira/browse/ZEPPELIN-5432
Project: Zeppelin
Issue Type: Bug
Components: flink
Affects Versions: 0.9.0
Environment: database (RDS): MySQL 8 & PostgreSQL 12
Flink (local): 1.12.1, with these connectors already placed in [flink_home]/lib:
flink-sql-connector-mysql-cdc-1.3.0.jar
flink-sql-connector-postgres-cdc-1.3.0.jar
Reporter: Taikonaut_st
We are trying to use the Flink SQL CDC function on Zeppelin (on AWS).
# access MySQL & PostgreSQL directly from Zeppelin (JDBC) ----- pass
# access MySQL & PostgreSQL via the flink-jdbc-connector from Zeppelin ----- pass (a rough sketch of this DDL is shown after this list)
# access MySQL & PostgreSQL via the Flink SQL CDC connector from Zeppelin ----- failed
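For reference, the step-2 DDL that passes looks roughly like the sketch below. It is only an illustration, not the exact statement we run: hostnames, ports, database, table, and credentials are placeholders, and the option keys are the standard flink-connector-jdbc ones as we understand them for Flink 1.12.

%flink.ssql(type=update)
-- Hypothetical illustration of the passing flink-jdbc-connector DDL (step 2);
-- all connection values are placeholders.
DROP TABLE IF EXISTS jdbc_table1;
CREATE TABLE jdbc_table1 (
  id INT
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://xxxxxxx:3306/xxxx',
  'table-name' = 'xxxxxx',
  'username' = 'xxxx',
  'password' = 'xxxx'
);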
My code:

%flink.ssql(type=update)
DROP TABLE IF EXISTS table1;
CREATE TABLE table1 (
  id INT
) WITH (
  'connector.type' = 'mysql-cdc',
  'connector.hostname' = 'xxxxxxx',
  'connector.port' = 'xxxxx',
  'connector.username' = 'xxxx',
  'connector.password' = 'xxxx',
  'connector.database' = 'xxxx',
  'connector.table' = 'xxxxxx'
);
Output:
Fail to run sql command: select * from xxxxxxx
org.apache.flink.table.api.TableException: findAndCreateTableSource failed.
	at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:45)
	at org.apache.flink.table.planner.plan.schema.LegacyCatalogSourceTable.findAndCreateLegacyTableSource(LegacyCatalogSourceTable.scala:193)
	at org.apache.flink.table.planner.plan.schema.LegacyCatalogSourceTable.toRel(LegacyCatalogSourceTable.scala:94)
	at org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3585)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2507)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2144)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2093)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2050)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:663)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:644)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3438)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:570)
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:165)
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:157)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:902)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:871)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:250)
	at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:77)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:640)
	at org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:102)
	at org.apache.zeppelin.flink.FlinkStreamSqlInterpreter.callInnerSelect(FlinkStreamSqlInterpreter.java:89)
	at org.apache.zeppelin.flink.FlinkSqlInterrpeter.callSelect(FlinkSqlInterrpeter.java:494)
	at org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCommand(FlinkSqlInterrpeter.java:257)
	at org.apache.zeppelin.flink.FlinkSqlInterrpeter.runSqlList(FlinkSqlInterrpeter.java:151)
	at org.apache.zeppelin.flink.FlinkSqlInterrpeter.internalInterpret(FlinkSqlInterrpeter.java:111)
	at org.apache.zeppelin.interpreter.AbstractInterpreter.interpret(AbstractInterpreter.java:47)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:852)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:744)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
	at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:132)
	at org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:46)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath.
Reason: Required context properties mismatch.

The following properties are requested:

The following factories have been considered:
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.connector.jdbc.table.JdbcTableSourceSinkFactory
	at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:301)
	at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:179)
	at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:140)
	at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:94)
	at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:41)
	... 34 more
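The Caused by section only lists the legacy csv and jdbc factories, so the planner appears to be doing a legacy TableSourceFactory lookup, while (as far as we understand) flink-sql-connector-mysql-cdc 1.3.0 registers its factory under the new-style 'mysql-cdc' connector identifier rather than the old 'connector.*' property keys. We are not certain this is the cause, but a minimal sketch of the DDL we would try with the new-style option keys (key names taken from the flink-cdc-connectors documentation as we understand it; all values are placeholders) looks like this:

%flink.ssql(type=update)
-- Sketch assuming the new-style (DynamicTableSource) options of the mysql-cdc connector;
-- all connection values are placeholders.
DROP TABLE IF EXISTS table1;
CREATE TABLE table1 (
  id INT
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'xxxxxxx',
  'port' = 'xxxxx',
  'username' = 'xxxx',
  'password' = 'xxxx',
  'database-name' = 'xxxx',
  'table-name' = 'xxxxxx'
);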