[
https://issues.apache.org/jira/browse/FLINK-15419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Flink Jira Bot updated FLINK-15419:
-----------------------------------
Labels: auto-deprioritized-major stale-minor (was:
auto-deprioritized-major)
I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help
the community manage its development. I see this issue has been marked as
Minor but is unassigned and neither it nor its Sub-Tasks have been updated
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is
still Minor, please either assign yourself or give an update. Afterwards,
please remove the label; otherwise the issue will be deprioritized in 7 days.
> Validating SQL syntax should not depend on the connector jar
> -------------------------------------------------------------
>
> Key: FLINK-15419
> URL: https://issues.apache.org/jira/browse/FLINK-15419
> Project: Flink
> Issue Type: Improvement
> Components: Table SQL / API
> Reporter: Kaibo Zhou
> Priority: Minor
> Labels: auto-deprioritized-major, stale-minor
>
> As a platform user, I want to integrate Flink SQL into my platform.
> The users will register Source/Sink Tables and Functions to the catalog
> service through a UI, and write SQL scripts in a web SQL editor. I want to
> validate the SQL syntax and verify that all referenced catalog objects
> (tables, fields, UDFs) exist.
> After some investigation, I decided to use the `tEnv.sqlUpdate/sqlQuery` API
> to do this. `SqlParser` and `FlinkSqlParserImpl` are not a good choice, as
> they will not read the catalog.
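> For reference, a syntax-only check with the standalone parser might look
> like the sketch below (assuming Flink 1.9 with Calcite's `SqlParser` and
> `FlinkSqlParserImpl.FACTORY`); it catches malformed SQL but, as noted above,
> it knows nothing about the tables or UDFs registered in the catalog:
> {code:java}
> import org.apache.calcite.sql.parser.SqlParser;
> import org.apache.flink.sql.parser.impl.FlinkSqlParserImpl;
>
> // Parses the statement without touching the catalog: a table name that
> // does not exist will still pass, so this is only a syntax check.
> SqlParser parser = SqlParser.create(
>     "SELECT f1, f2 FROM sourceTable",
>     SqlParser.configBuilder()
>         .setParserFactory(FlinkSqlParserImpl.FACTORY)
>         .build());
> parser.parseStmt(); // throws SqlParseException on bad syntax
> {code}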
> The users have registered *Kafka* source/sink tables in the catalog, so the
> validation logic will be:
> {code:java}
> TableEnvironment tEnv = ...; // obtain a TableEnvironment
> tEnv.registerCatalog(CATALOG_NAME, catalog);
> tEnv.useCatalog(CATALOG_NAME);
> tEnv.useDatabase(DB_NAME);
> tEnv.sqlUpdate("INSERT INTO sinkTable SELECT f1, f2 FROM sourceTable");
> // or
> tEnv.sqlQuery("SELECT * FROM tableName");
> {code}
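> A minimal sketch of how a platform might wrap this for validation (the
> helper name `validateSql` is hypothetical, not a Flink API):
> {code:java}
> import org.apache.flink.table.api.TableEnvironment;
> import org.apache.flink.table.api.ValidationException;
>
> // Hypothetical helper: returns null if the query validates against the
> // catalog, otherwise the validation error message.
> static String validateSql(TableEnvironment tEnv, String sql) {
>     try {
>         tEnv.sqlQuery(sql); // parses and validates against the catalog
>         return null;
>     } catch (ValidationException e) {
>         return e.getMessage();
>     }
> }
> {code}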
> It will throw an exception on Flink 1.9.0 because I do not have
> `flink-connector-kafka_2.11-1.9.0.jar` in my classpath.
> {code:java}
> org.apache.flink.table.api.ValidationException: SQL validation failed. findAndCreateTableSource failed.
>     at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:125)
>     at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:82)
>     at org.apache.flink.table.planner.delegation.PlannerBase.parse(PlannerBase.scala:132)
>     at org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:335)
> The following factories have been considered:
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.table.planner.delegation.BlinkPlannerFactory
> org.apache.flink.table.planner.delegation.BlinkExecutorFactory
> org.apache.flink.table.catalog.GenericInMemoryCatalogFactory
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
>     at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:283)
>     at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:191)
>     at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:144)
>     at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:97)
>     at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:64)
> {code}
> For a platform provider, the user's SQL may depend on *ANY* connector, or
> even a custom connector. Dynamically loading a connector jar after parsing
> the connector type out of the SQL is complicated, and it forces users to
> upload their custom connector jars before a syntax check can even run.
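> To illustrate why: a rough sketch of what such dynamic loading would involve
> (the jar path is an example; in Flink 1.9, `TableFactoryService` discovers
> factories via `ServiceLoader`, so the connector must already be visible to
> the classloader in use before validation starts):
> {code:java}
> import java.io.File;
> import java.net.URL;
> import java.net.URLClassLoader;
>
> // Example only: the platform would first have to parse the SQL / table
> // properties to learn that a Kafka connector is needed, locate the jar,
> // then splice it into the classloader before validating.
> URL connectorJar = new File("/jars/flink-connector-kafka_2.11-1.9.0.jar")
>         .toURI().toURL();
> URLClassLoader loader = new URLClassLoader(
>         new URL[]{connectorJar},
>         Thread.currentThread().getContextClassLoader());
> Thread.currentThread().setContextClassLoader(loader);
> // ... now create the TableEnvironment and run the validation from above ...
> {code}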
> I hope that Flink can provide a friendly way to validate SQL whose
> tables/functions are already registered in the catalog *without* depending
> on the connector jars. This would make it much easier for external platforms
> to integrate Flink SQL.
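> As a sketch of what is being asked for (this API does not exist in Flink;
> the method name and result type are purely hypothetical):
> {code:java}
> // Hypothetical API: check syntax and resolve tables/fields/UDFs against the
> // catalog, but skip TableSource/TableSink factory instantiation so that no
> // connector jar is needed on the classpath.
> List<ValidationError> errors = tEnv.validateSql(
>     "INSERT INTO sinkTable SELECT f1, f2 FROM sourceTable");
> {code}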
>
>
--
This message was sent by Atlassian Jira
(v8.20.1#820001)