[
https://issues.apache.org/jira/browse/FLINK-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888197#comment-16888197
]
Xuefu Zhang commented on FLINK-13279:
-------------------------------------
[~dawidwys] thanks for looking into this. After thinking about the proposal and
reading the PR, I'm very concerned about the approach.
Table resolution is different from function resolution and should be
deterministic and unambiguous. In fact, I don't know of any DB product that has
such behavior. A table reference in a user's query should be uniquely
identified: either the reference itself is fully qualified, or the query engine
qualifies it with the current database. No database engine I know of would, if
that resolution fails, further resolve the table in the default database, for
example. Instead, it simply reports an error.
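To make the expectation concrete, here is a minimal sketch of the resolution
behavior I have in mind, reusing the catalog and table names from the repro
below (the statements are illustrative, not taken from the report):
{code:sql}
-- Current catalog/database set explicitly.
USE CATALOG myhive;
-- Unqualified reference: resolved only against the current catalog/database,
-- i.e. myhive.`default`.hivetable.
SELECT * FROM hivetable;
-- Fully qualified reference: resolved exactly as written.
SELECT * FROM myhive.`default`.hivetable;
-- A reference that does not exist in the current catalog/database should fail
-- with an error, not be silently re-resolved in the built-in catalog.
SELECT * FROM no_such_table;
{code}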
What's the consequence of this fallback resolution? A user query that should
fail might not fail, depending on whether the built-in catalog happens to have
a table with the same name. The query may then succeed in execution and produce
an unexpected result. This subtle implication, however slim the chance,
introduces unpredictability into query behavior and can have severe
consequences for the user.
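A hypothetical example of that hazard (the {{orders}} table name is invented
for illustration):
{code:sql}
-- Suppose the built-in catalog happens to contain a table named `orders`.
USE CATALOG myhive;
-- The user expects this to fail because myhive.`default` has no `orders`
-- table. With fallback resolution it would instead silently read
-- default_catalog.default_database.orders and return unexpected data.
SELECT * FROM orders;
{code}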
In summary, I think table reference and resolution should be deterministic and
unambiguous, and this proposal violates that principle.
The original problem, as I understand it, is that the planner internally
creates a table in the built-in catalog and subsequently looks up that table in
the current catalog. Wouldn't the natural solution be to qualify the created
table with the built-in catalog/database? That way, we don't have to change
table resolution at the system level.
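Roughly what I mean, as a sketch only ({{tmp_sink}} is a made-up name standing
in for whatever table the planner registers internally):
{code:sql}
-- After USE CATALOG myhive, an unqualified reference to the planner's internal
-- table would be resolved against myhive.`default` and fail:
INSERT INTO tmp_sink SELECT * FROM hivetable;
-- Qualifying it with the built-in catalog/database keeps the lookup
-- deterministic, independent of the user's current catalog:
INSERT INTO default_catalog.default_database.tmp_sink SELECT * FROM hivetable;
{code}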
> not able to query table registered in catalogs in SQL CLI
> ---------------------------------------------------------
>
> Key: FLINK-13279
> URL: https://issues.apache.org/jira/browse/FLINK-13279
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / API, Table SQL / Client, Table SQL / Legacy
> Planner, Table SQL / Planner
> Affects Versions: 1.9.0
> Reporter: Bowen Li
> Assignee: Dawid Wysakowicz
> Priority: Blocker
> Labels: pull-request-available
> Fix For: 1.9.0
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> When querying a simple table in a catalog, the SQL CLI reports
> "org.apache.flink.table.api.TableException: No table was registered under the
> name ArrayBuffer(default: select * from hivetable)."
> [~ykt836] can you please help triage this ticket to the proper person?
> Repro steps in the SQL CLI (to set up the HiveCatalog dependencies, please
> refer to dev/table/catalog.md):
> {code:java}
> Flink SQL> show catalogs;
> default_catalog
> myhive
> Flink SQL> use catalog myhive
> > ;
> Flink SQL> show databases;
> default
> Flink SQL> show tables;
> hivetable
> products
> test
> Flink SQL> describe hivetable;
> root
> |-- name: STRING
> |-- score: DOUBLE
> Flink SQL> select * from hivetable;
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.TableException: No table was registered under the
> name ArrayBuffer(default: select * from hivetable).
> {code}
> Exception in log:
> {code:java}
> 2019-07-15 14:59:12,273 WARN org.apache.flink.table.client.cli.CliClient
> - Could not execute SQL statement.
> org.apache.flink.table.client.gateway.SqlExecutionException: Invalid SQL
> query.
> at
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:485)
> at
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:317)
> at
> org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:469)
> at
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:291)
> at java.util.Optional.ifPresent(Optional.java:159)
> at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:200)
> at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:123)
> at org.apache.flink.table.client.SqlClient.start(SqlClient.java:105)
> at org.apache.flink.table.client.SqlClient.main(SqlClient.java:194)
> Caused by: org.apache.flink.table.api.TableException: No table was registered
> under the name ArrayBuffer(default: select * from hivetable).
> at
> org.apache.flink.table.api.internal.TableEnvImpl.insertInto(TableEnvImpl.scala:529)
> at
> org.apache.flink.table.api.internal.TableEnvImpl.insertInto(TableEnvImpl.scala:507)
> at
> org.apache.flink.table.api.internal.BatchTableEnvImpl.insertInto(BatchTableEnvImpl.scala:58)
> at
> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428)
> at
> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:416)
> at
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryInternal$10(LocalExecutor.java:476)
> at
> org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:202)
> at
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:474)
> ... 8 more
> {code}
> However, {{select * from myhive.`default`.hivetable;}} seems to work well.
> Also note this is tested with changes in
> https://github.com/apache/flink/pull/9049