[jira] [Commented] (FLINK-14666) support multiple catalog in flink table sql

2019-11-10 Thread yuemeng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16971278#comment-16971278
 ] 

yuemeng commented on FLINK-14666:
-

[~dwysakowicz] I will close this issue, since your new feature in the latest 
master covers it.

> support multiple catalog in flink table sql
> ---
>
> Key: FLINK-14666
> URL: https://issues.apache.org/jira/browse/FLINK-14666
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.8.0, 1.8.2, 1.9.0, 1.9.1
>Reporter: yuemeng
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, Calcite uses only the current catalog as the schema path when 
> validating a SQL node, which may not be reasonable:
> {code}
> tableEnvironment.useCatalog("user_catalog");
> tableEnvironment.useDatabase("user_db");
> Table table = tableEnvironment.sqlQuery(
>     "SELECT action, os, count(*) AS cnt " +
>     "FROM music_queue_3 " +
>     "GROUP BY action, os, tumble(proctime, INTERVAL '10' SECOND)");
> tableEnvironment.registerTable("v1", table);
> Table t2 = tableEnvironment.sqlQuery("SELECT action, os, 1 AS cnt FROM v1");
> tableEnvironment.registerTable("v2", t2);
> tableEnvironment.sqlUpdate(
>     "INSERT INTO database2.kafka_table_test1 " +
>     "SELECT action, os, CAST(cnt AS BIGINT) AS cnt FROM v2");
> {code}
> Suppose the source table music_queue_3 and the sink table kafka_table_test1 
> are both in the user_catalog catalog, but temporary tables or views such as 
> v1, v2, v3 are registered in the default catalog. When we select from the 
> temporary table v2 and insert into our own catalog table 
> database2.kafka_table_test1, SQL node validation always fails, because the 
> schema path in the catalog reader is the current catalog without the default 
> catalog, so the temporary table or view is never identified.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14666) support multiple catalog in flink table sql

2019-11-08 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16970108#comment-16970108
 ] 

Dawid Wysakowicz commented on FLINK-14666:
--

Does that make sense to you? Would you be ok with closing this issue?



[jira] [Commented] (FLINK-14666) support multiple catalog in flink table sql

2019-11-08 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16970055#comment-16970055
 ] 

Dawid Wysakowicz commented on FLINK-14666:
--

This is not the case on the current master. What you wrote is correct for 1.9, 
but the behavior will change in 1.10. The {{useCatalog}}/{{useDatabase}} 
support is experimental in 1.9.



[jira] [Commented] (FLINK-14666) support multiple catalog in flink table sql

2019-11-08 Thread yuemeng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16970052#comment-16970052
 ] 

yuemeng commented on FLINK-14666:
-

[~dwysakowicz] thanks for your reply.
Fully qualified names do work well, but in Flink 1.9.0 even temporary tables 
need to be fully qualified, for example:
{code}
tableEnvironment.registerTable("v1", table);
Table t2 = tableEnvironment.sqlQuery(
    "SELECT action, os, 1 AS cnt FROM default_catalog.default_database.v1");
{code}
because the built-in catalog is named default_catalog and all temporary views 
are registered in it.
This may no longer be a problem after FLIP-64.




[jira] [Commented] (FLINK-14666) support multiple catalog in flink table sql

2019-11-08 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969964#comment-16969964
 ] 

Dawid Wysakowicz commented on FLINK-14666:
--

[~yuemeng] let me check whether I understand you correctly. What you are saying 
is that if we have:
cat1
  - database2
    - kafka_table
user_catalog
  - user_db
    - v2

a query like:
{code}
USE user_catalog;
USE user_db;
INSERT INTO database2.kafka_table SELECT * FROM v2;
{code}
will not work.

This is expected behavior, because the query you are actually executing is 
{{INSERT INTO user_catalog.database2.kafka_table SELECT * FROM 
user_catalog.user_db.v2;}}. According to the SQL standard, table identifiers 
are always expanded with the current catalog and database when needed, and 
temporary tables are no exception. If you want to execute queries across 
catalogs, you must fully qualify them.
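The expansion rule described above can be sketched in plain Java (a minimal, hypothetical illustration, not Flink's or Calcite's actual code; the class and method names are invented):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of SQL-standard identifier expansion: a partially
// qualified identifier is padded on the left with the current catalog
// and database until it has three parts (catalog.database.table).
public class IdentifierExpansion {
    static List<String> expand(String currentCatalog, String currentDatabase,
                               String... parts) {
        switch (parts.length) {
            case 1:  // table only -> prepend current catalog and database
                return Arrays.asList(currentCatalog, currentDatabase, parts[0]);
            case 2:  // database.table -> prepend current catalog
                return Arrays.asList(currentCatalog, parts[0], parts[1]);
            case 3:  // already fully qualified -> unchanged
                return Arrays.asList(parts);
            default:
                throw new IllegalArgumentException("expected 1 to 3 parts");
        }
    }

    public static void main(String[] args) {
        // 'database2.kafka_table' is expanded with the *current* catalog,
        // which is why a temporary table living in default_catalog is not found.
        System.out.println(expand("user_catalog", "user_db",
                "database2", "kafka_table")); // [user_catalog, database2, kafka_table]
        System.out.println(expand("user_catalog", "user_db",
                "v2"));                       // [user_catalog, user_db, v2]
    }
}
```

Under this rule, only a three-part name such as {{default_catalog.default_database.v2}} escapes expansion, which matches the workaround discussed in this thread.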



[jira] [Commented] (FLINK-14666) support multiple catalog in flink table sql

2019-11-08 Thread yuemeng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969951#comment-16969951
 ] 

yuemeng commented on FLINK-14666:
-

[~ykt836] but when you select fields from a temporary table and insert them 
into a table in our own catalog, the two tables are not in the same catalog, 
so SQL node validation cannot pass.



[jira] [Commented] (FLINK-14666) support multiple catalog in flink table sql

2019-11-07 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969879#comment-16969879
 ] 

Kurt Young commented on FLINK-14666:


Can you check on the latest master?

Previously, all temporary objects were registered in the default catalog and 
default database, no matter which catalog was current. But after FLIP-64, this 
may have changed.

cc [~dwysakowicz]
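The pre-FLIP-64 behavior described above can be sketched as follows (hypothetical names, not Flink's actual API; a minimal illustration of why the unqualified lookup fails):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: before FLIP-64, registerTable always stored the
// object under the built-in default catalog/database, regardless of the
// current one, while queries resolved unqualified names against the
// current catalog/database only.
public class TempObjectRegistry {
    static final String DEFAULT = "default_catalog.default_database";

    private final Map<String, String> objects = new HashMap<>();
    private String current = DEFAULT;

    void useCatalogAndDatabase(String path) { current = path; }

    void registerTable(String name, String plan) {
        // Ignores 'current': temporary objects land in the built-in catalog.
        objects.put(DEFAULT + "." + name, plan);
    }

    boolean resolves(String name) {
        // Unqualified names are expanded with the current path only.
        return objects.containsKey(current + "." + name);
    }

    public static void main(String[] args) {
        TempObjectRegistry env = new TempObjectRegistry();
        env.useCatalogAndDatabase("user_catalog.user_db");
        env.registerTable("v2", "SELECT ...");
        System.out.println(env.resolves("v2")); // false: expanded with current path
        env.useCatalogAndDatabase(TempObjectRegistry.DEFAULT);
        System.out.println(env.resolves("v2")); // true: lives in the built-in catalog
    }
}
```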



[jira] [Commented] (FLINK-14666) support multiple catalog in flink table sql

2019-11-07 Thread yuemeng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969842#comment-16969842
 ] 

yuemeng commented on FLINK-14666:
-

[~ykt836] [~jark]
Can you take a look at this issue? Thanks.
