[ 
https://issues.apache.org/jira/browse/FLINK-31575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17703658#comment-17703658
 ] 

luoyuxia edited comment on FLINK-31575 at 3/22/23 1:11 PM:
-----------------------------------------------------------

To decouple the Hive dialect from the planner, 
[FLIP-216|https://cwiki.apache.org/confluence/display/FLINK/FLIP-216%3A++Introduce+pluggable+dialect+and+plan+for+migrating+Hive+dialect]
 proposes, as part of the decoupling, to introduce a slim module called 
flink-table-calcite-bridge which contains the Calcite dependencies needed for 
writing planner plugins (e.g. SQL dialects) that interact with Calcite APIs.

The Hive connector will then depend on the module flink-table-calcite-bridge, 
and flink-table-planner will depend on it as well. For decoupling, we can have 
flink-table-planner.jar pack the module flink-table-calcite-bridge.

But that would still require the jar swap. With flink-table-planner-loader, the 
classes in flink-table-planner.jar are loaded by a 
[submoduleClassLoader|#L117], so the Calcite-related classes are also loaded by 
this submoduleClassLoader.

But the classes related to the Hive dialect, such as HiveParser, are loaded by 
the FlinkClassLoader (more precisely, the AppClassLoader). HiveParser depends 
on Calcite classes, so it will try to load them and fail, because they are not 
visible to the FlinkClassLoader.
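The visibility problem above follows from the JVM's class-loader delegation model: a class is only visible to the loader that loaded it and to loaders that delegate to it. A minimal, self-contained sketch (the class name `ClassLoaderIsolationDemo` is hypothetical, standing in for HiveParser; the isolated loader plays the role of the planner's submoduleClassLoader):

```java
import java.net.URL;
import java.net.URLClassLoader;

// Demonstrates one-way class visibility between class loaders.
// An isolated URLClassLoader whose parent is null delegates only to the
// bootstrap loader, so it can resolve core classes like java.lang.String
// but cannot see anything on the application classpath -- just as the
// AppClassLoader cannot see Calcite classes packed privately inside
// flink-table-planner.jar.
public class ClassLoaderIsolationDemo {
    public static void main(String[] args) throws Exception {
        // parent = null => delegate only to the bootstrap class loader
        URLClassLoader isolated = new URLClassLoader(new URL[0], null);

        // Found via bootstrap delegation.
        System.out.println(isolated.loadClass("java.lang.String").getName());

        try {
            // This class lives on the application classpath, which the
            // isolated loader never consults -> ClassNotFoundException.
            isolated.loadClass("ClassLoaderIsolationDemo");
        } catch (ClassNotFoundException e) {
            System.out.println("not visible: " + e.getMessage());
        }
    }
}
```

This is why packing Calcite only inside the planner jar strands HiveParser: it is loaded by a loader that has no delegation path to the one holding the Calcite classes.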

So, I would like to propose making flink-table-calcite-bridge a separate jar 
that packs only the Calcite dependencies, so that the Calcite classes are 
loaded by the same class loader whether the default dialect or the Hive 
dialect is used.

Hi, [~twalthr], what do you think of this idea?



> Don't swap table-planner-loader and table-planner to use hive dialect
> ---------------------------------------------------------------------
>
>                 Key: FLINK-31575
>                 URL: https://issues.apache.org/jira/browse/FLINK-31575
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Connectors / Hive
>            Reporter: luoyuxia
>            Priority: Major
>
> Since Flink 1.15, to use the Hive dialect, users have to swap the 
> flink-table-planner-loader jar with flink-table-planner.jar.
> This really bothers users who want to use the Hive dialect; see FLINK-27020 
> and FLINK-28618.
> Although we have put much effort (e.g. FLINK-29350, FLINK-29045) into telling 
> users how to do the swap, it is still not convenient.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
