[ https://issues.apache.org/jira/browse/FLINK-26603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17779003#comment-17779003 ]

Xin Chen edited comment on FLINK-26603 at 10/24/23 9:18 AM:
------------------------------------------------------------

Hi, [~luoyuxia], if I need this feature to avoid the planner-jar swap when using 
the Hive dialect, what is the simplest way to get it on Flink 1.16? Our version 
is limited to 1.16. I tried to simply merge *'[FLINK-31409] Don't swap table 
planner loader and table planner to use hive dialect'*, but found that it also 
relies on code modified by many other features, such as
{code:java}
public interface ParserFactory extends Factory { ... }
public class TableResultUtils { ... }
{code}

which do not seem to have been introduced by FLINK-26603; many of those 
modifications were made in 1.17 or 1.18. I just want to use the Hive dialect 
directly on 1.16. Is there a simpler way to do that? Or, if I don't care about 
the Scala version, what are the risks of abandoning the planner-loader and using 
flink-table-planner directly for all scenarios? In any case, fully integrating 
FLINK-26603 and its subtask code into Flink 1.16 depends on the differences 
between 1.16, 1.17, and 1.18, which looks complex.
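
For context, this is roughly how we enable the Hive dialect in our jobs today (a minimal sketch; the catalog name, database, and conf dir are illustrative, and it assumes flink-connector-hive plus a Hive-dialect-capable planner setup on the classpath):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class HiveDialectExample {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
                TableEnvironment.create(EnvironmentSettings.inBatchMode());

        // Register and use a Hive catalog (name, database, and conf dir are illustrative).
        HiveCatalog hiveCatalog = new HiveCatalog("myhive", "default", "/opt/hive-conf");
        tableEnv.registerCatalog("myhive", hiveCatalog);
        tableEnv.useCatalog("myhive");

        // Switch to the Hive dialect; on 1.16 this is the step that currently
        // requires swapping flink-table-planner-loader for flink-table-planner in lib/.
        tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
        tableEnv.executeSql("SHOW TABLES").print();
    }
}
{code}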

Hope for a reply. Thanks a lot.



> [Umbrella] Decouple Hive with Flink planner
> -------------------------------------------
>
>                 Key: FLINK-26603
>                 URL: https://issues.apache.org/jira/browse/FLINK-26603
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Hive, Table SQL / Planner
>            Reporter: luoyuxia
>            Assignee: luoyuxia
>            Priority: Major
>             Fix For: 1.18.0
>
>
> To support the Hive dialect in Flink, we have implemented FLIP-123 and FLIP-152.
> But this also brings a significant maintenance burden and complexity, since it 
> mixes Hive-specific logic into the Flink planner. We should remove such logic 
> from the planner and make the Hive support fully decoupled from it.
> With this ticket, we expect:
> 1. no Hive-specific logic remains in the planner module
> 2. flink-sql-parser-hive is removed from the flink-table module
> 3. the planner dependency is removed from flink-connector-hive
> I'll update more details after investigation.
>  



