[ https://issues.apache.org/jira/browse/FLINK-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670596#comment-16670596 ]
Bowen Li edited comment on FLINK-10689 at 10/31/18 7:18 PM:
------------------------------------------------------------
How about parallelizing the subtasks?
Given that FLINK-10687 is done and {{flink-table-common}} is created, rather
than waiting for FLINK-10688 and being blocked, I think a better way to make
progress is:
* finish this subtask first by porting {{org.apache.flink.table.catalog}} and
{{org.apache.flink.table.functions}} to {{flink-table-common}} (see the sketch
after this list)
* let {{flink-connectors}} temporarily depend on both {{flink-table}} and
{{flink-table-common}}
* as part of FLINK-10688, remove {{flink-connectors}}'s dependency on
{{flink-table}}
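To illustrate the first bullet: below is a minimal sketch of a user-defined scalar function written in Java against {{org.apache.flink.table.functions.ScalarFunction}} once that package lives in {{flink-table-common}}. The {{HashCode}} class and its constructor parameter are made up purely for this example; only the package being moved is taken from this ticket.
{code:java}
// Hypothetical user code: a scalar UDF extending the ScalarFunction base class
// that this subtask would port from flink-table to flink-table-common.
package org.apache.flink.examples;

import org.apache.flink.table.functions.ScalarFunction;

public class HashCode extends ScalarFunction {

    private final int factor;

    public HashCode(int factor) {
        this.factor = factor;
    }

    // ScalarFunction implementations expose one or more public eval methods;
    // the planner resolves the matching one by argument types at the call site.
    public int eval(String s) {
        return s.hashCode() * factor;
    }
}
{code}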
The reason is that the community is starting to work on Flink-Hive
integration and external catalogs. Since we've already decided to move the
UDF/catalog APIs to Java, I don't think writing new Scala code and then porting
it to Java, which is cumbersome and time-consuming, is a good option. I'd
rather port the existing code to Java first and then start writing all new
code/features on top of it. With the approach I proposed, we can parallelize
the work and won't get blocked by FLINK-10688.
What do you think? [~twalthr]
> Port Table API extension points to flink-table-common
> -----------------------------------------------------
>
> Key: FLINK-10689
> URL: https://issues.apache.org/jira/browse/FLINK-10689
> Project: Flink
> Issue Type: Sub-task
> Components: Table API & SQL
> Reporter: Timo Walther
> Assignee: xueyu
> Priority: Major
>
> After FLINK-10687 and FLINK-10688 have been resolved, we should also port the
> remaining extension points of the Table API to flink-table-common. This
> includes interfaces for UDFs and the external catalog interface.