[
https://issues.apache.org/jira/browse/FLINK-3754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260016#comment-15260016
]
ASF GitHub Bot commented on FLINK-3754:
---------------------------------------
Github user twalthr commented on the pull request:
https://github.com/apache/flink/pull/1916#issuecomment-215061267
@yjshen I looked through the code changes. I was quite impressed that your
PR touches nearly every class of the current API.
Basically, your changes seem to work; however, I'm not sure we want to
implement the validation phase for every scalar function ourselves. Calcite
already comes with type inference, type checking, and validation capabilities,
and I don't know if we want to reinvent the wheel at this point. Your approach
inserts a layer under the Table API for doing the validation. Instead, this
layer could also translate the plan into a SQL tree (on top of RelNodes), and
we could then let Calcite do the work of validation.
This could also solve another problem that I faced while working on
FLINK-3580. If you take a look at Calcite's `StandardConvertletTable`, you will
see that Calcite performs some conversions that we would otherwise need to
implement ourselves if we do not base the Table API on top of SQL.
We need to discuss how we want to proceed; neither solution is perfect.
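To make the trade-off concrete, here is a minimal sketch of the first option (an eager validation pass over the expression tree, run before any RelNode construction). All class names here (`Expr`, `Literal`, `Plus`, `validate`) are hypothetical illustrations, not Flink's or Calcite's actual classes; the point is only that a single upfront walk surfaces type errors in one place instead of scattering checks across the API.

```java
// Hypothetical sketch of a standalone validation phase for an expression tree.
// None of these types exist in Flink; they only illustrate the idea of
// checking operand types once, up front, before plan (RelNode) construction.
public class ValidationSketch {

    interface Expr {
        String resultType(); // computing the type doubles as a type check
    }

    static class Literal implements Expr {
        final String type;
        Literal(String type) { this.type = type; }
        public String resultType() { return type; }
    }

    static class Plus implements Expr {
        final Expr left, right;
        Plus(Expr left, Expr right) { this.left = left; this.right = right; }
        public String resultType() {
            // Reject incompatible operands immediately instead of failing
            // later during plan translation.
            if (!"INT".equals(left.resultType()) || !"INT".equals(right.resultType())) {
                throw new IllegalArgumentException("Plus expects INT operands, got "
                        + left.resultType() + " and " + right.resultType());
            }
            return "INT";
        }
    }

    /** Validate the whole tree in one pass, before any RelNode is built. */
    static void validate(Expr root) {
        root.resultType(); // triggers recursive type checking
    }

    public static void main(String[] args) {
        validate(new Plus(new Literal("INT"), new Literal("INT"))); // passes
        try {
            validate(new Plus(new Literal("INT"), new Literal("STRING")));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected early: " + e.getMessage());
        }
    }
}
```

The alternative discussed above would replace `validate` with a translation into a SQL tree so that Calcite's own validator does this work instead.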
> Add a validation phase before construct RelNode using TableAPI
> --------------------------------------------------------------
>
> Key: FLINK-3754
> URL: https://issues.apache.org/jira/browse/FLINK-3754
> Project: Flink
> Issue Type: Improvement
> Components: Table API
> Affects Versions: 1.0.0
> Reporter: Yijie Shen
> Assignee: Yijie Shen
>
> Unlike a SQL string's execution, which has a separate validation phase before
> RelNode construction, the Table API lacks a counterpart, and validation is
> scattered across many places.
> I suggest adding a single validation phase to detect problems as early as
> possible.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)