Github user twalthr commented on the pull request:

    https://github.com/apache/flink/pull/1916#issuecomment-215061267
  
    @yjshen I looked through the code changes and was quite impressed: your 
PR touches nearly every class of the current API.
    
    Basically, your changes seem to work. However, I'm not sure we want to 
implement the validation phase for every scalar function ourselves. Calcite 
already comes with type inference, type checking, and validation 
capabilities, and I don't know if we want to reinvent the wheel at this point. 
Your approach inserts a layer under the Table API for doing the validation. 
Instead, this layer could translate the plan into a SQL tree (on top of 
RelNodes), and we could then let Calcite do the work of validation.
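    
    To illustrate what Calcite already gives us, here is a minimal, 
self-contained sketch (the object name is mine; the rest is Calcite's public 
API) where the result type of a call is derived by the operator's own 
return-type inference, with no hand-written inference code on our side:
    
    ```scala
    import org.apache.calcite.jdbc.JavaTypeFactoryImpl
    import org.apache.calcite.rex.RexBuilder
    import org.apache.calcite.sql.fun.SqlStdOperatorTable
    import org.apache.calcite.sql.`type`.SqlTypeName
    
    object CalciteValidationSketch {
      def main(args: Array[String]): Unit = {
        val typeFactory = new JavaTypeFactoryImpl()
        val rexBuilder = new RexBuilder(typeFactory)
    
        // Two INTEGER input fields, referenced by index.
        val intType = typeFactory.createSqlType(SqlTypeName.INTEGER)
        val a = rexBuilder.makeInputRef(intType, 0)
        val b = rexBuilder.makeInputRef(intType, 1)
    
        // makeCall derives the result type via the operator's
        // SqlReturnTypeInference -- nothing to reimplement ourselves.
        val sum = rexBuilder.makeCall(SqlStdOperatorTable.PLUS, a, b)
        println(sum.getType) // INTEGER
      }
    }
    ```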
    
    This could also solve another problem that I faced while working on 
FLINK-3580. If you take a look at Calcite's `StandardConvertletTable`, you 
will see that Calcite also performs some conversions that we would have to 
implement ourselves if we do not base the Table API on top of SQL.
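    
    If I remember correctly, one such convertlet rewrites `NOT LIKE` into 
`NOT(LIKE(...))` during translation. A hand-written equivalent on the 
RexNode level (illustrative only; the object name is mine) would look like 
this, which is exactly the kind of code we would otherwise have to maintain 
per operator:
    
    ```scala
    import org.apache.calcite.jdbc.JavaTypeFactoryImpl
    import org.apache.calcite.rex.RexBuilder
    import org.apache.calcite.sql.fun.SqlStdOperatorTable
    import org.apache.calcite.sql.`type`.SqlTypeName
    
    object ConvertletSketch {
      def main(args: Array[String]): Unit = {
        val typeFactory = new JavaTypeFactoryImpl()
        val rex = new RexBuilder(typeFactory)
        val varcharType = typeFactory.createSqlType(SqlTypeName.VARCHAR)
    
        val s = rex.makeInputRef(varcharType, 0)
        val pattern = rex.makeLiteral("%flink%")
    
        // "s NOT LIKE pattern" has no dedicated Rex node; it is
        // expressed as NOT(LIKE(s, pattern)) instead.
        val rewritten = rex.makeCall(SqlStdOperatorTable.NOT,
          rex.makeCall(SqlStdOperatorTable.LIKE, s, pattern))
        println(rewritten)
      }
    }
    ```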
    
    We need to discuss how we want to proceed. Neither solution is perfect.

