[ 
https://issues.apache.org/jira/browse/FLINK-28132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17556372#comment-17556372
 ] 

Zha Ji commented on FLINK-28132:
--------------------------------

[~martijnvisser] All the RDBMSs I've used support implicit type conversion, 
including MSSQL, MySQL, Oracle, and Postgres.
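
For example, MySQL implicitly coerces the VARCHAR side of a mixed-type 
comparison to a number instead of rejecting the query (the table and column 
names below are made up for illustration):

    -- o.id is INT, r.ext_id is VARCHAR; MySQL converts r.ext_id to a number
    -- for the comparison rather than raising a type error
    SELECT * FROM orders o JOIN refs r ON o.id = r.ext_id;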

In addition, HQL supports comparisons between INT and VARCHAR.

With KV stores like Redis, it makes no difference whether the field type is 
INT or VARCHAR, since keys are converted to STRING before accessing Redis.
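
As a sketch of that point (the Redis-backed table and its key column here are 
hypothetical, not part of this issue), the declared type of the join key stops 
mattering once it is stringified:

    -- hypothetical Redis-backed lookup table; the key reaches Redis as a string
    -- regardless of whether t.id is declared INT or VARCHAR
    select * from t
    left join redis_dim for system_time as of t.proctime AS r
      on cast(t.id as string) = r.redis_key;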


If these type validations were removed, user-friendliness and development 
efficiency would improve.
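
Today the usual workaround is an explicit cast on the probe-side key. A minimal 
sketch based on the query from the issue description (assuming the planner 
accepts the cast and t.id always holds numeric strings):

    select * from t
    left join jdbc_source for system_time as of t.proctime AS j
      on cast(t.id as int) = j.id

Removing the validation would let the original, uncast query pass instead.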

I'm a user of flink-sql, and this change makes sense to me.

> Should we remove type validations when using lookup-key join ?
> --------------------------------------------------------------
>
>                 Key: FLINK-28132
>                 URL: https://issues.apache.org/jira/browse/FLINK-28132
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / JDBC, Table SQL / Planner
>    Affects Versions: 1.15.0
>            Reporter: Zha Ji
>            Priority: Major
>
> As described in https://issues.apache.org/jira/browse/FLINK-18234
>  
> Executing the SQL
> select * from t left join jdbc_source for system_time as of 
> t.proctime AS j on t.id = j.id
> where t.id is VARCHAR and j.id is INT, throws an exception:
> org.apache.flink.table.api.TableException: VARCHAR(2147483647) and INTEGER 
> does not have common type now
> After removing some of the type validation code, the SQL works well on MySQL.
>  
> Is it necessary to check data types when we join stream data to dynamic tables?



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
