[ https://issues.apache.org/jira/browse/FLINK-16693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17065425#comment-17065425 ]

Timo Walther commented on FLINK-16693:
--------------------------------------

[~jark] is it possible that we broke the implementation with FLINK-14645? 

> Legacy planner incompatible with Timestamp backed by LocalDateTime
> ------------------------------------------------------------------
>
>                 Key: FLINK-16693
>                 URL: https://issues.apache.org/jira/browse/FLINK-16693
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Legacy Planner
>    Affects Versions: 1.10.0
>            Reporter: Paul Lin
>            Priority: Major
>
> Recently I upgraded a simple application that inserts static data into a
> table from Flink 1.9.0 to 1.10.0, and encountered a timestamp type
> incompatibility during table sink validation.
> The SQL looks like:
> ```
> -- sink schema: (user_name STRING, user_id INT, login_time TIMESTAMP)
> INSERT INTO kafka.test.tbl_a
> SELECT 'ann', 1000, TIMESTAMP '2019-12-30 00:00:00'
> ```
> And the error thrown:
> ```
> Field types of query result and registered TableSink `kafka`.`test`.`tbl_a` do not match.
>       Query result schema: [EXPR$0: String, EXPR$1: Integer, EXPR$2: Timestamp]
>       TableSink schema:    [user_name: String, user_id: Integer, login_time: LocalDateTime]
> ```
> After some digging, I found the root cause might be that since FLINK-14645,
> timestamp fields defined via TableFactory have been bridged to LocalDateTime,
> while TIMESTAMP literals and functions are still backed by java.sql.Timestamp.
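
For illustration, here is a minimal sketch (assuming Flink 1.10's Table API; the TimestampBridgingSketch class and the sink schema are hypothetical, not taken from the reporter's application) showing how TIMESTAMP(3) now defaults to java.time.LocalDateTime as its conversion class, and how a sink field could be explicitly bridged back to java.sql.Timestamp as a possible workaround:

```
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.types.DataType;

public class TimestampBridgingSketch {
    public static void main(String[] args) {
        // Since FLINK-14645, TIMESTAMP(3) defaults to java.time.LocalDateTime
        // as its conversion class, which is what the TableSink schema reports.
        DataType defaultTs = DataTypes.TIMESTAMP(3);
        System.out.println(defaultTs.getConversionClass()); // class java.time.LocalDateTime

        // A TIMESTAMP literal in the legacy planner still arrives as
        // java.sql.Timestamp; explicitly bridging the sink field is one
        // possible (assumed, not verified) workaround.
        DataType bridgedTs = DataTypes.TIMESTAMP(3).bridgedTo(java.sql.Timestamp.class);
        System.out.println(bridgedTs.getConversionClass()); // class java.sql.Timestamp

        // Hypothetical sink schema mirroring kafka.test.tbl_a:
        TableSchema schema = TableSchema.builder()
                .field("user_name", DataTypes.STRING())
                .field("user_id", DataTypes.INT())
                .field("login_time", bridgedTs)
                .build();
        System.out.println(schema);
    }
}
```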



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
