[
https://issues.apache.org/jira/browse/FLINK-16693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17336267#comment-17336267
]
Flink Jira Bot commented on FLINK-16693:
----------------------------------------
This issue was labeled "stale-major" 7 days ago and has not received any updates, so
it is being deprioritized. If this ticket is actually Major, please raise the
priority and ask a committer to assign you the issue, or revive the public
discussion.
> Legacy planner incompatible with Timestamp backed by LocalDateTime
> ------------------------------------------------------------------
>
> Key: FLINK-16693
> URL: https://issues.apache.org/jira/browse/FLINK-16693
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / Legacy Planner
> Affects Versions: 1.10.0
> Reporter: Paul Lin
> Priority: Major
> Labels: stale-major
>
> Recently I upgraded a simple application that inserts static data into a
> table from 1.9.0 to 1.10.0, and encountered a timestamp type incompatibility
> during table sink validation.
> The SQL is like:
> ```
> -- sink schema: (user_name STRING, user_id INT, login_time TIMESTAMP)
> INSERT INTO kafka.test.tbl_a
> SELECT 'ann', 1000, TIMESTAMP '2019-12-30 00:00:00'
> ```
> And the error thrown:
> ```
> Field types of query result and registered TableSink `kafka`.`test`.`tbl_a`
> do not match.
> Query result schema: [EXPR$0: String, EXPR$1: Integer, EXPR$2:
> Timestamp]
> TableSink schema: [user_name: String, user_id: Integer, login_time:
> LocalDateTime]
> ```
> After some digging, I found the root cause might be that since FLINK-14645,
> timestamp fields defined via TableFactory have been bridged to LocalDateTime,
> while timestamp functions are still backed by java.sql.Timestamp.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)