[ https://issues.apache.org/jira/browse/FLINK-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910817#comment-16910817 ]
Tzu-Li (Gordon) Tai commented on FLINK-13699:
---------------------------------------------
Cherry picked for 1.9.0: d8941711e51f3315f543399a1030dbcf2fb07434
> Fix TableFactory doesn't work with DDL when containing TIMESTAMP/DATE/TIME
> types
> --------------------------------------------------------------------------------
>
> Key: FLINK-13699
> URL: https://issues.apache.org/jira/browse/FLINK-13699
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / API, Table SQL / Planner
> Affects Versions: 1.9.0
> Reporter: Jark Wu
> Assignee: Jark Wu
> Priority: Critical
> Labels: pull-request-available
> Fix For: 1.9.0
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Currently, in the blink planner, we convert DDL to a {{TableSchema}} with the
> new type system, i.e. DataTypes.TIMESTAMP()/DATE()/TIME(), whose underlying
> TypeInformation is Types.LOCAL_DATE_TIME/LOCAL_DATE/LOCAL_TIME.
> However, this breaks the existing connector implementations (Kafka, ES, CSV,
> etc.), because they only accept the old TypeInformation
> (Types.SQL_TIMESTAMP/SQL_DATE/SQL_TIME).
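> The mismatch can be reproduced by bridging the new DataType back to a legacy
> TypeInformation. A minimal sketch (not part of the patch), assuming Flink
> 1.9's TypeConversions bridge utility behaves as described above:
> {code:java}
> import org.apache.flink.api.common.typeinfo.TypeInformation;
> import org.apache.flink.api.common.typeinfo.Types;
> import org.apache.flink.table.api.DataTypes;
> import org.apache.flink.table.api.TableSchema;
> import org.apache.flink.table.types.DataType;
> import org.apache.flink.table.types.utils.TypeConversions;
>
> public class TimestampTypeMismatch {
>     public static void main(String[] args) {
>         // Schema as the blink planner derives it from DDL: new type system.
>         TableSchema schema = TableSchema.builder()
>                 .field("ts", DataTypes.TIMESTAMP(3))
>                 .build();
>
>         // Bridge the new DataType back to a legacy TypeInformation.
>         DataType tsType = schema.getFieldDataType("ts").get();
>         TypeInformation<?> legacy =
>                 TypeConversions.fromDataTypeToLegacyInfo(tsType);
>
>         // DataTypes.TIMESTAMP(3) defaults to java.time.LocalDateTime, so
>         // the bridged TypeInformation is LOCAL_DATE_TIME ...
>         System.out.println(legacy.equals(Types.LOCAL_DATE_TIME)); // true
>         // ... while the existing factories only accept SQL_TIMESTAMP.
>         System.out.println(legacy.equals(Types.SQL_TIMESTAMP));   // false
>     }
> }
> {code}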
> A simple solution is to encode DataTypes.TIMESTAMP() as "TIMESTAMP" when
> translating to properties; when the schema is read back, it is converted to
> the old TypeInformation, Types.SQL_TIMESTAMP. This would fix all factories
> at once.
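> For illustration, a standalone sketch of the intended round trip (assuming
> Flink 1.9's DescriptorProperties and its schema.#.type property format):
> once TIMESTAMP(3) is written as the old type string "TIMESTAMP", an
> existing factory restores the old TypeInformation unchanged:
> {code:java}
> import org.apache.flink.api.common.typeinfo.Types;
> import org.apache.flink.table.api.TableSchema;
> import org.apache.flink.table.descriptors.DescriptorProperties;
>
> public class TimestampPropertyRoundTrip {
>     public static void main(String[] args) {
>         // Properties as a factory would receive them after the fix:
>         // TIMESTAMP(3) is encoded with the old type string "TIMESTAMP".
>         DescriptorProperties properties = new DescriptorProperties();
>         properties.putString("schema.0.name", "ts");
>         properties.putString("schema.0.type", "TIMESTAMP");
>
>         // Existing factories restore the schema with the old
>         // TypeInformation, so they keep working unchanged.
>         TableSchema schema = properties.getTableSchema("schema");
>         System.out.println(schema.getFieldType("ts").get()
>                 .equals(Types.SQL_TIMESTAMP)); // true
>     }
> }
> {code}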