[
https://issues.apache.org/jira/browse/FLINK-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16893758#comment-16893758
]
Rui Li commented on FLINK-13438:
--------------------------------
[~TsReaper] Not sure if I understand the issue correctly. When creating the
TableSchema for a Hive table, we map Hive's {{TIMESTAMP}} type to
{{DataTypes.TIMESTAMP()}} in Flink. Do you mean there's something wrong with
this type mapping? Or do you mean the records we get for a timestamp column are
of the wrong type? If it's the latter, we convert objects between Hive and
Flink in {{HiveInspectors}}, and we can add the necessary conversions for the
date/timestamp types there.
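The conversion itself should be mechanical; a minimal sketch of what such a
conversion could look like (the class and method names here are illustrative,
not the actual {{HiveInspectors}} API):
{code:java}
import java.sql.Date;
import java.sql.Timestamp;
import java.time.LocalDate;
import java.time.LocalDateTime;

// Illustrative helper, not the actual HiveInspectors API: converts the
// java.sql.* objects exchanged with Hive to/from the java.time.* classes
// that Flink's DataTypes.DATE()/DataTypes.TIMESTAMP() default to.
public final class TimeConversionSketch {

    public static LocalDateTime toFlink(Timestamp hiveValue) {
        return hiveValue == null ? null : hiveValue.toLocalDateTime();
    }

    public static Timestamp toHive(LocalDateTime flinkValue) {
        return flinkValue == null ? null : Timestamp.valueOf(flinkValue);
    }

    public static LocalDate toFlink(Date hiveValue) {
        return hiveValue == null ? null : hiveValue.toLocalDate();
    }

    public static Date toHive(LocalDate flinkValue) {
        return flinkValue == null ? null : Date.valueOf(flinkValue);
    }

    private TimeConversionSketch() {
    }
}
{code}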
> Fix Hive connector with DataTypes.DATE/TIME/TIMESTAMP support
> -------------------------------------------------------------
>
> Key: FLINK-13438
> URL: https://issues.apache.org/jira/browse/FLINK-13438
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / Hive
> Reporter: Caizhi Weng
> Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>
> Similar to the JDBC connectors, the Hive connector communicates with the Flink
> framework through TableSchema, which contains DataType. Since the time data read
> from and written to the Hive connector must be java.sql.* types, while the default
> conversion classes of our time data types are java.time.*, we have to fix the
> Hive connector to support DataTypes.DATE/TIME/TIMESTAMP.
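> On the schema side, {{DataType#bridgedTo}} can express the java.sql.* expectation
> explicitly; a minimal sketch (the class name is illustrative):
> {code:java}
> import org.apache.flink.table.api.DataTypes;
> import org.apache.flink.table.types.DataType;
>
> public class BridgingSketch {
>     public static void main(String[] args) {
>         // The default conversion class of DataTypes.TIMESTAMP() is
>         // java.time.LocalDateTime.
>         DataType byDefault = DataTypes.TIMESTAMP();
>
>         // bridgedTo(...) keeps the logical type but switches the conversion
>         // class to the java.sql.* class the connector exchanges with Hive.
>         DataType forHive = DataTypes.TIMESTAMP().bridgedTo(java.sql.Timestamp.class);
>
>         System.out.println(byDefault.getConversionClass()); // class java.time.LocalDateTime
>         System.out.println(forHive.getConversionClass());   // class java.sql.Timestamp
>     }
> }
> {code}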
> But currently, when reading tables from Hive, the table schema is created from
> Hive's schema, so the time types in the created schema will be the sql time
> types rather than the local time types. If a user specifies a local time type
> in the table schema when creating a table in Hive, they will get a different
> schema when reading it back, which is undesirable.
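> To make the mismatch concrete, a minimal sketch (the class name is illustrative)
> of the two schemas a user could end up with:
> {code:java}
> import org.apache.flink.table.api.DataTypes;
> import org.apache.flink.table.api.TableSchema;
>
> public class RoundTripSketch {
>     public static void main(String[] args) {
>         // What the user declares when creating the table: a local time type.
>         TableSchema declared = TableSchema.builder()
>                 .field("ts", DataTypes.TIMESTAMP())
>                 .build();
>
>         // What comes back today, rebuilt from Hive's schema with the sql
>         // time type (illustrative of the behavior described above).
>         TableSchema readBack = TableSchema.builder()
>                 .field("ts", DataTypes.TIMESTAMP().bridgedTo(java.sql.Timestamp.class))
>                 .build();
>
>         // The two field types differ only in their conversion class.
>         System.out.println(declared.getFieldDataType("ts").get().getConversionClass());
>         System.out.println(readBack.getFieldDataType("ts").get().getConversionClass());
>     }
> }
> {code}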
--
This message was sent by Atlassian JIRA
(v7.6.14#76016)