[
https://issues.apache.org/jira/browse/FLINK-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082088#comment-17082088
]
Rui Li commented on FLINK-17086:
--------------------------------
Hi [~leiwangouc], thanks for reporting the issue. Let me try to understand it.
So given the same underlying parquet file, the column order defined in DDL
doesn't matter in Hive but matters in Flink. For example, you can either
{{CREATE TABLE `robotparquet`( `robotid` int, `robottime` bigint )}}, or
{{CREATE TABLE `robotparquet`( `robottime` bigint, `robotid` int)}} in Hive,
and both tables will return the correct data for columns {{robottime}} and
{{robotid}}. But you cannot do the same in Flink. Is that right?
> Flink SQL client is not able to read parquet Hive tables because
> `HiveMapredSplitReader` does not support name-mapped reading for the parquet format.
> -------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: FLINK-17086
> URL: https://issues.apache.org/jira/browse/FLINK-17086
> Project: Flink
> Issue Type: Bug
> Components: Connectors / Hive
> Affects Versions: 1.10.0
> Reporter: Lei Wang
> Priority: Major
>
> When a Hive table is written in parquet format, the Flink SQL client is not
> able to read it correctly, because HiveMapredSplitReader does not support
> name-mapped reading for the parquet format.
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/fink-sql-client-not-able-to-read-parquet-format-table-td34119.html]
--
This message was sent by Atlassian Jira
(v8.3.4#803005)