[
https://issues.apache.org/jira/browse/FLINK-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096318#comment-17096318
]
Rui Li commented on FLINK-17086:
--------------------------------
[~leiwangouc] Glad to know it worked.
For Orc and Parquet tables, we have vectorized and non-vectorized readers.
Setting "table.exec.hive.fallback-mapred-reader: true" will force use the
non-vectorized reader.
In general, the non-vectorized reader provides better compatibility with Hive
but is less performant than the vectorized one, so I suggest using it only as a
workaround when the vectorized reader doesn't meet your needs. We'll make the
vectorized reader case-insensitive too in the future.
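For reference, a sketch of how the option could be set in the SQL client's YAML configuration (assuming the sql-client-defaults.yaml layout with a top-level "configuration" section; verify against your Flink version's docs):

```yaml
# sql-client-defaults.yaml (sketch, assumed layout)
# Force the non-vectorized (mapred) reader for Hive Orc/Parquet tables.
configuration:
  table.exec.hive.fallback-mapred-reader: true
```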
> Flink sql client not able to read parquet hive table because
> `HiveMapredSplitReader` not supports name mapping reading for parquet format.
> -------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: FLINK-17086
> URL: https://issues.apache.org/jira/browse/FLINK-17086
> Project: Flink
> Issue Type: Bug
> Components: Connectors / Hive
> Affects Versions: 1.10.0
> Reporter: Lei Wang
> Priority: Major
>
> When writing a Hive table with the Parquet format, the Flink SQL client is
> not able to read it correctly because HiveMapredSplitReader does not support
> name-mapping reads for the Parquet format.
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/fink-sql-client-not-able-to-read-parquet-format-table-td34119.html]
--
This message was sent by Atlassian Jira
(v8.3.4#803005)