[
https://issues.apache.org/jira/browse/FLINK-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082261#comment-17082261
]
Lei Wang commented on FLINK-17086:
----------------------------------
Hi [~lirui], your understanding is right.
The Hive client works well under both DDL statements.
The Flink SQL client only works under one of the DDL statements; under the
other one it fails with:
SQL statement. Reason:
java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast
to org.apache.hadoop.io.LongWritable
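
For illustration, here is a minimal sketch of the two kinds of DDL that could
produce this split (the actual statements are not quoted in this thread, so the
column names and types below are assumptions based on the RobotData fields
described further down). HiveMapredSplitReader resolves parquet columns by
position rather than by name, so a DDL whose column order differs from the
field order in the parquet file makes Flink deserialize the file's int32 column
into a BIGINT column, which is exactly the IntWritable-to-LongWritable cast
failure above:

-- Hypothetical DDL A: columns declared in the same order as the fields in
-- the parquet file (robotId first), so positional reading lines up.
CREATE EXTERNAL TABLE robot_ok (
  robotId   INT,
  robotTime BIGINT
) STORED AS PARQUET
LOCATION 'hdfs://namenode:8020/user/abc/parquet';

-- Hypothetical DDL B: the same columns in a different order. The Hive client
-- resolves parquet columns by name and still reads correctly, but
-- HiveMapredSplitReader resolves them by position, so the file's int32
-- robotId column is read as the BIGINT robotTime column, hence the
-- IntWritable-to-LongWritable ClassCastException.
CREATE EXTERNAL TABLE robot_broken (
  robotTime BIGINT,
  robotId   INT
) STORED AS PARQUET
LOCATION 'hdfs://namenode:8020/user/abc/parquet';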
Also note the way the parquet file is written. I wrote a class called
RobotData with only two fields, robotId and robotTime, and used
StreamingFileSink to write it to HDFS:

final StreamingFileSink<RobotData> sink = StreamingFileSink
    .forBulkFormat(
        new Path("hdfs://namenode:8020/user/abc/parquet"),
        ParquetAvroWriters.forReflectRecord(RobotData.class))
    .build();
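
For completeness, a minimal sketch of what the RobotData POJO might look like.
Only the two field names come from this thread; the types are assumptions,
chosen because an int robotId next to a long robotTime yields exactly the
int32/int64 mix that the cast error above points at:

// Hypothetical POJO; field names are from this thread, types are assumptions.
public class RobotData {
    public int robotId;    // written by Avro reflection as parquet int32
    public long robotTime; // written by Avro reflection as parquet int64

    // Avro reflection (used by ParquetAvroWriters.forReflectRecord)
    // requires a no-argument constructor.
    public RobotData() {}
}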
> Flink SQL client not able to read parquet Hive table because
> `HiveMapredSplitReader` does not support name-mapping reads for the parquet format.
> -------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: FLINK-17086
> URL: https://issues.apache.org/jira/browse/FLINK-17086
> Project: Flink
> Issue Type: Bug
> Components: Connectors / Hive
> Affects Versions: 1.10.0
> Reporter: Lei Wang
> Priority: Major
>
> When writing a Hive table in parquet format, the Flink SQL client is not
> able to read it correctly, because HiveMapredSplitReader does not support
> name-mapping reads for the parquet format.
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/fink-sql-client-not-able-to-read-parquet-format-table-td34119.html]