[https://issues.apache.org/jira/browse/FLINK-11899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17014838#comment-17014838]
Jingsong Lee commented on FLINK-11899:
--------------------------------------
[~hpeter] Why share so much code? What is your plan? IMO, the Orc reader wraps
Hive vectors into Flink vectors, whereas the parquet reader decodes data
directly into Flink vectors.
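
Below is a minimal, self-contained sketch of the distinction drawn in this comment: the ORC path wraps an existing Hive-style column vector behind a readable vector interface without copying, while the Parquet path decodes values straight into a heap-backed vector. The names used here (IntColumnVector, HiveLongColumnVector, HiveIntVectorWrapper, HeapIntVector) are simplified stand-ins for illustration, not the actual Flink or Hive classes.

{code:java}
/** Simplified stand-in for a Flink-style readable int column vector. */
interface IntColumnVector {
    int getInt(int rowId);
    boolean isNullAt(int rowId);
}

/** Simplified stand-in for Hive's long column vector (ORC's in-memory format). */
class HiveLongColumnVector {
    long[] vector;
    boolean[] isNull;
}

/** ORC path: wrap the Hive vector; values are read in place, never copied. */
class HiveIntVectorWrapper implements IntColumnVector {
    private final HiveLongColumnVector hive;

    HiveIntVectorWrapper(HiveLongColumnVector hive) {
        this.hive = hive;
    }

    @Override
    public int getInt(int rowId) {
        return (int) hive.vector[rowId];
    }

    @Override
    public boolean isNullAt(int rowId) {
        return hive.isNull[rowId];
    }
}

/** Parquet path: the decoder writes values directly into a heap-backed vector. */
class HeapIntVector implements IntColumnVector {
    private final int[] values;
    private final boolean[] nulls;

    HeapIntVector(int capacity) {
        this.values = new int[capacity];
        this.nulls = new boolean[capacity];
    }

    /** Called by the Parquet decoder for each decoded value. */
    void setInt(int rowId, int value) {
        values[rowId] = value;
    }

    @Override
    public int getInt(int rowId) {
        return values[rowId];
    }

    @Override
    public boolean isNullAt(int rowId) {
        return nulls[rowId];
    }
}
{code}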
> Introduce vectorized parquet InputFormat for blink runtime
> ----------------------------------------------------------
>
> Key: FLINK-11899
> URL: https://issues.apache.org/jira/browse/FLINK-11899
> Project: Flink
> Issue Type: Sub-task
> Components: Table SQL / Runtime
> Reporter: Jingsong Lee
> Assignee: Zhenqiu Huang
> Priority: Major
> Fix For: 1.11.0
>
>
> VectorizedParquetInputFormat is introduced to read parquet data in batches.
> When returning each row, instead of eagerly retrieving every field, we use
> BaseRow's abstraction to return a columnar, row-like view.
> This greatly improves downstream filtering scenarios, because filtered-out
> rows never need their remaining fields accessed.
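
The following is a minimal sketch of the "columnar row-like view" idea described in the issue: the row is just a cursor over the batch's column vectors, so a field is read from its vector only when the caller actually asks for it, and rows dropped by a filter never touch their other columns. The names (IntVector, ColumnarRowView, FilterExample) are illustrative, not Flink's actual BaseRow/ColumnarRow API.

{code:java}
/** Simplified readable int column vector. */
interface IntVector {
    int getInt(int rowId);
}

/** A row-like view over a batch of column vectors; no per-row copying. */
class ColumnarRowView {
    private final IntVector[] columns;
    private int rowId;

    ColumnarRowView(IntVector[] columns) {
        this.columns = columns;
    }

    void setRowId(int rowId) {
        this.rowId = rowId;
    }

    /** Only the requested field's vector is touched. */
    int getInt(int fieldIndex) {
        return columns[fieldIndex].getInt(rowId);
    }
}

/** Example: filter on field 0; other fields of filtered-out rows are never read. */
class FilterExample {
    static long countMatching(ColumnarRowView row, int batchSize) {
        long count = 0;
        for (int r = 0; r < batchSize; r++) {
            row.setRowId(r);
            if (row.getInt(0) > 100) {   // predicate reads only column 0
                count++;
            }
        }
        return count;
    }
}
{code}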